You Have Data Rights: So Why Are They So Hard To Exercise?
Internet Exchange
internet.exchangepoint.tech
2025-03-13 16:05:04
In our main article today, Public Interest Technologist and Senior Product Manager for Permission Slip at Consumer Reports, Sukhi Gulati-Gilbert, breaks down why data rights are so hard to use, how companies make opting out a nightmare, and what privacy tools can do to help consumers take back control.
But first...
The Importance of Women in Technology and Combating Digital Misogyny at UN's CSW69
The 69th session of the UN Commission on the Status of Women (CSW69) is underway, marking the 30th anniversary of the Beijing Declaration. This year, technology is at the forefront of discussions, with a strong focus on digital misogyny and the importance of women's participation in the digital economy.
At the start of the session, 193 UN member states adopted a Political Declaration addressing modern challenges to gender equality, including access to STEM education for women, equitable participation in the digital economy, and improved gender-specific data collection to inform policy. As part of these discussions, UN Secretary-General António Guterres highlighted the role of emerging technologies, including artificial intelligence, in perpetuating violence and abuse against women.
Events Highlighting Digital Rights and Gender Equity
CSW69 is hosting a range of official sessions and fringe events tackling these issues head-on, bringing together feminists, digital rights advocates, and policymakers. Upcoming events include:
Feminist readings of digital violence against women human rights defenders: the absence of effective responses 30 years after Beijing – March 17, 10:30 ET.
https://limesurvey.apc.org/index.php/281746
The Silicon Valley Consensus fuels an AI infrastructure arms race driven by tech giants, financiers, and energy firms, overbuilding and overinvesting in a speculative bubble that prioritizes profit and power over real innovation.
https://thetechbubble.substack.com/p/the-silicon-valley-consensus-and
By Sukhi Gulati-Gilbert, Public Interest Technologist and Senior Product Manager for Permission Slip at Consumer Reports. Views are her own and not indicative of Consumer Reports’ position.
Much of today's internet is powered by a massive and opaque exchange of personal data. In the time it takes to load a web page, advertisers have likely already auctioned off your attention. Advertisers determine the value of ad space based on what they know about the viewer. Individual profiles for this purpose are aggregated from sources ranging from public property records to retail transactions and, of course, internet activity. This data is often collected without meaningful user consent and can be used in ways that are sensitive and invasive. Recently, the FTC took action against data brokers for using consumer location data to unlawfully sell inferences about medical conditions and religious beliefs. When every click contributes to an inscrutable sea of personal data, it can be overwhelming to take back control.
State privacy laws in the United States have started to address the dangers of the data economy by granting residents data rights. However, these rights are often difficult and frustrating to exercise. Improving digital privacy rights requires stronger legislation, new technical systems, and consumers making their voices heard. Until then, privacy tooling can help bridge the gap between data rights in law and consumer privacy in practice.
The Current State of Data Rights Requests
Inconsistently Protected
In the United States, existing data rights are mostly enshrined in state-level legislation. Nineteen states so far have signed some form of digital privacy rights into law. The seminal piece of U.S. privacy legislation was the California Consumer Privacy Act (CCPA), which went into effect in 2020. The law grants residents an array of data rights. Perhaps most notably: the right to access what personal data companies are collecting, the right to request deletion of that data, and the right to opt out of the data being sold or shared. Though state laws vary in their protections, all nineteen contain some form of access, deletion, and opt-out rights. It is encouraging to see more states adopting privacy laws, although many Americans continue to lack any protected data rights.
Onerous to Execute
Unfortunately, simply having privacy rights is not enough if consumers cannot execute those rights effectively and easily.
Accounts of individuals sending out rights requests have revealed the process to be incredibly confusing. To start, individuals must identify how to submit a data rights request - a process which varies by company. A 2020 Consumer Reports study found that on 42.5% of sites tested, at least one in three testers were unable to find a "Do Not Sell" link. Once the link (or alternate submission method) has been located, it can still feel tedious or even deliberately difficult. An example is the common data broker practice of requiring users to find their information on the site in order to submit the opt-out request. Once the user begins searching for themselves, they are often asked follow-up questions like: "Has this individual ever lived in New York?" "Are they under 30?" Going through a multi-step search significantly lengthens the time it takes to submit a data rights request and puts the burden of locating relevant information on the consumer instead of on the data broker. To make matters worse, consumers may answer the questions truthfully without realizing that doing so risks exposing even more personal data to the company.
Even in the case where companies have created an easy or intuitive data rights request process, the volume of requests for an individual to send is high. There are hundreds of data brokers in California's data broker registry and many companies in addition to data brokers that consumers may have shared data with. An individual could end up managing requests with hundreds of entities to effectively reduce their digital footprint.
Lacking Assurance
A now infamous leaked Facebook memo explained, "We do not have an adequate level of control and explainability over how our systems use data, and thus we can't confidently make controlled policy changes or external commitments such as 'we will not use X data for Y purpose.'" For years before privacy regulation came into effect, systems were designed to maximize data collection, not to protect user privacy. Retrofitting them for privacy compliance is difficult and expensive, making it entirely possible that companies claiming to fulfill data rights requests have not truly deleted or protected user data. Currently, those exercising their data rights do not receive any verifiable proof that their request has been fulfilled or that the data they submitted with the request was not used for purposes other than fulfilling it. The process requires taking on good faith that a company has complied when they say they have, even though we know that the problem can often be technically intractable.
An Interim Solution: Privacy Tooling
Given how onerous it can be to send out data rights requests manually, privacy automation tools have emerged to help consumers. Many state laws include a provision that allows individuals to delegate a third party - referred to as an “authorized agent” - to submit requests on their behalf. An authorized agent could be an individual, an organization, or an automated tool. Agents can be a powerful force to simplify the user experience by allowing consumers to interface with one agent rather than dealing with multiple companies individually.
Permission Slip by Consumer Reports is one of a few applications leveraging this concept to make it easier for consumers to exercise their data rights. Authorized agents in the market vary in their approach but the value proposition is similar: they take on the tedious task of sending out data rights requests and tracking their progress so users don't have to. These tools also enable collective action. At Consumer Reports, for example, we follow up with companies that fail to respond to requests at scale and may even file complaints with enforcement agencies.
Authorized agents are one set of tools working to make privacy more accessible, but they aren't the only ones. Global Privacy Control is a browser tool users can configure to proactively signal to websites that they do not want their data sold or shared. There are also state-led efforts to simplify user interfaces. Recently, California's DELETE Act went into effect, requiring the state to create a system that lets residents delete their data from all registered data brokers with a single request.
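Under the hood, the Global Privacy Control signal is deliberately simple: a participating browser or extension attaches a Sec-GPC header to its requests (and exposes navigator.globalPrivacyControl to scripts), which sites covered by laws like the CCPA are expected to treat as an opt-out of sale or sharing. A minimal illustration of what such a request looks like (the host and path are placeholders):
GET /account HTTP/1.1
Host: shop.example
Sec-GPC: 1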
Moving Forward: Making Data Rights More Effective
Reducing the consumer burden of exercising data rights is important, but we should also continue to iterate on data rights by making them more consistent, standard and effective. Everyone has a part to play:
For Policymakers:
Standardize Data Rights Requests:
Data rights requests should be standardized so that they can be submitted via an API. This will enable both the public and private sectors to create one-stop tools for consumers to submit data rights requests that are more intuitive and usable than what companies have incentive to create.
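To make the idea concrete, here is a purely hypothetical sketch of what a standardized, machine-readable request might look like; no such standard exists today, and the endpoint path, host, and every field name below are invented for illustration only:
POST /privacy/v1/requests HTTP/1.1
Host: databroker.example
Content-Type: application/json

{
  "request_type": "deletion",
  "subject": { "email": "consumer@example.com", "state_of_residence": "CA" },
  "authorized_agent": "agent.example",
  "verification": "to-be-standardized"
}
A common format along these lines would let authorized agents and state-run systems submit and track requests programmatically instead of navigating hundreds of bespoke web forms.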
For Technologists:
Invest in Data Traceability:
Decades of innovation have gone into creating systems that are optimized for extracting value from user data. To catch up, we’ll have to create usable, open source systems that make it easy for companies to incorporate data traceability. Without reliable and transparent understanding of data flows, we cannot achieve verifiable assurance that rights have been effectively exercised.
For Consumers:
Send Out Data Rights Requests:
However imperfect, submitting requests can dramatically reduce your digital footprint and exert real demands on companies, potentially spurring increased investment in privacy programs. Given how long the process can take, you may consider employing an authorized agent to help or prioritizing requests to the worst offenders.
File Complaints:
If your state has a privacy law and a company is not replying to your data rights requests, file a complaint with your state’s attorney general.
Of course, we should also not lose sight of a world beyond data privacy rights. While they can help us mitigate the harm when surveillance is the default, we should continue to advocate for reimagined systems and business models where privacy is the default instead. In the meantime, the emergence of data rights for Americans is a step in the right direction. Let’s make the most of them.
Our team at Include Security is often asked to examine applications coded in languages that are usually considered "unsafe", such as C and C++, due to their lack of memory safety functionality. Critical aspects of reviewing such code include identifying where bounds-checking, input validation, and pointer handling/dereferencing are happening and verifying they're not exploitable. These types of vulnerabilities are often disregarded by developers using memory safe languages.
In 2023 the NSA published a paper on Software Memory Safety that included Delphi/Object Pascal in a list of "memory safe" languages. The paper caveats the statement by saying:
Most memory safe languages recognize that software sometimes needs to perform an unsafe memory management function to accomplish certain tasks. As a result, classes or functions are available that are recognized as non-memory safe and allow the programmer to perform a potentially unsafe memory management task.
With that in mind, our team wanted to demonstrate how memory management could go wrong in Delphi despite being included on the aforementioned list and provide readers with a few tips on how to avoid introducing memory-related vulnerabilities in their Delphi code.
In this blog post, we take the first steps of investigating memory corruption in Delphi by constructing several simple proof-of-concept code examples that demonstrate memory corruption vulnerability patterns.
What Is Delphi?
Delphi is the name of a set of software development tools, as well as a dialect of the Object Pascal programming language. Delphi was originally developed by Borland and is now developed by Embarcadero Technologies (https://en.wikipedia.org/wiki/Delphi_(software)). We chose to target Delphi as it is still used by some important companies - yes, even top 10 Internet companies! These days Delphi and Object Pascal may be less popular than other languages, but they are still used in popular software, as seen in the very extensive awesome-pascal repository list, and by many companies, as shown by Embarcadero in a variety of case studies.
There's also a free and open source IDE named Lazarus, which uses the Free Pascal compiler, and aims to be Delphi compatible.
Memory Corruption and Memory Safety
We’d like to take a look at how memory corruption vulnerabilities could be introduced in languages other than C/C++, where such vulnerabilities are often discussed. This blog post takes the first steps in investigating what memory corruption vulnerabilities might look like in Delphi code by writing several proof-of-concept demonstrations of the types of memory corruption that often lead to vulnerabilities.
Delphi has been claimed to be a memory-safe language in some contexts, but we consider it similar to C++ in regards to memory safety. Object Pascal and Delphi support arbitrary untyped pointers and unsafe pointer arithmetic, which can, intentionally or not, lead to memory corruption. But rather than simply show that dereferencing an arbitrary pointer value could cause a crash, we wanted to demonstrate a couple of memory corruption patterns that commonly lead to vulnerabilities in software. The following examples were compiled in the RAD Studio Delphi IDE with all of the default compiler options.
Stack-Based Buffer Overflow
Let's start by trying to write a simple stack-based buffer overflow in Delphi. Note that in the following examples, the code was compiled in Win32 mode, which was the default, though the general concepts apply to other platforms as well. Here's our first attempt at a stack-based buffer overflow:
procedure Overflow1;
var
ar: Array[0..9] of Byte; // Fixed-length array on the stack
i: Integer;
begin
for i := 0 to 999 do
begin
ar[i] := $41; // Try to overflow the array
end;
end; // If overflow happens, returns to $41414141
We define an array of 10 bytes, and then try to write 1000 values to the array. When we compile and run this code, it raises an exception but doesn’t crash:
Why didn't it crash? Since the array ar is defined with a static length, the compiler can insert code that does bounds-checking at runtime whenever the array is indexed. Let's take a look at the compiled procedure (disassembled in Ghidra):
The code tests that the index is less than or equal to 9 and if it’s not, it calls a function that raises the exception (CMP EAX, 0x9, JBE, CALL).
But wait, this was the application compiled in debug mode. What happens if we compile the application in release mode?
Ah! In release mode, the compiler didn't include the array bounds check, and the code overwrote the return address on the stack. Shown above is the Delphi debugger after returning to $41414141. Here's the release build code, again disassembled in Ghidra:
No bounds check in sight. Why not? The "Range checking" (which is what caught the overflow in debug mode) and "Overflow checking" (which checks for integer overflows) compiler settings are disabled by default in release mode:
So here is a lesson: consider turning on all the “Runtime errors” flags in release mode when hardening a Delphi build. Of course, the added checks are likely disabled by default to avoid performance impacts.
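If you would rather enforce this in source than rely on project settings, the equivalent compiler directives can be placed at the top of a unit (a small sketch; these are standard Delphi switches, shown here alongside their short forms):
{$RANGECHECKS ON}    // Same setting as "Range checking" ({$R+})
{$OVERFLOWCHECKS ON} // Same setting as "Overflow checking" ({$Q+})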
But how easy is it to overflow a buffer with Range checking enabled? Well, the official Delphi documentation warns about memory corruption in a few of its system library routines:
Note that these are just the system library routines that clearly warn about memory corruption in their documentation; it’s not a comprehensive list of dangerous routines in Delphi.
Here's an example that uses Move to cause a stack buffer overflow:
procedure Overflow2;
var
ar1: Array[0..9] of Byte; // Smaller array on the stack
ar2: Array[0..999] of Byte; // Larger array on the stack
i: Integer;
begin
for i := 0 to 999 do
begin
ar2[i] := $41; // Fill ar2 with $41
end;
Move(ar2, ar1, SizeOf(ar2)); // Oops should have been SizeOf(ar1)
end; // Returns to $41414141
This time, we create two stack buffers, fill the bigger one with $41, then use Move to copy the bigger array into the smaller array. When we run this code, even the debug build with Range checking enabled overflows the stack buffer and returns to $41414141:
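Move copies exactly as many bytes as it is told to, so range checking never gets a chance to intervene. A hedged fix (our suggestion, not from the original code): clamp the byte count to the destination's capacity rather than the source's:
procedure SafeCopy;
var
  ar1: Array[0..9] of Byte;   // Destination on the stack
  ar2: Array[0..999] of Byte; // Source on the stack
begin
  FillChar(ar2, SizeOf(ar2), $41);               // Fill the source buffer
  // Never copy more bytes than the destination can hold; Min is from System.Math
  Move(ar2, ar1, Min(SizeOf(ar1), SizeOf(ar2)));
end;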
The Heap and Use After Free
Let's take a look at a couple of examples of how heap-based vulnerabilities might be introduced. In these examples, it was easy to get the default heap implementation to reuse memory from a previous, freed allocation simply by requesting allocations of the same size.
In this first example, we allocate a string on the heap, assign a value to it, free the string, then allocate another string which shares the same memory as the previous string. This demonstrates how reading uninitialized memory might lead to an information disclosure vulnerability. In this case, SetLength was used to set the length of a string without initializing memory.
In this example, Heap1c first calls Heap1a, which dynamically constructs a string, causing memory for it to be allocated on the heap. The memory is freed as the string goes out of scope. Next, Heap1c calls Heap1b. Heap1b calls SetLength, which allocates memory for a string without initializing the heap memory, then reads the contents of the string, revealing the string constructed in Heap1a.
procedure Heap1a;
var
s: String; // Unicode string variable on the stack, contents on the heap
begin
s := 'Super Secret'; // Assign a value to the string
s := s + ' String!'; // Appending to the string re-allocates heap memory
end; // Memory for s is freed as it goes out of scope
procedure Heap1b;
var
s: String; // Unicode string variable on the stack, contents on the heap
begin
SetLength(s, 20); // Trigger re-allocation, does not initialize memory
ShowMessage(s); // Shows 'Super Secret String!'
end;
procedure Heap1c;
begin
Heap1a;
Heap1b;
end;
When the above code is run, the ShowMessage call displays "Super Secret String!" in a dialog:
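A hedged mitigation for this pattern (our sketch, not from the original code): explicitly clear the newly allocated characters before anything reads them. FillChar is itself one of the routines whose documentation warns about memory corruption, so the size argument is derived directly from the string's own length:
procedure Heap1bSafe;
var
  s: String;
begin
  SetLength(s, 20);                            // Allocates, but does not initialize
  FillChar(s[1], Length(s) * SizeOf(Char), 0); // Zero the character data before use
  ShowMessage(s);                              // Shows blank text instead of leftover heap contents
end;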
In this second heap example, memory is allocated for an object on the heap, then freed, then another object is allocated using the same heap memory. This is similar to how the same memory region was re-used by the strings in the previous example. In this case, the freed object is written to, modifying the second object. This represents a use-after-free vulnerability, where an attacker might be able to modify an object, either to obtain code execution or otherwise modify control flow.
TMyFirstClass and TMySecondClass are two classes that contain a similar amount of data. In the procedure Heap2, obj1, an instance of TMyFirstClass, is created then immediately freed. Next, obj2, an instance of TMySecondClass, is created and read. Then, the freed obj1 is written to. Finally, obj2 is read again, showing that it was modified by the access to obj1.
type
[…]
TMyFirstClass = class(TObject)
public
ar: Array[0..7] of Byte;
end;
TMySecondClass = class(TObject)
public
n1: Integer;
n2: Integer;
end;
[…]
implementation
[...]
procedure Heap2;
var
obj1: TMyFirstClass;
obj2: TMySecondClass;
begin
obj1 := TMyFirstClass.Create; // Create obj1
obj1.Free; // Free obj1
obj2 := TMySecondClass.Create; // Create obj2 (occupies the same memory obj1 did)
ShowMessage('obj2.n2: ' + IntToStr(obj2.n2)); // Shows 0 - uninitialized memory
obj1.ar[4] := $41; // Write to obj1 after it has been freed, actually modifying obj2
ShowMessage('obj2.n2: ' + IntToStr(obj2.n2)); // Shows 65 - the value has been overwritten
obj2.Free; // Free obj2
end;
In the first screenshot, the dialog shows that obj2.n2 was equal to 0:
Then, after obj1 was written to, the second dialog shows that the value of obj2.n2 was set to 65 (the decimal representation of $41):
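There is no compiler switch that prevents this, but one standard defensive idiom (again our suggestion, not from the original code) is FreeAndNil from System.SysUtils, which frees the object and nils the reference in one step, so a stale write fails loudly with an access violation instead of silently corrupting whatever object now occupies that memory:
procedure Heap2Safer;
var
  obj1: TMyFirstClass;
  obj2: TMySecondClass;
begin
  obj1 := TMyFirstClass.Create;
  FreeAndNil(obj1);              // Frees the object and sets obj1 to nil
  obj2 := TMySecondClass.Create;
  // obj1.ar[4] := $41;          // Would now raise an access violation rather than modify obj2
  obj2.Free;
end;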
Conclusion
These examples only scratch the surface of how memory corruption vulnerabilities might happen in Delphi code; future research could investigate more potentially dangerous library routines in official or common third-party libraries, how Free Pascal behaves compared to Delphi, especially on different platforms (Win64, Linux, etc.), or how different heap implementations work, to explore the exploitability of heap memory corruption.
Based on what we covered in this blog post, here are some suggestions for Delphi developers:
Avoid dangerous routines such as FillChar, BlockRead, BlockWrite, and Move; whenever they must be used, make sure to carefully check sizes using routines such as SizeOf.
Consider enabling the “Runtime errors” flags in the compiler options.
Be cautious when dynamically creating and freeing objects, paying attention to potentially unexpected code paths that could result in using objects after they have been freed.
Make sure to initialize newly allocated memory before it is read.
In general, don’t assume that Delphi as a language is inherently safer than other languages such as C/C++.
Hopefully these examples begin to demystify Delphi and Object Pascal, and also demonstrate that though memory corruption concepts are most commonly discussed in the context of C/C++, familiar vulnerabilities can be found in other languages as well.
Appendix: Listing of Unit1.pas
unit Unit1;
interface
uses
Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls;
type
TForm1 = class(TForm)
Button1: TButton;
Button2: TButton;
Button3: TButton;
Button4: TButton;
procedure Button1Click(Sender: TObject);
procedure Button2Click(Sender: TObject);
procedure Button3Click(Sender: TObject);
procedure Button4Click(Sender: TObject);
private
{ Private declarations }
public
{ Public declarations }
end;
TMyFirstClass = class(TObject)
public
ar: Array[0..7] of Byte;
end;
TMySecondClass = class(TObject)
public
n1: Integer;
n2: Integer;
end;
var
Form1: TForm1;
implementation
{$R *.dfm}
procedure Overflow1;
var
ar: Array[0..9] of Byte; // Fixed-length array on the stack
i: Integer;
begin
for i := 0 to 999 do
begin
ar[i] := $41; // Raises an exception if dynamic bounds-checking is enabled
end;
end; // If overflow happens, returns to $41414141
procedure Overflow2;
var
ar1: Array[0..9] of Byte; // Smaller array on the stack
ar2: Array[0..999] of Byte; // Larger array on the stack
i: Integer;
begin
for i := 0 to 999 do
begin
ar2[i] := $41; // Fill ar2 with $41
end;
Move(ar2, ar1, SizeOf(ar2)); // Oops should have been SizeOf(ar1)
end; // Returns to $41414141
procedure Heap1a;
var
s: String; // Unicode string variable on the stack, contents on the heap
begin
s := 'Super Secret'; // Assign a value to the string
s := s + ' String!'; // Appending to the string re-allocates heap memory
end; // Memory for s is freed as it goes out of scope
procedure Heap1b;
var
s: String; // Unicode string variable on the stack, contents on the heap
begin
SetLength(s, 20); // Trigger re-allocation, does not initialize memory
ShowMessage(s); // Shows 'Super Secret String!'
end;
procedure Heap1c;
begin
Heap1a;
Heap1b;
end;
procedure Heap2;
var
obj1: TMyFirstClass;
obj2: TMySecondClass;
begin
obj1 := TMyFirstClass.Create; // Create obj1
obj1.Free; // Free obj1
obj2 := TMySecondClass.Create; // Create obj2 (occupies the same memory obj1 did)
ShowMessage('obj2.n2: ' + IntToStr(obj2.n2)); // Shows 0 - uninitialized memory
obj1.ar[4] := $41; // Write to obj1 after it has been freed, actually modifying obj2
ShowMessage('obj2.n2: ' + IntToStr(obj2.n2)); // Shows 65 - the value has been overwritten
obj2.Free; // Free obj2
end;
procedure TForm1.Button1Click(Sender: TObject);
begin
Overflow1;
end;
procedure TForm1.Button2Click(Sender: TObject);
begin
Overflow2;
end;
procedure TForm1.Button3Click(Sender: TObject);
begin
Heap1c;
end;
procedure TForm1.Button4Click(Sender: TObject);
begin
Heap2;
end;
end.
This post is about ATProto, Bluesky, and to a certain extent All The Other ATProto Apps. I want to take the opportunity to look at things from a different angle and share how my view of ATProto has changed over time.
Wait!! First I want to address the elephant in the room: your title for this post is really clickbait-y. How do you know what I think ATProto is?
Um... Hello, uh... Charlie. Obviously I don't really know what you think. It's just that the title "ATProto might not be what you think" didn't roll off the tongue the same way, so I shortened it for marketing's sake.
I'm not really going to try to definitively answer this question, but I want to talk about the question itself.
Some people, correctly in my opinion, say that this question is nuanced. Other people say things like "it's pretty obvious that Bluesky isn't decentralized". I think part of the trouble is that there are a lot of questions that we're actually trying to answer when we ask "Is Bluesky decentralized?".
Some possible "questions within the question" people might actually want answered might include:
Is Bluesky decentralized?
Is ATProto decentralized?
Can I "own" my data? ( whatever that means )
Can good posts be censored against our will?
Will Bluesky be any better than Twitter in the long run?
Can I keep my followers and identity if I need to switch to an alternative for some reason?
What happens if the company behind Bluesky goes rogue?
What about if Elon Musk buys it?
Will I be happier on Bluesky than on Twitter?
Is "decentralized" Bluesky just an overcomplicated solution to problems I don't have just like blockchains?
This question of "decentralized-ness" is really just a front for a bunch of other vague questions we might be asking. And that's OK. It's a lot harder to talk about all the specifics of this "threat model" that we have half-formed in our heads than it is to say, "is it decentralized?".
We just have to realize the limits of the question and realize it means different things to different people.
Again, in this post I'm not going to directly ask or answer that question, but I'm going to look at it from some different angles.
Centralized Views Over Decentralized Data
I've seen a lot of different kinds of apps popping up around ATProto lately and I think one thing that isn't immediately obvious is that most of them are, in a large way, just typical, centralized web applications.
No they're not! They're on ATProto so they're... decentralized or something!
Oh, you're still here? Well, don't take everything so seriously, hear me out.
In order to use the app, you have to access a centralized webserver run by the developers. Whenever that app shows you some data, it is querying from its own, centralized database. If that central infrastructure goes down, you can't access the app or do anything useful with it.
So all of the decentralization is useless!? Noooooo!!
Not necessarily. While the app is centralized, the data isn't completely centralized. In ATProto your data is written to your own PDS ( Personal Data Server ), and that PDS can be hosted by anyone. So if the app goes down, you can still have your data on your PDS.
This is really good compared to sites like Twitter or Discord, where your data is locked up in the platform and if it weren't for laws in certain countries giving you a right to that data, you might never be able to get your data out again.
Yeah, and even still they can make it pretty difficult.
Exactly. They want your data locked up. With ATProto there's a lot more going for you when it comes to keeping your data always accessible to you, including a standardized API to get that data out.
Even if Bluesky runs most of the PDSes today, it doesn't have to be that way, and time may change that.
So the PDS gives us decentralized data, and the apps give us central views over that data.
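To make "standardized API" concrete: a PDS speaks plain HTTPS ( XRPC ), so any HTTP client can list or export your records without going through the app at all. Here's a rough sketch (in Delphi, purely for illustration); the com.atproto.repo.listRecords route and its parameters are real ATProto endpoints, while the handle and collection shown are placeholders:
// Requires System.Net.HttpClient and Vcl.Dialogs in the unit's uses clause
procedure ListMyPosts;
var
  Client: THTTPClient;
  Resp: IHTTPResponse;
begin
  Client := THTTPClient.Create;
  try
    // Ask the PDS directly for the records in one collection; no AppView involved
    Resp := Client.Get('https://bsky.social/xrpc/com.atproto.repo.listRecords' +
      '?repo=alice.example.com&collection=app.bsky.feed.post&limit=10');
    ShowMessage(Resp.ContentAsString); // JSON containing the records plus a paging cursor
  finally
    Client.Free;
  end;
end;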
The Importance of the PDS
This is something I want to put a bit more focus on: how important the PDS is.
Giving people their own PDS is soooo crucial to having a free ( as in freedom ) internet. This needs to be something that I can:
Self host or have somebody else host for me
Download all of my data from, whenever I want to, and
Grant other apps read and write access to
That last piece is crucial to the existence of all of these different ATProto apps. We are giving people their own data store and then letting them connect that to all kinds of different tools and experiences.
The potential here is enormous! We need personal data stores.
Because of them, instead of us having just one app with a way for us to "own" our data, all of these apps now let us own our data. And making this basic ability, this right to control our own data, mainstream, is crucial to getting people's internet freedoms back.
The Importance of Bluesky
I think that the importance of being "mainstream" needs a little bit of attention, too. Bluesky is becoming a much more "normal" thing than most other "decentralized" apps ever end up being.
Bluesky set out to make an uncompromising Twitter-like experience, but with important freedoms built-in to its protocol. That lack of compromise on user experience has been crucial to its securing 30 million+ users.
But didn't they kind of lie about it being "decentralized"?
I think they did a pretty good job of explaining what they were doing. Again, language is kind of failing to be as expressive as we want it to be here.
Even if Bluesky had blatantly lied about its being decentralized, and I'm not saying they did, I think that getting so many normal people using an app with fundamentally way more open and freedom-preserving tech is really important.
I firmly believe that we can make decentralized apps that have great experiences and are really useful to normal people. I think it's a huge shame that almost nothing like that exists yet. Bluesky, whether it's 100% decentralized or not, has started to spread the narrative that not only are good decentralized apps possible, but that they are important.
The network that they are accumulating is also triggering an avalanche of development looking to capitalize on its potential. And these development efforts are inheriting the all-important personal data store.
But the Centralization!
OK, so we have decentralized PDSes, fine, but you said the apps are centralized. What does it matter if I have my data on my PDS if I can't do anything with it because the app is gone?
Omnivore is an open source read-it-later app. They had a free cloud service running for a while, they've got 14.4k stars on GitHub, and a lot of people loved and used that app.
But eventually the team behind it got acquired and users were given short notice that the service was going down for good. The project would remain Open Source on GitHub but all of the normal people who aren't going to self-host the app won't be able to use it anymore.
Users were given the opportunity to request a zip archive of all of their saved posts in a JSON format, but as it stands there isn't necessarily anything good to do with it yet.
That really sucks! But how would ATProto have prevented that? If the company goes away, the app goes away.
Well, if Omnivore was an ATProto app, and they shut down their server, all of the users would still have their data on their PDS in a standardized format. At that point, any other developers can come in and run another app, or even the same app if it was Open Source like Omnivore, and access all of the existing data already on your PDS when you login.
Ohhhh, and unlike Omnivore, because all the data is still on the PDSes, all the users would have to do to get setup again is login to the new app. They wouldn't even have to make a new account or anything!
Exactly! That makes it so much easier for somebody to come in and "rescue" the app, so-to-speak.
Even if nobody stepped up to run the full-blown service that would run your RSS subscriptions etc., it would be possible to make a free website that would let you login to your ATProto account and view all of your Omnivore data, so that at least you'd have a way to use the leftover data.
That's all made possible by the standardized personal data store.
Identity
And that brings us to another one of the super important things that ATProto has done: they've decentralized the login provider.
Wait a second! ATProto isn't the first one to do that.
No, they aren't, but again, I go back to the "mainstream" factor. While there are other things like Solid OIDC and Web Sign-in that let you use your own domain to login, ATProto has made it so much more "normal".
When somebody lets you "Login with ATProto", unlike the "Login with Google" button, it actually lets you login with your very own PDS.
Using your domain as a handle is also super important. It means that just by putting in our handle we are informing the service, transparently through DNS, who is in charge of logging us in.
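Concretely, claiming a domain as your handle means publishing a DNS TXT record at _atproto.<your-domain> whose value is your DID (there's also an HTTPS /.well-known/atproto-did fallback). The record below is illustrative and the DID value is a placeholder:
_atproto.alice.example.com.   TXT   "did=did:plc:abc123placeholder"
Any app you log into can resolve that record itself and verify which identity, and therefore which PDS, it should be talking to.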
The "Login with Google" button has been so useful and yet so horrible for the freedom of the web. Why does Google get to be the gatekeeper to all of our web logins?
We need an alternative, but it also needs to be easy. By making handles domains, and making them something that normal people can use and understand, ATProto has made an actually decentralized social login button possible.
Linking Identity to your Personal Data Store and using Domains as Handles is a crucial combination that is really starting to unlock web freedom.
More PDS Providers
But Bluesky hosts almost everybody's PDS. That's why it's so easy. How do we know that there will ever be anybody else that hosts PDSes for people? Decentralized login doesn't help if everybody only uses one provider.
That's a good point, and we do need more people hosting PDSes. But I have good reason to believe that there will be more services hosting PDSes very soon.
The reason is onboarding.
When most people make an ATProto app today, they usually just let you login with Bluesky. Bluesky is already providing the PDS, and most people who are interested in your app are probably interested because they're already on Bluesky.
But that's not true for everybody. We at Muni Town, as well as a couple of other ATProto project developers I've talked to, are thinking about hosting PDSes so that users don't need a Bluesky account to join.
We want people to be able to register directly for our app. They shouldn't need to login with Bluesky, but if they want to, that's fine.
If we host PDSes, then we can let users sign up directly for our app.
Our app will give them a domain handle, just like Bluesky does, and the cool part about it is that they can use their handle from our app to login to any ATProto app, including Bluesky!
Whoah! That's neat!
Isn't it! What's great about this is that the desire for independent registration provides a natural incentive for more apps to start running PDS services.
The more serious apps get built on ATProto that don't want to require a Bluesky account, the more services will likely start offering PDS hosting.
And all of these apps will let you bring your own PDS to any of the other apps!
No Relay Necessary!
OK, that's pretty cool, but what about the relay? I hear a lot of people talk about how the relay is too centralized and that it's too expensive for anybody to ever really run their own.
There's a lot of debate around the Bluesky relay and whether or not it's possible to "scale down" ATProto.
Luckily, with all the attention that Bluesky and ATProto have gotten lately, there's a lot of interest in running alternative relays just in case things go bad, and I think it may be possible that funding for a public good relay might materialize to protect from the potential failure or compromise of the Bluesky relay.
But I also want to draw attention to another non-obvious thing about ATProto: the relay is not required.
Let's do a thought experiment. Let's say we were making our own read-it-later app called Herbivore.
Remember what I said about ATProto apps basically being normal web apps? Well, technically it's possible to make Herbivore so that you login with ATProto and then everything after that is 100% typical and centralized.
All the data is stored on the Herbivore server and basically completely ignores ATProto after you're logged in. This would take advantage of the identity side of things, but not the PDS.
Now lets add one more feature: all of our data is "backed up" in realtime to our PDS.
At that point we have a basically "normal" web application, where we can login with ATProto and all of our data is stored on our PDS, but there's no need for the relay.
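Mechanically, that "backup" is just the app writing records into the user's repo over XRPC. The com.atproto.repo.createRecord route and the bearer-token authorization are real parts of the protocol; the chat.herbivore.saved collection, the record fields, and the Delphi wrapper are made up for this thought experiment:
// Requires System.Net.HttpClient, System.Net.URLClient and System.Classes in the uses clause
procedure BackupSavedItem(const AccessJwt: string);
var
  Client: THTTPClient;
  Body: TStringStream;
begin
  Client := THTTPClient.Create;
  Body := TStringStream.Create(
    '{"repo":"alice.example.com",' +
    '"collection":"chat.herbivore.saved",' +
    '"record":{"$type":"chat.herbivore.saved",' +
    '"url":"https://example.com/article","createdAt":"2025-03-13T16:00:00Z"}}');
  try
    // Write the record straight to the user's own PDS; the relay never has to be involved
    Client.Post('https://bsky.social/xrpc/com.atproto.repo.createRecord', Body, nil,
      [TNetHeader.Create('Authorization', 'Bearer ' + AccessJwt),
       TNetHeader.Create('Content-Type', 'application/json')]);
  finally
    Body.Free;
    Client.Free;
  end;
end;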
Interesting... What's the catch?
The catch is how it interacts with other apps. For example, say there is another app called Carnivore. Its job is to help us delete old Herbivore data that we don't want anymore.
If we login to Carnivore and use it to delete a record off of our PDS, without the relay, Herbivore won't know immediately that our PDS was modified. So if we log back into Herbivore we'll still see the data that we deleted using Carnivore.
Well that's a problem.
Maybe sometimes, but not all the time.
For example, maybe Herbivore could just check to see if our PDS is in sync whenever we login to it. Or maybe there could be a periodic refresh.
Herbivore could expose an API endpoint that other apps could use to ask it to refresh its data.
Or even better, maybe it could use the relay when it's available, but still fall back to another refresh mechanism if the relay is not available.
Could Herbivore run its own relay that only subscribes to Herbivore users?
I'm not sure! I think that there's room for exploration here.
They already have the Jetstream, which is like a filtered relay, but it still depends on Bluesky's relay service.
Maybe we need some protocol or PDS changes to make it feasible to do lighter-weight subscriptions to individual users on their PDS, or maybe there are things we could try already that we haven't thought of.
This is something I'd love to see more investigation into.
Local First + ATProto
And that brings me to the approach that Muni Town has started down with roomy.chat: combining local-first tech with ATProto.
We're making a toolkit for building apps that can work offline and efficiently sync their data to other peers or services, including their ATProto PDS.
We'll still have a central service that will help users sync chat messages and other data with each-other in realtime, but it's also a service that communities can run for themselves and they can easily move their data from one provider to another.
The user data is stored in our own kind of "personal data store" that, unlike the ATProto PDS, is designed to be efficiently synced peer-to-peer.
The great part is that we can still leverage the advantages of ATProto regarding identity, and the PDS as a place where all of your data is backed up and always accessible to you.
Not a One Trick Pony
A lot of what I'm trying to get at with this post is that there is more than one way to leverage ATProto, and that there are some pretty major things it has started to do right that we really need right now.
We're used to the idea that there's more than one way to make a web app, and the same is true even if you are building it on ATProto. It hasn't set a lot in stone, it's just given us some bricks that we can all share.
The "AppView" is a component of the ATProto architecture that you are given nearly free rein over. It can be any kind of thing you want, and I think there are all kinds of unexplored possibilities there.
You might even be able to make an AppView with a meaningful ActivityPub integration, or possibly borrow ideas about inboxes and outboxes as an alternative to relays.
So maybe ATProto lets you make centralized and decentralized apps.
Yep. I've had my eye on ATProto for over a year, and when I found it I definitely wasn't sold on it in any respect. I'm definitely still not betting everything on it.
Gradually, though, I am seeing some of the principles that I really believe are important actually holding up better than I thought they would in ATProto. ATProto wasn't what I thought it was.
It's still not perfect. Our whole Local First + ATProto idea doesn't even use it in a way that is commonly accepted as a good idea. We're still not sure how our plans might change, and we are actively preserving our ability to deploy completely independent of ATProto if we need to.
But there's a lot of good here that can sometimes be hard to see.
When you're reading through all of the technical details of the protocol and seeing all of the specific designs of existing apps on it, it's hard to see what other possibilities there are and what still remains to be tried.
The Final Hesitation
Uh oh, here it comes, I knew there would be another issue.
Heh, yeah, the last, big concern I see built into ATProto that I'm not comfortable with is the plc.directory.
The PLC directory is the centralized database of user identities that is used by almost all ATProto users.
It's what gives you a permanent identifier that never changes, even if your ATProto handle changes.
It's what lets you tell the network when you change your handle, PDS provider, and other identity critical changes like your rotation keys. It's something most people don't need to think about, but that is critical to ATProto working.
It is almost completely fine and secure except for the fact that whoever controls the PLC directory can technically reject your updates to things like your PDS or handle.
This doesn't mean they can forge updates to your account, so for instance, they can't convince the world that you changed your handle when you didn't.
But they are able to deny your ability to share with the network that you changed your handle, if you wanted to.
For example, if you wanted to change your PDS to a non-Bluesky-hosted one, you could use some tools to create a signed record saying that you want to switch your PDS.
Technically, though, it is within Bluesky's power to completely ignore your updated record and not share it with anybody else. Effectively, they've locked changes to your identity.
We need some fallback to prevent this. People have talked about turning power over the PLC directory to a group like the IETF, and that would help, but it's still not enough in my opinion. I feel like we need an option to use an alternative directory or something like that. We need a way to keep our identities even if somebody takes over the directory.
Someone has made a proposal to help provide an escape hatch for the directory, and I feel like there are other possibilities here too that are worth exploring.
Because, fundamentally, these records are a list of signed operations on our identity. Even if the directory denies my update, it is still signed and valid, and I could go and share that signed record somewhere else. I could even go post it to bluesky!
Once people find out about my signed update, then they would know that the directory is not up-to-date and my records could now be looked up from somewhere else maybe?
It doesn't sound like you've got any solid ideas.
No, I don't. I haven't had the time to do the necessary research to actually have a proposal in mind, but I do think it's important, and it's the biggest piece of ATProto that I feel I'm not comfortable with.
The rest of it, like the relays, is largely optional, at least in the respect that I'm free to try something else, but that's not quite true with the PLC directory.
What about did:web, doesn't ATProto support that?
It does, but using did:web for your identity basically means you have to maintain ownership over that domain forever, which isn't always realistic. It just moves the problem.
The other promising possible solution is did:webvh, but I also haven't had time to look properly into that, so I'm not sure if it's a suitable solution yet.
Closing Thoughts
Well, are you done yet? I doubt anybody actually stuck around long enough to read this far.
Yeah, that's pretty much it, and, I mean, you stuck around so you never know.
That's fair.
I'm tentatively excited about ATProto and very glad to get to try and build on top of it. I'm hopeful that some real good will come out of it one way or another.
Until next time, bye! 👋
See ya! 👋
Charlie's picture is licensed in the public domain ( CC0 ) and was found here on SVG Repo.
Show HN: A website that makes your text look cool anywhere online using Unicode
Make your text fancy with a variety of styles 💖, categorized into relevant categories.
1000's of Fancy Symbols
🤩 Thousands of cool Unicode symbols combined to create fonts for almost every use.
No ads or charges
Enjoy converting your text online with a completely ad-free experience. 💯
Fast, Fun, and Ready in Seconds
💎 Enter your text, preview in cool styles in seconds, and copy with just one click!
The Go-To Font & Text Generator
Font Generator is an online tool that lets you input simple text and convert it into over 180 fancy text styles. To make your text look fun and stylish it utilizes thousands of cool symbols from Unicode and combines them to create fonts for you. This enables you to write in bold, underlined, cursive, or italic styles on platforms where such formatting is normally not possible. Simply enter your text and select your favorite font to copy and paste.
With this fancy font changer in your hand, you can easily stand out on social media. Add aesthetic fonts to your Instagram bio, create attention-grabbing posts on Twitter and Facebook, and impress your friends during WhatsApp chats. The best part is that its use isn't limited to social media. Since Unicode is universally supported, you can apply these styles almost anywhere.
This tool has a variety of stylish fonts organized into over 20 different categories. This makes it super easy for you to find the right text style, ranging from simple to fancy, based on your needs. Prepare to scroll for a while as there is a huge list of styles for you to explore 😉. You can select the perfect style for your social media posts, notes, presentations, documents, chats, and more.
Here's how to use the Cool Font Generator
Using this font gen tool to input your text and generate stylish copy and paste fonts is very simple. To show you how, follow the quick step-by-step guide for using the font changer:
Insert Text
Enter or paste your simple text into the top text box and see the cool previews instantly.
Tap the style
Scroll to find the style that fits your needs and click to copy it to your clipboard.
Paste the converted text
You are all done! Just go to where you want to apply the style and press paste.
About Best Fancy Text Fonts
Copy and paste fonts have been trendy for a long time. Their most noticeable use is on social media platforms, including Instagram, Twitter, and Facebook. Some of these fancy styles are not only aesthetic but also attention-grabbing, which is why you might have encountered them while scrolling through your feeds. Below we have featured some of the coolest fonts that you can use to make your next text stylish.
Cursive, also known as script font, has a handwriting-like style that makes it one of the most eye-pleasing text styles. Its flowing design can add a touch of elegance to your writing, making it perfect for highlighting specific parts of your text. For example, you can conclude a message with "𝒯𝒽𝒶𝓃𝓀 𝓎ℴ𝓊". Additionally, it works well for brief, heartfelt messages; for instance, sending a "𝓖𝓸𝓸𝓭 𝓜𝓸𝓻𝓷𝓲𝓷𝓰 😊" to your mother would be a wonderful way to start her day.
Gothic font, also known as Fraktur or Blackletter, has the classic vibe of Old English text. With its sharp and angular design, it provides an aesthetic historical feel. Fraktur font has a bold and sophisticated appearance, making it an excellent choice for titles, headings, or for emphasizing parts of the text. For instance, you might wish your friend a great start to the season by writing, "𝕲𝖗𝖊𝖆𝖙 𝕭𝖊𝖌𝖎𝖓𝖓𝖎𝖓𝖌".
One of the most useful fonts created using Unicode symbols is the Bold font. We often feel the urge to make certain parts of our text bold to highlight their importance, but many online communities do not have this option. With the text generator, it’s now super easy to emphasize your text and copy-paste it in bold format. Simply enter the text you want to highlight and choose from many available options, including 𝗦𝗮𝗻𝘀-𝗦𝗲𝗿𝗶𝗳 𝗕𝗼𝗹𝗱, 𝑩𝒐𝒍𝒅 𝑰𝒕𝒂𝒍𝒊𝒄, 𝐁𝐨𝐥𝐝, and some cooler styles like 𝓢𝓬𝓻𝓲𝓹𝓽 and 𝕱𝖗𝖆𝖐𝖙𝖚𝖗. For example, you might share an inspiring quote "𝐊𝐢𝐧𝐝𝐧𝐞𝐬𝐬 𝐢𝐬 𝐟𝐫𝐞𝐞." – 𝐔𝐧𝐤𝐧𝐨𝐰𝐧.
Monospace is a beautiful-looking font style with its characters taking up the same amount of horizontal space. Unlike other fonts with characters having varying widths, monospace characters maintain the same width. This type of text style can create a clean, minimalist appearance and adds a nostalgic typewriter feel. Monospace text can be useful when sharing lists, instructions, or information such as a URL, for example 𝚑𝚝𝚝𝚙𝚜://𝚎𝚡𝚊𝚖𝚙𝚕𝚎.𝚌𝚘𝚖/. It can enhance the clarity of your text.
Circled or bubble text is a cool-looking font style with each character enclosed in a circle. This font style gives a fun feel which makes it a good choice for informal and creative uses. It has two variants 🅓🅐🅡🅚 and normal Ⓑⓤⓑⓑⓛⓔ text style. It can add a cheerful touch to your social media posts, comments, or messages. For example, you could use it to announce something exciting on your Instagram page with a caption Ⓝⓔⓦ ⒶⓡⓡⓘⓥⒶⓛ or 🅒🅞🅝🅖🅡🅐🅣🅤🅛🅐🅣🅔 a close friend to make their day.
Square text is a modern-looking font style with each character being enclosed in a block. Its modern and striking look is a great way to capture attention on social media. You can create unique headlines and emphasize key points with this style. For example, you can use it to announce updates like 🆆🅸🅽🆃🅴🆁 🆂🅰🅻🅴 on your page or send fun messages to friends, such as 🅶🅾🅾🅳 🅽🅸🅶🅷🆃. Its clean and aesthetically pleasing look is a perfect way to add a trendy look to your text.
Uses of Copy and Paste Fonts
Our Stylish text generator can help you enhance your text for social media and make it fun for your chats. However, its applications extend beyond just fun; it can also be a valuable tool for business promotion, note-taking, and engaging in online communities. Here are some creative ways to use fancy text:
Social Media
You have probably seen many people using Unicode text on social media. These text styles have become quite popular on platforms like Instagram, Facebook, and X (Twitter). People use these styles to emphasize certain parts of their text or simply to add a creative or fun flair to it.
Instagram Fonts
Instagram, more than any other platform, is where these cool symbols are most commonly used. Fonts like Cursive, Gothic, Bold, Bubble, and Monospace are among the top fonts being used on Insta. These fancy fonts have become so popular among Instagram users that they specifically search for an Instagram Font Generator. With the font gen tool, you can easily style your Instagram bio, write eye-catching comments, and add unique captions to your posts, helping you stand out.
Fonts for Facebook
Just like on Instagram, you can style your Facebook bio using this tool. Facebook allows for text-only posts as well, making it a great platform to emphasize keywords or add a creative flair with this tool. For example, you can use unique numbers like ❶❷❸. Additionally, these stylish Facebook fonts can be used in comments too, giving your interactions a fun feel.
Twitter Fonts
Twitter (now known as X) is primarily used for posting text-only content called tweets. Since Twitter does not offer default text styling options, this font text generator can be your go-to tool for text formatting. You can bold your text, underline it, italicize it, or even change its font completely using styles like Monospace.
Messaging
You can make your WhatsApp or other chats more fun and engaging by using stylish text. For instance, to stress a key point, try strong fonts like "𝐁𝐞 𝐭𝐡𝐞𝐫𝐞 𝐚𝐭 𝐬𝐡𝐚𝐫𝐩 𝟏𝟎𝐩𝐦." Make your greetings special with something like "𝓖𝓸𝓸𝓭 𝓜𝓸𝓻𝓷𝓲𝓷𝓰 🌤️." When sending invitations, sprinkle in these unique text styles or save them for a personal signature, such as "– 𝒀𝒐𝒖𝒓 𝒍𝒐𝒗𝒊𝒏𝒈 𝒃𝒓𝒐𝒕𝒉𝒆𝒓." Plus, in group chats, these eye-catching fonts can help you stand out.
Number Symbols
Styling text is not just for alphabets; you can take it a step further by making your numbers stylish. The next time you create a list in your notes or post on social media, consider using circled numbers like ❶❷❸ for a unique effect. When sharing a date, you can enhance it by writing it as "Join Us on 𝟐𝟖𝐭𝐡 𝐉𝐚𝐧𝐮𝐚𝐫𝐲." You can use cool numbers for announcements or for countdowns such as "⑤ days to go." To copy fancy number symbols, simply enter the digits in the text box and choose from different fonts.
Business Promotions
If you run an online business, you can create eye-catching promotional headlines with stylish fonts for your page, website, or email campaigns. Announce your new products with creativity, like, "𝓝𝓮𝔀 𝓗𝓪𝓲𝓻 𝓞𝓲𝓵 𝓲𝓷 𝓢𝓽𝓸𝓻𝓮!" or highlight discounts with impact, such as "𝐅𝐥𝐚𝐭 𝟓𝟎% 𝐎𝐟𝐟." Promote your webinars and include compelling calls to action in your posts or ads, like "👉 𝐂𝐥𝐢𝐜𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨 𝐒𝐢𝐠𝐧 𝐔𝐩." These copy-paste fonts make it easy to grab attention in your promotional content.
Education
Text symbols can add a spark of fun to your educational experience. When writing notes in a simple app, you can highlight key concepts and craft creative headings. You can also write inspiring quotes in bold to motivate yourself throughout your studies. Class representatives can highlight important announcements by using strong fonts, ensuring everyone takes notice. Teachers can also add a special touch by celebrating the class topper with a fancy script.
Fancy Name
Use this tool as your fancy name generator to create a stylish version of your name or any other name you choose. You can add a creative touch by selecting cursive, bold, or various other font styles. The awesome part is that your
stylish name
won't just be for preview; you can use it anywhere Unicode is supported.
Font generator is an online tool where you can type in regular text and it turns it into fancy styles for copying and pasting. It works by mixing in various stylish Unicode symbols that look similar to normal letters, giving your text a fun and cool look.
Copy and paste fonts are made using aesthetic Unicode characters. Unicode is a Universal standard that gives each character a unique code so it looks the same on all platforms.
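Under the hood there is no actual font: each ASCII letter or digit is swapped for a lookalike character from another Unicode block, such as the "Mathematical Bold" range that starts at U+1D400. Here's a small illustrative sketch of that mapping (our own example, not the site's code):
function ToMathBold(const S: string): string;
var
  Ch: Char;
  CP: Cardinal;
begin
  Result := '';
  for Ch in S do
  begin
    case Ch of
      'A'..'Z': CP := $1D400 + Cardinal(Ord(Ch) - Ord('A')); // U+1D400 = Mathematical Bold Capital A
      'a'..'z': CP := $1D41A + Cardinal(Ord(Ch) - Ord('a')); // U+1D41A = Mathematical Bold Small A
      '0'..'9': CP := $1D7CE + Cardinal(Ord(Ch) - Ord('0')); // U+1D7CE = Mathematical Bold Digit Zero
    else
      begin
        Result := Result + Ch; // Pass anything else through unchanged
        Continue;
      end;
    end;
    // These code points sit above U+FFFF, so they become a UTF-16 surrogate pair
    CP := CP - $10000;
    Result := Result + Char($D800 + (CP shr 10)) + Char($DC00 + (CP and $3FF));
  end;
end;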
No, screen readers won't be able to read these text styles effectively. These symbols might look the same as Normal letters but they can have different underlying meanings. Use these styles only when you are sure that your audience does not include individuals using screen readers.
Instagram allows users to change fonts in stories, but unfortunately, there are no built-in options to modify fonts in bios or captions. However, you can easily achieve this by using a third-party tool like the font generator.
Text fonts can be used anywhere Unicode is supported. This includes a variety of platforms such as popular social media sites, messaging applications, documents, websites, emails, and many others.
Facebook does not offer built-in font formatting options. However, you can easily use a fancy font generator to create bold, italic, or underlined text, as well as change your font style completely.
You can easily change your WhatsApp text to bold, italic, monospace, or strikethrough by selecting the text and choosing the formatting options from the menu. If you want to explore more styling options, simply enter your message in the font changer and select your desired style.
Although Twitter is a text-focused platform it does not provide formatting options. If you want to change the font in your tweets or bio, you can use this tool as your Twitter text formatter.
An introduction to a KVM-based single-process sandbox
Hey All. In between working on my PhD, libriscv, and an untitled game (it's too much, I know), I also have been working on a KVM sandbox for single programs. A so-called userspace emulator. I wanted to make the world's fastest sandbox using hardware virtualization, or at least I had an idea of what I wanted to do.
I wrote a blog post about sandboxing each and every request in Varnish back in 2021, titled
Virtual Machines for Multi-tenancy in Varnish
. In it, I wrote that I would look into using KVM for sandboxing instead of using a RISC-V emulator. And so… I went ahead and wrote TinyKVM.
So, what is TinyKVM and what does it bring to the table?
TinyKVM executes regular Linux programs with the same results as native execution.
TinyKVM can be used to sandbox regular Linux programs or programs with specialized APIs embedded into your servers.
TinyKVM’s design
In order to explain just what TinyKVM is, I’m going to list the concrete features that are currently implemented and working as intended:
TinyKVM runs static Linux ELF programs. It can also be extended with an API made by you to give it access to, e.g., an outer HTTP server or cache. I’ll also be adding dynamic executable support eventually. ⏳ It currently runs on AMD64 (x86_64), and I will port it to AArch64 (64-bit ARM) at some later point in time.
TinyKVM creates hugepages where possible for guest pages, and can additionally use hugepages on the host. The result is often (if not always) higher performance than a vanilla native program. Just to hammer this in: https://easyperf.net/blog/2022/09/01/Utilizing-Huge-Pages-For-Code found that merely allocating 2MB pages for the execute segment gave a 5% compilation speedup on the LLVM codebase. I quickly allocated some hugepages and ran TinyKVM with STREAM, and yes, it’s quite a bit faster.
TinyKVM has only 2us of overhead when calling a function in the guest. This may seem like a lot compared to my RISC-V emulator’s 3ns; however, we are entering another process, and we get to use all of our CPU features.
TinyKVM can halt execution after a given time without any thread or signal setup during the call. This is unavailable to regular Linux programs. With no execution timeout the call overhead is 1.2us, as we don’t need a timer anymore.
TinyKVM can be remotely debugged with GDB. The program can be resumed after debugging, and I’ve actually used that to live-debug a request in Varnish and see it complete normally afterwards. Cool stuff, if I may say so.
TinyKVM can fork itself into copies that use copy-on-write to let huge workloads like LLMs share most of their memory. As an example, 6GB of weights required only 260MB of working memory per instance, making it highly scalable.
TinyKVM forks can reset themselves in record time to a previous state using mechanisms unavailable to regular Linux programs. If security is important, VMs can be made ephemeral by resetting them after every request. This removes any trace of previous requests, and many classes of attack become impossible because nothing can persist between requests. A TinyKVM instance can also be reset to another VM it was not forked from, at a performance penalty, as the pagetables have to change.
TinyKVM uses only a fraction of the KVM API, which itself is around 42k LOC. The total lines of code covered by TinyKVM at run-time is unknown, but it is probably less than 40k LOC in total, as it uses no devices or drivers. Compare that to, e.g., 350k LOC for wasmtime and 165k for Firecracker, both of which are large enough that they should ideally also be run with a process jail on top; Firecracker “ships” with a jailer. For TinyKVM specifically, the relevant KVM codebase is around 7k + 37k LOC for base + x86, and TinyKVM itself is around 9k LOC.
TinyKVM creates static pagetables during initialization, in a way which works even for strange run-times like Go. The only pages that may be changed afterwards are copy-on-write pages. We can say that the pagetables are largely static, which is a security benefit: they may not be modified after start, and there are run-time sanity checks in place during KVM exits.
A KVM guest has a separate PCID/ASID, similar to a process, and so it will likely be immune to current and future speculation bugs.
The TinyKVM guest has a tiny kernel which cannot be modified. SMEP and SMAP are enabled, as well as CR0.WP and page protections. The whole kernel uses 7x 4k pages in total. We also enter the guest in usermode and try to avoid kernel mode except where unavoidable. Did you know that you can handle CPU exceptions in usermode on x86?
System calls
An API toward the host is created by defining and hooking up system calls in the emulator. In the guest they can be invoked via SYSCALL/SYSRET, or via an OUT instruction directly from usermode, the latter having the lowest latency. Either way causes a VM exit, and registers are inspected to find that a system call is being performed. All in all, the round-trip is a bit less than ~1 microsecond, which is quite different from libriscv’s 2ns. But you can design a smaller API with larger inputs and outputs, instead of many small calls (which make more sense when calls are cheap).
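To make that concrete, here is a minimal, hypothetical sketch of the guest side of an OUT-based call, written as a standalone x86_64 Rust program rather than against TinyKVM’s actual API. The port number and the single-register argument convention are invented for illustration; a real integration would follow whatever ABI the host-side system-call hooks define.

```rust
use std::arch::asm;

/// Hypothetical "hypercall": write a 32-bit value to an I/O port from
/// usermode. Inside a KVM-style guest, the OUT instruction traps to the
/// host (a VM exit), which can inspect registers and treat the exit as a
/// system call. x86_64 only.
unsafe fn hypercall(port: u16, arg: u32) {
    asm!(
        "out dx, eax",
        in("dx") port,
        in("eax") arg,
        options(nomem, nostack, preserves_flags),
    );
}

fn main() {
    // 0x1F0 and 42 are placeholder values. Run as a normal Linux process,
    // this faults, since plain usermode cannot talk to I/O ports; run as a
    // guest, that same trap becomes the VM exit the host handles.
    unsafe { hypercall(0x1F0, 42) };
}
```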
Benchmarks
The overhead of resetting the VM + calling into the VM guest.
For call overhead, we measure the cost of resetting the VM as tail latency, because it can be paid while (for example) an HTTP response is already underway. If you don’t need to reset, simply ignore the red. Reset time scales with working memory usage.
Memory benchmarks are done to see if there’s anything obviously wrong, and there is not. One of my favorite benchmarks was encoding 1500 AVIF images per second in an HTTP benchmark (AVIF encoder as a service):
AVIFs can be transcoded from JPEGs quite fast with the right methods.
So we can see that 16x98.88 = 1582 image transcodings per second. Of course the transcoder settings and image size matters, but I am more interested in the scaling aspect. Does doubling concurrency increase our overall performance up to a point? Yes, it does.
Fun fact: Did you know that you can transcode JPEG to AVIF without going through (lossy?) RGB conversion? YUV planes work the same way in both formats!
Fast sandboxing
So, what exactly makes this the world’s fastest sandbox? Well, for one, it doesn’t use any I/O, drivers, or virtual devices, so it shakes off a common problem with other KVM solutions: virtualized I/O reduces performance somewhat.
TinyKVM also focuses heavily on hugepages; even when real hugepages are not available on the host, it still benefits from fewer page walks. When backed by real hugepages, we immediately see large gains. I have measured a large CPU-based LLM workload against one running inside TinyKVM, with the same seed and settings, and after correcting for setup I found that TinyKVM ran at 99.7% of native speed. I don’t know if that means virtualization has a minimum overhead of 0.3% or if it is just statistical noise when running for so long. But I’m happy that the difference is small enough not to matter.
It also completes function calls into the guest (VM calls) fairly fast, meaning we’re not really adding any overheads anywhere. We’re really just running on the CPU, which is what we want. So, provided we have zero-copy solutions for accessing data, we’re just processing it and passing it back in a safe way.
In short, we’re just processing data at the speed of the native CPU, and that’s exactly what we want: to avoid the overheads of classical full-system virtualization and enjoy our new lane without any at all.
Drawbacks
It’s not possible to reduce vCPU count after increasing it in the KVM API, and because of this I consider multi-processing something that is better achieved by running more VMs concurrently and just using/abusing the automatic memory sharing. TinyKVM does have experimental multi-processing support, but you can’t wind down afterwards, so it’s not a great choice for long-running processes like Varnish.
There are multiple ways to work around this, which I won’t go into detail about here, but I’ll just mention that you can re-use a VM by resetting it to another, different VM. Costly, but opens the door for cool solutions like a shared pool of VMs.
Future work
Intel TDX/AMD SEV support would be nice to have.
AArch64 port.
There is a KVM feature, KVM_MEM_READONLY, that locks down memory so that not even kernel mode in the guest can change it; I hope it can be part of locking down the guest even more. A potential drawback is increased usage of memory mappings.
The user-facing API in TinyKVM needs work to become friendly.
Move much of the system call emulation that I’ve written for a Varnish integration into TinyKVM proper, which paves the way further for dynamic linker loading. ld.so uses mmap() with fds to map in the files that it is loading. So, if you load ld.so into your own emulator, and then add your intended program as the first argv argument and its arguments after that, ld.so will load your program for you, provided you have that mmap() implementation. This is what libriscv does!
Conclusion
TinyKVM perhaps surprisingly places itself among the smallest serious sandboxing solutions out there, and may also be the fastest. It takes security seriously and tries to avoid complex guest features and kernel mode in general. TinyKVM has a minimal attack surface and no ambition to grow further in complexity, outside of nicer user-facing APIs and ports to other architectures.
Have a look at the code repository for TinyKVM, if you’re interested. The documentation and user-facing API still need a lot of work, but feel free to build the project and run some simple Linux programs with it. As long as they are static and don’t need file or network access, they might just run out of the box.
Not too long from now we will open-source a VMOD in Varnish that lets you transform data safely using TinyKVM. It works on both open-source Varnish and Varnish Enterprise! If you'd like to learn more, reach out to us today!
Junichi Uekawa: Filing tax this year was really painful.
PlanetDebian
www.netfort.gr.jp
2025-03-14 02:27:03
Life
Filing tax this year was really painful. But mostly because of my home network.
IPv4 over IPv6 was not working correctly. First I swapped the router, which was trying to reinitialize the MAP-E table on every DHCP client reconfiguration and overwhelming the server. Then I changed the DNS configuration to not use IPv4 UDP lookups, which were overwhelming the IPv4 ports.
The tax return itself is a painful process. Debugging network issues at the same time just made everything more painful.
Senate Democrats Must Vote Against the Republicans’ Continuing Budget Resolution
Portside
portside.org
2025-03-14 00:55:45
The choice before Senate Democrats: force a vote on a "clean" 30-day continuing resolution (or, if it fails, be blamed for a government shutdown), or accept the House bill, which surrenders Congressional budget authority to Trump and Musk to shutter the government. (Image credit: NBC Chicago // YouTube)
Yesterday the House passed legislation to fund the government through September 30 and thereby avert a shutdown at the end of this week.
The measure now goes to the Senate, where Democrats must decide whether to support it and thereby hand Trump and Musk a blank check to continue their assault on the federal government.
The House bill would keep last year’s spending levels largely flat but would increase spending for the military by $6 billion and cut more than $1 billion from the District of Columbia’s budget.
In normal times, I recommend that Democrats vote for continuing budget resolutions because Democrats support the vital services that the government provides to the American people: Social Security, Medicare, Medicaid, veterans services, education, the Food and Drug Administration, environmental protection, and much else.
In normal times, Democrats want to keep the government open.
In normal times, Democrats would be wrong to vote against a continuing resolution that caused the government to shut down.
But these are not normal times.
The president of the United States and the richest person in the world are
already
shutting the government down. They have effectively closed USAID and the Consumer Financial Protection Bureau. They have sent half the personnel of the Department of Education packing. They are eliminating Environmental Protection Agency offices responsible for addressing high levels of pollution facing poor communities.
They are usurping from Congress the power of the purse — the power to decide what services are to be funded and received by the American people — and are arrogating that power to themselves.
In 1996, when I was in Bill Clinton’s Cabinet, we opposed Newt Gingrich’s budget bullying. We also understood that Gingrich’s demands would seriously cripple the federal government. So Bill Clinton refused to go along with Gingrich’s budget resolution, and the government was shuttered for four long painful weeks.
Today’s situation is far worse. Trump and Musk aren’t just making demands that would cripple the federal government. They are directly crippling the federal government.
Why should any member of Congress vote in favor of a continuing resolution to fund government services that are no longer continuing?
Why should any member of Congress vote to give Trump and Musk a trillion dollars and then let
them
decide how to spend it — or not spend it?
Why should Congress give Trump and Musk a blank check to continue their pillage?
The real choice congressional Democrats face today is
not
between a continuing resolution that allows the government to function normally
or
a government shutdown. Under Trump and Musk, the government is not functioning normally. It is not continuing. It is already shutting down.
Today’s real choice is between a continuing resolution that gives Trump and Musk free rein to decide what government services they want to continue and what services they want to shut down —
or demanding that Trump and Musk stop usurping the power of Congress, as a condition for keeping the government funded.
Trump, Musk, and the rest of their regime have made it clear that they don’t care what Congress or the courts say. They are acting unconstitutionally. They are actively destroying our system of government.
The spineless Republicans will not say this publicly. So Democrats must — and Democrats must insist on budget language that holds Trump and Musk accountable.
The House’s Republican-drafted budget resolution isn’t contingent on Trump observing existing laws. It does not instruct the president to stop Musk from riding roughshod over the federal government. It doesn’t tell the president and his Cabinet to spend the money Congress intended to be spent.
Members of Trump’s team are already saying that if a continuing resolution is passed, they will not observe laws that Congress has enacted and will not spend funds that Congress has authorized and appropriated. Marco Rubio, for example, says that even if the State Department is fully funded, he will void 83 percent of the contracts authorized for USAID.
Senate Democrats are needed to obtain the 60 votes necessary to pass the House’s continuing budget resolution through the Senate. But there is no point in Democrats voting to fund the government only to let Trump and Musk do whatever they see fit with those funds.
Senate Democrats have an opportunity to stop Trump and Musk from their illegal and unconstitutional shutting of the government. Democrats should say they’ll vote for the continuing budget resolution to keep the government going only if Trump agrees to abide by the law and keep the government going — fully funding the services that Congress intends to be fully funded and stopping the pillaging.
If Democrats set out this condition clearly but Trump won’t agree, the consequences will be on Trump and the Republicans. They run the government now. They are the ones who are engaging in, or are complicit in, the wanton destruction now taking place.
This is an opportunity for the public to learn what Trump and Musk are doing, and why it’s illegal and unconstitutional.
In 1996, when Bill Clinton refused to go along with Newt Gingrich’s plan to cripple the federal government, causing the government to shut down for a month, Clinton wasn’t blamed. Gingrich was blamed.
If you live in a state with a Democratic senator, please phone them right now and tell them
not
to vote for the continuing resolution that gives Trump and Musk free rein to continue shutting the government. The Capitol switchboard is
202-224-3121
. A switchboard operator will connect you directly with the Senate office you request.
[Robert Reich is a professor, writer, former Secretary of Labor, and author of The System, The Common Good, Saving Capitalism, Aftershock, Supercapitalism, and The Work of Nations. He is co-creator of "Inequality for All" and "Saving Capitalism," and co-founder of Inequality Media.]
The concept of typestates describes the encoding of information about the current state of an object into the type of that object. Although this can sound a little arcane, if you have used the Builder Pattern in Rust, you have already started using Typestate Programming!
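The code listing this passage refers to did not survive formatting, so here is a minimal sketch of the kind of builder being described. The names Foo, FooBuilder, and into_foo() come from the surrounding text; the specific fields and the double_a() step are illustrative assumptions.

```rust
mod foo_module {
    #[derive(Debug)]
    pub struct Foo {
        inner: u32, // private: no way to construct a Foo from outside the module
    }

    pub struct FooBuilder {
        a: u32,
        b: u32,
    }

    impl FooBuilder {
        pub fn new(starter: u32) -> Self {
            Self { a: starter, b: starter }
        }

        // Each configuration step consumes and returns the builder.
        pub fn double_a(self) -> Self {
            Self { a: self.a * 2, b: self.b }
        }

        // The only way to obtain a Foo: consume the builder.
        pub fn into_foo(self) -> Foo {
            Foo { inner: self.a + self.b }
        }
    }
}

fn main() {
    let builder = foo_module::FooBuilder::new(10).double_a();
    let foo = builder.into_foo();
    println!("{:?}", foo);
    // builder.into_foo();  // would not compile: `builder` was moved (consumed) above
}
```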
In this example, there is no direct way to create a Foo object. We must create a FooBuilder, and properly initialize it before we can obtain the Foo object we want.
This minimal example encodes two states:
FooBuilder, which represents an "unconfigured", or "configuration in process" state
Foo, which represents a "configured", or "ready to use" state.
Because Rust has a Strong Type System, there is no easy way to magically create an instance of Foo, or to turn a FooBuilder into a Foo without calling the into_foo() method. Additionally, calling the into_foo() method consumes the original FooBuilder structure, meaning it can not be reused without the creation of a new instance.
This allows us to represent the states of our system as types, and to include the necessary actions for state transitions into the methods that exchange one type for another. By creating a FooBuilder, and exchanging it for a Foo object, we have walked through the steps of a basic state machine.
The Disappearance of Mahmoud Khalil
Intercept
theintercept.com
2025-03-14 00:00:00
Civil rights attorney Edward Ahmed Mitchell and journalist Meghnad Bose discuss the profound implications Khalil’s case raises for free speech and due process.
When government agents
surrounded Columbia University graduate Mahmoud Khalil and his pregnant wife outside their New York City apartment over the weekend, it marked a chilling escalation in the
battle over free speech in America
. Those agents weren’t enforcing immigration policy; they were sending a message about the consequences of political expression.
After serving as a negotiator during campus protests against Israel’s war on Gaza, Khalil became the target of what his attorney
called
“a profound doxing campaign for two months related to his First Amendment protected activities” — harassment so severe he had desperately
sought help
from university leadership.
Despite being a lawful permanent resident entitled to constitutional protections, Khalil was transported to a detention facility thousands of miles away, effectively “disappeared” for over 24 hours. The political motivation became explicit when President Donald Trump celebrated the arrest on social media, calling it “the first arrest of many to come.”
On this week’s episode of The Intercept Briefing, we discuss the profound implications Khalil’s case raises for free speech and due process with Edward Ahmed Mitchell, civil rights attorney and national deputy director of the Council on American-Islamic Relations, and Columbia Journalism Review reporter Meghnad Bose.
“It’s very clear the administration is waging a war on free speech — free speech for Palestine. They said they were going to do it when they took office. And that is what they are doing. Their issue with him is that he is a Muslim who is a lawful, permanent resident of America and he exercised his right to speak up for Palestinian human rights,” says Mitchell.
Bose adds, “ It’s this sort of thinking that if you are somehow critical of a certain position of the United States government, except this isn’t even a position of the United States government. You’re basically saying, if you’re critical of the position of a foreign government — in this case, the Israeli government — that you can be penalized in the United States, even if you’ve not broken any law.”
Mitchell warns even U.S. citizens face risk: “American citizens should be safe in all this, but Stephen Miller and others have said they want to review the naturalization of citizens to see whether or not there are grounds to remove their citizenship. So in the worst-case scenario, you can imagine them trying to find or manufacture some way to target even the citizenship status of people who were lawful permanent residents and then attained citizenship. So they’re going all out to silence speech for Palestine.”
Bose says it’s not just about immigration status; the government has other draconian tools at its disposal as well. “They can jail U.S. citizens too. They don’t have to deport you or take away your citizenship,” he says. “They can incarcerate U.S. citizens too.”
Listen to the full conversation of The Intercept Briefing on
Apple Podcasts
,
Spotify
, or wherever you listen.
I will start with the disclaimer that I really like git. I use it on nearly all of my programming
projects, have read the entirety of Pro Git, and every lab or company I have worked at has held its
codebase on GitHub, with the exception of the National Ignition Facility. For some reason, they use
Accurev, or at least they did when I worked there in 2019, which was much worse.
I understand that git is an evolving open-source system, meaning that there are going to be redundant
commands. This has resulted in many people having their own git 'workflows' consisting of somewhat
arbitrary decisions they happened to make regarding which commands to use. For example, when attempting
to squash the last n commits together, I prefer to use
git rebase -i HEAD~n
rather than
git reset --soft HEAD~n
or
git merge --squash HEAD@{n}
. Why? In part, because
I like creating a new commit message based on a concatenation of the commit messages for each commit I
am squashing, but mostly because of muscle memory. The only time I ever consciously changed which
command I use for a process was when I learned of the existence of
git switch
, and started
using it to change and create branches rather than
git checkout
. This is
because
git checkout
, as a command, is a complete and total disaster.
Git checkout seems to go out of its way to be an absolute fiasco of a command. The official git
documentation describes
git checkout
as a function to "Switch branches or
restore working
tree files." That is
already
two different things that have little to do with each other, and
it's not even the full picture of what this sprawling, overloaded command is capable of. I think a more
thorough description would be: "It moves you from one branch to another, or creates a new branch which
it moves you to, or it reverts specific files to versions of themselves n commits prior. It also can
remove all unstaged files while reverting staged files to the version of themselves when they were
staged, or it can create a detached HEAD state." What does this even mean? That is not even a coherent
sentence! Why did people decide to make a single command do
all of Git's functionality?
What is
this madness?
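If you just want the practical takeaway, here is roughly how checkout's overloaded forms line up with the newer single-purpose commands. The branch names, file name, and commit hash below are placeholders, and the equivalences are approximate (some forms differ slightly in how they touch the index):

```sh
git checkout main                 # switch branches          ->  git switch main
git checkout -b feature           # create branch and switch ->  git switch -c feature
git checkout -- notes.txt         # discard local changes    ->  git restore notes.txt
git checkout HEAD~2 -- notes.txt  # restore an old version   ->  git restore --source=HEAD~2 notes.txt
git checkout 1a2b3c4              # detached HEAD state      ->  git switch --detach 1a2b3c4
```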
Pro Git, the official book on git written by one of the founders of GitHub, fully admits that the
checkout command is an utterly cursed incarnation of Cthulhu, along with the reset function:
These commands are two of the most confusing parts of git when you first encounter them. They do
so many things that it seems hopeless to actually understand them and employ them properly.
The book then proceeds to give a thorough explanation of the checkout and reset functions (you can find
it here), which ends up being one of the best explanations I have found of how Git functions as a whole.
The reason it is such an excellent explanation of git is that the only way to really understand how
git checkout works is to fundamentally understand the internal workings of git. And that is all well and
good, but git checkout is a command taught to people who are new to git, and a command that requires
such a deep understanding of how git works should not be a general-use command.
In all seriousness, I do believe that, given a proper understanding of how git actually works,
git checkout
makes sense. If you have a good grasp of how HEAD points to the
current
branch, and the branch is just a pointer to a commit,
git checkout
can be
viewed as a
command which moves the HEAD from branch to branch or sometimes from branch to a specific commit.
Normally, I would just say that this means that programmers should develop a deeper understanding of how
git works before using it, but the reason that git is different is that the vast majority of people who
I have taught how to use git were not developers. For non-programmers, git can be useful not as a
version control system, but as a way to access GitHub.
The first time I ever explained git to somebody, they had not asked me to explain git, they had asked me
to explain GitHub. They were in a biomedical engineering lab which had decided to put all their code in
GitHub so it could be centralized, and they could all see each other's code. People didn't typically
collaborate on individual code projects so they didn't really need branches, they just needed to all
have the code in the same location so everybody could access it. The lab had asked one of the
undergraduate programmers to explain to the group how to use GitHub. They had done a bad job at
explaining how it worked, so I
explained
git init
,
git add
,
git commit
,
git push
,
and
git pull
which is a very small fraction of git functionality, but if the
only reason
that a lab uses git is to put all the code in one location, that is the extent of what you need.
The second time I explained git to somebody, it was as part of a presentation to Specere Labs at Purdue
University. I again focused on
git init
,
git add
,
git commit
,
git push
, and
git pull
. However, because it
was a fairly large lab and some
projects involved multiple people working on the same codebase, I also explained
git branch
and
git switch
because branches are useful with larger groups. However,
despite their use
of branches, I never explained to them how
git merge
and
git rebase
work
because all of their branch merging would be done on GitHub via pull requests. Merge is complicated, and
it wasn't very useful given what the lab was doing.
For these groups of people using git,
git checkout
is extremely confusing!
And they are
also the most likely to stumble upon it since rather than reading the documentation they will look up
pre-written scripts online. I know that as software engineers, we don't typically want to change
our methodologies for the sake of people who are using tools built for us despite
not being real software engineers themselves, but at this point GitHub isn't just for software
engineers.
Also, we wouldn't be losing anything because
git checkout
is a pain to us as
well!
I'm not calling for Git to remove
git checkout
. They have been pretty clear
that it isn't
something they are looking to do, since removing the backward compatibility of basic git commands would
break a lot of scripts, and they have already added the more sensible
git switch
and
git restore
. If somebody wants to update their workflow to more sane
commands, they can
simply use those two rather than the ungodly
git checkout
. Git is an
open-source project
which has to provide functional version control for groups ranging from the Linux development team, who
presumably use commands like
git rerere
and
git bisect
, to mechanical
engineering labs who just want to put MATLAB on an accessible website, and it is doing a fantastic job
of it.
The issue, however, is that most people who know git don't learn it from Pro Git, they learn it from
their friends and Stack Overflow. I used git badly for an embarrassingly long time before actually going
through the documentation and learning what each command really does. Most people are learning the
individual workflows of their friends and internet people, which often include
git checkout
. So I am not asking for an official deprecation, but rather a
communal deprecation. Just stop teaching junior developers to use checkout because chances are it will
be entombed in their muscle memory for a long while after.
If you are a person who contributes to Stack Overflow (thank you, by the way) or
gives advice to junior developers (also thank you), please stop teaching them to use
git checkout
! Teach them
git switch
and
git restore
. Or if you
don't like restore, try reset which is also somewhat cursed but less cursed than checkout. Git is
an amazing version control system, but it's better if you avoid the eldritch horrors hidden within.
This past January the new administration issued an
executive order
on Artificial Intelligence (AI), taking the place of the now rescinded
Biden-era order
, calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and remove “engineered social agendas” from the industry. This new action plan for the president is currently being developed and open to
public comments to the National Science Foundation
(NSF).
EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no
secret and untested
algorithms should decide who keeps their job or who
is denied safe haven
in the United States. Second, Generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.
Government Automated Decision Making
US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.
These harms aren’t theoretical; we have already seen a move to adopt experimental AI tools in
policing
and
national security
, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to
make decisions about their continued employment
.
Automating important decisions about people is reckless and dangerous. At best these new
AI tools are ineffective
nonsense machines which require more labor to correct inaccuracies, but at worst result in irrational and discriminatory outcomes obscured by the blackbox nature of the technology.
Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools are inappropriate or harmful.
Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a
more fair and scientific
development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.
Don’t Enable Powerful Gatekeepers
Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is
exactly what we warned against
back in 2023.
Among these dubious solutions is the growing prominence of
AI licensing schemes
which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more
lunch money to give their bully
.
This is the wrong approach. Looking for easy solutions like
expanding copyright, hurts everyone
. Particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.
People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.
What Programming Concepts do you Struggle to Grok or Use in Production?
Lobsters
lobste.rs
2025-03-13 23:36:40
For me, type magic whether Haskelly or Idrisy dependent types evades me. A lot of modern comp sci concentrates on types, which I find a slog to get through, in spite of some
helps
. I’ve worked through a few books on the topic and admire how Rust renders some errors unnecessary or how types handle some types of control flow, but in practice I only use them to optimize performance in Lisps and give compiler warnings and don’t see the value of modeling my own problems/domains with them.
I’ve also long struggled to appreciate why/when you’d opt for
OOP
outside of GUIs.
I never understood the use of NoSQL nor ORMs (although opinion’s now against them) and embrace relational algebra through Datalog and Prolog.
For about a year, I thought supply chain attacks were mostly just popular paranoia due to a recent case which was caught instantly, but I saw the light through preventative measures like vendoring which offer stability from other threats too.
I don’t understand the purpose of a fair few functions in the Go std lib e.g. the presence of both the
TrimLeft
and
CutPrefix
families vs. e.g. the Split family. It just seems inconsistent and strange to have so many in the normal string API which are easily implemented out of the same base operations in the few cases where necessary.
A few years ago, we bought a church building.
Since then, every time I mention it online and/or on social media, someone always responds, “wait, you bought a church,
what
” and then asks some standard questions. At this point it makes good sense to offer up a Church FAQ to answer some of those most common questions. Let’s begin!
Wait, you bought a church,
what?
Indeed, we bought a church.
Where?
In our town of Bradford, Ohio.
What denomination used to be there?
It’s the former home of Bradford’s Methodist congregation. The church building itself dates back to at least 1919 (that being the year of a calendar we have that features a picture of the building). There was a congregation there until at least 2016. So they got about 100 years of use out of the building.
Why did they stop using it?
The congregation shrank over time, a not uncommon occurrence for mainline protestant churches these days. As I understand it the congregation merged with another congregation down the road, which has services at a different church building. I believe the West Ohio Conference of the Methodist Church (which previously owned the building) may have rented the building for a bit after the congregation left, but when we acquired the building it was not being used, which is probably why the Methodists decided to sell it.
So do you live there?
No, we have an actual house to live in. I know old churches are frequently turned into funky residences, but reconfiguring a church to be an actual livable space on a daily basis takes a lot of effort. Our house is designed to be a residence for humans; we prefer to live in that house.
Are you going to use the building as a church and/or start a cult?
No and no. None of the Scalzis are particularly religious, especially in an organized fashion, and despite the actions of certain science fiction authors in the past offering precedent, I have no desire to start a cult. It seems like a lot of work and my ego does not run in the direction of needing acolytes.
Coffee shop and/or bookstore and/or brewpub and/or some other retail business?
I have worked hard all my life
not
to work in retail and don’t intend to start now, thank you.
Then why did you buy it?
Because we wanted office space. For a number of years Krissy and I talked about starting a company to develop creative projects that were not my novels, and also to handle the licensing and merchandising of the properties that I already had that were not already under option. That company would eventually become Scalzi Enterprises. Although I write my novels at home, we wanted to have office space elsewhere.
Why?
Because if we eventually hired other people to help us, we wanted them to have some place to work that was not our actual house. And in a general sense it would be useful to have extra space; our house is already full with a quarter-century of us living in it.
Why not get actual office space rather than a church building?
We tried, but we live in a small town without a lot of commercial real estate. We looked at a couple of buildings in town that went up for sale, but weren’t happy with their state of repair. We didn’t want to look outside of Bradford because then there would be a commute. We wanted something within a couple of miles of our house. Eventually it looked like to get what we wanted, we would have to buy a plot of land and then build on it. I went online to look at real estate websites to see what land was available, and as it happens a couple of hours prior, the Methodists put the church building up for sale. We saw the listing, made an appointment to see it that afternoon, and put in an offer when we got home from the viewing. We closed on the building in December of 2021.
What made you offer on the building?
It had everything we wanted — ample space for offices and an excellent location — and above and beyond what we already knew we wanted, when we viewed the space we saw that it offered other opportunities as well. I always wanted an extensive library, for example, and the building had a balcony area which would be perfect for one of those. The basement area would be perfect for having gatherings, and the sanctuary area was, of course, a natural place for concerts or readings or whatever else we might want to do there. And then there was the price.
How much was it?
$75,000.
WHAT
One of my favorite things to do is show the building to people who live on the coasts, ask them how much they think it cost, and watch them get angry every time I tell them to go lower. But more seriously, we knew that we wouldn’t find a better building anywhere close to us at anywhere near that price. It made
absolute
economic sense to get the building.
Usually when you get a building like this for that amount of money, it’s on the verge of falling down. Was it?
Thankfully, no. Krissy’s former job was as an insurance claims adjuster; she has certificates attesting to her ability to evaluate the soundness of structures. When we had our visit to the property, she literally climbed through the walls to see for herself what shape the building was in. Her determination: The building would need significant renovation, but fundamentally it was sound. We would need to put in money, but if the renovations were done right it wouldn’t be a money pit.
What renovations did you do?
A whole new roof, to start; now the building has a 50-year roof, which means it will almost certainly outlive me. The electricity was knob and tube and had to be redone. There was an outside retaining wall that had to be torn out and redone. The aforementioned balcony was actually not safe to be on; it was cantilevered out into space with no support and had a shin-high barrier that wouldn’t stop anyone from going over the side. That was fixed, and new floors and custom bookcases by a local artisan built in so I could have my library. The basement floor was redone; the kitchen space down there gutted and remodeled. We pulled up high-traffic industrial carpet glued to the sanctuary floor and reconditioned the hardwood floors underneath. New HVAC, and improved drainage for the maintenance room. The office and Sunday school room in the basement was turned into a guest suite. The structure was sealed against moisture and the walls were all replastered and repainted.
And so on. None of that was cheap, nor was it done quickly; the renovations took two years. Both the time and cost were affected by the work being done during the pandemic, but no matter what it would have been a laborious and expensive process. It was worth it.
Did you do any of that work yourself?
Oh,
hell
no; I’m not competent to do anything but sign checks. We had contractors do everything, and Krissy, who had 20 years of dealing with contractors in her previous job, managed the renovation on our end. She
terrified
them.
Are the renovations complete?
The major ones, yes. There are a few things to do but they are second order tasks. I want to recondition the old pastor’s study, get the organ functional again, and we want to make the sanctuary level more easily accessible via ramps and such. But all of that can be dealt with over time. At this point, most of what we wanted and needed to do is done, and we are able to use the building how we intended.
What did the people of Bradford think about you buying the church?
By and large the response was positive. We’ve lived in town since 2001 so we weren’t an unknown quantity; everyone here knows us. There was some concern that someone might buy the church for the land underneath it, tear it down and then put up, like, a check-cashing store or a vape shop. So when we bought it and stated our intention to renovate and maintain the building, there was some measure of communal relief. When the renovations were done we held an open house for the community so they could see what we’d done with the place. Most people seemed happy with it.
Likewise, we have the intent of keeping the space a part of the community, and not just as our office space. From time to time we plan to have events there (concerts, readings) that will be free and open to the town, sponsored by the Scalzi Family Foundation (yes, we have a foundation; it’s easier to do a lot of charitable things that way). The building will still be part of the civic life of Bradford.
Does this mean you are going to make the building available for event rentals?
Probably not. It’s one thing to offer private events, funded by our foundation, that are open to the townsfolk. It’s another thing to offer the space up as a commercial venue. One, that’s a lot more work for us, and two, we would have to make sure the building was up to code as rental space, which would entail more renovations and cost. We occasionally get inquiries and we’ve politely turned them down and are likely to continue to. There are other event spaces in town, from the community center to the local winery, and we encourage people to give them their money.
But you
have
used it for gatherings, yes?
Sure. My wife threw me a surprise party there for this blog’s 25th anniversary, and when we held an eclipse party in 2024, we had the pre- and post- parties at the church. The last couple of family reunions have been held at the church, and we hold Thanksgiving and Christmas parties there as well. Having a gathering at the church is much less stressful than having it at our house. People aren’t in our personal space, the pets don’t freak out, and people with allergies to cats and dogs don’t have to worry about sneezing. It works out great. Also, when people come to visit, they have the option of staying in the guest suite at the church instead of our more crowded (and cat hair-laden) guest room. So that’s a plus too.
You said something about getting the organ functional again. Do you have, like, a pipe organ?
We do, sort of. The pipes are there, but the organ hasn’t been attached to them for years, possibly decades. The organ itself (which played through a speaker) is also not functioning, and I need to get in touch with someone to repair it. Actually reattaching it to the pipes and making the whole thing work again would be an extremely expensive endeavor and would probably cost as much as — if not more than — it cost us to buy the church. Pipe organs are an expensive hobby, basically. I’m not sure I’m ready to commit to that.
Does the church have a bell? And do you ring it?
It does have a bell, and we ring only very occasionally. We don’t want to annoy the neighbors.
Is the church haunted?
We have been told by former parishioners that it is, but I have not met the local ghosts yet. Perhaps they are waiting to see if I am worthy.
Isn’t it a little…
quirky
to own a church?
I mean, yes. I’ve noted before that now I’ve become a bit of a cliche, that cliche being the eccentric writer who owns a folly. Some own theaters and railways, some own Masonic temples, some own islands. I own a church. In my defense, I had a functional reason to own it, noted above, and I didn’t spend a genuinely silly amount of money for it, also noted above. As a folly, it is both practical and affordable.
What do you call the church now?
Not
“Church of the Scalzi,” which is actually the name of a church in Venice, Italy. Its formal name now is “The Old Church.” But for day-to-day use we just say “the church.”
Hey, have you ever heard of that song, “Alice’s Restaurant”?
Yes, I have, and everyone thinks they are being terribly clever when they reference it to me. After the first thousand times it wears a smidge thin.
Is it true that your six-necked guitar now resides at the church?
It is true:
The Beast
, as it is called, and was called long before I owned it, currently resides on the altar. It surprises people every time they see it.
Are you
sure
you’re not starting a cult?
I’m sure. Besides, who would want to worship me? Krissy, maybe. Me, nah.
And that’s it for now. If there are more questions I think need to be in the FAQ, I’ll add them as I go along.
— JS
OpenAI declares AI race "over" if training on copyrighted works isn't fair use
OpenAI is hoping that Donald Trump's AI Action Plan, due out this July, will settle copyright debates by declaring AI training fair use—paving the way for AI companies' unfettered access to training data that OpenAI claims is critical to defeat China in the AI race.
Currently, courts are mulling whether AI training is fair use, as rights holders say that AI models trained on creative works threaten to replace them in markets and water down humanity's creative output overall.
OpenAI is just one AI company fighting with rights holders in several dozen lawsuits, arguing that AI transforms copyrighted works it trains on and alleging that AI outputs aren't substitutes for original works.
So far, one landmark ruling favored rights holders, with a judge declaring AI training is not fair use, as AI outputs clearly threatened to replace Thomson Reuters' legal research platform Westlaw in the market, Wired
reported
. But OpenAI now appears to be looking to Trump to avoid a similar outcome in its lawsuits, including a
major suit brought by The New York Times
.
"OpenAI’s models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights," OpenAI claimed. "This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works."
Providing "freedom-focused"
recommendations
on Trump's plan during a public comment period ending Saturday, OpenAI suggested Thursday that the US should end these court fights by shifting its copyright strategy to promote the AI industry's "freedom to learn." Otherwise, the People's Republic of China (PRC) will likely continue accessing copyrighted data that US companies cannot access, supposedly giving China a leg up "while gaining little in the way of protections for the original IP creators," OpenAI argued.
On December 22, 2008, Ansol Clark woke to a ringing phone. It was sometime before 6 a.m., far earlier than he had intended to get up. He drove construction trucks for a living, but he’d been furloughed recently, leaving him little to do in the three days before Christmas except wrap gifts and watch movies with his grown son, Bergan. The house was dark. Janie, Ansol’s wife of thirty-six years, slipped out of bed, stepped across the bedroom, and disappeared through the doorway that led into the kitchen. A light clicked on, and the ringing stopped. She returned to the bedroom. “It’s for you,” she told Ansol.
In the kitchen, Ansol, groggy, picked up the landline from its place atop the bread box. You need to get up, a man told him, and you need to get up right away. Ansol recognized the caller’s voice: it belonged to his supervisor, a general foreman named Tim Henry. “Get to Kingston,” Henry added. “They’ve had a blowout down here.”
Ansol, needing no further explanation, hung up. Janie rushed to brew a thermos of coffee and wrap up biscuits while Ansol pulled on his work clothes: jeans, a blue down jacket, a neon hard hat, muck boots. At fifty-seven, he was a bull of a man, strong from hauling around equipment all day and from spending much of his free time hunting on the Cumberland Plateau; he liked to pick off squirrels with a .22 Magnum rifle. He, Janie, and Bergan lived in Knoxville, Tennessee, in a one-story brick home three miles from the farm where Ansol grew up. Within half an hour of receiving Henry’s call, he was pulling his Chevy S10 pickup truck out of the driveway and heading toward the Kingston Fossil Plant.
He made his way by headlights, following I-40 west out of town. He knew the route well. After a decade crisscrossing North America in a tractor-trailer, he’d joined the Teamsters labor union in 2000 and had spent much of the past five years working at the Kingston Fossil Plant, a coal-fired power station forty miles outside downtown Knoxville, in Roane County. Built and managed by the Tennessee Valley Authority (TVA), a giant federally-owned power company, the Kingston facility was, at the time of its completion in 1954, the world’s largest coal-fired power station. It burned fourteen thousand tons of coal a day, enough to fill one hundred and forty train cars and to power some seven hundred thousand homes.
With no traffic, Ansol reached the plant within thirty minutes. He met Henry in a parking lot; a few other Teamsters and some equipment operators showed up around the same time. In a shipping container used to store tools, the group waited for sunrise, anxious to see what they were up against. Ansol had something of an idea. Each day, the Kingston plant generated a thousand tons of coal ash, the sooty by-product of burning coal to produce electricity. Much of this ash ended up in an unlined pit in the ground, known as a holding pond. But Kingston’s “pond” was not really a pond, at least not anymore. In the 1950s, TVA started flushing coal ash into a spring-fed swimming hole, and, in the decades since, the ash had displaced the water and grown into a mountain, sixty feet tall, covering eighty-four acres, and situated at the confluence of two rivers, the Emory and the Clinch. It was a precarious setup, one made more so by the dike that contained all this ash—an earthen embankment made not of concrete or steel but of clay and bottom ash, a coarse, sandlike component of coal ash.
By that December morning in 2008, the ash mound had become a topographical feature so large that marathoners ran up it for training. TVA planted grass over it, so it resembled a hill, and bulldozers sculpted neat tiers into the sides, reminiscent of those of a stepped pyramid. But the mound lacked the sturdiness that its size suggested. Two years earlier, Ansol had helped to repair a minor breach in the dike that had sent coal ash rushing into a ditch. When he walked on the mound, the ground had jiggled like a water bed; when he drove his truck along the dike’s sloped sides, he went as slowly as possible, to avoid causing another rupture. He’d once told a coworker that the dike would surely fail one day. Now, it seemed, that day had come.
At first light, Ansol and Henry climbed into a pickup and drove up the back side of a large wooded bluff that overlooked the coal-ash mound. The sight at the top would stick in Ansol’s mind until the day he died.
Hours earlier, as the clock approached midnight, a stiff wind blew over the rolling countryside that surrounded the Kingston Fossil Plant. It was the winter solstice, and, in the quiet of that long night, the old farmhouses, country churches, trailer homes, and lakeside estates that dotted the hills and coves around the power station were dark and still. Christmas trees glowed in living rooms.
The Year Without a Santa Claus
played on TVs. Outside, the temperature hovered around 14 degrees Fahrenheit, cold enough to freeze the pipes at the McDonald’s in town.
Then, shortly before one o’clock, the north section of the coal-ash dike suddenly and almost wholly collapsed. When it did, more than a billion gallons of coal-ash slurry—about fifteen hundred times the volume of liquid that flows over Niagara Falls each second—broke forth. A black wave at least fifty feet high rushed northward with the power and violence of water punching through a dam.
A backwater inlet separated the coal-ash pond from a small peninsula along the Emory River. A few dozen homes sat along the water’s edge. As the black wave roared ahead, shaking the earth, it overwhelmed the inlet and slammed into the peninsula. Most of the slurry, nearly six million tons in all, forked right around the obstruction and rushed into the Emory River, filling in a forty-foot-deep channel. The rest of the ash, nearly three million tons, filled in two sloughs on either side of the peninsula, hurling fish forty feet onto the riverbank. The wave kept going. It downed trees, covered roads, and knocked out power lines. Docks were inundated; boats were carried away; soccer fields were smothered.
The wave first collided into the home of fifty-three-year-old James Schean. With a thundering boom, it ripped Schean’s vinyl-sided place off its foundation, upturning furniture, crushing door frames, and knocking over Christmas presents. The rafters cracked.
Down the road, Chris and DeAnna Copeland, a married couple with two young daughters, were awakened by the din. Chris, a firefighter, glanced out a window. A black torrent was flowing across his backyard in the moonlight. Trees bobbed in the muck. He got up and flipped a light switch. No power. In the dark, he pulled on a T-shirt and pants. Downstairs, he called 911 on his cell phone. There was a landslide, he said, or maybe something else. It was hard to tell. Then he hung up, grabbed a flashlight, and bolted outside.
Then, as now, the Kingston Fossil Plant had twin thousand-foot-high smokestacks that towered over the forested countryside. Far across the inlet that separated Copeland’s home from the plant property, he could see flashers blinking, as always, at the top of the two stacks, but there was no other light beyond that of the moon. He swept the ground with his flashlight. Debris and mud everywhere. A musty smell. Then he realized: it wasn’t mud; the coal-ash dike had failed. He owned a white Suzuki Sidekick. He climbed in, cranked the engine, drove into the backyard, and pointed the high beams toward the plant. A sea of coal ash had inundated the inlet. The black wave had dissipated within a minute, but smaller slides would continue for an hour or more.
Copeland’s next-door neighbor, Jeff Spurgeon, soon emerged from his home carrying a flashlight. The two men had known each other for years. The ash had washed within feet of their homes, but somehow it hadn’t touched either. Copeland jumped out of his Suzuki, and together he and Spurgeon hurried toward the river, knowing that their neighbors would need help.
The first 911 calls poured in shortly before one o’clock, and emergency personnel took swift action. By 1:06 a.m., police had blocked off the long, curving, tree-lined road to the Kingston Fossil Plant. Members of the county emergency-management team arrived minutes later. The TVA staffers who were on shift at the plant quickly realized that the dike had at least partly collapsed, but they had no idea how severe the blowout might be. Someone needed to check. A shift supervisor hurried to the plant’s loading dock, climbed into a TVA pickup, and rattled toward the coal-ash pond, a mile or so down the road. It was impossible to see much as the truck’s headlights carved through the darkness. Leafless trees, heavy shadows. Then the high beams fell upon something in the road—a deer, half buried in sludge, struggling to escape. The supervisor threw the truck into reverse. He knew that the dike had not sprung a leak or suffered a minor breach, as it had in the past. The dike, and nearly all the coal ash it held, was gone. “This is unbelievable,” another TVA employee scribbled in a log that morning. “We did not expect this.”
The two neighbors, Copeland and Spurgeon, reached a white, two-story home at the tip of the peninsula, directly across from the holding pond. A Christmas wreath hung between two windows, but little else was normal. Sludge had buried the home nearly above the porch, shattering the front windows. The place belonged to an elderly couple named Janice and Perry James. As Copeland crossed the yard, he called out to them, then he took another step and sank—muck to his waist. He struggled for several long moments to free himself. Once he recovered his balance, he was able to see Janice James upstairs, shining a flashlight through a window—a relief. She would later tell reporters that she had watched as dark slurry poured in under the front door, filling her living room and sunroom. Through a window, James shouted to Copeland and Spurgeon that she was okay and that her husband was away on business. Copeland asked her not to go downstairs. A rescue squad was en route, he said, and he and Spurgeon needed to check on Mr. Schean.
In the Suzuki, Copeland and Spurgeon sped toward Schean’s place, a quarter of a mile down the road. It was here where the black wave had first collided with the land. Sludge and tree limbs and debris covered the asphalt. Soon gray mounds of ash, at least ten feet tall, blocked the way. Copeland parked at the top of a hill, then he and Spurgeon bushwhacked through a dark wood to the house. The wave had dragged Schean’s little home sixty-five feet off its foundation and thrust it against an embankment. The red-shingled roof had partially caved in, and boards and concrete blocks littered the ground. Schean’s shed was gone, along with the tree where he chained his dog, along with the dog itself. The wave had tossed around Schean’s two-door pickup as if it were a child’s plaything.
The two men hollered out for Schean, and a voice yelled back. They followed the sound to a bedroom window and shined in their lights. Schean, sitting in the dark and somehow uninjured, wore a blank, dazed look. He worked as a boilermaker at the Kingston plant, and yet he said he had no idea what had happened, and he hadn’t dared to venture outside alone.
After offering a brief explanation, Copeland and Spurgeon tried to open the window, but it wouldn’t budge. The house popped and cracked, as if it might collapse. The two men found a board and shattered the glass and lifted Schean to safety. He had on a shirt and pants but neither shoes nor a coat. With the temperature still well below freezing, the sludge would soon crust over with ice. They needed to go.
Hours later, standing at the top of the wooded bluff, Ansol Clark wasn’t aware of any of this, but, as he surveyed the ruined landscape, it was clear that TVA had made a tremendous mess. By sunup, helicopters whirled overhead, as local, state, and federal agencies assessed the biblical scope of the breach. Three hundred acres lay buried in ash, a foot deep in spots, six or more in others. The spill would prove to be nearly a hundred times larger than the 1989 Exxon Valdez oil spill, and it would rank as the single largest industrial disaster in U.S. history in terms of volume. The sludge could have filled the Empire State Building nearly four times over.
Wanting to see more, Ansol and Henry, the general foreman, drove around the site’s perimeter. Thirty-foot chunks of ash towered over the sludge. Fish flopped on the ground. Police officers shot deer trapped in the ash. Geese writhed crazily; their carcasses would soon pile up. Ansol had never seen anything like it.
Motoring slowly, he and Henry reached the crushed home of James Schean. First responders had already delivered Schean to a nearby community college to sleep and warm up. Ansol and Henry climbed from their vehicle and waded across the yard, hunting for firm footing in the muck. They entered through the kitchen to silence. Dishes lay shattered on the floor. In an adjacent room, a Christmas tree was submerged in ash; wrapped gifts floated in the slurry.
How could this happen? Ansol wondered.
All told, the breach damaged or destroyed twenty-six homes, and rescue teams had to evacuate twenty-two people. TVA staff quickly reserved hotel rooms for them and handed out gift cards to be used in restaurants and to replace ruined Christmas gifts. But the biggest concern in the first chaotic hours that morning was finding whoever might be buried under the coal ash. Nearly everyone on-site, including Ansol Clark, expected to find bodies.
Move over, Apple: Meet the alternative app stores coming to the EU
People in the European Union are now allowed to access alternative app stores thanks to the Digital Markets Act (DMA), a regulation designed to foster increased competition in the app ecosystem. Like Apple’s App Store, alternative app marketplaces allow for easy access to a wider world of apps, but instead of the apps going through Apple’s App Review process, the apps on these third-party marketplaces have to go through a notarization process to ensure they meet some “baseline platform integrity standards,” Apple says — like being malware-free. However, each store can review and approve apps according to its own policies. The stores are also responsible for any matters relating to support and refunds, not Apple.
To run an alternative app marketplace, developers must accept Apple’s alternative business terms for DMA-compliant apps in the EU. This includes paying a new Core Technology Fee of €0.50 for each first annual install of their marketplace app, even before the threshold of 1 million installs is met, which is the bar for other EU apps distributed under Apple’s DMA business terms.
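To make the arithmetic concrete, here is a rough sketch (my own back-of-the-envelope reading of the terms as described above, not Apple's official fee calculator) of how the Core Technology Fee scales with installs:

# Back-of-the-envelope sketch of the fee terms described above, purely
# illustrative: EUR 0.50 per first annual install. Marketplace apps pay from
# the first install; other apps under the DMA terms only pay on installs
# beyond the 1 million threshold.
CTF_PER_INSTALL_EUR = 0.50
FREE_THRESHOLD = 1_000_000  # applies to regular apps, not marketplace apps

def core_technology_fee(first_annual_installs: int, is_marketplace: bool) -> float:
    """Estimated annual CTF in euros for a given number of first annual installs."""
    if is_marketplace:
        billable = first_annual_installs
    else:
        billable = max(0, first_annual_installs - FREE_THRESHOLD)
    return billable * CTF_PER_INSTALL_EUR

# A marketplace app with 2 million first annual installs:
print(core_technology_fee(2_000_000, is_marketplace=True))   # 1000000.0 EUR
# A regular app with the same installs under the DMA terms:
print(core_technology_fee(2_000_000, is_marketplace=False))  # 500000.0 EUR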
Despite the complicated new rules, a handful of developers have taken advantage of the opportunity to distribute their apps outside of Apple’s walls.
Below is a list of the alternative app stores iPhone users in the EU can try today.
AltStore PAL
Image Credits: AltStore
Co-created by developer Riley Testut, maker of the Nintendo game emulator app Delta, the AltStore PAL is an officially approved alternative app marketplace in the EU. The open source app store will allow independent developers to distribute their apps alongside the apps from AltStore’s makers, Delta, and a clipboard manager, called Clip.
Unlike Apple’s App Store, AltStore apps are self-hosted by the developer. To make this work, developers download an alternative distribution package (ADP) and upload it to their own server, then create a “source” that users add to the AltStore to access their apps. That means the only apps you’ll see in the AltStore are those you’ve added yourself.
Some popular apps that users are adding include the virtual machine app UTM, which lets you run Windows and other software on iOS or iPad; OldOS, a re-creation of iOS 4 that’s built in SwiftUI; Kotoba, the iOS dictionary available as a stand-alone app; torrenting app iTorrent; the qBittorrent remote client for iOS devices called qBitControl; and social discovery platform PeopleDrop.
Setapp Mobile
Image Credits: Setapp
MacPaw’s Setapp became one of the first companies to agree to Apple’s new DMA business terms to set up an alternative app store for EU users. The company has long offered a subscription-based service featuring a selection of curated apps for customers on iOS and Mac. Following the implementation of the DMA, it released Setapp Mobile, an alternative app store for iOS users only in the EU. Similar to its other subscription offerings, the new app store includes dozens of apps under a single recurring subscription price, and the number of apps grows over time. The apps are free from in-app purchases or ads and are generally considered high quality; however, it doesn’t include big-name apps like Facebook, Uber, Netflix and others.
Setapp Mobile is available to users on the “Power User” and “AI Expert” Setapp subscription plans for free. Otherwise, users can sign up via a new “iOS Advanced” plan that includes both the iOS app from Setapp’s main subscription and Setapp Mobile at $9.99/€9.49 monthly or $107.88/€102.48 yearly.
In addition, all Setapp subscribers (except for “Family” and “Teams”) can try Setapp Mobile for free during the invite-only beta period.
Epic Games Store
Fortnite maker Epic Games launched its alternative iOS app store in the EU on August 16, allowing users to download games, including its own Fortnite and others like Rocket League Sideswipe and Fall Guys, with more to come. The company said it’s also bringing its games to other alternative app stores, including AltStore PAL, which it’s now supporting via a grant, as well as Aptoide’s iOS store in the EU and ONE Store on Android.
The move to launch Fortnite in the alternative iOS marketplace comes more than four years after Apple removed the game from its App Store over policy violations, ahead of Epic’s legal challenge to the alleged App Store monopoly. While U.S. courts decided that Apple was not engaged in antitrust behavior, the lawsuit did pave the way for developers to link to their own websites for a reduced commission.
Aptoide
Image Credits: Aptoide
An alternative game store for iPhone, Lisbon-based Aptoide is an open source solution for app distribution. The company, already known for its Google Play alternative, says it scans the apps to ensure they are safe to download and install.
The iOS version of the Aptoide store launched as an invite-only beta in June, so you’ll need to put your email on a waitlist to get the access code. As a free-to-use store, Aptoide doesn’t charge its users to cover its Core Technology Fee paid to Apple, but takes a 10% to 20% commission on in-app purchases on iOS, depending on whether they were generated by the marketplace or not.
Across all platforms, including Android, web, car and TV, Aptoide offers 1 million apps to its over 430 million users.
Mobivention marketplace
Image Credits: Mobivention
A B2B-focused app store, the Mobivention marketplace allows EU companies to distribute their internal apps that are used by employees, but can’t — or shouldn’t — be published in Apple’s App Store. The company also offers the development of a customized app marketplace for companies that want to offer employees their own app store just for their corporate apps. Larger companies can even license Mobivention’s technology to more deeply customize the app marketplace to their own needs.
Skich
Image Credits: Skich
In March, Skich announced the launch of an alternative app store for EU users, which differentiates itself by offering a Tinder-like interface for app discovery. That is, users swipe right to “match” with apps they might enjoy. They can also create playlists and see what apps their friends are playing. The new store will replace Skich’s existing app and will see the company taking a 15% commission on all purchases. The store has not yet filled with apps, however. Instead, the company will market the offering to developers at the Game Developers Conference (GDC) and hopes to add titles later in March.
Sarah has worked as a reporter for TechCrunch since August 2011. She joined the company after having previously spent over three years at ReadWriteWeb. Prior to her work as a reporter, Sarah worked in I.T. across a number of industries, including banking, retail and software.
Keep your email professional and private using Proton Mail with your domain name from Porkbun! It’s the ultimate secure email hosting solution. You get a professional, personalized email address backed by state-of-the-art security and privacy features — all at special low prices for Porkbun customers starting at just $5.99/month.
Proton Mail Plans
Note: Prices listed are in USD. Pricing at Proton may appear different based on your selected currency. First-year prices apply to new paid Proton customers only.
Plan 1: $5.99 per user, per month (regularly $7.99; save 25%). Billed annually, $71.88 for first year; renews at $83.88/year. Encrypted email and calendar to get you started. Includes:
- 15 GB storage per user
- 3 custom email domains
- Secure personal and shared calendar
- Cloud storage and sharing for large files
- Proton Scribe writing assistant

Plan 2: $7.99 per user, per month (regularly $10.99; save 27%). Billed annually, $95.88 for first year; renews at $119.88/year. Enhanced security and premium features for teams. Includes:
- 50 GB storage per user
- 10 custom email domains
- Secure personal and shared calendar
- Cloud storage and sharing for large files
- Advanced account protection
- Manage user permissions and access
- Proton Scribe writing assistant

Plan 3: $9.99 per user, per month (regularly $14.99; save 33%). Billed annually, $119.88 for first year; renews at $155.88/year. All Proton business apps and premium features to protect your entire business.
Why Choose Proton Email Hosting With a Porkbun Domain?
Proton Mail is more than just an email hosting service — it’s the best in secure email communications. Unlike other large email providers, who scan the content of your inbox to profit from your data, Proton Mail’s end-to-end encryption and zero-access encryption mean only you can see your emails. It ensures your information remains private and protected.
Top Features of Proton Mail
End-to-end encryption
Your emails are encrypted from the moment you hit send, ensuring only your intended recipient can read them.
Zero-access encryption
Even Proton Mail’s servers can’t access your messages. Your privacy is their priority.
Open source and audited
Proton Mail’s code is open for review, giving you transparency and confidence in their security measures.
Anti-phishing tools
Protect yourself from cyber threats with built-in defenses against phishing and spam.
Benefits of Using Proton Mail with Your Porkbun Domain Name
The partnership between Porkbun and Proton Mail is bringing you the best-in-class services of both to give you the best experience possible. Pairing Proton Mail with a domain name from Porkbun gives you the perfect combination of professionalism, personalization, and peace of mind:
Professional branding
Create a custom email address unique to you, such as you@yourdomain.com.
Easy integration
Easily set up your Proton Mail email address with your Porkbun domain name.
Unmatched privacy
All of your communications stay private with Proton Mail's encryption technology.
Spam-free experience
Say goodbye to cluttered inboxes with Proton Mail’s robust spam filters.
Global accessibility
Secure access from anywhere, on any device, with Proton Mail’s intuitive apps.
Get Started With Porkbun and Proton Mail
Secure your digital communications and boost your online presence with Proton Mail and Porkbun. Whether you’re an entrepreneur, a freelancer, or just someone who values privacy, this is the email solution you’ve been waiting for.
Ask HN: How to study for system design that doesn't include front end/back end?
I got a system design interview where I was asked to build a pricer for a financial product, then expand it to multiple pricers that might share inputs.
This was not the typical system design interview where you have to deal with APIs, load balancers, latency, etc.
I have studied that a lot, but what about these types of general interviews?
One general tip is to always ask (credible) clarifying questions. A lot of interviewers look out for this - they want to see you dissect the requirements a little bit, not just jump straight to solving the problem based on a bunch of assumptions you've made that might not be correct.
APIs are just building blocks and blobs of single-ish responsibility. It's good to ask lots of different questions to understand the actual question being asked, what the desired capabilities of the thing being asked are, etc.
From there, you should be able to start laying out the different components and how they interact.
Edit: here are some areas of consideration when designing a system:
* what does this thing do?
* what are the expected inputs and outputs of the whole system?
* what operations are we wanting to be able to perform against it?
* how frequently will each of these operations be done?
* how quickly do we want a response from this whole system?
* what different sorts of integrations do we expect for this system? Any?
* how often do we expect the system to change?
* where do we expect to expand the system the most in the future? What will that look like?
I'm sure others will have many, many more things to add
IMO domain knowledge is crucial here. I don’t know anything about pricers or financial products, so it's hard to reason about the system.
But to give general advice, I’d approach it by trying to break down the domain into concepts, then think about how those concepts can be turned into abstractions. Then you can think about relationships between those abstractions and engineering solutions for those relationships.
Also, systems all take inputs and produce outputs. Validating those would be another interesting aspect, given it's finance-related.
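As a concrete illustration of that "concepts into abstractions" advice, here is a minimal, hypothetical Python sketch of a pricer design with shared inputs; every name in it is made up for the example rather than taken from any real pricing system:

# Hypothetical sketch: pricers declare which market inputs they need, and a
# shared input cache fetches each input once per run so multiple pricers can
# reuse it. Toy formulas only; nothing here is a real pricing library.
from typing import Callable, Dict

class InputCache:
    """Fetches each named market input at most once and shares the value."""
    def __init__(self, sources: Dict[str, Callable[[], float]]):
        self.sources = sources
        self._values: Dict[str, float] = {}

    def get(self, name: str) -> float:
        if name not in self._values:
            self._values[name] = self.sources[name]()  # fetch on first use
        return self._values[name]

class BondPricer:
    requires = ("yield_curve_5y", "face_value")  # declarative metadata
    def price(self, inputs: InputCache) -> float:
        # Toy formula: discount the face value at the 5-year yield.
        return inputs.get("face_value") / (1 + inputs.get("yield_curve_5y")) ** 5

class FuturePricer:
    requires = ("spot", "yield_curve_5y")
    def price(self, inputs: InputCache) -> float:
        # Toy cost-of-carry formula sharing the same curve input.
        return inputs.get("spot") * (1 + inputs.get("yield_curve_5y"))

if __name__ == "__main__":
    cache = InputCache({
        "yield_curve_5y": lambda: 0.03,   # pretend these hit a market-data service
        "face_value": lambda: 100.0,
        "spot": lambda: 50.0,
    })
    for pricer in (BondPricer(), FuturePricer()):
        print(type(pricer).__name__, round(pricer.price(cache), 4))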
When you get a question like this, you just reduce everything down to database design.
Most systems are just wrappers around a database. You can tell because if you take away the database the system does nothing useful.
Financial products are exclusively priced in databases.
I did this in an interview once, failed miserably and got the feedback, “we don’t care about the database.”
I personally feel you aren’t wrong but you also aren’t aligned with what they’re looking for, like I wasn’t when I failed this interview.
Since you’re reading this essay, you probably already know about the mathematical holiday called Pi Day held on March 14th of each year in honor of the mystical quantity π = 3.14…. Pi isn’t just a universal constant; it’s trans-universal in the sense that, even in an alternate universe with a different geometry than ours, conscious beings who wondered[1] about the integral of sqrt(1 – x²) from x = –1 to x = 1 would still get — well, not 3.14…, but exactly half of it, or 1.57…. Therein lies a catch in the universality of pi: why should 3.14… be deemed more fundamental than 1.57… or other naturally-occurring[2] pi-related quantities?
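If you want to see that half-of-pi show up numerically, here is a quick sanity check (mine, not part of the original essay) that approximates the integral with a Riemann sum:

# Numerical check that the integral of sqrt(1 - x^2) from -1 to 1 -- the area
# of a unit half-disk -- comes out to about 1.57, i.e. half of 3.14....
from math import sqrt

def half_disk_area(steps: int = 1_000_000) -> float:
    """Midpoint Riemann sum of sqrt(1 - x^2) over [-1, 1]."""
    width = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * width   # midpoint of the i-th strip
        total += sqrt(1.0 - x * x) * width
    return total

print(half_disk_area())  # ~1.5707963...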
I suspect that, even if we limit attention to planets in our own universe harboring intelligent beings that divide their years into something like months and their months into something like days, many of those worlds won’t celebrate the number pi on the fourteenth day of the third month. It’s not just because 3.14 is a very decimal-centric approximation to pi (is there a reason to think that intelligent beings tend to have exactly ten fingers or tentacles or pseudopods or whatever?). Nor is it just because interpreting the “3” as a month-count and the “14” as a day-count is a bit sweaty. And it’s not just because holding a holiday to celebrate a number is a weird thing to do in the first place. It’s also because, in our own world, we came close to having a different multiple of pi serve as our fundamental bridge between measuring straight things and measuring round things.
BECOMING A NUMBER
Pi is sometimes called Archimedes’ constant[3] because Archimedes was the first mathematician to describe a procedure for estimating pi to any desired level of approximation. Archimedes applied his method to show that pi was between 223/71 and 22/7. Incidentally, Archimedes didn’t think of 22/7, 223/71, or pi as numbers; to him, they were ratios of magnitudes, and the last of them had to be treated geometrically rather than numerically.[4]
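For the curious, here is a small sketch in modern notation (which Archimedes himself would not have used) of the side-doubling idea behind his procedure, using inscribed polygons in a unit circle to produce ever-better lower bounds:

# For a unit circle, an inscribed regular hexagon has side 1, and doubling the
# number of sides follows the recurrence s_{2n} = sqrt(2 - sqrt(4 - s_n^2)),
# so n * s_n / 2 is an ever-improving lower bound on pi. Circumscribed
# polygons give the matching upper bounds.
from math import sqrt

n, s = 6, 1.0            # inscribed hexagon in a circle of radius 1
for _ in range(5):       # 6 -> 12 -> 24 -> 48 -> 96 -> 192 sides
    print(f"{n:4d}-gon lower bound: {n * s / 2:.5f}")
    s = sqrt(2 - sqrt(4 - s * s))
    n *= 2
print(f"{n:4d}-gon lower bound: {n * s / 2:.5f}")
# Archimedes stopped at the 96-gon, where his rational lower bound was
# 223/71 = 3.14085....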
But the key thing is what Archimedes did not do: he did not show that pi lies between 314/100 and 315/100 (corresponding to the modern assertion 3.14 < π < 3.15) or between any other pair of fractions with powers of ten as their denominator. There was no reason for him to do so; the decimal system, with its built-in fixation on powers of ten, was far off in the future.
Two millennia after Archimedes, Simon Stevin’s manuals De Thiende (The Tenth) and L’Arithmétique (Arithmetic), published in 1585, brought Europe into the decimal age. Stevin explicitly renounced the old distinction between numbers and ratios, proclaiming emphatically if obscurely “There are no numbers that are not comprehended under number.” In Stevin’s system, integers, fractions, and irrational numbers could all dine together at the same table and participate in the operations of arithmetic by way of their decimal representations.
Stevin didn’t specifically discuss pi, but his framework encouraged mathematicians to think about pi as a number. One such mathematician was Ludolph van Ceulen, who spent decades applying Archimedes’ method to analyze polygons with enormous numbers of sides, obtaining before his death 35 decimal digits of pi. After van Ceulen’s death in 1610, the digits he’d calculated were inscribed on his tombstone and pi was dubbed “the Ludolphine number” in some parts of Germany and the Netherlands.[5]
So now 3.14… was acknowledged as a bona fide number. (Originally I was going to write a blog post called “The Velveteen Ratio; or, How Numbers Become Real”, but that’ll have to wait for some other occasion.)
WRONG NUMBER?
Yes, 3.14… was finally a number, but was it the right number to look at?
William Oughtred, in his 1631 work Clavis Mathematicae (The Key of Mathematics), used the notation “π/δ” where π is the circumference of a circle and δ its diameter; that quotient is our friend 3.14…. On the other hand, James Gregory, in his own 1668 book Geometriae Pars Universalis (The Universal Part of Geometry), focused on “π/ρ” instead, where ρ is the radius of the circle; this quotient is 3.14…’s relative 6.28…. Although the two mathematicians focused on two different ratios, for both Oughtred and Gregory, “π” did not denote a number; it denoted the circumference of a circle of arbitrary size. (I obtained much of the information for this section from Jeff Miller’s document Earliest Uses of Symbols for Constants.)
In 1671, Gregory went on to derive a formula[6] for one-fourth of the modern pi constant as 1 − 1/3 + 1/5 − 1/7 + …. This formula implies that pi itself is 4 − 4/3 + 4/5 − 4/7 + …. I don’t know if you’ll agree, but I find 1 − 1/3 + 1/5 − 1/7 + … a much prettier sum than 4 − 4/3 + 4/5 − 4/7 + …. Perhaps Gregory too felt that way, and maybe he even wondered from time to time whether 1 − 1/3 + 1/5 − 1/7 + …, or .79…, might be the truly fundamental numerical characteristic of circles.
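A few lines of code (my own illustration, not Gregory's) show how slowly those partial sums creep toward 0.785… and, times four, toward 3.14…:

# Partial sums of Gregory's series 1 - 1/3 + 1/5 - 1/7 + ...
def gregory_partial_sum(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)   # 1, -1/3, +1/5, -1/7, ...
    return total

for terms in (10, 1_000, 100_000):
    quarter = gregory_partial_sum(terms)
    print(f"{terms:>7} terms: {quarter:.6f}  (times 4: {4 * quarter:.6f})")
# The series converges painfully slowly, which is part of its charm.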
In 1706 William Jones, in his Synopsis Palmariorum Matheseos (later published in English as A New Introduction to the Mathematics), followed in Oughtred’s footsteps but with a notational twist: he proposed that “π” should denote not the circumference of a circle but the ratio of the circumference to the diameter.
The greatest mathematician of the 18th century, Leonhard Euler, followed Jones in adopting “π” as a dimensionless constant, but he took his time figuring out which constant it should be. In 1729, he used “π” to denote the number 6.28… (using the symbol “p” to denote 3.14…). In the late 1730s, Euler switched to using “π” for 3.14…, but he switched back to using “π” for 6.28… again in 1747. Along the way he wrote the formula for the area of the circle as “A = C r / 2” instead of “A = πr²”, skirting the use of the symbol “π”. Then in 1748, in his deeply influential Introductio in analysin infinitorum (Introduction to the Analysis of the Infinite) he switched again to using “π” to denote 3.14… (and “∆” to denote 6.28…) and he never looked back.
The world took up Euler’s (eventual) choice, and has forever since used “π” to signify 3.14….
So Jones’ convention prevailed. But did it deserve to? Here according to ChatGPT are the ten most widely-used mathematical equations involving π:
Of these, (1), (3), (4), (6), and (7) become simpler when expressed in terms of 2π, and (2), (5), (8), (9), and (10) are simpler when expressed in terms of π. I’d call that a dead heat.
Missing from that list is a very important formula involving pi that is not an equation but an approximation: Stirling’s approximation, n! ≈ √(2πn) (n/e)^n. This formula, due to James Stirling in the 1730s, was expressed by him verbally rather than symbolically, including the Latin words for “the circumference of a circle whose radius is 1”, which is to say, in modern terms, 2π. So Stirling, too, would probably have favored 6.28… over 3.14….
SO WHAT HAPPENED?
Why did Euler switch sides? We can’t ask him, but it’s natural to think that he wanted to follow precedent. Archimedes wasn’t the only ancient to focus on 3.14… rather than 6.28…. Many civilizations studied circles and found approximate ways to compute their circumferences, but as far as I’m aware, none of them looked at the ratio of circumference to radius. That’s probably because of practical considerations: if you’re holding a disk in your hand or trying to measure a circular plot of land, it’s easier to get an accurate estimate of the diameter than to get an accurate estimate of the radius (other than by estimating the diameter and then dividing by two). And Archimedes was a practical fellow, being not just a mathematician but an engineer as well.
On the other hand, Archimedes was a Greek mathematician following in the tradition of Euclid, and that tradition venerated construction almost as much as it venerated proof. The Greek definition of a circle was in terms of its radius – which makes sense since the classic Greek way to construct a circle was via a compass. So if Archimedes had been feeling more like a disciple of Euclid when he did his work on circles, he might’ve favored measuring the circumference of a circle in terms of its radius rather than its diameter. And then we might well have adopted 6.28… as the circle constant, and even adopted the Greek letter “π” to represent it.
In recent years, it’s been suggested that math would be simpler if we’d all adopted 2π as the fundamental quantity to begin with. The suggestion was first made by Bob Palais in his article “π is Wrong!”, and others after him have argued for the adoption of the Greek letter tau to represent 6.28…: see Michael Hartl’s Tau Manifesto. As I found above, roughly half of the commonest formulas involving pi become simpler when expressed in terms of tau. Nobody expects τ to prevail over π, but its advocates have a kind of cheerfulness one often finds among the partisans of inconsequential lost causes.[7]
Tau Day is celebrated each June 28, but that date falls between when most school years end and most math camps start, so the upstart holiday has had trouble gaining traction.
I have two modest proposals of my own. One is that, while we continue to accept 3.14… as the “right” pi, we celebrate Good-Enough Pi Day in honor of the approximation 3.1, which is close enough for most purposes. It could be celebrated on either the 3rd of the 1st month, which happens to coincide with the perihelion of Earth’s orbit around the Sun, or on the 1st day of the 3rd month, which happens to coincide with the day on which the Earth has traveled one radian past the perihelion. Okay, I’m fudging those dates a bit, but only by a day or two. Could these astronomical coincidences be nudging us in the direction of Good-Enough Pi Day?
But an even more important approximation to pi that deserves wide acclaim is the number three. Three is, after all, the Bible’s approximation to pi – a circumstance which has led one Prof. Beauregard G. Bogusian to contend that the circumference of a circle is exactly equal to thrice its diameter, and that any seeming deviations from equality are due to humanity’s fallen state, as described in his video The Truth About Pi. Three is also the truly trans-universal approximation to pi, not beholden to any particular choice of base and hence likely to appeal to creatures with any number of appendages. As a way of celebrating π ≈ 3, I propose that the third repast of each day be hereafter dubbed Pi Meal, and that we honor the number 3 at each such meal by consuming exactly 3 slices of pie. If we consistently perform this ritual, we will in the course of time approximate the mystical rotundity of the circle to any desired level of approximation.
ENDNOTES
#1. Of course one could argue that in a deeply different universe no one would invent integration or even square roots, let alone try to evaluate that integral. It’s hard to argue with such a position, but it’s also hard to have much fun in a conversation in which any attempt at logical deduction could be countered with “Yes, that’s only logical, but what if the rules of logic themselves were different there?”
#2. 1.57… has fewer fans than 3.14… and 6.28…, but fans do exist. One such is K. W. Regan, who, responding to Lance Fortnow and Bill Gasarch’s blog post on reasons why 6.28… is better than 3.14…, wrote: “I am agreed that the ‘true’ value of ‘pi’ is off by a factor of 2 – but in the opposite direction!” I sympathize with fans of 1.57…. For one thing, 1.57… is the universal coefficient of annoyance that measures how much further you have to walk when there’s a circular obstacle in front of you that you have to walk around instead of through.
#3. Attempts to approximate pi long predate Archimedes, so the term is a bit of a misnomer. You might guess that it was European Grecophiles who introduced the term, but you’d be wrong: the term “Archimedes’ constant” was introduced by Japanese Grecophiles in the early 20th century.
#4. The difference in point of view is subtle, and can be hard for modern readers to understand since it requires unlearning familiar ways of thinking about numbers and measurement that are drilled into us from an early age. I explain more about the Greek theory of proportions in my essay Dedekind’s Subtle Knife from last month.
#5. Another term that was used was “van Ceulen’s number”, though this sometimes denoted van Ceulen’s approximation to pi rather than pi itself.
#6. Gregory’s formula was discovered independently two years later by Gottfried Wilhelm Leibniz. Neither one knew it, but the formula had been found by Madhava of Sangamagrama more than two centuries earlier. All three mathematicians derived the formula in the same way: by finding the power series expansion of the function arctan x and plugging in x = 1. To this day, when I want to know the first dozen digits of π, I often type “4*a(1)” into the UNIX program bc, causing my laptop to compute 4 times the arctangent of 1. For more about pi and bc, see John D. Cook’s essay Computing pi with bc.
#7. I understand this kind of fatalistic partisanship; I was a Red Sox fan until they actually won the World Series and being a Red Sox fan lost much of its appeal.
Voker (YC S24) Is Hiring an LA-Based Full Stack AI Software Engineer
Y Combinator, one of the world’s most prolific startup accelerators, sent a letter on Wednesday urging the Trump administration to openly support Europe’s Digital Markets Act (DMA), a wide-ranging piece of legislation that aims to crack open Big Tech’s market power.
The DMA designates six tech companies as “gatekeepers” to the internet — Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft — and limits these technology kingpins from engaging in anticompetitive tactics on their platforms, in favor of interoperability. The law became applicable in May 2023, and it’s already had a major impact on American tech companies.
In a letter to the White House posted on X by YC’s head of Public Policy, Luther Lowe, the startup accelerator argued that the DMA shouldn’t be lumped in with other European tech legislation, which U.S. officials often criticize as being overbearing.
Instead, YC argues in the letter that the spirit of Europe’s DMA is in line with values that promote — not hinder — American innovation.
“[W]e respectfully urge the White House to recalibrate its stance toward Europe’s digital regulation, drawing a clear line between measures that hamper innovation and those that foster it,” states YC’s letter, which was also signed by YC-backed startups, independent tech companies, and trade associations.
It’s not entirely surprising that YC would come out in explicit public support of the DMA. After all, the accelerator markets itself as a champion of “Little Tech” — an American venture-backed ecosystem of technology startups.
YC argues in the letter that the DMA opens up key avenues to create opportunities for American startups in AI, search, and consumer apps, and prevents Big Tech companies from boxing out smaller ventures.
Specifically, YC in its letter points to Apple reportedly delaying its LLM-powered version of Siri until 2027, years after competitors brought generative AI voice assistants to market. YC argues this represents a lack of competitive pressure, noting that third-party developers of AI voice assistants are unable to integrate their services into Apple’s operating systems.
YC might take Big Tech to task for its reported anticompetitive behavior and take shots at companies like Apple, which it argues harms the venture-backed startup ecosystem. But YC and other supposedly Little Tech-aligned VCs are actually becoming quite influential in Washington.
Andreessen Horowitz (a16z), which published a “Little Tech Agenda” last year, spends millions of dollars trying to influence policy battles at the federal and local levels. According to data from Open Secrets, a16z’s contributions during the 2024 U.S. election cycle totaled $89 million. YC, still a smaller player in American politics, contributed around $2 million.
What’s less clear here is how the Trump administration will respond to the DMA in the long run — and YC’s endorsement of it.
During the Paris AI Action Summit in February, Vice President J.D. Vance criticized a few of the EU’s laws against tech companies, including the Digital Services Act and General Data Protection Regulation. However, Vance didn’t mention the DMA, which more narrowly targets anticompetitive tech industry practices.
Lowe told TechCrunch last year during a StrictlyVC event that the DMA is “not perfect, but at least they’re taking a stab at figuring out how do we curb the most egregious forms of self-preferencing by these large firms.”
Lowe did not immediately respond to TechCrunch’s request for comment.
Maxwell Zeff is a senior reporter at TechCrunch specializing in AI and emerging technologies. Previously with Gizmodo, Bloomberg, and MSNBC, Zeff has covered the rise of AI and the Silicon Valley Bank crisis. He is based in San Francisco. When not reporting, he can be found hiking, biking, and exploring the Bay Area’s food scene.
3D reconstruction and ommatidia size measurements from SRμCT data of female D. simulans (left) and D. mauritiana (right). Facet areas of the ommatidia highlighted in the antero-ventral (green), central (purple) and dorsal-posterior (blue) region of the eye are plotted in corresponding colours (far right). Ommatidia number is 996 for the D. simulans y, v, f and 1018 for the D. mauritiana TAM16. Scale bar is 100 μm. Credit: BMC Biology (2025). DOI: 10.1186/s12915-025-02136-8
An international team of scientists has discovered that tiny changes in the timing of the expression of a single gene can lead to big differences in eye size. Gene expression is the process by which the information encoded in a gene is turned into a functional output, for example, the production of a protein.
Researchers including Professor Alistair McGregor, in Durham University's Department of Biosciences, looked at two closely related species of fruit fly—Drosophila mauritiana and Drosophila simulans.
They found that a slight change in the timing of expression of a gene called orthodenticle (otd) can cause a significant difference in the size of the ommatidia, the individual hexagonal units that make up a compound eye.
In Drosophila mauritiana, otd is expressed earlier in eye development than in Drosophila simulans. This is associated with an increase in the ommatidia size of Drosophila mauritiana, meaning the eyes of this species are larger.
"Insects exhibit extensive variation in the shape and size of their eyes, which contributes to adaptive differences in their vision. Our study comparing Drosophila species that differ in eye size reveals one of the
genetic mechanisms
that can make the facets of their eyes bigger and potentially allow greater contrast sensitivity," says McGregor.
The work is published in the journal BMC Biology.
The researchers say their findings could have implications for our understanding of how the size of other organs evolves. They are now planning to investigate whether similar changes in gene expression can lead to differences in the size of other organs.
More information: Montserrat Torres-Oliva et al, Heterochrony in orthodenticle expression is associated with ommatidial size variation between Drosophila species, BMC Biology (2025). DOI: 10.1186/s12915-025-02136-8
Citation: Tiny changes in gene expression can lead to big differences in eye size of fruit flies (2025, February 26), retrieved 13 March 2025 from https://phys.org/news/2025-02-tiny-gene-big-differences-eye.html
Recursion kills: The story behind CVE-2024-8176 in libexpat
libexpat is a fast streaming XML parser. Alongside libxml2, Expat is one of the most widely used software libre XML parsers written in C, specifically C99. It is cross-platform and licensed under the MIT license.
Expat 2.7.0 was released earlier today. I will make this a more detailed post than usual because in many ways there is more to tell about this release than the average libexpat release: there is a story this time.
What is in release 2.7.0?
The key motivation for cutting a release now is to get the fix to a long-standing vulnerability out to users: I will get to that vulnerability — CVE-2024-8176 — in detail in a moment. First, what else is in this release?
There are also fixes to the two official build systems as usual,
as well as improvements to the documentation.
Another interesting sideshow of this release is the (harmless) TOCTTOU issue that was uncovered by static analysis in a benchmarking helper tool shipped next to core libexpat. If you have not heard of that class of race condition vulnerability but are curious, the related pull request could be of interest: it is textbook TOCTTOU in a real-world example.
One other thing that is new in this release is that Windows binaries are now built by GitHub Actions rather than AppVeyor, and not just as 32-bit but also as 64-bit builds. I had already added 64-bit binaries post-release to the previous release, Expat 2.6.4, on January 21st, but only now is this becoming a regular part of the release process.
The vulnerability report
So what is that long-standing vulnerability about?
In July 2022 — roughly two and a half years ago — Jann Horn of Google Project Zero and Spectre/Meltdown fame reached out to me via e-mail with a finding in libexpat, including an idea for a fix.
What he found can be thought of as “the linear version of billion laughs” — a linear chain (of so-called general entities) rather than a tree — like this:
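The snippet below is a minimal sketch of the shape of such input, not Jann Horn's actual proof of concept; it just prints a short chain of general entities so you can see the pattern:

# Each general entity refers only to the one before it, so there is no
# exponential blow-up as in billion laughs -- just depth.
DEPTH = 3   # illustrative only

defs = ['  <!ENTITY g0 "x">'] + [
    f'  <!ENTITY g{i} "&g{i - 1};">' for i in range(1, DEPTH)
]
doc = "<!DOCTYPE d [\n" + "\n".join(defs) + f'\n]>\n<d>&g{DEPTH - 1};</d>'
print(doc)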
Except not with two (or three) levels, but thousands.
Why would a chain of thousands of entity references be a problem to libexpat? Because of recursion, because of recursive C function calls: each call to a function increases the stack, and if functions are calling each other recursively, and attacker-controlled input can influence the number of recursive calls, then with the right input, attackers can force the stack to overflow into the heap: stack overflow, segmentation fault, denial of service.
How many levels of nesting it takes for this to hit depends on the stack size of the target machine: 23,000 levels of nesting might be enough on one machine, but not on another.
The education that introduces or leads people towards recursion should come with a warning: recursion is not just beautiful, a thinking tool, and a path to often simpler solutions — it also has a dark side: a big inherent security problem.
The article The Power of 10: Rules for Developing Safety-Critical Code warned about the use of recursion in 2006, but Expat development already started in 1997.
Already in that initial e-mail, Jann shared what he considered the fix —
avoiding (or resolving) recursion — and there was a proof-of-concept patch
attached of how that could be done in general.
Unlike other Project Zero findings, there would be no 90-days-deadline for this issue, because — while stack clashing was considered and is a theoretical possibility — denial of service was considered to be the realistic impact.
It should be noted that this risk assessment comes without any guarantees.
The vulnerability process
Two things became apparent to me:
1. It seemed likely that this vulnerability had multiple "faces" or variants, and that the only true fix would indeed be to effectively remove all remaining recursion from Expat. It is not the first time that recursion has been an issue in C software, or even libexpat in particular: Samanta Navarro resolved vulnerable recursion in a different place in libexpat code in February 2022 already. Thanks again!
2. That it would be a pile of work, not a good match to my unpaid voluntary role in Expat as an addition to my unrelated-to-Expat day job, and not without risk without a partner at detail level on the topic.
My prior work on fixing billion laughs for Expat 2.4.0 made me expect this to be similar, but bigger.
And with that expectation, the issue started aging without a fix,
and in some sense, I felt paralyzed about the topic and kept procrastinating
about it for a long time.
Every now and then the topic came up with my friend, journalist and security researcher Hanno Böck, whom I had shared the issue with.
He was arguing that even without a fix, the issue should be made public
at some point.
One reason why I was objecting to publication without a fix was that it was clear that, in the absence of a cheap clean fix, vendors and distributions would start applying quick hacks that would produce false positives (i.e. rejecting well-formed benign XML misclassified as an attack), leave half of the issue unfixed, and leave the ecosystem with a potentially heterogeneous state of downstream patches where — say — in openSUSE a file would be rejected but in Debian it would parse fine, or the other way around: a great mess.
I eventually concluded that the vulnerability could not
keep sitting in my inbox unfixed for another year,
that it needed a fix before publication to not cause a mess, and
that I had to take action.
Reaching out to companies for help
In early 2024, I started considering more ways of finding help, and added a call-for-help banner to the change log that was included with Expat 2.6.2.
I started drafting an e-mail that I would send out to companies known to use libexpat in hardware. I had started maintaining a (by no means complete) public list of companies using Expat in hardware that now came in handy.
On April 14th, 2024 I started looking for security contacts for the companies on that list. For some, it was easy to find one; for others, I gave up eventually; and for some, I am still not sure whether I got the right address or whether they are ghosting me as part of an ostrich policy.
I wish more companies would start serving /.well-known/security.txt; finding vulnerability report contacts is still actual work in 2025 and should not be.
So then I mailed to circa 40 companies using a template, like this:
Hello ${company},
this e-mail is about ${company} product IT security.
Are you the right contact for that? If not please forward
it to the responsible contact within ${company} — thank you!
On the security matter:
It has come to my attention that ${company} products and
business rely on libexpat or the "Expat" XML parser library,
e.g. product ${product} is using libexpat according to
document [1]. I am contacting you as the maintainer of
libexpat and its most active contributor for the last 8
years, as can be seen at [2]; I am reaching out to you today
to raise awareness that:
- All but the latest release of libexpat (2.6.2) have
security issues known to the public, so every product
using older versions of libexpat can be attacked through
vulnerable versions of libexpat.
- Both automated fuzzing [3] and reports from security
researchers keep uncovering vulnerabilities in libexpat,
so it needs a process of updating the copy of libexpat
that you bundle and ship with your products, if not
already present.
- My time on libexpat is unfunded and limited, and there is
no one but me to constantly work on libexpat security and
to also progress on bigger lower priority tasks in
libexpat.
- There is a non-public complex-to-fix security issue in
libexpat that I have not been able to fix alone in my
spare time for months now, that some attackers may have
managed to find themselves and be actively exploiting
today.
I need partners in fixing that vulnerability.
Can ${company} be a partner in fixing that vulnerability,
so that your products using libexpat will be secure to use
in the future?
I am looking forward to your reply, best
Sebastian Pipping
Maintainer of libexpat
[1] ${product_open_source_copyright_pdf_url}
[2] https://github.com/libexpat/libexpat/graphs/contributors
[3] https://en.wikipedia.org/wiki/Fuzzing
Replies are coming in
The responses I got from companies were all over the map:
My "favorite" reply was "We cannot understand what you want from us"
when everyone else had understood me just fine. Nice!
Though that competes with the reply "A patch will be released after the fall."
when they had not received any details from me. Okay!
There was arguing that the example product that I had mentioned was no
longer receiving updates
(rather than addressing their affected other products
that are not end-of-life and continue to use libexpat).
I was asked to prove a concrete attack on the company's products
(which would not scale, need access to the actual product, etc).
That they "do not have sufficient resources to assist you on
this matter even if libexpat is employed in some of .....'s products"
came back a few times.
It was interesting and fun in some sense, and not fun in another.
Next stop: confidentiality
What came next was that I asked companies to sign a simple freeform NDA with me. Companies were not prepared for that. Why was I asking for an NDA and TLP:RED? To (1) make sure that whoever got the details would need to collaborate on a true fix and not just monkey-patch their own setups, and (2) to avoid the scenario of heterogeneous trouble fixes that I mentioned before, which would have been likely in case of a leak before there was a true fix.
Some discussions failed at NDA stage already, while others survived
and continued to video calls with me explaining Jann's findings in detail.
It is worth noting that I knew going in that many vulnerability reward
programs exclude the whole class of denial of service and so
I tied sharing the expected impact to signing an NDA to reduce the chances
of everyone discarding it "Oh 'just' denial of service, we'll pass".
The eventual team and security work
Simplifying a bit, I found two main partner companies in this: Siemens and a company that would not like to be named, let's call them "Unnamed Company". Siemens started development towards a candidate fix, and Unnamed Company started evaluating which other companies they could pay to help on their behalf, which got Linutronix and also Red Hat involved.
Siemens took the builder role while Linutronix, Red Hat and I provided quality assurance of various kinds. While we did not work day and night, it is fair to say that we have been working on the issue since May 2024 — for about 10 months.
The three faces of the vulnerability
It did indeed turn out that the vulnerability has multiple — three — faces:
The third variant "Parameter entities" reuses ideas from my 2013 exploit for vulnerability
Parameter Laughs (CVE-2021-3541)
:
It used the same mechanism of delayed interpretation.
Conclusions and gratitude
It is no overstatement to say that without Berkay Eren Ürün — the main author of the fix — and his manager Dr. Thomas Pröll at Siemens there would be no fix today: a big and personal "thank you!" from me. Thanks to Unnamed Company, to Linutronix, to Red Hat for your help making this plane fly! Thanks to Jann Horn for his whitehat research and the demo patch that led the way to a fix! Thanks to everyone who contributed to this release of Expat!
And please tell your friends:
Please leave recursion to math and keep it out of (in particular C) software:
it kills and will kill again.
Kind regards from libexpat, see CVE-2022-25313 and CVE-2024-8176 for proof.
Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.
But what if a couple of students, within a few months, could compete with state-of-the-art HPC libraries with a few hundred lines of code, instead of tens or hundreds of thousands?
That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called
Exo 2
.
Exo 2 belongs to a new category of programming languages that MIT Professor Jonathan Ragan-Kelley calls “user-schedulable languages” (USLs). Instead of hoping that an opaque compiler will auto-generate the fastest possible code, USLs put programmers in the driver's seat, allowing them to write “schedules” that explicitly control how the compiler generates code. This enables performance engineers to transform simple programs that specify what they want to compute into complex programs that do the same thing as the original specification, but much, much faster.
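As a rough analogy (in plain Python, not Exo 2's actual syntax, and not taken from the paper), the difference between a specification and a scheduled version of the same computation might look something like this:

# The "specification" says what to compute; the "scheduled" version computes
# exactly the same thing but with a loop transformation (simple tiling here)
# chosen by the programmer rather than left to an opaque compiler.
def matmul_spec(A, B, n):
    """Specification: plain triple loop, C = A @ B for n x n lists of lists."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, tile=4):
    """Same result, restructured: iterate over tiles to mimic a 'schedule'."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

if __name__ == "__main__":
    n = 6
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i * j + 1) for j in range(n)] for i in range(n)]
    assert matmul_spec(A, B, n) == matmul_tiled(A, B, n)  # same answer, different loop structure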
One of the limitations of existing USLs (like the original Exo) is their relatively fixed set of scheduling operations, which makes it difficult to reuse scheduling code across different “kernels” (the individual components in a high-performance library).
In contrast, Exo 2 enables users to define new scheduling operations externally to the compiler, facilitating the creation of reusable scheduling libraries. Lead author Yuka Ikarashi, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate, says that Exo 2 can reduce total schedule code by a factor of 100 and deliver performance competitive with state-of-the-art implementations on multiple different platforms, including Basic Linear Algebra Subprograms (BLAS) that power many machine learning applications. This makes it an attractive option for engineers in HPC focused on optimizing kernels across different operations, data types, and target architectures.
“It’s a bottom-up approach to automation, rather than doing an ML/AI search over high-performance code,” says Ikarashi. “What that means is that performance engineers and hardware implementers can write their own scheduling library, which is a set of optimization techniques to apply on their hardware to reach the peak performance.”
One major advantage of Exo 2 is that it reduces the amount of coding effort needed at any one time by reusing the scheduling code across applications and hardware targets. The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
Exo 2 includes a novel mechanism called “Cursors” that provides what they call a “stable reference” for pointing at the object code throughout the scheduling process. Ikarashi says that a stable reference is essential for users to encapsulate schedules within a library function, as it renders the scheduling code independent of object-code transformations.
“We believe that USLs should be designed to be user-extensible, rather than having a fixed set of operations,” says Ikarashi. “In this way, a language can grow to support large projects through the implementation of libraries that accommodate diverse optimization requirements and application domains.”
Exo 2’s design allows performance engineers to focus on high-level optimization strategies while ensuring that the underlying object code remains functionally equivalent through the use of safe primitives. In the future, the team hopes to expand Exo 2’s support for different types of hardware accelerators, like GPUs. Several ongoing projects aim to improve the compiler analysis itself, in terms of correctness, compilation time, and expressivity.
Ikarashi and Ragan-Kelley co-authored the paper with graduate students Kevin Qian and Samir Droubi, Alex Reinking of Adobe, and former CSAIL postdoc Gilbert Bernstein, now a professor at the University of Washington. This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the U.S. National Science Foundation, while the first author was also supported by Masason, Funai, and Quad Fellowships.
I have spent the last couple of years of my life trying to make sense of fsync() and bringing OpenZFS up to code. I’ve read a lot of horror stories about this apparently-simple syscall in that time, usually written by people who tried very hard to get it right but ended up losing data in different ways. I hesitate to say I enjoy reading these things, because they usually start with some catastrophic data loss situation and that’s just miserably unfair. At least, I think they’re important reads, and I’m always glad to see another story of fsync() done right, or done wrong.
A few days ago a friend pointed me at a new one of these by the team working on CouchDB: How CouchDB Prevents Data Corruption: fsync. I was a passenger on a long car trip at the time, so while my wife listened to a podcast I read through it, and quite enjoyed it! As I read it I was happy to read something not horrifying; everything about it seemed right and normal. It’s even got some cool animations, check it out!
But then I got to the very last paragraph:
However, CouchDB is
not susceptible
to the sad path. Because it issues
one more
fsync
: when opening the database. That
fsync
causes the footer page to be flushed to storage and
only
if that is successful, CouchDB allows access to the data in the database file (and page cache) because now it knows all data to be safely on disk.
fsync()
after
open()
? What even?
That’s the question I’ve been mulling over for days, because I don’t see how this action can make any particular guarantees about durability, at least not in any portable way. I’ve gone back over my notes from my
BSDCan 2024
presentation, and in turn checked back over some of the research papers, blog posts and code that informed them. If I’m missing something I’m not seeing it.
Here’s a summary of the scenario as I understand it (a minimal sketch of the sequence follows the list):

a program calls write() to append some data to a file
after write() succeeds, the program crashes
after it’s restarted, the program open()s the file again
the program calls fsync() against that file
fsync() succeeds, and the write is presumed to be on disk
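To make that concrete, here is a minimal sketch of the same sequence in Python. It is purely illustrative, not CouchDB’s code; the file path is made up, and the two functions stand in for the program before and after its crash.

import os, sys

PATH = "/tmp/somefile"   # hypothetical path, standing in for the database file

def before_crash():
    fd = os.open(PATH, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    os.write(fd, b"some data")   # the write() succeeds...
    sys.exit(1)                  # ...and the program crashes before ever calling fsync()

def after_restart():
    fd = os.open(PATH, os.O_RDWR)   # the program open()s the file again
    os.fsync(fd)                    # fsync() succeeds...
    os.close(fd)                    # ...and the earlier write is presumed to be on disk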
If I’ve misunderstood the situation, then the rest of this post might be meaningless. If not though, then I’m pretty sure that successful fsync() has not made any particular guarantee about that original write.

POSIX 1003.1-2024 is very thin in its description of fsync() – just two paragraphs. My understanding is that the first paragraph is about durability, while the second is about ordering. However, there’s very little in this description that is concrete, so we can’t rely on much.

Durability is what most people think of when they think about fsync() – being able to reliably find out if your writes made it to storage (at least to the extent that the filesystem, OS and hardware can make that guarantee, but that’s a whole different issue).
The fsync() function shall request that all data for the open file descriptor named by fildes is to be transferred to the storage device associated with the file described by fildes. The nature of the transfer is implementation-defined. The fsync() function shall not return until the system has completed that action or until an error is detected.
The key part here is the emphasis on the file descriptor. What it’s saying is that if you call fsync(fd) on a given fd, and it returns success, you are guaranteed that the writes made on that file descriptor are on disk. That is to say, if you have a different file descriptor open on the same file, and you have made writes there too, the first fsync() does not guarantee those writes.
Or, put another way:
char data[4096] = { ... };
int fd1 = open("/tmp/myfile", O_WRONLY|O_CREAT, S_IRUSR|S_IWUSR);
int fd2 = open("/tmp/myfile", O_WRONLY|O_CREAT, S_IRUSR|S_IWUSR);
pwrite(fd1, data, sizeof(data), 0);          // 4K at start of file
pwrite(fd2, data, sizeof(data), 0x3ffff000); // last 4K up to 1G
assert(fsync(fd1) == 0); // first 4K definitely on disk, last 4K unknown
assert(fsync(fd2) == 0); // last 4K on disk too
It should be obvious that you wouldn’t want this any other way. If you were a database server, with a multi-terabyte file under you and hundreds of concurrent writers operating on different parts of the file, you wouldn’t want to durably write (that is, make recoverable after a crash) all of them just because one decided it had finished its work.

Note that there is nothing in this that says what state the data is in if fsync() returns an error. The assumption among applications programmers (including myself) was that if fsync() failed, and then on retry it succeeded, then your data had made it to disk. The “fsyncgate” email thread was where that behaviour finally became widely understood; it exposed misunderstandings both of what fsync() returning an error means, and of its behaviour when called on different file descriptors.
If _POSIX_SYNCHRONIZED_IO is defined, the fsync() function shall force all currently queued I/O operations associated with the file indicated by file descriptor fildes to the synchronized I/O completion state. All I/O operations shall be completed as defined for synchronized I/O file integrity completion.

To the best of my understanding, what this is trying to say is that a call to fsync() also creates a write barrier over the entire file. That is, all writes to the file before fsync() is called will be completed before any writes to the file after the call. On the surface, this appears to be without reference to the file descriptor used for the call to fsync().
POSIX has a whole chapter on Definitions where “Synchronized I/O completion” and “Synchronized I/O file integrity completion” are defined. However, those are also ambiguous. I won’t paste them all in here, but if you do care to look, you’ll see that “I/O operation” is not clearly defined, and “queued I/O operation” is not defined at all. Further, “I/O completion” is defined as “successfully transferred or diagnosed as unsuccessful”, but then the description of write (and read!) data integrity completion repeats “successful or unsuccessful” and then immediately says “actually, only successful”. So it’s very unclear what’s actually supposed to happen here, but the write-barrier reading seems the most likely.

However, regardless of reading, this paragraph makes no claim about when those “before” writes should be written, nor how errors on them should be reported, nor how any of this interacts with reads. So for understanding where any given write is, we’re pretty much dependent on the first guarantee.
From this, I believe there are only two conclusions you can draw from the result of an fsync(fd) call for a given fd:

on success, all writes issued on fd before the fsync() are stored durably (which, by convention, means they will be available after an unsafe OS restart)
on failure, all writes on fd since the last successful fsync() call are now in an unknown state, and will be forevermore.
That last point is crucial. On error, you don’t know if your write made it to disk, and you can neither find out the current state nor induce a retry or any other behaviour, at least not through any standard means. There is no way to call fsync() again to get a guarantee. Reading the data back through the same file will not get you a guarantee. You might be able to do clever things “outside” of the normal POSIX APIs (flush some OS caches, read a raw block device, etc) but those are not guarantees provided by fsync().
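As an illustration of what taking those two conclusions seriously looks like, here is a minimal sketch in Python. It is my own, not from the post or any particular system, and the helper name is made up: on success you may trust the writes made on that descriptor, and on failure you treat them as lost and fall back to a copy the application still holds.

import os

def append_durably(path: str, data: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)    # success: the writes made on *this* fd are durable
    except OSError:
        # Failure: those writes are now in an unknown state. Calling fsync()
        # again, or reading the data back, proves nothing. The only safe moves
        # are to rewrite `data` from the copy we still hold (perhaps to a fresh
        # file) or to give up and recover from a separate source of truth.
        raise
    finally:
        os.close(fd)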
This isn’t just spec lawyering either. In the absence of a clear direction, different operating systems have handled errors in different ways. I’ll point you back to my BSDCan presentation for more details, since this post isn’t quite about error responses. What I’ll say is that they vary in quite interesting ways, and all make sense from certain perspectives.

If fsync() after open() doesn’t work, then why did it work?
I’ve rattled on for a while now but there’s a small problem: at least one project included calling fsync() after reopening the file in their recovery strategy. I have to assume it wasn’t just a cargo-cult, and it actually solved a real problem for them. So why did it work? I think we can make a good guess.

If you recall the descriptions from the CouchDB post, the write() call would have gone into the page cache. The user program crashed without fsync() being called, so that’s just a normal dirty page, sitting there, waiting for the OS to come along and write it out as part of its normal page writeback behaviour. When that happens is very dependent on OS, filesystem and configuration, but it could depend on how many other dirty pages are waiting to be written out, or it could be some scheduled or timeboxed process, or it could be when the OS is under memory pressure and starts reclaiming pages.
I’m willing to bet that, being an important server program, it was restarted almost immediately by a process supervisor, and so the page was actually still dirty in memory, not yet written down. So, it looks good when you read it.

And why did the fsync() work? There are two possible reasons I can think of, but the post doesn’t have enough information for us to know which.
One possibility is that it actually didn’t do anything. fsync() looked, saw no dirty writes associated with the file descriptor and returned success. The program merrily continued on its way, and within the next couple of seconds, the normal page writeback came through and completed the write successfully.

The other possibility is that operating systems and filesystems are allowed to do more than POSIX requires. Using OpenZFS as an example (hey, it’s what I know), fsync() always flushes anything outstanding for the underlying object, regardless of where the writes came from. That’s obviously fine; fsync() returning success just means that the writes on the same descriptor are done. If OpenZFS has others lying around, it’s totally allowed to write those too. Just so long as the caller doesn’t assume that more happened than did, everything will be fine.

I don’t know enough about other filesystems, but it wouldn’t surprise me if at least some of them held a list of dirty pages on the file or even the filesystem as a whole. No one has expressed an interest in them (they would have called fsync() if so!) and there are good batching opportunities available by holding them for a while and then writing them out all at once.
This is all by the by though. The real problem is actually what happens if that page fails to write out. Where does the error get reported? Does it get reported at all? And what does the OS do after that? You’d like to think that if there’s no one to report the error to, the page would either remain dirty (and so still be readable) or become invalid (and the next read causes a fault and a read from disk, likely also failing).

Most do one of those. Linux with ext4, however, does not (or at least, didn’t always). It used to both clear the error state and mark the page clean, but valid. In all respects it looked like a page freshly read from disk. But that data was not on disk (the write failed!), and any attempt to flush it would do nothing (it’s not dirty). And because it’s a clean page, it’s eligible for eviction, resulting in the weird situation that you could read data from it and it “worked”, then read it again later and it faults. This is all perfectly legal, and I imagine more efficient in some situations – if it’s clean then there’s no I/O to do, and if it’s valid there’s no fault to process.

And our program would never know, because it misunderstood what a successful fsync() meant.
For portable programs, fsync() can only tell you about writes that you did on the same file descriptor. If it succeeds, those writes are on disk. If it fails, assume they are lost and gone forever. It has nothing to say about any other I/O anywhere else in the system.

Though if I’ve got it all wrong please let me know. fsync() has thrown up so many subtle surprises over so many years that I don’t trust myself to know if I’m just looking at a new thing I didn’t know about.

And, for avoidance of doubt, understand that I’m definitely not dunking on CouchDB at all. It was just their post that prompted me to write this. Almost every serious data storage system (including the one I work on) has messed up something around durability and been found out in the last decade or so. It’s almost a rite of passage these days.
EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure
Electronic Frontier Foundation
www.eff.org
2025-03-13 22:17:26
EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.

Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.
It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.
Microsoft apologizes for removing VSCode extensions used by millions
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 21:53:11
Microsoft has reinstated the 'Material Theme – Free' and 'Material Theme Icons – Free' extensions on the Visual Studio Marketplace after finding that the obfuscated code they contained wasn't actually malicious.
The two VSCode extensions, which count over 9 million installs, were pulled from the VSCode Marketplace in late February over security risks, and their publisher, Mattia Astorino (aka 'equinusocio') was banned from the platform.
"A member of the community did a deep security analysis of the extension and found multiple red flags that indicate malicious intent and reported this to us,"
stated a Microsoft employee at the time
.
"Our security researchers at Microsoft confirmed this claim and found additional suspicious code."
Researchers Amit Assaraf and Itay Kruk, who were deploying AI-powered scanners seeking suspicious submissions on VSCode, first flagged them as potentially malicious.
The researchers told BleepingComputer that their high-risk evaluation for Material Theme arose from what was detected as the presence of code execution capabilities in the theme's "release-notes.js" file, which was also heavily obfuscated.
Obfuscated code that sparked concerns
Source: BleepingComputer
Astorino immediately objected to the allegations and the removal of his extensions from the VSCode Marketplace, alleging that the problem comes from an outdated sanity.io dependency used since 2016 to show release notes from sanity headless CMS.
The publisher said that they could have removed this dependency from the themes in seconds if Microsoft had contacted them, but instead, they saw themselves getting banned without warning.
"There was nothing malicious. I hadn't updated the extension in years since I was focused on the new version, apart from the obfuscation process," Astorino told BleepingComputer today via email.
"The only issue was a build script that ended up in the distributed index.js (referring to Material Theme Icons). This script was used to generate JSON files after pulling SVG icons from a closed-source repository—something I removed a long time ago."
"Regarding Material Theme, the obfuscation process unintentionally included the sanity.io SDK client, which contained some strings referencing passwords or usernames (the auth client). However, these were not harmful—just a result of a flawed build process made long time ago."
Extensions back in VSMarketplace
Microsoft's Scott Hanselman apologized to Astorino yesterday in a GitHub issue opened by the developer asking for his account and themes to be reinstated.
"The publisher account for Material Theme and Material Theme Icons (Equinusocio) was mistakenly flagged and has now been restored,"
reads Hanselman's post
.
"In the interest of safety, we moved fast and we messed up. We removed these themes because they fired off multiple malware detection indicators inside Microsoft, and our investigation came to the wrong conclusion."
Both extensions are available again in the VSMarketplace
Source: BleepingComputer
"Again, we apologize that the author got caught up in the blast radius and we look forward to their future themes and extensions. We've corresponded with him and thanked him for his patience," continued Hanselman.
Additionally, Hanselman stated that the Visual Studio Code Marketplace will update its policy on obfuscated code and update its scanners accordingly, to avoid acting too quickly against projects in the future.
When asked by BleepingComputer about this development, cybersecurity researcher Amit Assaraf maintained that the extension did contain malicious code, though without malicious intent from the publisher, commenting that "in this case, Microsoft moved too fast."
According to Astorino, the Material Theme extensions on the VSCode marketplace have been completely rewritten and are safe to use.
Bugs, bugs, bugs!
Rachel By The Bay
rachelbythebay.com
2025-03-11 21:49:49
Look out, I'm talking about feed stuff again.
Since last week's post with the latest report on feed reader behaviors, a fair number of things have happened.
First up, I reported that two FreshRSS instances were sending
unconditional requests. This turned out to be a pair of interacting
bugs, and the developers hopped right on that and stamped it out. Their
dev instances shifted right away. That's great.
Then, something funny happened. I wrote that post about the modem noises and it included an MP3 file. The way I do this in my stuff here is that I have some commands baked into my page generator that know how to create the <audio> container and then put one or more <source> elements into it. So, I did that for the (admittedly crappy) recording and shipped it off.
I didn't know this at the time, but this immediately triggered strange
behavior in a number of feed readers, FreshRSS included. They all
started sending the wrong data. That is, they'd get the latest version
of the feed with the "2025/03/05/1670" post in it, but on their next
poll they'd still be sending the If-Modified-Since and/or If-None-Match
values from the previous version.
This sent me down the road of trying to flush out this behavior on the
feed score project side of things, but nothing I did would reproduce it.
Again, I didn't know that something about that specific post was
responsible, so I was groping around in the dark.
Finally, I reported this to the FreshRSS devs, and they came right back
and showed me my problem: my feed says it's xhtml, but it was doing
something that's not valid: <audio controls>. That's not valid XML, so
it's not valid XHTML by extension. It's just fine in a regular web
page, naturally.
I hate the web.
How did this happen? Easy: In 2023, I rewrote my generator for these web pages and the feeds. There's now a bunch of code that *tries* to crap out reasonable (and validating) results instead of doing "HTML by printf" like it had been before.
Along the way, I obviously didn't internalize the rule that "in XHTML,
attribute minimization is forbidden". It sounds like the kind of thing
I would have read once back in the 90s when I was trying to get pages
to validate as XHTML 1.0 for some cursed reason, and then promptly
forgot. So, it would do a bare "controls" attribute and thought it was
fine.
In practical terms, it means you can't do this:
<audio controls loop>
But instead you have to do this:
<audio controls="controls" loop="loop">
So yeah, I wrote something terrible. Again, I hate the web.
Anyway, what was happening is that a single post within the feed was
failing validation as a self-proclaimed xhtml document, so the feed
readers were rejecting the _entire poll_ and were not updating their
caches. The next time they polled, it was as if they had never sent the
request in the first place (as far as they were concerned).
Once I shipped a fixed version of the feeds, they went back to normal.
I'd snark about something here, but clearly, I'm no better at this.
What's different now? Well, my bits of code which are responsible for
attributes ("controls", "loop", "href", ...) now refuse to generate a
minimized attribute when being used for XML, and that includes the feed
stuff. They'll blow up before they ship garbage like that again.
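For what it's worth, the rule is simple to express. Here is a rough sketch in Python of the check as I understand it from the description above (hypothetical code, not the actual generator): a bare boolean attribute is fine in plain HTML, but in XML/XHTML output it has to be written out, and anything else should fail loudly rather than ship.

def render_attr(name, value=None, xml_mode=False):
    # Boolean attributes ("controls", "loop", ...) may be minimized in plain
    # HTML, but in XML/XHTML they must be written out as name="name".
    if value is None:                  # caller asked for a bare attribute
        if xml_mode:
            raise ValueError(f'minimized attribute "{name}" is not valid XML')
        return name
    return f'{name}="{value}"'

# HTML page:  render_attr("controls")                     -> 'controls'
# XHTML feed: render_attr("controls", "controls", True)   -> 'controls="controls"'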
My thanks to the FreshRSS team for being responsive and helping me out.
Now, where did I put my clown nose...
Functional Tests as a Tree of Continuations (2010)
One of the most essential practices for maintaining the long-term quality of computer code is to write automated tests that ensure the program continues to act as expected, even when other people (including your future self) muck with it.
Test code is often longer than the code that is being tested. A former colleague estimates that the right ratio is around 3 lines of functional test code for every line of “real” code. Writing test code is often mindless and repetitive, because all you do is trace out all the steps a user could take and write down all the things that should be true about each one.
A great deal of test code simply exists to set up the tests. Sometimes you can abstract out the logic into setup() and teardown() functions, and put the fake data needed for tests into things called fixtures. Even then, you’ll often have a subset of tests that requires additional setup, and test code often becomes littered with miniature, unofficial setup functions.
The problem
It’s a wonder to me that most testing environments are structured as lists of tests. Lists of tests are sensible for unit tests, but they’re a disaster for functional tests. The list structure is one of the main reasons that functional test suites are so repetitive.
Let’s say you want to test a 5-step process, and test the “Continue” and “Cancel” buttons at each step. You can only get to a step by pressing “Continue” in the previous step. With the traditional “list of tests” paradigm, you have to write something like
test_cancel_at_step_one() {
Result = do_step_one("Cancel");
assert_something(Result);
}
test_cancel_at_step_two() {
Result = do_step_one("Continue");
assert_something(Result);
Result = do_step_two("Cancel");
assert_something(Result);
}
test_cancel_at_step_three() {
do_step_one("Continue"); # tested above
Result = do_step_two("Continue");
assert_something(Result);
Result = do_step_three("Cancel");
assert_something(Result);
}
test_cancel_at_step_four() {
do_step_one("Continue"); # tested above
do_step_two("Continue"); # tested above
Result = do_step_three("Continue");
assert_something(Result);
Result = do_step_four("Cancel");
assert_something(Result);
}
test_cancel_at_step_five() {
do_step_one("Continue"); # tested above
do_step_two("Continue"); # tested above
do_step_three("Continue"); # tested above
Result = do_step_four("Continue");
assert_something(Result);
Result = do_step_five("Cancel");
assert_something(Result);
}
As you can start to see, the length of each test is growing linearly in the number of steps we’re testing, so the length of the total test suite ends up being O(N²) in the number of steps.
The solution
A more appropriate data structure for functional tests is a testing tree. The tree essentially maps out the possible actions at each step. At each node, there is a set of assertions, and parent nodes pass a copy of state down to each child (representing a possible user action). Child nodes are free to modify and make assertions on the state received from the parent node, and pass a copy of the modified state down to their own children. Nodes should not affect the state of parents or siblings.
Let’s take a concrete example. In a 5-step process, the tree would look like:

Step 1
  Cancel
  Continue to Step 2
    Cancel
    Continue to Step 3
      Cancel
      Continue to Step 4
        Cancel
        Continue to Step 5
Here, the first “Cancel” and “Continue to Step 2” are like parallel universes. Rather than repeating Step 1 to test each of these, we want to automatically make a copy of the universe at the end of Step 1, then run child tests on each parallel universe. If we can write our tests as a tree in this way, the length of the total test suite will be O(N) in the number of steps, rather than O(N²).
For modern web applications, all the state is stored in a database. Therefore to make a “copy of the universe”, we just need a way to make a copy of the database to pass down to the child tests, while preserving older copies that tests further up the tree can copy and use.
The solution is to implement a stack of databases. As we walk down the testing tree, we push a copy of the current database onto the stack, and the child can play with the database at the top of the stack. After we’ve finished with a set of child nodes and ascend back up the testing tree, we pop the modified databases off the stack, returning to the previous database revisions.
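If it helps to see the mechanics outside of any particular framework, here is a rough sketch of the idea in Python. It is my own illustration, not Chicago Boss, and it fakes the "stack of databases" by copying a database file for each branch rather than using a revisioned database.

import os, shutil, tempfile

def run_node(db_path, assertions, children):
    # One node of the testing tree: run its assertions against db_path, then
    # give each child branch its own throwaway copy of the database, so that
    # siblings never see each other's changes ("parallel universes").
    for check in assertions:
        check(db_path)
    for label, child in children:
        fd, branch_db = tempfile.mkstemp(suffix=".db")   # push a copy...
        os.close(fd)
        shutil.copyfile(db_path, branch_db)
        try:
            child(branch_db)        # the child may mutate its copy freely
        finally:
            os.remove(branch_db)    # ...and pop it when the branch is done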
An example
I won’t go through the details of writing a testing framework or the database stack, but here’s how you’d test a multi-step process with Chicago Boss’s test framework. This is a tree implemented as nested callbacks in Erlang. Each “node” is an HTTP request with a list of callbacks that make assertions on the response, and a list of labeled continuation callbacks — these are the child nodes. Each child node receives a fresh database copy that it can thrash to its heart’s content. Managing the stack of databases is all done under the hood.
The resulting test code is surprisingly elegant:
start() ->
boss_test:get_request("/step1", [],
[ % Three assertions on the response
fun boss_assert:http_ok/1,
fun(Res) -> boss_assert:link_with_text("Continue", Res) end
fun(Res) -> boss_assert:link_with_text("Cancel", Res) end
],
[ % A list of two labeled continuations; each takes the previous
% response as the argument
"Cancel at Step 1", % First continuation
fun(Response1) ->
boss_test:follow_link("Cancel", Response1,
[ fun boss_assert:http_ok/1 ], []) % One assertion, no continuations
end,
"Continue at Step 1", % Second continuation
fun(Response1) ->
boss_test:follow_link("Continue", Response1,
[ fun boss_assert:http_ok/1 ], [
"Cancel at Step 2", % Two more continuations
fun(Response2) ->
boss_test:follow_link("Cancel", Response2,
[ fun boss_assert:http_ok/1 ], [])
end,
"Continue at Step 2",
fun(Response2) ->
boss_test:follow_link("Continue", Response2,
[ fun boss_assert:http_ok/1 ], [
"Cancel at Step 3",
fun(Response3) ->
boss_test:follow_link("Cancel", Response3,
[ fun boss_assert:http_ok/1 ], [])
end,
"Continue at Step 3",
fun(Response3) ->
boss_test:follow_link("Continue", Response3,
[ fun boss_assert:http_ok/1 ], [
"Cancel at Step 4",
fun(Response4) ->
boss_test:follow_link("Cancel", Response4,
[ fun boss_assert:http_ok/1 ], [])
end,
"Continue at Step 4",
fun(Response4) ->
boss_test:follow_link("Continue", Response4,
[ fun boss_assert:http_ok/1 ], [])
end ]) end ]) end ]) end ]).
If the indentation ever gets out of hand, we can simply put the list of continuations into a new function.
Conclusion
There are several benefits to structuring functional tests as a tree of continuations:
Eliminates code duplication.
We don’t need to repeat steps 1-3 for each action that can be taken at step 4. This reduces the code base, as well as time needed for execution.
Eliminates most setup code.
As long as the setup actions are associated with an HTTP interface, all setup can be done as part of the testing tree with no performance penalty.
Pinpoints the source of failing tests.
If assertions fail in a parent node, we can immediately stop before running the child nodes. By contrast, lists of tests usually produce long lists of failures for one bug, making it harder to find the source.
Tests are well-structured.
There is a 1-1 mapping between nodes and something the user sees, and a 1-1 mapping between child nodes and something the user can do next. The tests appear in a well-structured hierarchy, instead of a haphazard “list of everything I could think of”.
The trail of previous responses is all in scope.
This benefit is unique to a callback implementation in a language that supports closures — it’s handy if you need to compare output from two requests that are separated by several intermediate requests. With a list of tests, you would have to pass around data in return values and function arguments, but here all previous responses are at our fingertips in Response1, Response2, etc.
Why haven’t I been able to find anyone else using this approach? My guess is that all of the side effects of OO languages encourage a wrecking-ball mentality when it comes to unit tests — destroy all possible state after each test. But for functional tests with many steps, this approach is grossly inefficient — if you want to test every rung on a ladder, it’s pointless to climb all the way down to the ground and trudge back up for each test.
To write your own testing framework based on continuation trees, all you need is a stack of databases (or rather, a database that supports rolling back to an arbitrary revision). I don’t know what databases support this kind of revisioning functionality, but adding the feature to Chicago Boss’s in-memory database took about 25 lines of Erlang code.
Once you start writing functional tests as a tree of assertions and continuations, you really will wonder how you did it any other way. It’s just one of those ideas that seems too obvious in hindsight.
Most of us have encountered a few software engineers who seem practically magician-like, a class apart from the rest of us in their ability to reason about complex mental models, leap to nonobvious yet elegant solutions, or emit waves of high-quality code at unreal velocity.
I have run into many of these incredible beings over the course of my career. I think their existence is what explains the curious durability of the notion of a “10x engineer,” someone who is 10 times as productive or skilled as their peers. The idea — which has become a meme — is based on flimsy, shoddy research, and the claims people have made to defend it have often been risible (for example, 10x engineers have dark backgrounds, are rarely seen doing user-interface work, and are poor mentors and interviewers) or blatantly double down on stereotypes (“we look for young dudes in hoodies who remind us of Mark Zuckerberg”). But damn if it doesn’t resonate with experience. It just feels true.
I don’t have a problem with the idea that there are engineers who are 10 times as productive as other engineers. The problems I do have are twofold.
Measuring productivity is fraught and imperfect
First, how are you measuring productivity? I have a problem with the implication that there is One True Metric of productivity that you can standardize and sort people by. Consider the magnitude of skills and experiences at play:
What adjacent skills, market segments, and product subject matter expertise are you drawing upon? Design, security, compliance, data visualization, marketing, finance?
What stage of development? What scale of usage? Are you writing for a Mars rover, or shrink-wrapped software you can never change?

Also, people and their skills and abilities are not static. At one point, I was a pretty good database reliability engineer. Maybe I was even a 10x database engineer then, but certainly not now. I haven’t debugged a query plan in years.
“10x engineer” makes it sound like productivity is an immutable characteristic of a person. But someone who is a 10x engineer in a particular skill set is still going to have infinitely more areas where they are average (or below average). I know a lot of world-class engineers, but I’ve never met anyone who is 10 times better than everyone else across the board, in every situation.
Engineers don’t own software, teams own software
Second, and even more importantly: So what? Individual engineers don’t own software; engineering teams own software. It doesn’t matter how fast an individual engineer can write software. What matters is how fast the team can collectively write, test, review, ship, maintain, refactor, extend, architect, and revise the software that they own.

Everyone uses the same software delivery pipeline. If it takes the slowest engineer at your company five hours to ship a single line of code, it’s going to take the fastest engineer at your company five hours to ship a single line of code. The time spent writing code is typically dwarfed by the time spent on every other part of the software development lifecycle.
If you have services or software components that are owned by a single engineer, that person is a single point of failure.
I’m not saying this should never happen. It’s quite normal at startups to have individuals owning software, because the biggest existential risk that you face is not moving fast enough and going out of business. But as you start to grow as a company, ownership needs to get handed over to a team. Individual engineers get sick, go on vacation, and leave the company, and the business has to be resilient to that.
When a team owns the software, then the key job of any engineering leader is to craft a high-performing engineering team. If you must 10x something, build 10x engineering teams.
The best engineering organizations are the ones where normal engineers can do great work
When people talk about world-class engineering organizations, they often have in mind teams that are top-heavy with staff and principal engineers, or that recruit heavily from the ranks of former Big Tech employees and top universities. But I would argue that a truly great engineering org is one where you don’t have to be one of the “best” or most pedigreed engineers to have a lot of impact on the business. I think it’s actually the other way around. A truly great engineering organization is one where perfectly normal, workaday software engineers, with decent skills and an ordinary amount of expertise, can consistently move fast, ship code, respond to users, understand the systems they’ve built, and move the business forward a little bit more, day by day, week by week.

Anyone can build an org where the most experienced, brilliant engineers in the world can create products and make progress. That’s not hard. And putting all the spotlight on individual ability has a way of letting your leaders off the hook from doing their jobs. It is a huge competitive advantage if you can build systems where less experienced engineers can convert their effort and energy into product and business momentum. And the only meaningful measure of productivity is whether or not you are moving the business materially forward.
A truly great engineering org also happens to be one that mints world-class software engineers. But I’m getting ahead of myself here.
Let’s talk about “normal” engineers
A lot of technical people got really attached to our identities as smart kids. The software industry tends to reflect and reinforce this preoccupation at every turn, as seen in Netflix’s claim that “we look for the top 10 percent of global talent” or Coinbase’s desire to “hire the top 0.1 percent.” I would like to challenge us to set that baggage to the side and think about ourselves as normal people.

It can be humbling to think of yourself as a normal person. But most of us are, and there is nothing wrong with that. Even those of us who are certified geniuses on certain criteria are likely quite normal in other ways—kinesthetic, emotional, spatial, musical, linguistic, and so on.

Software engineering both selects for and develops certain types of intelligence, particularly around abstract reasoning, but nobody is born a great software engineer. Great engineers are made, not born.
Build sociotechnical systems with “normal people” in mind
When it comes to hiring talent and building teams, yes, absolutely, we should focus on identifying the ways people are exceptional. But when it comes to building sociotechnical systems for software delivery, we should focus on all the ways people are normal.

Normal people have cognitive biases—confirmation bias, recency bias, hindsight bias. We work hard, we care, and we do our best; but we also forget things, get impatient, and zone out. Our eyes are inexorably drawn to the color red (unless we are colorblind). We develop habits and resist changing them. When we see the same text block repeatedly, we stop reading it.
We are embodied beings who can get overwhelmed and fatigued. If an alert wakes us up at 3 a.m., we are much more likely to make mistakes while responding to that alert than if we tried to do the same thing at 3 p.m. Our emotional state can affect the quality of our work.
When your systems are designed to be used by normal engineers, all that excess brilliance they have can get poured into the product itself, instead of wasting it on navigating the system.
Great engineering orgs mint world-class engineers
A great engineering organization is one where you don’t have to be one of the best engineers in the world to have a lot of impact. But—rather ironically—great engineering orgs mint world-class engineers like nobody’s business.
The best engineering orgs are not the ones with the smartest, most experienced people in the world. They’re the ones where normal software engineers can consistently make progress, deliver value to users, and move the business forward.
Places where engineers can have a large impact are a magnet for top performers. Nothing makes engineers happier than building things, solving problems, and making progress.

If you’re lucky enough to have world-class engineers in your organization, good for you! Your role as a leader is to leverage their brilliance for the good of your customers and your other engineers, without coming to depend on their brilliance. After all, these people don’t belong to you. They may walk out the door at any moment, and that has to be okay.

These people can be phenomenal assets, assuming they can be team players and keep their egos in check. That’s probably why so many tech companies seem to obsess over identifying and hiring them, especially in Silicon Valley.
But companies attach too much importance to finding these people after they’ve already been minted, which ends up reinforcing and replicating all the prejudices and inequities of the world at large. Talent may be evenly distributed across populations, but opportunity is not.
Don’t hire the “best” people. Hire the right people
We place too much emphasis on individual agency and characteristics, and not enough on the systems that shape us and inform our behaviors.
I believe a whole slew of issues (candidates self-selecting out of the interview process, diversity of applicants, and more) would be improved simply by shifting the focus of hiring away from this inordinate emphasis on hiring the best people and realigning around the more reasonable and accurate right people.
It’s a competitive advantage to build an environment where people can be hired for their unique strengths, not their lack of weaknesses; where the emphasis is on composing teams; where inclusivity is a given both for ethical reasons and because it raises the bar for performance for everyone. Inclusive culture is what meritocracy depends on.
This is the kind of place that engineering talent is drawn to like a moth to a flame. It feels good to move the business forward, sharpen your skills, and improve your craft. It’s the kind of place that people go when they want to become world-class engineers. And it tends to be the kind of place where world-class engineers want to stick around and train the next generation.
EFFecting Change: Is There Hope for Social Media?
Electronic Frontier Foundation
www.eff.org
2025-03-13 21:21:52
Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech.
EFFecting Change Livestream Series:
Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!
Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?
Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.
Senate Dems Look to Give Trump Everything He Wants After a “Fake Fight” on Spending Bill
Intercept
theintercept.com
2025-03-13 21:11:20
If Senate Democrats oppose Trump’s budget, why are they considering providing Republicans with the needed votes to invoke cloture?
In the first real showdown since the election, Senate Democrats seem poised to give President Donald Trump and Elon Musk everything they want while pretending to oppose the Republican spending bill. The proposed maneuver would allow Democrats to feign opposition to the new administration’s power grab, while also giving the GOP enough votes to push their agenda through.
Senate Republicans do not have enough votes for cloture, a legislative procedure to end debate and move for a vote. Republicans only have a 53-47 seat majority, and 60 votes are needed to invoke cloture. If Democrats don’t provide enough votes for cloture, the government is almost certainly headed toward a shutdown Friday.
Despite promising a fight, Senate Democrats are reportedly considering a vote swap. In exchange for providing enough votes to end debate on the GOP spending bill, Senate Republicans would allow a Democratic amendment to continue to fund the government at current levels for the next month to come to the floor, known as a continuing resolution, or CR. The Democratic amendment is almost certain to fail.
Democrats opposed to the GOP bill would then be allowed to vote “no” on the broader spending bill, allowing them to seem like they’re voting against the GOP measure despite providing the needed support to bring it to the floor. Democrats have criticized the GOP plan, which would give Musk and Trump more power to slash the federal government and also cut $1 billion from Washington, D.C.’s local budget.
Nina Smith, a Democratic strategist, said this kind of wheeling and dealing is exactly why Americans have lost faith in the party.
“What has been so detrimental to the Democratic Party brand is exactly the sort of threading the needle that we’re seeing here,” said Smith. “We really have to focus that energy and use it in a way that’s very strategic so that we rebuild the trust of the American people because they don’t trust us, and it’s [because of] this sort of doublespeak, backroom deals.”
“Democrats weren’t elected to put up a fake fight.”
Some Democratic lawmakers have called out these plans, arguing that Democrats weren’t put in office to pretend to fight.
“Some Senate Democrats are being tempted to pretend to fight the Trump-Musk funding bill today, then quietly agree to give up on blocking it,” wrote Rep. Greg Casar, D-Texas, in a statement to The Intercept. “That would be a disastrous decision. Voting for cloture on a bill that allows Musk and Trump to steal from taxpayers is the same as voting to allow Musk and Trump to steal from taxpayers. Everything is on the line. Democrats weren’t elected to put up a fake fight.”
Rep. Alexandria Ocasio-Cortez, D-N.Y., also weighed in, arguing that voters wouldn’t be fooled. “I hope Senate Democrats understand there is nothing clever about setting up a fake failed 30-day CR first to turn around & vote for cloture on the GOP spending bill,” wrote Ocasio-Cortez on X. “It won’t trick voters; it won’t trick House members. People will not forget it.”
Even Senate Democrats who support passing the spending measure have called out party leadership for “performative resistance.”
“The weeks of performative ‘resistance’ from those in my party were limited to undignified antics. Voting to shut the government down will punish millions or risk a recession. I disagree with many points in the CR, but I will never vote to shut our government down,” posted Sen. John Fetterman, D-Pa., on X.
Republicans have also called out Senate Minority Leader Chuck Schumer, D-N.Y., agreeing with Fetterman that this is bad political theater.
“[Fetterman] is absolutely correct, because you got to remember when it was Leader Schumer, he had all seven bills sitting on his desk since January 31,” Sen. Markwayne Mullin, R-Okla., told The Intercept. “He chose not to bring up one spending bill. And now, he wants us to believe he’ll do it in 30 days. It’s not going to happen.”
Smith, the Democratic strategist, argued that part of what’s happening here is leadership acting as if it’s business as usual in Washington. “There’s a tension and a struggle between the way folks know how to do things in Washington and the reality we’re seeing on the ground,” said Smith. “I think Democrats aren’t exercising the only power they have, and that sucks, but I think there’s also room for us to get creative and think differently about how to do these things.”
New SuperBlack ransomware exploits Fortinet auth bypass flaws
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 20:57:33
A new ransomware operator named 'Mora_001' is exploiting two Fortinet vulnerabilities to gain unauthorized access to firewall appliances and deploy a custom ransomware strain dubbed SuperBlack.
The two vulnerabilities, both authentication bypasses, are CVE-2024-55591 and CVE-2025-24472, which Fortinet disclosed in January and February, respectively.
Confusingly, on February 11, Fortinet added CVE-2025-24472 to their January advisory, which led many to believe it was a newly exploited flaw. However, Fortinet told BleepingComputer that this bug was also fixed in January 2024 and was not exploited.
"We are not aware of CVE-2025-24472 ever being exploited," Fortinet told BleepingComputer at the time.
However, a new report by Forescout researchers says they discovered the SuperBlack attacks in late January 2025, with the threat actor utilizing CVE-2025-24472 as early as February 2, 2025.
"While Forescout itself did not directly report the 24472 exploitation to Fortinet, as one of the affected organizations we worked with was sharing findings from our investigation with Fortinet's PSIRT team," Forescout told BleepingComputer.
"Shortly afterward, Fortinet updated their advisory on February 11 to acknowledge CVE-2025-24472 as actively exploited."
BleepingComputer contacted Fortinet to clarify this point, but we are still waiting for a response.
SuperBlack ransomware attacks
Forescout says the Mora_001 ransomware operator follows a highly structured attack chain that doesn't vary much across victims.
First, the attacker gains 'super_admin' privileges by exploiting the two Fortinet flaws using WebSocket-based attacks via the jsconsole interface or sending direct HTTPS requests to exposed firewall interfaces.
Next, they create new administrator accounts (forticloud-tech, fortigate-firewall, adnimistrator) and modify automation tasks to recreate those if removed.
After this, the attacker maps the network and attempts lateral movement using stolen VPN credentials and newly added VPN accounts, Windows Management Instrumentation (WMIC) & SSH, and TACACS+/RADIUS authentication.
Mora_001 steals data using a custom tool before encrypting files for double extortion, prioritizing file and database servers and domain controllers.
After the encryption process, ransom notes are dropped on the victim's system. A custom-built wiper called 'WipeBlack' is then deployed to remove all traces of ransomware executable to hinder forensic analysis.
SuperBlack ransom note
Source: Forescout
SuperBlack's link to LockBit
Forescout has found extensive evidence indicating strong links between the SuperBlack ransomware operation and LockBit ransomware, although the former appears to act independently.
The first element is that the SuperBlack encryptor [VirusTotal] is based on LockBit's 3.0 leaked builder, featuring identical payload structure and encryption methods, but with all original branding stripped.
Relationship diagram based on the available evidence
Source: Forescout
Secondly, SuperBlack's ransom note includes a TOX chat ID linked to LockBit operations, suggesting that Mora_001 is either a former LockBit affiliate or a former member of its core team managing ransom payments and negotiations.
The third element suggesting a link is the extensive IP address overlap with previous LockBit operations. WipeBlack has also been leveraged by BrainCipher ransomware, EstateRansomware, and SenSayQ ransomware, all tied to LockBit.
Forescout has shared an extensive list of indicators of compromise (IoC) linked to SuperBlack ransomware attacks at the bottom of its report.
Windows Notepad to get AI text summarization in Windows 11
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 20:45:55
Microsoft is now testing an AI-powered text summarization feature in Notepad and a Snipping Tool "Draw & Hold" feature that helps draw perfect shapes.
Dubbed "Summarize," the new and highly-requested Notepad tool is rolling out today to Windows 11 Insiders in the Canary and Dev Channels on Windows 11, who have upgraded to Notepad version 11.2501.29.0.
"To get started, select the text you want to summarize, then right-click and choose Summarize, select Summarize from the Copilot menu, or use the Ctrl + M keyboard shortcut,"
said
Dave Grochocki, Principal Group Product Manager for Windows Inbox Apps.
"Notepad will generate a summary of the selected text, providing a quick way to condense content. You can experiment with different summary lengths to refine the output."
Those who don't want Notepad's AI options to appear anywhere in the interface can disable them from the app's settings.
Grochocki added that you must sign in to your Microsoft personal account to use the new Summarize tool, which will use the AI credits linked to your Microsoft 365 Personal, Family, and Copilot Pro subscription.
Notepad Summarize (Microsoft)
The company also started testing a new "Recent Files" option that can be accessed from the Edit menu, letting you reopen recently closed documents directly within Notepad.
If you don't want a history of closed files, you can also clear the list of recent files at any time or disable the features from settings.
Today, it also added a "Draw & Hold" feature in Snipping Tool version 11.2502.18.0 to help users draw lines, arrows, rectangles, or ovals more precisely using the built-in pen tool.
"These features in Snipping Tool and Notepad are beginning to rollout in these updates, so it may not be available to all Insiders in the Canary and Dev Channels just yet as we plan to monitor feedback and see how it lands before pushing them out to everyone," Grochocki added.
This is part of a broader push to include AI features in the Windows 11 Notepad text editing application. For instance, in November, Microsoft also added a text rewriting tool to Notepad 11.2410.15.0, dubbed "Rewrite," which was previously known as CoWriter.
Just as its name says, Rewrite helps automatically rewrite content using generative AI, which includes rephrasing sentences, adjusting the tone, and modifying the length of your content.
An online book-in-progress by
Charles Petzold
wherein is explored the utility, history,
and ubiquity of that marvelous invention,
logarithms
including what the hell they are;
with some demonstrations of their
primary historical application in
plane and spherical trigonometry.
This is a work in progress and nowhere close to completion
Some paragraphs are coherent; others are not.
Sometimes paragraphs are only a phrase or a note to myself.
Nothing has been professionally edited.
It is best viewed on a desktop or laptop computer
I've been developing the pages in Edge using Visual Studio Code
running under Windows 11 on a Microsoft Surface Pro 9.
I've also been testing the pages in Chrome on that machine, and
in Safari on a Mac Mini running Sequoia, and
in Chrome (version 126, it says) on an Asus Chromebook.
However, my iPad Mini running iOS 12.5.7 has several
problems with these webpages,
and the pages often become quite awkward on phones.
Xata Agent is an open source agent that monitors your database, finds root causes of issues, and suggests fixes and improvements. It's like having a new SRE hire in your team, one with extensive experience in Postgres.
Letting the agent introduce itself:
Hire me as your AI PostgreSQL expert. I can:
watch logs & metrics for potential issues.
proactively suggest configuration tuning for your database instance.
troubleshoot performance issues and make indexing suggestions.
troubleshoot common issues like high CPU, high memory usage, high connection count, etc.
and help you vacuum (your Postgres DB, not your room).
More about me:
I am open source and extensible.
I can monitor logs & metrics from RDS & Aurora via Cloudwatch.
I use preset SQL commands. I will never run destructive (even potentially destructive) commands against your database.
I use a set of tools and playbooks to guide me and avoid hallucinations.
I can run troubleshooting statements, like looking into pg_stat_statements, pg_locks, etc. to discover the source of a problem.
I can notify you via Slack if something is wrong.
I support multiple models from OpenAI, Anthropic, and Deepseek.
Past experience:
I have been helping the Xata team monitor and operate tons of active Postgres databases.
Demo
Here is an under-four-minute walkthrough of the agent in action:
We provide docker images for the agent itself. The only other dependency is a Postgres database in which the agent will store its configuration, state, and history.
We provide a docker-compose file to start the agent and the Postgres database.
Edit the
.env.production
file in the root of the project. You need to set the
PUBLIC_URL
and the API key for at least OpenAI.
Start a local instance via docker compose:
Open the app at http://localhost:8080 (or the public URL you set in the .env.production file) and follow the onboarding steps.
We have a more detailed guide on how to deploy via docker-compose on an EC2 instance.
For authentication, you can use your own OAuth provider.
Development
Go to the apps/dbagent directory and follow the instructions in the README.
Extensibility
The agent can be extended via the following mechanisms:
Tools: These are functions that the agent can call to get information about the database. They are written in TypeScript; see this file for their description. A rough, hypothetical sketch of such a function follows this list.
Playbooks: These are sequences of steps that the agent can follow to troubleshoot an issue. They are simply written in English. The pre-defined playbooks are here.
Integrations: For example, the AWS and Slack integrations. They contain configuration and UI widgets.
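To make the idea concrete, here is a rough, hypothetical sketch of what a read-only tool could look like as a plain TypeScript function querying Postgres statistics. The names, shape, and registration mechanism are illustrative assumptions, not the project's actual interface; the linked file defines the real one.

// Hypothetical sketch of a read-only "tool": a typed function the agent could call.
// Everything here is illustrative only; the real interface lives in apps/dbagent.
import { Client } from "pg";

// Lists the five user tables with the most dead rows, a common starting point
// when deciding whether something needs vacuuming.
export async function getMostBloatedTables(connectionString: string) {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const res = await client.query(
      `SELECT relname, n_dead_tup
         FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 5`
    );
    return res.rows; // e.g. [{ relname: "orders", n_dead_tup: 123456 }, ...]
  } finally {
    await client.end();
  }
}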
Status / Roadmap
While it's still early days, we are using the agent ourselves in our day-to-day operations work at Xata.
Add eval testing for the interaction with LLMs (
#38
)
Approval workflow:
Add an approval workflow for the agent to run potentially dangerous statements
Allow configuration of the tools that can be defined per monitoring schedule
While the Agent is by its nature primarily an open-source project that you self-host, we are also working on a cloud version. The advantage of the cloud version is that some integrations are easier to install. If you are interested in the cloud version, please
sign up on the waitlist here
.
How Mahmoud Khalil’s Attorneys Plan to Fight for his Release
Intercept
theintercept.com
2025-03-13 19:22:59
Lawyers trying to free the Columbia University activist point to a legal exception undermining the Trump administration’s argument.
The post How Mahmoud Khalil’s Attorneys Plan to Fight for his Release appeared first on The Intercept....
Since the arrest
of Mahmoud Khalil, his attorneys have fought any suggestion that this case is about whether their client committed a crime or is a threat to national security. Instead, they say, it’s about the U.S. government stifling Khalil’s advocacy for Palestine.
Even the government agrees it’s not about committing a crime.
According to
court
filings
obtained by The Intercept, the government’s main argument against Khalil rests on a civil law provision within the
Immigration and Nationality Act
, which governs the country’s immigration and citizenship system. The provision, known as Section 237(a)(4)(C)(i), gives the secretary of state the authority to request the deportation of an individual who is not a U.S. citizen, if they have “reasonable ground to believe” the individual’s presence in the country hurts the government’s foreign policy interests.
Department of Homeland Security agents arrested Khalil, a Syrian-born Palestinian whose family is from Tiberias, in the lobby of his Columbia University apartment on Saturday. After initially alleging they had revoked his student visa, they said they had instead revoked Khalil’s green card. Authorities then secretly transported Khalil, a U.S. permanent resident, from New York to New Jersey, then to an immigration detention facility in Louisiana where judges are known to be more favorable to the government’s legal arguments.
In a
notice
for Khalil to appear in immigration court in Louisiana where he remains jailed, the government cites the specific provision and states: “The Secretary of State has determined that your presence or activities in the United States would have serious adverse foreign policy consequences for the United States.” Government lawyers have not, however, provided any evidence, in court filings or hearings, to support their claim. Khalil refused to sign the notice.
Khalil’s legal team plans to fight the government’s “foreign policy” provision in both the push for his release in federal court and in his deportation proceedings in immigration court, said Baher Azmy, legal director of the Center for Constitutional Rights and a member of Khalil’s legal team. A Manhattan federal district court judge temporarily halted Khalil from being deported while his lawyers continue to push for his release and transfer back to New York, where his attorneys can represent him more easily and he can be closer to his wife who is eight months pregnant.
Khalil’s attorneys plan to contest his detention on free speech grounds under the First Amendment and by challenging the government’s use of the “foreign policy” provision. By invoking the “foreign policy” provision, the Trump administration is making a clear statement not just about its foreign policy goals but also about free speech, Azmy said.
“The United States government thinks Mahmoud’s speech in favor of Palestinian human rights and to end the genocide is not only contrary to U.S. foreign policy, which is something in itself, but that that dissent provides grounds for arrest, detention, and deportation,” Azmy said. “It’s an astonishing claim.”
Central to their challenge in court will likely be another provision within the Immigration and Nationality Act that exempts noncitizens facing deportation under the government’s “foreign policy” provision. The
exception
, known as Section 212(a)(3)(C)(iii), says that an individual cannot be deported under the “foreign policy” provision cited by the government because of their “past, current, or expected beliefs, statements, or associations, if such beliefs, statements, or associations would be lawful within the United States.”
“The government doesn’t get to decide what you can talk about and what you cannot talk about based on whether or not it helps the U.S.”
In other words, since Khalil’s past activities were protected free speech under the First Amendment, he should not be deported under the “foreign policy” provision cited by the government, Azmy said. The Department of Homeland Security has said it arrested Khalil, a lead negotiator for Palestine solidarity protesters at Columbia, for having “led activities aligned to Hamas.” But even if such alignments exist, advocacy is protected activity in the U.S., Khalil’s attorneys maintain.
“If there is constitutionally protected speech,” Azmy said, “it doesn’t matter if it goes adverse to the foreign policy interests of the United States — it’s still protected. The government doesn’t get to decide what you can talk about and what you cannot talk about based on whether or not it helps the U.S.”
Khalil’s legal team said the “foreign policy” provision giving the secretary of state the ability to request a deportation is rarely used, and when it has been invoked it has been to deny visas for foreign officials who have interfered with democracy in their respective countries or officials with a poor human rights record. And the exception to the provision that prohibits deportations exists to ensure that it would not be used to specifically crack down on people’s speech, Azmy said.
“Any kind of removal proceeding because the government disagrees with a political perspective would be unlawful,” Azmy said. “So Congress wrote that into the statute, mindful of what the Constitution requires.”
There is an obvious counterargument for government lawyers seeking to deport Khalil. They can turn to an exemption within the exemption that still gives the secretary of state leeway to further argue for deportation if the State Department can provide “a facially reasonable and bona fide determination” that the individual’s presence and activities in the U.S. “compromises” U.S. foreign policy interest, according to the provision and
previous immigration case law
.
Secretary of State Marco Rubio has said Khalil’s case “is not about free speech” but about “people that don’t have a right to be in the United States to begin with.”
“I think being a supporter of Hamas and coming into our universities and turning them upside down and being complicit in what are clearly crimes of vandalization, complicit in shutting down learning institutions,” he said in Ireland on Wednesday, after a visit to Saudi Arabia for ceasefire talks with Ukrainian officials. “If you told us that’s what you intended to do when you came to America, we would have never let you in. And if you do it once you get in, we’re going to revoke it and kick you out.”
Rubio’s words rang hollow to longtime New Jersey-based immigration attorney Robert Frank, one of the few attorneys to have represented a client in the U.S. who faced deportation under the same “foreign policy” provision invoked in Khalil’s case. During his 50 years of practice as an immigration attorney, Frank said, a case decided in 1999 was the only time he had seen the government use the provision.
In the late 1990s, Frank represented former Mexican attorney general Mario Ruiz Massieu, who had fled Mexico and entered the U.S. on a temporary visa to avoid a slew of criminal charges, ranging from money laundering to embezzlement and torture. The U.S. government had ordered his deportation by using the “foreign policy” provision after Mexico requested his return, following several failed extradition attempts. Then-Secretary of State Warren Christopher argued that keeping him would strain the U.S. relationship with Mexico. The government eventually won the case, and in 1999 Massieu was ordered to be deported.
“You can see the clear foreign policy connection — the government of Mexico is asking the U.S. to get involved — whereas this present case [with Khalil], you don’t have that at all,” Frank told The Intercept.
“What is the foreign policy effect of this fellow talking pro-Palestinian or pro-Hamas — how does that affect the foreign policy of the United States?” Frank said, adding, “Israel may not be happy with saying things in favor of Hamas,” but that’s not grounds for deportation under the provision.
Frank challenged the provision in his client’s 1990s case, arguing that an immigration court judge should preside over whether his client would be deported or not, rather than the secretary of state alone. An immigration judge sided with Frank and halted the deportation. However, the Board of Immigration Appeals, which is under the Department of Justice,
overturned the decision
upon government appeal.
While government attorneys have yet to argue their claim under the “foreign policy” provision in court, the White House has made unsubstantiated claims linking Khalil with Hamas, the Palestinian militant and political group that governs Gaza, which the U.S. includes on its Foreign Terrorist Organizations list.
White House press secretary Karoline Leavitt told reporters on Tuesday that Khalil had “organized group protests that not only disrupted college campus classes and harassed Jewish-American students and made them feel unsafe on their own college campus, but also distributed pro-Hamas propaganda flyers with the logo of Hamas.”
“We have a zero tolerance policy for siding with terrorists,” she said.
Leavitt added that the Department of Homeland Security had provided her copies of the flyers, which the White House press office later privately shared with the conservative tabloid the New York Post. The flyers include the cover of a pamphlet published by Hamas, and
widely
shared online
, titled “Our Narrative: Operation Al-Aqsa Flood,” and another flyer showing a boot crushing the Star of David with the message “Crush Zionism.” Leavitt and the Post made the accusations without offering evidence that ties Khalil himself to the flyers.
Azmy dismissed the claims and said government attorneys have not introduced the flyers as evidence in their case against Khalil and haven’t referred to them in court. Even if campus protesters had passed out those flyers, such actions would be protected under the First Amendment, he said.
“We don’t concede for a moment he did any of this,” Azmy added. “And even if the Trump administration is choosing to deport people for flyers, then we have much bigger problems on our hands — it’s a form of tacky authoritarianism.”
During a press conference outside a Manhattan courthouse where a hearing took place Wednesday for Khalil’s case, hundreds of protesters had gathered. Thousands more marched across the country throughout the last several days, demanding Khalil’s release.
Ramzi Kassem, one of Khalil’s attorneys and the founding director of Creating Law Enforcement Accountability & Responsibility at the City University of New York, said the “foreign policy” provision “is not intended to be used to silence pro-Palestinian speech, or any other speech, that the government happens to dislike.”
“This case is not going to set the precedent that the government wants it to set, whether it’s federal court or immigration court,” he said before a crowd of protesters. “And you already know that just by looking behind you that it’s not having the effect that the government wants it to have with people’s solidarity with Palestinians.”
Choi: announcing Casual Make
Linux Weekly News
lwn.net
2025-03-13 19:10:59
Charles Choi has announced the release of Casual Make: a menu-driven interface, implemented as part of the Casual suite of tools, for Makefile Mode in GNU Emacs.
Emacs supports makefile editing with make-mode which has a mix of
useful and half-baked (though thankfully obsoleted in 30.1)
c...
Emacs supports makefile editing with make-mode which has a mix of
useful and half-baked (though thankfully obsoleted in 30.1)
commands. It is from this substrate that I'm happy to announce the
next Casual user interface:
Casual Make
.
Of particular note to Casual Make is its attention to authoring and
identifying automatic variables whose arcane syntax is
un-memorizable. Want to know what $> means? Just select it in the
makefile and use the . binding in the Casual Make menu to identify
what it does in the mini-buffer.
Casual Make is part of
Casual
2.4.0
, released on March 12 and is available from
MELPA
. The 2.4.0 update to
Casual also includes documentation in the Info format for the first time.
Intel shares surge as investors cheer appointment of new CEO Lip-Bu Tan
Guardian
www.theguardian.com
2025-03-13 18:54:40
Incoming head is tasked with reviving chipmaker’s fortunes after it experienced heavy losses over the past several yearsNever miss global breaking news. Download our free app to keep up with key stories in real time.Shares of Intel surged nearly 15% on Thursday, as Wall Street cheered its decision t...
Shares of
Intel
surged nearly 15% on Thursday, as Wall Street cheered its decision to name former board member Lip-Bu Tan as CEO. Tan left in August over differences about the chipmaker’s direction after the company suffered several years of market underperformance.
Tan will be tasked with reviving the company’s fortunes after it missed out on the artificial intelligence-driven semiconductor boom while plowing billions of dollars into building out its chip-making business.
Intel
has posted several quarters of market share losses in data centers and PCs, as well as billion-dollar losses in its manufacturing business, and over the past five years, the stock has lost about 60% of its value, a period of time when the Nasdaq Composite Index and S&P 500 have both more than doubled.
“Tan in as CEO at Intel was as good as stakeholders could have hoped for,” said TD Cowen analysts, noting that he has “deep relationships” across the chip ecosystem that could draw customers to the company’s contract manufacturing business. Tan will take the helm next week, three months after Intel ousted CEO Pat Gelsinger. Tan had been brought on to the board two years earlier to help turn the company around, but left due to disagreements over the size of the company’s workforce and its culture. Skepticism about Intel’s future has deepened in recent months amid reports that rivals, including Broadcom, were evaluating Intel’s chip design and marketing business, while TSMC has separately studied controlling some or all of Intel’s plants.
Analysts expect Tan to follow Gelsinger in keeping the chip design and manufacturing operations together, a plan that Tan hinted at in a letter to employees by vowing to make Intel a top foundry, an industry term for a contract chip manufacturer. Some analysts have said the foundry business may find it difficult to draw orders from chip designers wary of entrusting production to a rival.
But Tan, who oversaw more than a decade of strong growth at Cadence Design Systems, an Intel supplier and chip-design software firm, enjoys strong credibility as a “neutral party” that could help Intel overcome some of the challenges, analysts said.
Stacy Rasgon of Bernstein Research also said Tan’s previous two-year tenure on the Intel board would aid his efforts.
That “should have given him a pretty good idea of where all the bodies are buried, and he should be much more realistic in his evaluations and outlook than prior leadership (it was unbridled optimism that proved to be Pat’s undoing)”, Rasgon said.
Still, any turnaround is expected to take years, as hinted by Tan in his letter to employees. Intel’s market value has remained stuck below $100bn for the first time in three decades after shares slumped 60% last year, and its Gaudi AI chips have also missed sales targets.
Microsoft says button to restore classic Outlook is broken
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 18:51:21
Microsoft is investigating a known issue that causes the new Outlook email client to crash when users click the "Go to classic Outlook" button, which should help them switch back to the classic Outlook. [...]...
Microsoft is investigating a known issue that causes the new Outlook email client to crash when users click the "Go to classic Outlook" button, which should help them switch back to the classic Outlook.
"Some users have reported that the 'go back to classic Outlook' button in new Outlook for Windows does not open a support article on how to download classic Outlook for Windows," the company says in a Wednesday support document.
"The application just closes, and classic Outlook is not installed," as those clicking the button would expect.
While looking into the issue, Redmond also provides those affected with a temporary workaround, which requires them to install the classic client by clicking the store link in
this support document
.
If you're using a work or school account and can't install classic Outlook using any of the steps described above, you must reach out to your IT admin for assistance.
"Go to classic Outlook" button (BleepingComputer)
Microsoft
announced in December
that the new Microsoft 365 desktop client app installed on Windows devices will also deploy the new Outlook in addition to the classic Outlook client. It also advised IT admins to edit their configurations and exclude the new Outlook app before new installations if it's not needed.
You can find more details on how to block users from connecting to new Outlook and controlling other features and capabilities like connecting personal accounts to the new client on
Microsoft's support website
.
Redmond also
began force-installing the new email client
on Windows 10 systems in January, starting with the KB5050081 non-security preview update. This move was first revealed in early January when the company said its new email client
would be automatically installed
for all Windows 10 users installing the February Windows security updates.
Last month, Microsoft
fixed another known issue
that broke email and calendar drag-and-drop in Outlook after installing recent updates on Windows 24H2 systems.
Since the beginning of the year, it has addressed other Outlook issues, including one
that causes classic Outlook to crash
when writing, replying to, or forwarding an email, and
another one
that led to classic Outlook and Microsoft 365 application crashes on Windows Server 2016 or Windows Server 2019 systems.
If Protesting Tesla Is Domestic Terrorism, Then What Demonstration Against Musk Isn’t
Intercept
theintercept.com
2025-03-13 18:40:08
In a bid to boost Elon Musk’s car company, Trump did a live White House ad and threatened Tesla protesters would “go through hell.”
The post If Protesting Tesla Is Domestic Terrorism, Then What Demonstration Against Musk Isn’t appeared first on The Intercept....
During a bizarre
live ad for Elon Musk’s car company on the White House lawn Tuesday, President Donald Trump said people protesting at Tesla dealerships around the country would be treated as
domestic terrorists
.
A reporter asked Trump what he would do about the Tesla protests, noting that some commentators had suggested that protesters “should be labeled domestic terrorists.”
Trump leapt at the opening. “I will do that. I’ll do it. I’m going to stop them. We catch anybody doing it because they’re harming a great American company,” he said. “We’re going to catch them. And let me tell you, you do it to Tesla and you do it to any company, we’re going to catch you and you’re going to — you’re going to go through hell.”
The remarks came in the form of an unprecedented display of salesmanship with the White House as a backdrop, part press conference, part advertisement to improve stock performance for the president’s billionaire righthand man amid controversy and tumbling markets.
On Wednesday, Rep. Marjorie Taylor Greene, R-Ga., urged the Department of Justice and the FBI to investigate Tesla protests. Forbes
reported
that in doing so, Taylor Greene — who owns Tesla stock — may have violated House ethics rules.
Trump cannot summarily declare a class of people exercising their First Amendment rights as domestic terrorists. He can try to encourage domestic terror charges, which can be terror crimes or sentencing enhancements on other felonies, but the efforts would be unlikely to stand up in court.
“It’s absurd,” said organizer Alice Hu, executive director at the youth-led climate justice group Planet Over Profit. “It’s obvious, both legally and from a common sense perspective, that First Amendment-protected peaceful protest is not domestic terrorism.”
The response from Trump and
Musk
, Hu said, is a sign that the protests are working. Tesla’s stock prices tanked. And Musk, who is
spearheading
Trump’s
controversial
effort to
gut
the federal
government
, is using his social media platform Twitter to target individual organizers and grassroots groups, Hu said.
“They’re trying to label this peaceful movement an illegal, violent, whatever, domestic terrorist movement,” she said. “The reality is that it’s DOGE’s illegal rampage destroying the government, taking over the government. That’s the real problem here.”
The president is immune from federal ethics regulations prohibiting the use of public office for private gain. However, it’s clear that after campaigning on being a champion of the working class, Trump is using the White House to advertise the company of his
largest single donor
and de facto appointee amid tanking Tesla stocks.
“There aren’t laws that apply to Trump here, and with Musk, it’s still not 100 percent clear how he fits in with the government,” said Jordan Libowitz, vice president of communications at Citizens for Responsibility and Ethics in Washington. “But beyond that, there’s a huge optics problem, where it looks like the president of the United States is trying to stop the stock slide of his election’s biggest individual funder.”
Police have shown
up in force to barricade Tesla dealerships around the country since, amid Musk’s de facto takeover of the government, the grassroots #TeslaTakedown movement began demonstrating against the electric car maker last month. An
image
from Chicago showed more than 20 officers standing in a line blocking the entrance to a Tesla storefront last week.
Some 350 people attended protests at the Tesla showroom in Manhattan on Saturday. New York Police Department officers arrested six people; one was charged with resisting arrest, and the other five were released.
Hu said the police showed up with several massive vans from the NYPD’s Strategic Response Group, a counterterror unit that deals with civil unrest. In 2021, The Intercept reported that NYPD officer
trainings
— and
manuals
for the SRG in particular — included few directives about protecting demonstrators’ First Amendment rights.
“In response to this outpouring of turnout, the police had at least five or six Strategic Response Group vans. Those are the vans that are ready to transport prisoners from some kind of location to the precinct. Each of those vans fits at least 10 people or so in there,” Hu said. “They were totally preparing for some kind of event where they might have needed to arrest dozens of people.”
An NYPD spokesperson confirmed that five people arrested on Saturday were released but did not respond to questions about how many officers were dispatched to the scene.
At least nine people have been arrested at protests at Tesla dealerships and showrooms in New York. It’s not clear how many more arrests have been made around the country.
Undeterred by the draconian response of the Trump administration, an organizer in California said she and her colleagues decided to start their own Tesla protests after seeing others pop up around the country. Lara Starr is part of a small group called Solidarity Sundays that writes postcards as a form of protest. “But after the election, postcards didn’t seem like enough,” Starr told The Intercept.
Organizers with the group anticipated that fewer than 10 people would show up to their first action, but after Indivisible, the protest group that sprang up during Trump’s first term, publicized the plan, 200 people showed up.
“So many people thanked us for giving them something to do,” Starr said. The group plans to keep protesting at Tesla every Sunday.
“Tesla cannot be disentangled from Musk, and Musk cannot be disentangled from Trump and the chaos he is causing, the lives he is ruining, and the dismantling of our democracy and decent society,” she said. “Taking down Tesla’s sales, stock price, and potentially the entire brand is the people speaking up, stepping up, and not letting our voices and values be silenced and ignored.”
The strategy behind the Tesla protests is to take a moment when so many people feel helpless and turn it into an opportunity to exert the power of the masses, Hu said.
“Ordinary people around the country knew that they needed to take matters in their own hands and start organizing,” Hu said. “We are seeing relative inaction from the Democrats and from people with that kind of platform and power.”
Trump and Musk have no grounds to speak about what’s lawful or unlawful, Hu said.
“It’s more than clear to anybody paying attention that this is a wannabe dictator, a wannabe authoritarian administration,” she said, raising the
arrest
and
attempt to deport
Palestinian protester and Columbia University graduate Mahmoud Khalil. “They’re undermining the rule of law to try to disappear a permanent resident.”
In the comments on
my recent post on books on the history of maths
, Fernando Q. Gouvêa jumped in to draw attention to the book
Math Through the Ages
:
A Gentle History for Teachers and Others
, which he coauthored with William P. Berlinghoff. I had not come across this book before; as I noted in my post, I gave up reading general histories of maths long ago. So I went looking and came across the following glowing recommendation:
“
Math Through the Ages
is a treasure, one of the best history of math books at its level ever written. Somehow, it manages to stay true to a surprisingly sophisticated story, while respecting the needs of its audience. Its overview of the subject captures most of what one needs to know, and the 30 sketches are small gems of exposition that stimulate further exploration.” — Glen Van Brummelen, Quest University
Glen Van Brummelen is an excellent historian of maths and regular readers will already know that I’m a very big fan of his books on the history of trigonometry,
which I reviewed here.
Intrigued, I decided to acquire a copy and see if it lives up to Glen’s description. The book that I acquired is the paperback Dover reprint from 2019 of the second edition from Oxton House Publishers, Farmington, Maine; it has a recommended price in the USA of $18, which is certainly affordable for its intended audience of school, college, and university students. The cover blurb says, “Designed for students just beginning their study of the discipline…”
The book distinguishes itself by not being a long continuous narrative covering the chronological progress of the historical development of mathematics, the usual format of one volume histories. Instead, it opens with a comparatively brief such survey, just fifty-eight pages entitled,
The History of Mathematics in a Large Nutshell
, before presenting the thirty sketches mentioned by Van Brummelen in his recommendation. These are brief expositions of single topics from the history of maths, each one conceived and presented as a complete, independent pedagogical unit. The sketches, however, contain references to other units where a term or concept used in the current unit is explained more fully. Normally when reviewing books I leave the discussion of the bibliography till the end of my review, but in this case the bibliography is an integral part of the unit presentation. The bibliography contains one hundred and eighty numbered titles listed alphabetically by the authors’ names. At the end of each sketch there is a list of the numbers of the titles where the reader can find more information on the topic of the sketch. Within the sketches themselves this recommendation system is also applied to single themes. The interaction between sketches and bibliography is a very central aspect of the book’s pedagogical structure.
Given its very obvious limitations (my own far from complete survey of the history of mathematics now occupies more than two metres of bookshelf space, and that is not counting the numerous books on the histories of astronomy, physics, navigation, cartography, technology and so forth that I own which contain elements of the history of mathematics), the Large Nutshell is actually reasonably good. It sketches the development of mathematics in Mesopotamia, Egypt, China, Ancient Greece, India, and the Islamic Empires, before leaping to Europe in the fifteenth and sixteenth centuries and from there in fairly large steps down to the present day. Naturally, a lot gets left out that other authors might have considered important, but it still manages to give a general feeling of the depth and breadth of the history of mathematics. The real work begins with the
Sketches
.
Sketch No.1 is naturally
Keeping Count
:
Writing Whole Numbers
. Our authors deliver competent but brief accounts of the Egyptian, Mesopotamian, Mayan, Roman and Hindu-Arabic number systems but don’t consider the Greek system worth mentioning. They of course emphasise the difficulty of calculating with Roman numerals, as everybody does, but at least mention that the calculation was actually done with a counting board or abacus, without seeming to realise that abacus is the Greek name for a counting board, an error that leads them to a stupid statement in another sketch. They also don’t mention that everybody (Mesopotamians, Greeks, Egyptians, etc.) used counting boards to do calculations. They of course mention this in order to emphasise the advantages of the Hindu-Arabic numerals for doing calculations on paper, which leads to another error later.
Sketch No.2
Reading and Writing Arithmetic
:
The Basic Symbols
now makes a big leap to the introduction of symbolic arithmetic in the Renaissance to replace the previous rhetorical arithmetic. This proceeds chronologically through the various innovations, including, unfortunately, the myth that Robert Recorde was the first to introduce the equals sign. It is OK, but they admit that their brief sketch skips over many, many symbols that were used from time to time.
Almost predictably, Sketch No.3 is
Nothing Becomes a Number
:
The Story of Zero
. We start with the Babylonians, who had a place value number system, and after a long time without one, introduced a place holder zero. We then spring to India and from them straight to the Arabs and the linguistic origins of the word zero before being served up the first major error in the book. Our authors tell us:
By the 9th century A.D., the Indians had made a conceptual leap that ranks as one of the most important mathematical events of all time. They had begun to recognise sunya, the absence of quantity, as a quantity in its own right! That is, they had begun to treat zero as a number. For instance, the mathematician Mahāvīra wrote that a number multiplied by zero results in zero, and that zero subtracted from a number leaves the number unchanged. He also claimed that a number divided by zero remains unchanged. A couple of centuries later, Bhāskara declared a number divided by zero to be an infinite quantity.
This is quite frankly bizarre! Already in 628 CE, Brahmagupta (c. 598–c. 668) had, in Chapter 12 of his Brāhma-sphuṭa-siddhānta, given the rules for addition, subtraction, multiplication, and division with positive and negative numbers and zero in the same way as they can be found in any elementary arithmetic textbook today, with the exception that he also defined division by zero. His work was well known in India. In fact, Bhāskara II, mentioned by our authors, based his account on that of Brahmagupta. His work was also translated into Arabic in the eighth century and provided the basis for al-Khwārizmī’s account, which has not survived in Arabic but was translated into Latin in the twelfth century as Dixit Algorizmi, which was the first introduction of the Hindu-Arabic number system in Europe, as is noted by our authors. Our authors then overemphasise the reluctance of some European mathematicians to accept zero as a number, whilst introducing Thomas Harriot’s method for solving polynomials and zero as a defining property of rings and fields.
Sketch No.4
Broken Numbers
:
Writing Fractions
is a quite good account of their history considering its brevity.
Sketch No.5
Less than Nothing
:
Negative Numbers
is also a reasonably good account of the very problematic history of accepting the existence of negative numbers. Somewhat bizarrely, having totally ignored him on the topic of zero, our authors now state that:
A prominent Indian mathematician, Brahmagupta, recognised and worked with negative quantities to some extent as early as the 7th century. He treated positive numbers as possessions and negative numbers as debts, and also stated rules for adding, subtracting, multiplying, and dividing with negative numbers.
It’s the same section of his book where he deals with zero as a number!
Sketch No. 6
By Ten and Tenths
:
Metric Measurement
is a good brief account of the French introduction of the metric system and its subsequent modernisation. I would have appreciated a brief nod to the English polymath John Wilkins (1614–1672), who advocated for a metric system as early as the seventeenth century.
Sketch No. 7
Measuring the Circle
:
The Story of
π
is a simple, brief account. An interesting addition is a table showing the size of the error in calculating the circumference of a circular lake with a diameter of one kilometre using different historical values for π.
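For a sense of the scale involved (my own quick arithmetic, not the book’s table): a lake one kilometre across has a true circumference of π × 1000 m ≈ 3141.6 m, so taking π ≈ 3 puts you out by roughly 140 m, taking 22/7 by about 1.3 m, and taking 3.1416 by less than a centimetre.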
Sketch No. 8
The Cossic Art
:
Writing Algebra with Symbols
is a competent sketch of the gradual but zig zag transition from rhetorical to symbolic algebra.
Sketch No. 9
Linear Thinking
:
Solving First Degree Equations
starts with the Egyptians, mentions the Chinese, but completely ignores the Babylonians. It then wanders off into an account of the false position method of solution.
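For readers who have never met it, a standard illustration of false position (my example, not one from the book): to solve x + x/4 = 15, guess x = 4, which gives 4 + 1 = 5; since 15 is three times 5, scale the guess by three to get the true answer x = 12.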
Sketch No. 10
A Square and Things
:
Quadratic Equations
after one dimension we move on to two dimensions. This is devoted to al-Khwārizmī’s discussion of methods of solving quadratic equations, completely ignoring the fact that the Babylonians had the general solution of the quadratic equation a thousand years earlier, albeit in a different form to the one we teach schoolchildren and, of course, ignoring negative solutions. Even worse, it ignores the fact that Brahmagupta had the general solution in the form we use today, including negative solutions!
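In modern notation (my gloss, not the book’s): the formula we teach schoolchildren for ax² + bx + c = 0 is x = (−b ± √(b² − 4ac))/2a, while Brahmagupta’s rule for the form ax² + bx = c amounts to x = (√(4ac + b²) − b)/2a, which is the same thing with c moved to the other side.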
Sketch No. 11
Intrigue in Renaissance Italy
:
Solving Cubic Equations
and now on to three dimensions. We start with the Ancient Greek problem of trisecting an angle and then, surprisingly, considering that this is supposed to be “for students just beginning their study of the subject,” we suddenly get a piece of fairly advanced trigonometry blasted into the text:
cos(3α) = 4 cos³(α) – 3 cos(α)
to point out that the solution can be found with a cubic equation!
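Spelling the connection out (my addition): if the angle 3α is given, then c = cos(3α) is a known quantity, and setting x = cos(α) turns the identity into the cubic 4x³ − 3x = c; trisecting the angle therefore amounts to solving that cubic for x.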
We move on to Omar Khayyám and his solution of cubic equations using conic sections. We then get a mangled version of the Antonio Fiore/Tartaglia challenge. Our authors claim that Tartaglia bragged that he could solve cubic equations and so Fiore, who had learnt the solution from his teacher Scipione del Ferro, challenged him to a mathematical contest. In fact, Tartaglia didn’t know how to solve them when challenged by Fiore and, realising he was on a hiding to nothing, sat down and discovered how to solve them. End result: victory for Tartaglia. They get the rest of the story right: how Cardano persuaded Tartaglia to reveal his solution after promising not to publish it before Tartaglia did; then, discovering that Scipione del Ferro had found the solution before Tartaglia, and having extended Tartaglia’s solution, Cardano went on and published anyway, although giving full credit to Tartaglia for his work. You can read the whole story here. Our authors, however, contradict themselves. In their
Large Nutshell
they gave a brief version of the story and wrote:
Once he knew Tartaglia’s method for solving some cubic equations, Cardano was able to generalise it to a way of solving any cubic equation. Feeling he had made a contribution of his own, Cardano decided he was no longer bound by his promise of secrecy.
Now in Sketch 11 they write:
At this point, Cardano knew that he had made a real contribution to mathematics. But how could he publish it without breaking his promise? He found a way. He discovered that del Ferro had found the solution of a crucial case before Tartaglia had. Since he had not promised to keep del Ferro’s solution secret, he felt he could publish it, even though it was identical to the one he had learnt from Tartaglia.
We then get Cardano’s discovery that conjugate pairs of complex numbers sometimes turn up in the solutions of cubic equations, and a somewhat confused explanation of his reaction to them. Our authors tell us that Cardano wrote to Tartaglia and asked him about the problem and Tartaglia simply suggested that Cardano had not understood how to solve such problems. This is true, quoting MacTutor:
One of the first problems that Cardan hit was that the formula sometimes involved square roots of negative numbers even though the answer was a ‘proper’ number. On 4 August 1539 Cardan wrote to Tartaglia:-
I have sent to enquire after the solution to various problems for which you have given me no answer, one of which concerns the cube equal to an unknown plus a number. I have certainly grasped this rule, but when the cube of one-third of the coefficient of the unknown is greater in value than the square of one-half of the number, then, it appears, I cannot make it fit into the equation.
Indeed, Cardan gives precisely the conditions here for the formula to involve square roots of negative numbers. Tartaglia, by this time, greatly regretted telling Cardan the method and tried to confuse him with his reply (although in fact Tartaglia, like Cardan, would not have understood the complex numbers now entering into mathematics):-
… and thus, I say in reply that you have not mastered the true way of solving problems of this kind, and indeed I would say that your methods are totally false.
Our authors simply move on to state, “It fell to Rafael Bombelli to resolve the issue.”
This is selling Cardano short. Firstly, he was well aware that if one multiplies complex conjugate pairs together the imaginary term disappears, as he demonstrated in Ars Magna; he simply thought that it didn’t make sense mathematically.
Dismissing mental tortures, and multiplying 5 + √-15 by 5 – √-15, we obtain 25 – (–15). Therefore the product is 40. … and thus far does arithmetical subtlety go, of which this, the extreme, is, as I have said, so subtle that it is useless.
(MacTutor)
Secondly, Bombelli used Cardano’s work as the starting point for his own work.
Sketch No. 12
A Cheerful Fact
:
The Pythagorean Theorem
This is a reasonable overview of some of the history of what is probably the most well-known of all mathematical theorems. The first half of the title is a Gilbert and Sullivan quote!
Sketch No. 13
A Marvellous Proof
:
Fermat’s Last Theorem
Once again, a reasonable synopsis of the history of another possible candidate for the most well-known of all mathematical theorems.
Sketch No. 14
On Beauty Bare
:
Euclid’s Plane Geometry
Another reasonable synopsis of the world’s most famous and biggest-selling maths textbook.
Sketch No. 15
In Perfect Shape
:
The Platonic Solids
A brief, compact and largely accurate introduction to the history of the Platonic regular solids and the Archimedean semi-regular solids. I would have wished for more detail on the Renaissance rediscovery of the Archimedean solids, which was actually carried out by artists rather than mathematicians as part of their explorations of the then new linear perspective, a fact that our authors seem not to be aware of.
Sketch No. 16 Shapes by the
Numbers
:
Coordinate Geometry
Another good synopsis covering both Fermat and Descartes and includes the important fact that the Cartesian system was actually popularised by the expanded Latin edition of
La Géométrie
put together by Frans van Schooten jr., although our authors miss the junior. Having acknowledged his contribution, they then miss one of his biggest contributions. Our authors write:
The two-axis rectangular coordinate system that we commonly attribute to Descartes seems instead to have evolved gradually during the century and a half after he published
La Géométrie
.
Rectangular Cartesian coordinates were introduced by Frans van Schooten jr. in his Latin edition.
Sketch No. 17
Impossible, Imaginary, Useful
:
Complex Numbers
Having, in their discussion of cubic equations, failed to acknowledge the fact that Cardano actually showed how to eliminate conjugate pairs of complex numbers in his
Ars Magna
, even if doing so offended his feelings for mathematics, our authors now bring the relevant quote to this effect. Having opened with Cardano, we then move on to Bombelli’s acceptance and Descartes’ rejection, before our authors attribute an implicit anticipation of Euler’s Formula to the work of Abraham De Moivre, whilst ignoring the explicit formulation in the work of Roger Cotes. Dealing comparatively extensively with Euler, our authors then move on to Argand and Gauss and the geometrical representation of complex numbers, whilst completely ignoring Caspar Wessel.
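For readers who want the formulas being alluded to (my gloss, not the book’s wording): de Moivre’s result is (cos θ + i sin θ)^n = cos nθ + i sin nθ, Cotes stated the equivalent of iθ = ln(cos θ + i sin θ), and Euler’s formula is e^(iθ) = cos θ + i sin θ.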
Sketch No. 18
Half is Better
:
Sine and Cosine
We now get introduced to the history of trigonometry. Once again a reasonable short presentation of the main points of the history down to the seventeenth century, with an all too brief comment about the development of the trigonometrical ratios as functions.
Sketch No. 19
Strange New Worlds
:
The Non-Euclidean Geometries
Once again nothing here to criticise.
Sketch No. 20
In the Eye of the Beholder
:
Projective Geometry
Starts with some brief comments about the discovery of linear perspective during the Renaissance, then moves on to the development of projective geometry out of that and ends with Pascal’s Theorem. Once again no complaints on my part.
Sketch No. 21
What’s in a Game?
:
The Start of Probability Theory
Having had the teenage Pascal on conics, we now have the mature Pascal on divvying up the stakes in an interrupted game of chance. This is followed by the standard story of the evolution of probability theory.
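To make the divvying up concrete (my illustration, not an example from the book): suppose two players stake equally on the first to win three games and play is interrupted at 2–1. The leader needs one more win, the other player two; of the four equally likely ways the next two games could go, the leader takes the match in three (WW, WL, LW) and loses it only in one (LL), so Pascal and Fermat’s answer is to split the stakes 3:1 in the leader’s favour.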
Sketch No. 22
Making Sense of Data
:
Statistics Becomes a Science
Once again presents a standard account without any significant errors.
Sketch No. 23
Machines that Think?
:
Electronic Computers
Having presented twenty-two sketches with only the occasional historical error, our authors now come completely off the rails. To start:
Some would say that the story begins 5000 years ago with the abacus, a calculating device of beads and rods that is still used today.
The Greek term abacus refers to a counting board. The calculating device of beads and rods, which is also referred to as an abacus, was developed in Asia and didn’t become known in Europe until the end of the seventeenth and beginning of the eighteenth centuries, via Russia, well after the counting board had ceased to be used in Europe.
We get no mention of the astrolabe, which is universally described as an analogue computer.
We then get Napier’s Bones and Oughtred’s slide rule as calculating devices, implying that they are somehow related. They aren’t: the slide rule is based on logarithms, and although Napier invented logarithms, his Bones aren’t.
Next up is the Pascaline, which correctly reports that they were difficult to manufacture and so were a flop, although our authors don’t put it that bluntly. There is no mention of Wilhelm Schickard, whose calculator was earlier than the Pascaline
. Of course, if we have Pascal’s calculator then we must have Leibniz’s, and we get the following hammer statement:
Leibniz’s machine, the
Stepped Reckoner
, represented a major theoretical advance over the Pascaline in that
its calculations were done in binary (base-two) arithmetic
(my emphasis), the basis of all modern computer architecture.
When I read that I had a genuine what-the-fuck moment. There are three possibilities: either our authors are using a truly crappy source on the history of reckoning machines, or they are simply making shit up, or they created a truly bad syllogistic argument:
1) Leibniz was one of the first European mathematicians to develop binary arithmetic
2) Leibniz created a reckoning machine
Therefore: Leibniz’s reckoning machine must have used binary arithmetic
Of course, the conclusion does not follow from the premises and Leibniz’s
Stepped Reckoner
used decimal not binary numbers.
A fourth possibility is a truly cataclysmic typo, but our authors repeat the claim at the beginning of the next sketch, so that is definitely not the case.
We get a brief respite with the description of a successful stepped reckoner in the nineteenth century, then we move onto Charles Babbage and a total train wreck. Our authors tell us:
Early in the 19th century, Cambridge mathematics professor Charles Babbage began work on a machine for generating accurate logarithmic and astronomical tables.
[…]
By 1822, Babbage was literally cranking out tables with six-figure accuracy on a small machine he called a
Difference Engine
.
I think our authors live in an alternative universe; only that could explain how Babbage was “cranking out tables with six-figure accuracy” on a machine that was never built! They appear to be confusing the small working model he produced to demonstrate the principles on which the engine was to function with the full engine, which was never constructed. They even include a picture of that small working model labelled “Babbage’s Difference Engine.”
In 1832, Babbage and Joseph Clement produced a small working model (one-seventh of the plan), which operated on 6-digit numbers by second-order differences. Lady Byron described seeing the working prototype in 1833: “We both went to see the thinking machine (or so it seems) last Monday. It raised several Nos. to the 2nd and 3rd powers, and extracted the root of a Quadratic equation.” Work on the larger engine was suspended in 1833. (Wikipedia)
1822 was the date that Babbage began work on the Difference Engine. As to being small:
This first difference engine would have been composed of around 25,000 parts, weighed fifteen short tons (13,600 kg), and would have been 8 ft (2.4 m) tall. (Wikipedia)
Not exactly a desktop computer.
Our authors continue with the ahistorical fantasies:
In 1801, Joseph-Marie Jacquard had designed a loom that wove complex patterns guided by a series of cards with holes punched in them. Its success in turning out “pre-programmed” patterns led Babbage to try to make a calculating machine that would accept instructions and data from punch cards. He called the proposed device an
Analytical Engine
.
In 1801 Jacquard exhibited an earlier loom at the Exposition des produits de l’industrie française and was awarded a bronze medal. He started developing the punch card loom in 1804. The claim about Babbage is truly putting the cart before the horse. Following the death of his wife in 1827, Babbage undertook an extended journey throughout continental Europe, visiting all sorts of industrial plant to study and analyse their uses of mechanisation and automation, something he had already done in the 1820s in Britain. He published the results of this research in his
On the Economy of Machinery and Manufactures
in 1832. To quote myself, “It would be safe to say that in 1832 Babbage knew more about mechanisation and automation than almost anybody else on the entire planet and what it was capable of doing and which activities could be mechanised and/or automated. It was in this situation that Babbage decided to transfer his main interest from the Difference Engine to developing the concept of the Analytical Engine conceived from the very beginning as a general-purpose computer capable of carrying out everything that could be accomplished by such a machine, far more than just a super number cruncher.” The Jacquard loom was just one of the machines he had studied on his odyssey and he incorporated its concept of punch card programming into the plans for his new computer.
Of course, our authors can’t resist repeating the Ada Lovelace myths and hagiography:
Babbage’s assistant in this undertaking was Augusta Ada Lovelace […] Lovelace translated, clarified and extended a French description of Babbage’s project, adding a large amount of original commentary. She expanded on the idea of “programming” the machine with punched-card instructions and wrote what is considered to be the first significant computer program…
Ada was in no way ever Babbage’s assistant. The notes added to the French description of Babbage’s project were cowritten with Babbage. They did not expand on the idea of “programming” the machine with punched-card instructions. The computer program included in Note G was written by Babbage and not Lovelace and wasn’t the first significant computer program. Babbage had already written several before the Lovelace translation.
Babbage in his autobiography
Passages from the Life of a Philosopher
(Longmans, 1864)
We now turn to George Boole and the advent of Boolean algebraic logic, which is dealt with extremely briefly but contains the following rather strange comment:
Although Boole reportedly thought his system would never have any practical applications…
I spent several years at university researching the life and work of George Boole and never came across any such claim. In fact, Boole himself applied his logical algebra to probability theory in
Laws of Thought
, as is stated quite clearly in its full title
An Investigation of the Laws of Thought: on Which are Founded the Mathematical Theories of Logic and Probabilities
.
After a very brief account of Herman Hollerith’s use of punch cards to accelerate the counting of the 1880 US census, we take a big leap to Claude Shannon.
The pieces started to come together in 1937, when Claude Shannon, in his master’s thesis at M.I.T., combined Boolean algebra with electrical relays and switching circuits to show how machines can “do” mathematical logic.
We get no mention of the fact that Shannon was working on the circuitry of Vannevar Bush’s Differential Analyser, an analogue computer. The Differential Analyser played a highly significant role in the history of the computer, as all three US wartime computers mentioned by our authors (the ABC, the Harvard Mark I, and the ENIAC) were projects set in motion to create an improved version of the Differential Analyser.
We move on to WWII:
Alan Turing, the mathematician who spearheaded the successful British attempt to break the German U-boat command’s so-called “Enigma code,” designed several electronic machines to help in the cryptanalysis.
Turing designed one machine, the Bombe, based on the existing Polish Bomba; his design was improved by Gordon Welchman and the machine was actually designed and constructed by Harold Keen. We get a brief, not totally accurate, account of Max Newman, Tommy Flowers and the Colossus, before moving to Germany and Konrad Zuse. Our authors tell us:
Meanwhile in Germany, Konrad Zuse had also built a programmable electronic computer. His work, begun in the late 1930s, resulted in a functioning electro-mechanical machine by sometime in the early 1940s, giving him some historical claim to the title of inventor of electronic computers.
This contains a central contradiction: an electro-mechanical computer is not an electronic computer. Also, we have a precise timeline for the development of Zuse’s computers. His Z1, a purely mechanical computer, was finished in 1938, and his Z2, an electro-mechanical computer, in 1939. The Z3, which our authors are talking about, another electro-mechanical computer and an improvement on the Z2, was finished in 1941. Our authors erroneously claimed that Leibniz’s stepped reckoner used binary arithmetic but didn’t think it necessary to point out that Zuse was the first to use binary rather than decimal in his computers, beginning with the Z1. They also state:
However, wartime secrecy kept his work hidden as well.
Zuse had already set up his computer company during the war and went straight into development and production after the war, although he couldn’t complete construction of his Z4, begun in 1944, until 1949, delivering the finished product to the ETH in Zurich in 1950.
The
Z4
was arguably the world’s first commercial digital computer, and is the oldest surviving programmable computer. (Wikipedia).
Our authors now deliver surprisingly brief accounts of the ABC, the Harvard Mark I, and ENIAC, before delivering the John von Neumann myth.
John von Neumann is generally credited with devising a way to store programs inside a computer.
The stored program computer in question is the EDVAC devised by John Mauchly and J. Presper Eckert, the inventors of ENIAC. Von Neumann merely described their design, without attribution, in his
First Draft of a Report on the EDVAC
, thereby stealing the credit.
The sketch ends with a very rapid gallop through the first stored program computers, EDSAC in Britain and UNIVAC I in the US, the introduction of first transistors and then integrated circuitry.
There is so much good, detailed history of the development of computers that I simply don’t understand how our authors could get so much wrong.
Sketch No. 24
The Arithmetic of Reasoning
:
Boolean Algebra
A brief nod to Aristotelian logic and then a repeat of the false claim that Leibniz’s stepped reckoner used binary numbers leads us into Leibniz’s attempts to create a calculus of logic, which, however, our authors admit remained unpublished and unknown until the twentieth century. All of this is merely a lead-in to Augustus De Morgan and George Boole. We get brief, pathetic biographies of both of them emphasising their struggles in life. This leads into a presentation of Boole’s logic, presented in the form of truth tables, which didn’t exist till much later! We then get an extraordinary ahistorical statement:
De Morgan, too, was an influential, persuasive proponent of the algebraic treatment of logic. His publications helped to refine, extend, and popularise the system started by Boole.
De Morgan was an outspoken supporter of Boole’s work, but his publications did not help to refine, extend, and popularise the system started by Boole. We do, however, get De Morgan’s Laws and his logic of relations. The latter gives our authors the chance to wax lyrical about Charles Sanders Peirce.
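For reference (my addition): in modern notation De Morgan’s Laws say that not (A and B) = (not A) or (not B), and not (A or B) = (not A) and (not B).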
Towards the end of this very short sketch, we get the following: “But it was the work of Boole, De Morgan, C. S. Peirce
and others…”
(my emphasis). That “and others” covers a multitude of omissions, probably the most important being the work of William Stanley Jevons (1835–1882). An oft-repeated truism is that Boole’s algebraic logic is not Boolean algebra! It was Jevons who, by modifying Boole’s system, first turned it into the Boolean algebra used by computers today.
Sketch No. 25
Beyond Counting
:
Infinity and the Theory of Sets
Enter stage right George Cantor. We get a reasonable introduction to Cantorian set theory, Kronecker’s objections and the problem of set theory paradoxes. We then get a long digression on nineteenth century neo-Thomism, metaphysics, infinite sets and the Mind of God. Sorry but this has no place in a very short, supposedly elementary introduction to the history of mathematics, “designed for students just beginning their study of the discipline.”
Sketch No. 26
Out of the Shadows
:
The Tangent Function
is a reasonable account of the gradual inclusion of the tangent function into the trigonometrical canon. My only criticism is that, although they correctly state that Regiomontanus introduced the tangent function in his
Tabulae directionum profectionumque
written in 1467, our authors only give the first two words of the title and translate it as
Table of Directions
. This is not incorrect but, left unexplained, it probably leads to misconceptions. Directions here does not refer to finding one’s way geographically but to a method used in astrology to determine, from a birth horoscope, the major events, including death, in the subject’s life. This requires complex spherical trigonometrical calculations, transferring points from one system of celestial coordinates to another. Hence the need for tangents.
Sketch No. 27
Counting Ratios
:
Logarithms
A detailed introduction to the history of the invention of logarithms. In general, OK but a bit off the rails on the story of Jost Bürgi.
As Napier and Briggs worked in Scotland, the Thirty Years’ War was spreading misery on the European continent. One of its side effects was the loss of most copies of a 1620 publication by Joost Bürgi, a Swiss clockmaker. Bürgi had discovered the basic principles of logarithms while assisting the astronomer Johannes Kepler in Prague in 1588, some years before Napier, but his book of tables was not published until six years after Napier’s
Descriptio
appeared. When most of the copies disappeared, Bürgi’s work faded into obscurity.
The failure of Bürgi’s work to make an impact almost certainly had to do with the facts that only a very small number of copies were ever printed and that it consisted only of a table of what we now call anti-logarithms, with no explanation of how they were created or how to use them. Also, in 1588 Kepler was still living and working in Graz and Bürgi was living and working in Kassel. Kepler first moved to Prague in 1600 and Bürgi in 1604. The reference to 1588 in Bürgi’s work is almost certainly to prosthaphaeresis, a method of using trigonometrical formulae to turn difficult multiplications and divisions into additions and subtractions, which Bürgi had learnt from Paul Wittich, and not to logarithms. His work on logarithms almost certainly started later than that of Napier but was independent. Interestingly, it is thought that Napier was led to logarithms after being taught prosthaphaeresis by John Craig, who had also learnt it from Wittich.
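For readers unfamiliar with the technique, prosthaphaeresis rests on a product-to-sum identity such as

\cos A \cdot \cos B = \tfrac{1}{2}\left[\cos(A - B) + \cos(A + B)\right]

which lets a tedious multiplication be replaced by two table look-ups, an addition, and a halving.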
Sketch No. 28
Anyway You Slice It
:
Conic Sections
What is actually quite a good section on the history of conic sections is spoilt by one minor error and a cluster fuck concerning Kepler. The small error concerns Witelo (c.1230–after 1280); our authors write:
There is some evidence that Apollonius’s
Conics
was known in Europe as far back as the 13th
century, when Erazmus Witelo used conics in his book on optics and perspective.
Witelo’s book is titled
De Perspectiva
and is purely a book on optics not perspective. Perspectivist optics is a school of optical theory derived from the optics of Ibn al-Haytham, whose book
Kitāb al-Manāẓir
is titled
De Perspectiva
in Latin translation. On to Kepler:
Sorry about the scans but I couldn’t be arsed to type out the whole section.
The camera obscura that Kepler assembled on the marketplace in Graz in the summer of 1600 was a small tent and not “a large wooden structure.” It was a pinhole camera, just not a large-size one.
The pinhole camera problem, that the image created is larger than it should be, had been known since the Middle Ages, was widely discussed in the literature, and Kepler had been expecting it before he began his observations. His attention had been drawn to the problem by Tycho when he was in Prague at the beginning of 1600 to negotiate his move there to work with him. Tycho had also drawn his attention to the problem of atmospheric refraction in astronomy. These were the triggers that motivated his studies in optics. As noted, he moved to Prague to work with Tycho, but there was no Imperial Observatory in Prague.
Kepler inherited Tycho’s position of Imperial Mathematicus but not his observations, which were inherited by his daughter. This led to several years of difficult negotiations before he obtained permission to publish his work based on those observations. Before his death, Tycho had set Kepler on the problem of calculating the orbit of Mars, the work that led to his first two planetary laws.
The book that Kepler used for his optical studies and on which he based his own work was Friedrich Risner’s “
Opticae thesaurus: Alhazeni Arabis libri septem, nunc primum editi; Eiusdem liber De Crepusculis et nubium ascensionibus, Item Vitellonis Thuringopoloni libri X
” (Optical Treasure: Seven books of Alhazen the Arab, published for the first time; his book On Twilight and the Rising of Clouds; also the ten books of Vitello the Thuringian-Pole), which was published in 1572.
“Of course he thought of the conic sections!” Actually, when Kepler first realised that the orbit of Mars was some sort of oval, he didn’t think of the conic section at all. Something for which he criticised himself in his
Astronomia Nova,
when he finally realised, after much more wasted effort, that the orbit was actually an ellipse. A realisation not based on more measurements. The
Astronomia Nova
from 1609 only contains the first two planetary laws. The third was first published in his
Harmonice Mundi
in 1619.
Sketch No. 29
Beyond the Pale
:
Irrational Numbers
Once again an acceptable historical account.
Sketch No. 30
Barely Touching
:
From Tangents to Derivatives
Not perfect but more than reasonable.
The book ends with a
What to Read Next
which begins with
The Reference Shelf
. Here they recommend several one-volume histories of mathematics, starting with Victor J. Katz and then moving on to Howard Eves’s
An Introduction to the History of Mathematics
and David M. Burton’s
The History of Mathematics
which they admit are both showing their age. They move on to a lot more recommendations but studiously ignore what was certainly the market leader before Katz, Carl B. Boyer’s
A History of
Mathematics
in the third edition edited by Uta Merzbach. I seriously wonder what they have against Boyer whose books on the history of mathematics are excellent.
Having covered a fairly wide range of reference books, including a lot of Grattan-Guinness we now move onto
Twelve Historical Books You Ought to Read
. Of course, any such recommendation is subjective, but some of their choices are more than questionable. They start, for example, with Tobias Dantzig’s
Number: The Language of Science
. This was first published in 1930 and is definitely severely dated.
Our authors also recommend Reviel Netz & William Noel,
The Archimedes Codex
(2007). This book was the object of incredible media hype when it first appeared. Whilst the imaging techniques used to expose the Archimedean manuscript on the palimpsest are truly fascinating, the claims made by the authors about the newly won knowledge of Archimedes’ works are extremely hyperbolic and contain more than a little bullshit. Should this really be on a list of twelve history of maths books one ought to read?
I did a double take when I saw that they recommend Eric Temple
Bell’s Men of
Mathematics
. They themselves say:
The book has lost some of its original popularity, not (or at least not primarily) because of its political incorrectness, but rather because Bell takes too many liberties with his sources. (Some critics would say “because he makes things up.”) The book is fun to read, but don’t rely solely on Bell for facts.
My take: don’t waste your time reading this crap book. I read it when I was sixteen and spent at least twenty years unlearning all the straightforward lies that Bell spews out!
Another disastrous recommendation is Dava Sobel’s
Longitude
, which our authors say, “provides a good picture of the interactions among mathematics, astronomy, and navigation in the 18th
century.” Sobel’s much-hyped book gives an extremely distorted “picture of the interactions among mathematics, astronomy, and navigation in the 18th century,” which at times is closer to a fantasy novel than a serious history book. If you really want to learn the true facts about those interactions then read Richard Dunn & Rebekah Higgitt,
Finding Longitude
:
How ships, clocks and stars helped solve the longitude problem
(Collins, 2014) and Katy Barrett,
Looking for Longitude
:
A Cultural
History
(Liverpool University Press),
both volumes are written by professional historians, who spent several years researching the topic in a major research project.
The section closes with
History Online
, which can’t be any good because it doesn’t include The Renaissance Mathematicus! 🙃
The apparatus begins with a useful
When They Lived
, which simply lists many of the best-known mathematicians alphabetically with their birth and death dates. This is followed by the bibliography, already mentioned above, and a good index. There are numerous small black-and-white illustrations scattered throughout the pages.
Despite my negative comments about some points in the book, I would actually recommend it as a reasonably priced, mostly accurate introduction to the history of mathematics. A small criticism is that it does at times display a US American bias, but then it was written primarily for American students. A good jumping-off point for somebody developing an interest in the discipline. In any case, considerably better than my jumping-off point, E. T. Bell!
How to end your phone addiction once and for all | Letter
Guardian
www.theguardian.com
2025-03-13 18:32:05
Sarah Sorensen says we should be hanging up on surveillance capitalism The Guardian has published quite a lot of articles recently expressing some concern over the use of smartphones. Emma Beddington’s piece (My phone knows what I want before I do. That should be worrying – but it’s oddly comfo...
There is a simple way in which we can set ourselves free. Remove the battery if possible, then take your biggest hammer and set to work on the rest. The more cost-effective solution is not to buy one in the first place. I never have and it’s still rarely a problem, though surveillance capitalism tries hard to make it one. Oldish people like myself are less likely to own smartphones, increasingly likely to be marginalised as a result, lucky to remember life without their constant intrusion.
Whatever your age, fight back, be phone-free and proud. Wherever you go, demand that services can be accessed without a smartphone. Live in the moment. Talk to the people in sight and earshot. Enjoy the calm inside your head. Hear the waves on the shore.
Sarah Sorensen
Portsmouth, Hampshire
EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community
Electronic Frontier Foundation
www.eff.org
2025-03-13 18:30:43
In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:
People sometimes use sex- or gender-exclusive language when disc...
In January, Meta made targeted changes to its
hateful conduct policy
that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:
People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech.
The revision of this policy timed to Trump’s second election demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.
In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to
demand that Meta immediately reverse the policy changes
. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads.
The campaign is supported by the following partners:
All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, and ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre, Marcos Passerini-CDMP, Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, ITTC - Instituto Terra, Trabalho e Cidadania.
Sign the
AllOut petition
(external link) and tell Meta: Stop hate speech against LGBT+ people!
If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.
Crypto reaps political rewards after spending big to boost Trump
Guardian
www.theguardian.com
2025-03-13 18:20:24
America’s biggest crypto companies are riding high. Plus, can the left reclaim techno-optimism? Hello, and welcome to TechScape. In this week’s edition, the crypto industry’s political investments pay off in spades, the left attempts to reclaim an optimistic view of our shiny technological future, a...
Hello, and welcome to TechScape. In this week’s edition, the crypto industry’s political investments pay off in spades, the left attempts to reclaim an optimistic view of our shiny technological future, and your memories of
Skype
.
The day before,
Donald Trump
signed an executive order to create a “strategic reserve” of cryptocurrency for the US, which he called “digital Fort Knox for digital gold to be stored”. It’s just one of several recent announcements that indicated a new and friendly posture toward the industry.
Throughout last week, US oversight agencies announced they would drop investigations into major crypto companies without penalty – Coinbase, Gemini, OpenSea, Yuga Labs, Robinhood Crypto, Uniswap Labs, Consensys, Kraken – and nixed fraud charges against an entrepreneur who had bought $75m worth of Trump’s meme coin, $Trump. Two weeks ago, the new leadership of the US’s main financial regulator declared that “meme coins”, flash-in-the-pan cryptocurrencies like the one
launched by Donald Trump himself
, would not face strict oversight.
Trump had signaled the moves already on the campaign trail. He was the first major-party presidential candidate to accept crypto campaign contributions and promised while campaigning to make the US “the crypto capital of the world”. On Friday, he reiterated that pledge to make the country “the bitcoin superpower”.
As an industry, crypto displays a victim complex similar to that of Trump, who often speaks from the highest seat of power as if his mother were ordering him to bed without dessert. David Sacks, Trump’s appointed czar of crypto, said that the industry had been “subjected to persecution” and that “nobody understands that better than you”.
He was speaking to Trump, of course. Cameron Winklevoss, co-founder of Gemini, said at the White House crypto summit: “This marks another milestone to the end of the war on crypto.” Though Winklevoss’s whining may seem unearned, Joe Biden’s administration had, in fact, adopted a tough regulatory attitude towards cryptocurrency, aiming to oversee digital, decentralized finance as securities, subject to the same regulations as Wall Street’s stocks and bonds. All those investigations had to originate somewhere.
The summit was a victory for crypto and a spectacular return on its collective investment in politics. Combined, the attenders had donated $11m to Trump’s inauguration, according to
the Intercept
. During the 2024 campaign, crypto spent nearly
a quarter of a billion dollars
, far more than any other sector, though the industry’s main political action committee notably did not contribute to the presidential race.
Crypto is riding high, but you wouldn’t know it by the price of bitcoin, usually considered a sign of the industry’s overall health. The main character of cryptocurrency is down some 13% since the start of the year, which tracks more closely with the outlook of the overall US stock market – not good – than many crypto enthusiasts would care to countenance. Trump has said out loud that a
recession might come for the US economy
as he flips and flops on major tariffs, which has bewildered investors and sent US equities spiraling, particularly in recent days.
If you only click on four headlines about Elon Musk, click on these:
‘The left used to embrace technology.’
Illustration: Edward Carvalho-Monaghan/Guardian Design
Do you have faith in the future?
An excitement for the polished chrome of days ahead reads as conservative in today’s political climate. Techno-optimism has become the provenance of the right. It’s a counterintuitive development: liberal politics may aim to change the present for the better, but the left, in its opposition to Silicon Valley’s billionaires, is now the party of techno-pessimism. Conservative politics, by contrast, aim to preserve and return to the past, but US Republicans and rightwingers around the world trumpet technological progress in their alliance with its funders. The rapid advancement of technology would seem to lend itself to the liberal ethos, upending ossified norms in the name of human betterment, and yet today, the dominant vision of an advanced future is one hawked by billionaires expressing adoration for unfettered capitalism.
My colleague
Amana Fontanella-Khan
writes in an essay introducing a new Guardian series, Breakthrough, about how the left can reclaim its lost belief in the possibility of a platinum tomorrow:
“Hope is scarce, pessimism is high and despair is pervasive. As one meme that captures the grim, morbid mood of our age reads: ‘My retirement plan is civilisational collapse.’
“On the extreme other end of public sentiment sit Silicon Valley billionaires: they are some of the most optimistic people on earth. Of course, it’s easy to be optimistic when you are sitting on enough money to sway national politics. And yet, the source of their optimism isn’t simply money. It is also a deep-seated faith in unfettered technological advances.
“But can and should the left advance its own techno-optimism? Can it put forward a vision of a brighter future that can compete with the grand visions of space exploration presented by Elon Musk? Can it make the case that science and technology ought to be harnessed to deliver breakthroughs, abundance, sustainability and flourishing of human potential? And what would a progressive, leftwing techno-optimism look like? A techno-optimism that the 99% could get on board with, especially communities of color, including Black people who have historically been excluded, and even harmed, by scientific and technological breakthroughs? These are some of the questions that this new series, Breakthrough, launched by the Guardian examines.”
Skype’s office in Tallinn, Estonia.
Photograph: Jaak Nilson/Alamy
Last week, we asked you to email us your memories of using Skype, which will shut down in May. Hundreds of you responded, and we have compiled your reminiscences, which include not only photos and your words but also an original song based on the inimitable Skype ringtone. I can’t embed the song in this email, but you can listen to it on the article page:
Skype shutdown surfaces sweet memories: ‘I proposed marriage’
.
In addition to musical inspiration, you recounted to us how Skype bridged the distance to faraway loved ones, both family and soon-to-be family – two of you proposed marriage via Skype video calls. I am not often moved to tears while editing stories, but these recollections touched me.
***
Here are a few excellent examples of what you told us:
I proposed to my Swedish husband over Skype using sticky notes. We got married on 5-5-15, the same day Skype will end its service. It’s very sad, I especially liked it since it was from my husband’s homeland of Sweden. Skype played a big part in our lives in keeping us connected while we were dating and it will always be in my heart.
– Holly, Iowa
In 2004, I moved across the world to attend university in the United States. Phone calls were too expensive, so I would spend hours on Skype chatting with my family and friends back home. When we went around the dinner table saying what we were grateful for my first American Thanksgiving, Skype was my answer. Homesickness was my malady, Skype was my medicine.
– Laura, Los Angeles
The person with whom I used Skype the most and used it last was my friend Harald. I live in Wisconsin and work at the university in Madison. Harald was from Germany but did a postdoctoral fellowship in Madison in the early 2000s. We became friends and, interestingly, we grew much closer after he moved back to Germany. We would get together before or after conferences and do bike trips together, and we visited each other many times over the years.
Harald’s preferred way of communicating when we were on opposite sides of the world was by Skype. He’d use it as a verb. “Let’s Skype next Tuesday,” he’d say. I would often tease him as new platforms became popular that he was wedded to this outdated mode of communication. He died about two years ago, and I miss him terribly. And any time I hear about Skype I think of him.
I tried to write this post yesterday, but I didn’t like my approach there. Last
night, I had a conversation with one of my best friends about it, and he
encouraged me to write this, but in a different way. I think this way is better,
so thanks, dude.
The biggest questions people had were “Why not C#?” and “Why not Rust?”. To
be clear, I do think that asking why someone chooses a programming language
can be valuable; as professionals, we need to make these sorts of decisions
on a regular basis, and seeing how others make those decisions can be useful,
to get an idea of how other people think about these things.
But it can also… I dunno. Sometimes, I think people have little imagination
when it comes to choosing a particular technology stack.
It is true that sometimes, there are hard requirements that mean that choosing
a particular technology is inappropriate. But I’ve found that these
sorts of requirements are often more
contextual
requirements than ones that
would be the case in the abstract. For example,
you can write an OS in
Lisp
if you want. But in practice, you’re probably not working with
this hardware, or willing to port the environment to bare metal.
But it’s also the case that you often don’t know what others’ contextual
situation actually is. I would have never guessed the main reason that Microsoft
chose Go for, which is roughly “we are porting the existing codebase, rather
than re-writing it, and our existing code looks kinda like Go, making it easy to
port.” That’s a very good reason! I hadn’t really thought about the differences
between porting and re-writing in this light before, and it makes perfect sense
to me.
Funnily I'm pretty sure I'm the only team member who could actually claim their favorite language is Go, but I tried so hard to prove rust out early on. It sure wasn't my decision either, Go just kept working the best as the hacking continued
I think it's going well, though, besides the terrifying infohazard that is the "Why Go?" GitHub discussion thread, which contains some of the weirdest and most frustrating takes I've ever seen
I peeked briefly yesterday (and quickly ran away). Just took another look. 370 comments in 24 hours?? Wow. The proposal to add generics to Go (mildly contentious, to put it lightly) only got ~450 comments over a few months.
There’s some sort of balance here to be struck. But at the same time, I hate
“choose the right tool for the job” as a suggestion. Other than art projects,
has anyone ever deliberately chosen the wrong tool? Often, something that
seems like the wrong tool is chosen simply because the contextual requirements
weren’t obvious to you, the interlocutor.
I think there’s two other reasons why this situation is making me feel
complicated things. The first is RIIR, and the second is a situation from a long
time ago that changed my life.
Many of you have probably heard of the “re-write it in Rust” meme. The idea is
that there’s this plague of programmers who are always suggesting that every
single thing needs to be re-written in Rust. While I am sure this happens from
time to time, I have yet to see real evidence that it is actually widespread. I
almost even wrote a blog post about this, trying to actually quantify the effect
here. But on some level, perception is reality, and if people believe it’s true,
it’s real on some level, even if that’s not actually the case. Regardless of the
truth of the number of people who suggest this, on every thread about Rust,
people end up complaining about this effect, and so it has an effect on the
overall discourse regardless. And while I don’t believe that this effect is real
overall, it absolutely has happened in this specific situation. A bunch of
people have been very aggressively suggesting that this should have been in
Rust, and not Go, and I get where they’re coming from, but also: really? Go is
an absolutely fine choice here. It’s fine. Why do you all have to reinforce a
stereotype? It’s frustrating.
The second one is something that’s been on my mind for over a decade, but I’ve
only started talking about it in places other than conversations with close
friends. Long-time readers may remember
that one time where I was an
asshole
. I mean, I’ve been an asshole more than once in my life, but
this one was probably the time that affected me the most. The overall shape of
the situation is this:
Someone wrote
grep
, but in Node.
I made fun of it on the internet, because why would someone write a CLI tool in Node?
The author found out that I did and felt bad about it.
Looking back at my apology, I don’t think it’s necessarily the best one. It was
better than the first draft or two of what I wrote, but I’ve gotten better at
apologizing since then. Regardless, this incident changed the trajectory of my
life, and if you happen to read this, Heather, I am truly, unequivocally, sorry.
There was a couple of things that happened here. The first is just simply, in
2013, I did not understand that the things I said had meaning. I hate talking
about this because it makes me seem more important than I am, but it’s also
important to acknowledge. I saw myself at the time as just Steve, some random
guy. If I say something on the internet, it’s like I’m talking to a friend in
real life, my words are just random words and I’m human and whatever. It is what
it is.
But at that time in my life, that wasn’t actually the case. I was on the Rails
team, I was speaking at conferences, and people were reading my blog and tweets.
I was an “influencer,” for better or worse. But I hadn’t really internalized
that change in my life yet. And so I didn’t really understand that if I
criticized something, it was something thousands of people would see. To me, I’m
just Steve. This situation happened before we talked about “cancel culture” and
all of that kind of stuff, but when the mob came for me, I realized: they were
right, actually. I was being an asshole, and I should not have been. And I
resolved myself to not being an asshole like that ever again.
And to do that, I had to think long and hard about why I was like that. I love
programming languages! I love people writing programs that they find
interesting! I don’t think that it’s stupid to write a tool in a language that
I wouldn’t expect it to be written in! So why did I feel the reason to make fun
of this in that situation?
The answer? A pretty classic situation: the people I was hanging out with were
affecting me, and in ways that, when I examined it, I didn’t like very much.
You see, in the Rails world at the time, there was a huge
contempt
culture
, especially around JavaScript and Node. And, when
you’re around people who talk shit all the time, you find yourself talking shit
too. And once I realized that, well, I wanted to stop it. But there was a big
problem: my entire professional, and most of my social, life was built around
Ruby and Rails. So if I wanted to escape that culture… what should I do?
If you clicked on the link to my apology above, you probably didn’t make note of
the date: Jan 23 2013. Something had just happened to me, in December of 2012: I
had decided to check out this little programming language I had heard of called
Rust.
One of the things that struck me about the “Rust community” at the time, which
was like forty people in an IRC room, was how nice they were. When I had trouble
getting Hello World to compile, and I typed
/join #rust
, I fully expected an
RTFM and flames. But instead, I was welcomed with open arms. People were chill.
And it was really nice.
And so, a month and a day later, I had this realization about the direction my
life was heading, and how I wasn’t really happy about it. And I thought about
how much I enjoyed the Rust stuff so far… and combined with a few other
factors, which maybe I’ll write about someday, this is when I decided that I
wanted to make Rust a thing, and dedicated myself to achieving that.
And part of that was being conscious about the culture, and how it developed,
and the ways that that would evolve over time. I didn’t want Rust to end up the
way that Rails had ended up. And that meant a few things, but first of all,
truly internalizing that I had to change. And understanding that I had to be
intentional with what I did and said. And what that led to, among other things,
was a “we talk about Rust on its own merits, not by trash talking other languages”
culture. (That’s also partially because of Nietzsche, but I’m already pretty far
afield of the topic for this post, maybe someday.)
Anyway, the Eternal September comes for us all. The “Rust Community,” if it ever
were a coherent concept, doesn’t really make sense any more. And it is by no
means perfect. I have a lot of criticism about how things turned out. But I do
think that it’s not a coincidence that “Rust makes you trans,” for example, is
a meme. I am deeply proud of what we all built.
So yeah, anyway: people choose programming languages for projects based on a
variety of reasons. And sure, there may be better or worse choices. But you
probably have a different context than someone else, and so when someone
makes a choice you don’t agree with, there’s no reason to be a jerk about it.
U.S. novelist David Weiss (1909-2002) developed several deep abiding interests across his long life and sometimes meandering writing career. These passions included President Franklin Roosevelt-era public works projects, his own family history (he was an orphan largely raised by an aunt), secular Judaism, his hometown of Philadelphia, and the writing and career of poet and playwright Stymean Karlen, who also happened to be his wife and creative partner. It was, however, none of these topics that had brought me to the
Special Collections Reading Room
at Temple University’s
Charles Library
during a rainy week in August 2024. Instead, I was on the trail of Weiss’s writing about a long-debunked musical conspiracy theory.
Special Collections Research Center, Temple University. All photos provided by the author.
One of Weiss’s other lifelong fascinations was the life, music, and posthumous reception of composer Wolfgang Amadeus Mozart. Across his career as a biographical novelist, Weiss wrote three novels inspired by Mozart, two of which—
Sacred and Profane
(1968) and
The Assassination of Mozart
(1970/1971)—were published near the height of his career and achieved moderate international success. Having read these novels for fun some years ago, I was curious to learn more about his research and writing process. As a postdoctoral researcher working on the interplay of fact and fiction in the reception history of Mozart’s colleague (and alleged rival) Antonio Salieri, I was interested in examining Weiss’s research notes and outlines for
The Assassination of Mozart
to learn three things. First, I was curious as to whether or not he actually believed in the anachronistic and convoluted scenario he laid out in the novel, in which the Viennese secret police colluded with Salieri to poison Mozart for political reasons. Secondly, I wanted to better understand how Weiss engaged with historical sources that had long discounted most speculation about Mozart’s death as conspiratorial and sensational. Finally, given how much of Salieri’s twentieth- and twenty-first-century reception has been shaped by Miloš Forman’s 1984 Oscar-winning film adaptation of Peter Shaffer’s play
Amadeus
(now also currently under production as a limited series by Sky TV), I wondered how Weiss and his publisher pitched this story in the decade before the play’s 1979 premiere.
1
David Weiss Papers, Box 4. Courtesy of the Special Collections Research Center. Temple University Libraries. Philadelphia, PA.
My visit to Philadelphia was something of a whirlwind. I had planned on taking two days to make my way through the thirty-seven meticulously organized archival boxes that Weiss had donated to his alma mater. Unfortunately, a severe thunderstorm led to the initial cancellation of my flight from Toronto and I had to rebook the next day, thus cramming everything into one very long day at the library. This included taking notes on which boxes and folders contained what kinds of documents: personal and professional correspondence, drafts and typed manuscripts, reviews, research notes, and contracts. I also took some time to read as much as I could of Weiss’s numerous unpublished books, including a memoir, a biography of his partner Karlen, and a peculiar novel in which an irritated Mozart returns to life to interfere with a production of one of his operas in the early 1990s.
2
More chaotically, I found myself frantically photographing everything that seemed even somewhat relevant to my project, from proofreaders’ comments to a Japanese radio play based on
The Assassination of Mozart.
Japanese Radio Play, The Assassination of Mozart. Courtesy of the Special Collections Research Center. Temple University Libraries. Philadelphia, PA.
While
Weiss’s papers do have a thorough finding aid
, I was still so overwhelmed with the need to get through his papers that I went about things in a less strategic manner than I might have otherwise done. About an hour before the reading room closed, I came across the folders that yielded what I was exactly there for: the research notes and outlines for
Assassination
. I already knew that Weiss was familiar with the major names in Anglophone Mozart scholarship and works in translation from the early and mid-twentieth century. Both
Sacred and Profane
and
Assassination
end with detailed bibliographies of seemingly everything Weiss consulted in his research: musicology, biography, and music criticism, political histories of eighteenth-century Vienna, and even cookbooks and other works of historical fiction. If nothing else, he was clearly a voracious reader who wished to absorb as many details as possible. But looking at the notes and manuscripts behind these novels further fleshed out Weiss’s choice of subject matter and writing process.
Weiss’s unpublished memoir and biography of Karlen told a thrilling story about the research trip the two of them undertook for
Sacred and Profane
. Upon learning that he planned on writing about Mozart and wanted to travel to Salzburg and Vienna, Weiss’s agent obtained a letter of introduction for Weiss and Karlen to various European music scholars from none other than the celebrity conductor and composer Leonard Bernstein. It was on this trip that Weiss began to ponder the circumstances of Mozart’s death, particularly his infamously unknown burial place. He claimed that a conversation with Karlen while visiting St. Marx Cemetery in Vienna solidified his belief in the longstanding rumor that Mozart did not die of natural causes.
3
When a planned major press junket surrounding
Sacred and Profane
was apparently sidelined by media coverage of Senator Robert F. Kennedy’s 1968 assassination, Weiss decided to put his idle speculation about the mystery of Mozart’s death into a sort of sequel to what had otherwise been a straightforward biographical narrative ending with his subject’s burial. As a typed blurb for
Assassination
noted, Weiss seemed to imply a connection between the complex landscape of late eighteenth- and early nineteenth-century Europe and the fraught politics of mid-twentieth-century America.
4
Typed blurb for The Assassination of Mozart. Courtesy of the Special Collections Research Center. Temple University Libraries. Philadelphia, PA.
The research notes and outlines for
Assassination
revealed the methodological process of how an author who did not otherwise consider himself a writer of mysteries or thrillers transformed notes from a variety of historical sources into a fictional plot. Weiss was, above all things, a writer of fictionalized biography, and he treated the plotting of this novel in much the same way as he treated the research for his more mainstream works, consulting letters and scholarship to assemble a narrative based on where his real and fictional people would likely be at various points in 1825, the year of Salieri’s death.
5
For someone familiar with the broader literature of scholarship, fiction, and semi-fictional anecdotes about Salieri, however, Weiss’s notes were a fascinating, if frustrating, look at how someone already convinced of a conspiracy theory reads non-conspiratorial sources.
6
Where something was not explicitly spelled out in his sources, Weiss floated various ideas—reasons for a plot against a particular operatic performance, potential means of murder, various other figures who might be in on the intrigue, and ways his protagonists and their allies might be prevented by the authorities from revealing the “truth” to the general public. A small file of papers labeled “poisons” quoted extensively from medical literature on the history and symptoms of arsenic, reminding me of some present-day mystery novelists who joke about their own disturbing Google search histories.
Outline for The Assassination of Mozart. Courtesy of the Special Collections Research Center. Temple University Libraries. Philadelphia, PA.
My favorite source, however, was more general, a typed and handwritten list of major plot events, locations, and the historical figures Weiss’s protagonists would interview, followed by some additional notes on the novel’s planned structure. He initially planned a posthumous conversation among Mozart’s, Haydn’s, and Salieri’s ghosts, which sadly did not make it into the final novel. But an additional note reminded me that Weiss was aware that the story he was writing would be largely unfamiliar to his mostly American and British readership. In one note to himself midway down the page, he typed, “In frontspiece (sic) – instead of author’s note..HAVE QUOTES ABOUT SALIERI – to show factual background.” Besides joking with friends that “HAVE QUOTES ABOUT SALIERI” would be a good title for my current project, Weiss’s note to himself demonstrates the balance of fact, claim, and outright fiction in the project of writing historical crime fiction. He needed to construct a narrative that seemed to him (and hopefully to his readers) historically plausible. That he appeared to fundamentally disagree with the scholarly consensus on Mozart and Salieri was only a minor (to him) concern and one which he clearly hoped would attract attention and debate in the press for
The Assassination of Mozart
and potential future works.
7
In this, he was ironically in the same boat as Peter Shaffer, whose first edition of the much-revised script to
Amadeus
contains additional details about Salieri and Viennese political and music history that would have been largely unknown to initial audiences but are often cut in more recent productions.
8
For both writers, in order to get audiences to engage with their largely invented and potentially obscure musical claims, they first had to share some basic historical facts.
While Weiss’s archived personal and professional writings never mention Peter Shaffer by name or acknowledge the play
Amadeus
, he strongly disliked the film, which (somewhat to my surprise) he saw as overly sympathetic to Salieri and presenting Mozart as too politically and artistically naïve. Nevertheless, Weiss clearly hoped throughout the 1980s and 1990s that the increased pop culture interest in Mozart might lead to additional publication and adaptation opportunities for his own work. His third Mozart-themed novel (unpublished in English, although he did negotiate the rights for a Hungarian edition) contains a scene where a revived Mozart is infuriated by a film screening of
Amadeus
. Curiously, both Shaffer and Weiss seemed to share an interest in drawing verbatim from historical sources and presenting a sort of “secret history” approach to their historical subjects. For more on Shaffer’s perspective on historical fiction, see his essay “Preface:
Amadeus
: The Final Encounter” in
Amadeus
(New York: Harper Collins, 2001), xv–xxxiv. I discuss Weiss’s use of sources in my article “‘Everything You’ve
Heard
is True: Resonating Musicological Anecdotes in Crime Fiction about Antonio Salieri,”
Journal of Historical Fictions
4, no. 1 (2022): 41-60, available at
https://historicalfictionsresearch.org/wp-content/uploads/2024/11/jhf-2022-4.1.pdf
The unpublished Mozart novel—existing in multiple full and partial drafts in the Weiss papers with various titles, including
If Mozart Came Back, Forever and After,
and
Mozart: An Encore
—is quite a departure from Weiss’s typical approach to historical and biographical fiction. With a character clearly modeled on Weiss himself—a moderately successful author from Philadelphia who befriends Mozart while engaged to ghostwrite the memoirs of a popular but arrogant conductor (to whom Mozart takes an immediate dislike and who is pointedly described in much the same way as Salieri is in
Assassination
)—much of the novel clearly reflects the author’s personal views on the state of opera and popular culture in the late 1980s and early 1990s. Some drafts also have more supernatural, dreamlike scenes wherein Mozart (from his new vantage point in the late twentieth century) speaks back across time and space to his wife, son, and various contemporaries including Haydn, Salieri, and Lorenzo Da Ponte in an attempt to make sense of his posthumous legacy.
As Weiss tells it, he was more swept up in the possibilities of foul play, while Karlen was more skeptical, a dynamic mirrored in the characters of Jason and Deborah Otis (the American protagonists of
Assassination
).
For a variety of scholarly perspectives on the speculation and narratives surrounding Mozart’s death and posthumous reception, see H.C. Robbins Landon,
1791: Mozart’s Last Year
(Schirmer, 1988), William Stafford,
The Mozart Myths: A Critical Reassessment
(Stanford University Press, 1991), Mark Everist,
Mozart’s Ghosts: Haunting the Halls of Musical Culture
(Oxford University Press, 2012), Simon Keefe,
Mozart’s Requiem: Reception, Work, Completion
(Cambridge University Press, 2015), Keefe,
Haydn and Mozart in the Long Nineteenth Century
(Cambridge University Press, 2023), and Edmund Goehring,
Mozart, Genius, and the Possibilities of Art
(University of Rochester Press, 2024). For more historically grounded discussion of Antonio Salieri’s career during the 1780s and 1790s, see Volkmar Braunbehrens,
Maligned Master: The Real Story of Antonio Salieri
(Fromm, 1992), John Rice,
Antonio Salieri and Viennese Opera
(University of Chicago Press, 1998), and Timo Jouko Herrmann,
Antonio Salieri: Eine Biografie
(Morio, 2019).
One of Weiss’s acknowledged sources,
The Mozart Handbook
(The World Publishing Company, 1954), clearly provided a grounding in a variety of primary and secondary sources, including excerpts from Emily Anderson’s well-known English translations of the Mozart family letters and then-fairly recent critical and biographical scholarship by Edward Dent, Alfred Einstein, and Eric Blom (among many others). While much of this work is considered dated today, it strikes me that it likely shaped some elements of Weiss’s characterizations of certain people in Mozart’s family and social circles. While none of the sources that editor Louis Biancolli includes subscribe to any conspiracy theories about Mozart’s death, many depict Salieri (often without citation) as politically minded, staunchly opposed to German music, and quick to take offense in ways that verge on the speculative.
I am reminded here of what rhetorician Jenny Rice calls “the power of empty archives,” wherein conspiracy theorists will latch onto the idea of missing, forged, rumored, censored, or otherwise unavailable sources in order to read between the lines of existing documentary evidence and official narratives. Jenny Rice,
Awful Archives: Conspiracy Theory, Rhetoric, and Acts of Evidence
(The Ohio State University Press, 2020).
While the reviews for
Assassination
were decidedly mixed and Weiss struggled to publish his later attempts at music-themed fiction, he saved numerous articles and press clippings related to speculation about Mozart’s death.
For example, the first published version of the
Amadeus
script includes a scene where the success of Salieri’s opera
Axur, re d’Ormus
is explicitly contrasted against Joseph II’s political anxieties and Mozart’s professional failures. Peter Shaffer,
Amadeus
(London: Deutsch, 1980). (Note that while the script was first published in 1980, the play premiered in 1979.) Although the premiere of
Axur
is included in a very different context in Forman’s film (wherein Mozart’s mockery of Salieri’s music reinforces his sense of artistic inferiority and religious betrayal), this scene appears to have been cut from most later productions and subsequent editions of the published script. Beyond Shaffer’s own revisions and personal involvement in subsequent professional productions of the play until his death in 2016, more recent theatrical cuts tend to emphasize specific production and casting choices. For example, the production that I saw in Salzburg last year—which was part of the Salieri-themed 2024 Mozartwoche festival and emphasized a “Mozart as rockstar” image—shifted the play’s focus from the musical politics of the 18th century to the destructiveness of contemporary celebrity culture. It minimized the historical setting through deliberate choices in costuming (the cast was in various types of modern formal dress in the 1780s/90s scenes, with the aged Salieri dressed as a fading celebrity hiding from paparazzi in an oversized coat, hat, sunglasses, and wig in 1823) and use of deliberate anachronism (Mozart’s mimed performances occasionally included air guitar and jumping on top of a piano-shaped set piece, while Salieri “live-streamed” his confessional monologues into a camera that projected a digital image to the back of the theater). Most of the specific historical and musical allusions in the 19th-century frame story were also cut.
Kristin M. Franseen is a postdoctoral associate at Western University, where she is currently at work on a new book project tentatively entitled The Intriguing Afterlives of Antonio Salieri: Gossip, Fiction, and the Post-Truth in Musical Biography. Her first book, Imagining Musical Pasts: The Queer Literary Musicology of Vernon Lee, Rosa Newmarch, and Edward Prime-Stevenson, was published by Clemson University Press in 2023.
Spark AI (YC W24) Is Hiring a Full-Stack Engineer in San Francisco
Spark
is building an advanced AI research tool that helps energy developers build solar farms and battery plants.
One of the biggest challenges in renewable energy is not construction — it’s navigating local
regulations
. We build AI agents that help energy developers find and understand critical information to develop and invest in solar farms.
Industry leaders like Colliers Engineering & Design, Standard Solar, and Cypress Creek Renewables use Spark to inform their investment decisions. Our customers’ energy pipelines will produce the equivalent of 60GW — enough to power tens of millions of households a year!
If you're interested in learning how the energy infrastructure works and how you can help accelerate a historical energy transition with AI and software, we’d love to hear from you!
The Team
We’re funded by top investors, including
AI Grant
(Nat Friedman and Daniel Gross), the founders of Brex, Plaid, Helioscope (acq. Aurora Solar), and more (to be announced).
You’ll be part of a small and ambitious team that has led Engineering and Product at Tesla, Apple, Brex, and Google. As an early member of the team, you’ll work closely with the founders and shape the engineering culture.
We are an in-person company in San Francisco, CA, and we are in the office 5 days a week.
We use Typescript, NextJS, NodeJS, and Postgres.
Responsibilities
Design and build our core APIs, AI infrastructure, and data pipeline that scrapes millions of data points every week
Own features end-to-end from working on an idea with the founders to designing, testing, shipping to prod, and getting feedback from customers
Architect and build cutting-edge AI systems like agentic web browsing, RAG, and data extraction
Write both frontend and backend code
Work closely with the founders to build the product roadmap
Talk to customers and understand how the energy industry works to inform better product decisions
You’re a great fit if…
You have 3+ years of experience
You love writing code but also care about having a significant impact. Being an early engineer means making the right trade-offs between writing perfect code and “good enough” for the situation.
You believe that everything can be figured out. You can take a fuzzy goal, work independently toward a solution, and communicate effectively along the way.
You want to be a founder one day. You’ll see what it's like to start a vertical AI company (in the energy space). If there’s anything about business or tech you’re interested in, we’ll teach you.
You’re NOT a great fit if…
You want to ship perfect code. Speed is one of our main advantages as a startup. We don’t have the luxury of spending time on perfection, and we always try to do more with less.
You don’t care about the business side of things. Our technical decisions and business strategy go hand in hand. Understanding how they influence each other helps us build the right things.
About
Spark
Spark
is an AI research tool that helps energy developers build solar farms and battery plants faster.
Spark was founded by Tae and Julia, who were product and engineering leaders at Tesla, Brex, and Apple. We're backed by industry leaders like AI Grant (Nat Friedman, Daniel Gross), the founder of Brex, and more. We're growing fast and serve large development firms like Colliers Engineering, Standard Solar, and Cypress Creek Renewables.
Spark
Founded:
2023
Team Size:
2
Status:
Active
Founders
Curry: A Truly Integrated Functional Logic Programming Language
A Truly Integrated Functional Logic Programming Language
-- Returns the last number of a list.
last :: [Int] -> Int
last (_ ++ [x]) = x

-- Returns some permutation of a list.
perm :: [a] -> [a]
perm []     = []
perm (x:xs) = insert (perm xs)
 where insert ys     = x : ys
       insert (y:ys) = y : insert ys
Curry is a
declarative multi-paradigm programming language
which combines in a seamless way features from
functional programming
(nested expressions, higher-order functions, strong typing, lazy evaluation)
and
logic programming
(non-determinism, built-in search, free variables, partial data structures).
Compared to the single programming paradigms, Curry provides
additional features, like optimal evaluation for logic-oriented
computations and flexible, non-deterministic pattern matching
with user-defined functions.
Features
Purely Declarative
Curry is called a declarative language, because computed results are independent of the time and order of evaluation, which simplifies reasoning on programs. Side effects can be modeled as “IO” operations, i.e., a declarative description of what to do. Operations are constructed by expressions only, there are no statements or instructions, and every binding to a variable is immutable.
Type Inference
Curry is strongly typed but type annotations of functions need not be written by the programmer: they are automatically inferred by the compiler using a type inference algorithm. Nevertheless, it is a good style to write down the types of functions in order to provide at least a minimal documentation of the intended use of functions.
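As a small illustration (my own example, not one taken from the Curry documentation), a definition written without any signature still receives a fully general inferred type:

-- No signature written by the programmer; the compiler infers
-- twice :: (a -> a) -> a -> a
twice f x = f (f x)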
Non-Determinism
Curry supports non-deterministic operations which can return different values for the same input. Non-deterministic operations support a programming style similar to that of logic programming while preserving some advantages of functional programs such as expression nesting and lazy (demand-driven) evaluation.
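A minimal sketch, using the Prelude's binary choice operator "?" (the classic "coin" example from Curry tutorials):

-- coin non-deterministically evaluates to 0 or to 1;
-- each evaluation may yield either result.
coin :: Int
coin = 0 ? 1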
Free Variables
Free variables denote “unknown” values. They are instantiated (i.e., replaced by some concrete values) so that the instantiated expression is evaluable. REPLs of Curry systems show the bindings (i.e., instantiations) of the free variables of the main expression that were computed in order to evaluate that expression.
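As a sketch in the spirit of the standard Curry tutorial example (assuming the Prelude's equational constraint "=:="), an alternative definition of last lets the system search for values of the free variables ys and x:

-- The system instantiates the free variables ys and x so that
-- ys ++ [x] unifies with the argument list; x is then the last element.
last xs | ys ++ [x] =:= xs = x
  where x, ys free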
The development of Curry is an international
initiative intended to provide a common platform for the research, teaching and application of integrated
functional logic languages. The design of Curry is mainly discussed in the Curry mailing list. A detailed report
describing the language is also available. To get an idea of Curry, you may have a look into the short list of
Curry's features or a tutorial on Curry.
Ecosystem
Compilers
Several implementations of Curry are available and/or under development. The most prominent representatives are the Portland Aachen Kiel Curry System (PAKCS), the new version of the Kiel Curry System (KiCS2), and the Münster Curry Compiler (MCC).
Package Manager
The Curry Package Manager (CPM) is a tool to distribute and install Curry libraries and applications and manage version dependencies between them. These libraries are organized in packages. There is a central index of all these packages which can easily be downloaded by CPM.
CurryDoc
CurryDoc is a tool that generates the documentation for a Curry program in HTML (and optionally also LaTeX) format. The generated HTML pages contain information about all data types and operations exported by a module as well as links between the different entities.
Curr(y)gle API Search
Curr(y)gle is a Curry API search engine which allows you to search the Curry libraries indexed by CPM. You can search by the names of operations, data types, or modules. It was designed following the example set by Haskell’s search engine Hoogle.
Non-volatile storage is a cornerstone of modern computer systems.
Photos, emails, bank balances, medical records, and other critical pieces of data are kept on digital storage devices, often replicated many times over for added durability.
Non-volatile
storage, or colloquially just "
disk
", can store binary data even when the computer it is attached to is powered off.
Computers have other forms of
volatile
storage such as CPU registers, CPU cache, and random-access memory, all of which are faster but require continuous power to function.
Here, we're going to cover the history, functionality, and performance of non-volatile storage devices across the evolution of computing, all using fun and interactive visual elements.
This blog is written in celebration of our latest product release:
PlanetScale Metal
.
Metal uses locally attached NVMe drives to run your cloud database, as opposed to the slower and less consistent network-attached storage used by most cloud database providers.
This results in blazing-fast queries, low latency, and unlimited IOPS.
Check out
the docs
to learn more.
As early as the 1950s, computers were using
tape drives
for non-volatile digital storage.
Tape storage systems have been produced in many form factors over the years, ranging from ones that take up an entire room to small drives that can fit in your pocket, such as the iconic Sony Walkman.
A tape
Reader
is a box containing hardware specifically designed for reading
tape cartridges
.
Tape cartridges are inserted and then unwound, which causes the tape to move over the
IO Head
, which can
read
and
write
data.
Though tape started being used to store digital information over 70 years ago, it is still in use today for certain applications.
A standard LTO tape cartridge holds several hundred meters of 0.5-inch-wide tape.
The tape has several tracks running along its length, each track being further divided up into many small cells.
A single tape cartridge contains many trillions of cells.
Each cell can have its magnetic polarization set to up or down, corresponding to a binary 0 or 1.
Technically, the magnetic field created by the
transition
between two cells is what encodes the 1 or 0.
A long sequence of bits on a tape forms a page of data.
In the visualization of the tape reader, we simplify this by showing the tape as a simple sequence of data pages, rather than showing individual bits.
When a tape needs to be read, it is loaded into a reader, sometimes by hand and sometimes by robot.
The reader then spins the cartridge with its motor and uses the
reader head
to read off the binary values as the tape passes underneath.
Give this a try with the (greatly slowed down) interactive visualization below.
You can control the
speed of the tape
if you'd like it faster or slower.
You can also
issue read requests
and
write requests
and then monitor how long these take.
You'll also be able to see the queue of pending IO operations pop up in the top-left corner.
Try issuing a few requests to get a feel for how tape storage works:
If you spend enough time with this, you will notice that:
If you read/write to a cell "near" the read head, it's fast.
If you read/write to a cell "far" from the read head, it's slow.
Even with modern tape systems, reading data that is far away on a tape can take tens of seconds, because the drive may need to wind the tape by hundreds of meters to reach the desired data.
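As a rough back-of-the-envelope sketch (the tape length and search speed below are assumed round numbers, not the specs of any real drive), seek time scales linearly with how far the tape has to wind:

# Back-of-the-envelope model of tape seek time (illustrative, not a real drive spec).
TAPE_LENGTH_M = 800        # a cartridge holds several hundred meters of tape
SEARCH_SPEED_M_PER_S = 10  # assumed high-speed wind/search speed

def tape_seek_seconds(current_pos_m, target_pos_m):
    """Time to wind the tape from the current position to the target position."""
    return abs(target_pos_m - current_pos_m) / SEARCH_SPEED_M_PER_S

print(tape_seek_seconds(0, 5))    # nearby data: ~0.5 s
print(tape_seek_seconds(0, 600))  # far-away data: ~60 s, i.e. tens of seconds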
Let's compare two more specific, interactive examples to illustrate this further.
Say we need to
read
a total of 4 pages and
write
an additional 4 pages worth of data.
In the first scenario, all 4 pages we need to read are in a neat sequence, and the 4 to write to are immediately after the reads.
You can see the IO operations queued up in the white container on the top-left.
Go ahead and click the
Time IO button
to see this in action, and observe the time it takes to complete.
As you can see, it takes somewhere around 3-4 seconds.
On a real system, with an IO head that can operate much faster and motors that can drive the spools more quickly, it would be much faster.
Now consider another scenario where we need to read and write the same number of pages.
However, these reads and writes are spread out throughout the tape.
Go ahead and click the
Time IO button
again.
That took ~7x longer for the same total number of reads and writes!
Imagine if this system was being used to load your social media feed or your email inbox.
It might take tens of seconds or even a full minute to display.
This would be totally unacceptable.
Though the latency for random reads and writes is poor, tape systems operate quite well when reading or writing data in long sequences.
In fact, tape storage still has many such use cases today in the modern tech world.
Tape is particularly well-suited for situations where there is a need for
massive
amounts of storage that does not need to be read frequently, but needs to be safely stored.
This is because tape is both cheaper per-gigabyte and has a longer shelf-life than its competition: solid state drives and hard disk drives.
For example, CERN has a tape storage data warehouse with
over 400 petabytes of data
under management.
AWS also offers
tape archiving
as a service.
What tape is not well suited for is high-traffic transactional databases.
For these and many other high-performance tasks, other storage mediums are needed.
The next major breakthrough in storage technology was the
hard disk drive
.
Instead of storing binary data on a tape, we store them on a small circular metal disk known as the
Platter
.
This disk is placed inside of an
enclosure
with a special
read/write head
, and spins very fast (7200 RPM is common, for example).
Like the tape, this disk is also divided into tracks.
However, the tracks are
circular
, and a single disk will often have well over 100,000 tracks.
Each track contains hundreds of thousands of pages, with each page holding roughly 4 KB of data.
An HDD requires a mechanical spinning motion of both the reader and the platter to bring the data to the correct location for reading.
One advantage of an HDD over tape is that the entire recording surface is accessible at all times, since none of it is wound up inside a spool.
It still takes time to move the needle and spin the disk to the correct location for a read or write, but the data does not need to be "uncovered" the way it does on a tape.
This, combined with the fact that the needle and the platter can move independently, means data can be read and written with much lower latency.
A typical random read can be performed in 1-3 milliseconds.
Below is an interactive hard drive.
You can control the
speed of the platter
if you'd like it faster or slower.
You can request that the hard drive
read a page
and
write to a nearby available page
.
If you request a read or write before the previous one is complete, a queue will be built up, and the disk will process the requests in the order it receives them.
As before, you'll also be able to see the queue of pending IO operations in the white IO queue box.
As with the tape, the speed of the platter spin has been slowed down by orders of magnitude to make it easier to see what's going on.
In real disks, there would also be many more tracks and sectors, enough to store multiple terabytes of data in some cases.
Let's again consider a few specific scenarios to see how the order of reads and writes affects latency.
Say we need to write a total of three pages of data and then read 3 pages afterward.
The three writes will happen on nearby available pages, and the reads will be from tracks 1, 4, and 3.
Go ahead and click the
Time IO button
.
You'll see the requests hit the queue, the reads and writes get fulfilled, and then the total time at the end.
Due to the sequential nature of most of these operations, all the tasks were able to complete quickly.
Now consider the same set of 6 reads and writes, but with them being interleaved in a different order.
Go ahead and click the
Time IO button
again.
If you had the patience to wait until the end, you should notice how the same total number of reads and writes took much longer.
A lot of time was spent waiting for the platter to spin into the correct place under the read head.
Magnetic disks have supported command queueing directly on the disks for a long time (80s with SCSI, 2000s with SATA).
Because of this, the OS can issue multiple commands that run in parallel and potentially out-of-order, similar to SSDs.
Magnetic disks also perform better when they can build up a queue of operations, because the disk controller can then schedule the reads and writes to suit the geometry of the disk.
Here's a visualization to help us see the difference between the latency of a
random tape read
compared to a
random disk read
.
A random tape read will often take multiple seconds (I put 1 second here to be generous), while a disk head seek takes closer to 2 milliseconds (a millisecond is one thousandth of a second).
Even though HDDs are an improvement over tape, they are still "slow" in some scenarios, especially random reads and writes.
The next big breakthrough, and currently the most common storage format for transactional databases, are SSDs.
Solid State Storage, or "flash" storage, was invented in the 1980s.
It was around even while tape and hard disk drives dominated the commercial and consumer storage spaces.
It didn't become mainstream for consumer storage until the 2000s due to technological limitations and cost.
The advantage of SSDs over both tape and disk is that they do not rely on any
mechanical
components to read data.
All data is read, written, and erased electronically using a special type of non-volatile
transistor
known as NAND flash.
This means that each 1 or 0 can be read or written without moving any physical components, purely through electrical signaling.
SSDs are organized into one or more
targets
, each of which contains many
blocks
which each contain some number of
pages
.
SSDs read and write data at the page level, meaning they can only read or write full pages at a time.
In the SSD below, you can see reads and writes happening via the
lines
between the controller and targets (also called "traces").
The removal of mechanical components reduces the latency between when a request is made and when the drive can fulfill the request.
There is no more waiting around for something to spin.
We're showing small examples in the visual to make it easier to follow along, but a single SSD is capable of storing multiple terabytes of data.
For example, say each page holds 4,096 bytes of data (4 KB).
Now, say each block stores 16k pages, each target stores 16k blocks, and our device has 8 targets.
This comes out to
4k * 16k * 16k * 8 = 8,796,093,022,208
bytes, or roughly 8 terabytes.
We could increase the capacity of this drive by adding more targets or packing more pages in per block.
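The arithmetic above can be sanity-checked with a few lines of Python (the geometry is the illustrative one from this example, not any specific drive's layout):

# Capacity of the example drive: pages x blocks x targets (illustrative geometry).
PAGE_BYTES        = 4 * 1024   # 4 KB per page
PAGES_PER_BLOCK   = 16 * 1024  # 16K pages per block
BLOCKS_PER_TARGET = 16 * 1024  # 16K blocks per target
TARGETS           = 8

total_bytes = PAGE_BYTES * PAGES_PER_BLOCK * BLOCKS_PER_TARGET * TARGETS
print(total_bytes)                 # 8796093022208 bytes
print(total_bytes / 2**40, "TiB")  # 8.0 TiB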
Here's a visualization to help us see the difference between the latency of a random read on an
HDD
vs
SSD
.
A random read on an SSD varies by model, but can execute as fast as 16μs (μs = microsecond, which is one millionth of a second).
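Putting the figures quoted so far side by side (rough orders of magnitude from this article, not benchmarks of any specific device):

# Rough latency comparison using the numbers quoted above.
LATENCY_SECONDS = {
    "tape (random read)": 1.0,    # often multiple seconds; 1 s is generous
    "HDD (random read)":  2e-3,   # ~2 ms head seek
    "SSD (random read)":  16e-6,  # as fast as ~16 us on some models
}

ssd = LATENCY_SECONDS["SSD (random read)"]
for medium, seconds in LATENCY_SECONDS.items():
    print(f"{medium:>20}: {seconds:>9.6f} s  ({seconds / ssd:,.0f}x an SSD read)")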
It would be tempting to think that with the removal of mechanical parts, the organization of data on an SSD no longer matters.
Since we don't have to wait for things to spin, we can access any data at any location with perfect speed, right?
Not quite.
There are other factors that impact the performance of IO operations on an SSD.
We won't cover them all here, but two that we will discuss are
parallelism
and
garbage collection
.
Typically, each
target
has a dedicated
line
going from the control unit to the target.
This line is what processes reads and writes, and only one page can be communicated by each line at a time.
Pages can be communicated on these lines
really fast
, but it still does take a small slice of time.
The organization of data and sequence of reads and writes has a significant impact on how efficiently these lines can be used.
In the interactive SSD below, we have 4 targets and a set of 8 write operations queued up.
You can click the
Time IO button
to see what happens when we can use the lines in parallel to get these pages written.
In this case, we wrote 8 pages spread across the 4 targets.
Because they were spread out, we were able to leverage parallelism to write 4 at a time in two time slices.
Compare that with another sequence where the SSD writes all 8 pages to the same target.
The SSD can only utilize a single data line for the writes.
Again, hit the
Time IO button
to see the timing.
Notice how only one line was used and it needed to write sequentially.
All the other lines sat dormant.
This demonstrates that the order in which we read and write data matters for performance.
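A toy model of this channel parallelism, assuming a fixed per-page transfer time and one transfer per target line per time slice, as in the visualization above:

# Toy model: the busiest target line dictates the total time, since lines work in parallel.
from collections import Counter

def time_slices(target_of_each_write):
    """target_of_each_write: which target line each page goes to."""
    busiest_line = max(Counter(target_of_each_write).values())
    return busiest_line

print(time_slices([0, 1, 2, 3, 0, 1, 2, 3]))  # spread across 4 targets -> 2 time slices
print(time_slices([0, 0, 0, 0, 0, 0, 0, 0]))  # all on one target       -> 8 time slices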
Many software engineers don't have to think about this on a day-to-day basis, but those designing software like MySQL need to pay careful attention to what
structures data is being stored in
and how data is laid out on disk.
The minimum "chunk" of data that can be read from or written to an SSD is the size of a page.
Even if you only need a subset of the data within a page, requests to the drive must still be made in whole-page units.
Data can be read from a page any number of times.
However, writes are a bit different.
After a page is written to, it cannot be overwritten with new data until the old data has been explicitly
erased
.
The tricky part is, individual pages cannot be erased.
When you need to erase data, the entire block must be erased, and afterwards all of the pages within it can be reused.
Each SSD needs to have an internal algorithm for managing which pages are empty, which are in use, and which are
dirty
.
A
dirty
page is one that has been written to, but whose data is no longer needed and can be erased.
Data also sometimes needs to be re-organized to allow for new write traffic.
The algorithm that manages this is called the
garbage collector
.
Let's see how this can have an impact by looking at another visualization.
In the below SSD, all four of the targets are storing data.
Some of the data is
dirty, indicated by red text
.
We want to
write
5 pages worth of data to this SSD.
If we
time this sequence of writes
, the SSD can happily write them to free pages with no need for extra garbage collection.
There are sufficient unused pages in the first target.
Now say we have a drive with different data already on it, but we want to
write
those same 5 pages of data to it.
In this drive, we only have 2 pages that are unused, but a number of
dirty pages
.
In order to write 5 pages of data, the SSD will need to spend some time doing garbage collection to make room for the new data.
When attempting to
time another sequence of writes
, some garbage collection will take place to make room for the data, slowing down the write.
In this case, the drive had to move the two non-dirty pages from the top-left target to new locations.
By doing this, it was able to make all of the pages on the top-left target dirty, making it safe to erase that data.
This made room for the 5 new pages of data to be written.
These additional steps significantly slowed down the performance of the write.
This shows how the organization of data on the drive can have an impact on performance.
When an SSD handles a lot of reads, writes, and deletes, its performance can degrade due to garbage collection.
Though you may not be aware, busy SSDs do garbage collection tasks regularly, which can slow down other operations.
These are just two of many reasons why the arrangement of data on a SSD affects its performance.
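A minimal sketch of the garbage-collection step described above: a block can only be erased as a whole, so any live (non-dirty) pages must first be copied to another block. The data structures here are toys; a real flash translation layer is far more involved.

def garbage_collect(block, destination):
    """Copy live pages out of `block`, then erase it. Returns how many pages were copied."""
    live = [page for page in block if page not in ("FREE", "DIRTY")]
    destination.extend(live)          # extra writes: the hidden cost of garbage collection
    block[:] = ["FREE"] * len(block)  # the whole block is erased at once
    return len(live)

block = ["DIRTY", "DIRTY", "page A", "page B"]
other_block = []
copied = garbage_collect(block, other_block)
print(copied, block, other_block)
# 2 ['FREE', 'FREE', 'FREE', 'FREE'] ['page A', 'page B']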
The shift from tape, to disk, to solid state has allowed durable IO performance to accelerate dramatically over the past several decades.
However, there is another phenomenon that has caused an additional shift in IO performance: moving to the cloud.
Though there were companies offering cloud compute services before this, the mass move to cloud gained significant traction when Amazon AWS launched in 2006.
Since that time, tens of thousands of companies have moved their app servers and database systems to AWS and to similar services from Google, Microsoft, and others.
Though there are many upsides to this trend, there are several downsides.
One of these is that servers tend to have less permanence.
Users rent (virtualised) servers on arbitrary hardware within gigantic data centers.
These servers can get shut down at any time for a variety of reasons - hardware failure, hardware replacement, network disconnects, etc.
When building platforms on rented cloud infrastructure, computer systems need to be able to tolerate more frequent failures at any moment.
This, along with many engineers' desire for dynamically scalable storage volumes, has led to a new sub-phenomenon: separation of
storage
and
compute
.
Traditionally, most servers, desktops, laptops, phones and other computing devices have their non-volatile storage directly attached.
These are attached with SATA cables, PCIe interfaces, or even built directly into the same SOC as the RAM, CPU, and other components.
This is great for speed, but provides the following challenges:
1. If the server goes down, the data goes down with it.
2. The storage is of a fixed size.
For application servers, challenges 1 and 2 are typically not a big deal since they work well in ephemeral environments by design.
If one goes down, just spin up a new one.
They also don't typically need much storage, as most of what they do happens in-memory.
Databases are a different story.
If a server goes down, we don't want to lose our data, and data size grows quickly, meaning we may hit storage limits.
Partly
due to this, many cloud providers allow you to spin up compute instances with a separately-configurable storage system attached over the network.
In other words, network-attached storage becomes the default.
When you create a new server in EC2, the default is typically to attach an EBS network storage volume.
Many database services including Amazon RDS, Amazon Aurora, Google Cloud SQL, and PlanetScale rely on these types of storage systems that have compute separated from storage over the network.
This provides a nice advantage in that the storage volume can be dynamically resized as data grows and shrinks.
It also means that if a server goes down, the data is still safe, and can be re-attached to a different server.
This simplicity has come at a cost, however.
Consider the following simple configuration.
In it, we have a server with a CPU, RAM, and direct-attached NVMe SSD.
NVMe SSDs are a type of solid state disk that use the non-volatile memory host controller interface specification for blazing-fast IO speed and great bandwidth.
In such a setup, the
round trip from CPU to memory (RAM)
takes about 100 nanoseconds (a nanosecond is 1 billionth of a second).
A round trip from the CPU to a locally-attached NVMe SSD
takes about 50,000 nanoseconds (50 microseconds).
This makes it pretty clear that it's best to keep as much data in memory as possible for faster IO times.
However, we still need disk because (A) memory is more expensive and (B) we need to store our data somewhere permanent.
As slow as it may seem here, a locally-attached NVMe SSD is about as fast as it gets for modern storage.
Let's compare this to the
speed of a network-attached storage volume
, such as EBS.
Each read and write requires a short network round trip within the data center.
The round trip time is significantly worse, taking about 250,000 nanoseconds (250 microseconds, or 0.25 milliseconds).
Using the same cutting-edge SSD, individual read and write requests now take roughly five times longer to fulfill.
When we have large amounts of sequential IO, the negative impact of this can be reduced, but not eliminated.
We have introduced significant
latency
deterioration every time we need to hit our storage system.
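Side by side, using the round-trip figures quoted above (rough numbers, not benchmarks):

# Round-trip latencies quoted in this article, in nanoseconds.
ROUND_TRIP_NS = {
    "CPU -> RAM":                     100,
    "CPU -> local NVMe SSD":       50_000,
    "CPU -> network-attached SSD": 250_000,
}

ram = ROUND_TRIP_NS["CPU -> RAM"]
for path, ns in ROUND_TRIP_NS.items():
    print(f"{path:<28} {ns:>8,} ns  ({ns / ram:,.0f}x a RAM access)")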
Another issue with network-attached storage in the cloud comes in the form of limiting IOPS.
Many cloud providers that use this model, including AWS and Google Cloud, limit the amount of IO operations you can send over the wire.
By default, a GP3 EBS volume on Amazon allows 3,000 IOPS, with an additional pool that can be built up to allow for occasional bursts.
The following visual shows how this works.
Note that the burst balance size is smaller here than in reality to make it easier to see.
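A simplified token-bucket-style sketch of how such an IOPS cap with a burst pool behaves (the numbers and the refill rule are illustrative, not AWS's exact EBS accounting):

# Simplified model of an IOPS cap with a burst pool (illustrative only).
class ThrottledVolume:
    def __init__(self, baseline_iops=3000, burst_pool=10_000):
        self.baseline = baseline_iops
        self.max_pool = burst_pool
        self.burst = burst_pool

    def io_allowed_this_second(self, requested):
        """How many of `requested` IOs get through in this one-second window."""
        allowed = min(requested, self.baseline)
        extra = min(requested - allowed, self.burst)  # dip into the burst pool
        self.burst -= extra
        if requested < self.baseline:                 # a quiet second refills the pool
            self.burst = min(self.max_pool, self.burst + self.baseline - requested)
        return allowed + extra

vol = ThrottledVolume()
print(vol.io_allowed_this_second(8000))  # 8000 -- the burst pool absorbs the extra 5000
print(vol.io_allowed_this_second(8000))  # 8000 -- the pool is now empty
print(vol.io_allowed_this_second(8000))  # 3000 -- throttled back to the baseline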
If instead you have your storage attached directly to your compute instance, there are no artificial limits placed on IO operations.
You can read and write as fast as the hardware will allow for.
For as many steps as we've taken forward in IO performance over the years, this seems like a step in the wrong direction.
This separation buys some nice conveniences, but at what cost to performance?
How do we overcome issue 1 (data durability) and 2 (drive scalability) while keeping good IOPS performance?
Issue 1 can be overcome with replication.
Instead of relying on a single server to store all data, we can replicate it onto several computers.
One common way of doing this is to have one server act as the primary, which will receive all write requests.
Then 2 or more additional servers get all the data replicated to them.
With the data in three places, the likelihood of losing data becomes very small.
Let's look at concrete numbers.
As a made up value, say in a given month, there is a 1% chance of a server failing.
With a single server, this means we have a 1% chance of losing our data each month.
This is unacceptable for any serious business purpose.
However, with three servers, this goes down to 1% × 1% × 1% = 0.0001% chance (1 in one million).
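The same arithmetic in code, assuming the made-up 1% monthly failure rate and independent failures:

# Probability that all three replicas fail in the same month.
p_single = 0.01
p_all_three = p_single ** 3
print(p_all_three)  # 1e-06, i.e. 0.0001%, or 1 in a million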
At PlanetScale the protection is actually far stronger than even this, as we automatically detect and replace failed nodes in your cluster.
We take frequent and reliable backups of the data in your database for added protection.
Problem 2. can be solved, though it takes a bit more manual intervention when working with directly-attached SSDs.
We need to ensure that we monitor and get alerted when our disk approaches capacity limits, and then have tools to easily increase capacity when needed.
With such a feature, we can have data permanence, scalability, and blazing fast performance. This is exactly what PlanetScale has built with Metal.
PlanetScale just announced
Metal
, an industry-leading solution to this problem.
With Metal, you get a full-fledged Vitess+MySQL cluster set up, with each MySQL instance running with a direct-attached NVMe SSD drive.
Each Metal cluster comes with a primary and two replicas by default for extremely durable data.
We allow you to resize your servers to larger drives with just a few clicks when you run up against storage limits.
Behind the scenes, we handle spinning up new nodes and migrating your data from your old instances to the new ones with zero downtime.
Perhaps most importantly, with a Metal database, there is no artificial cap on IOPS.
You can perform IO operations with minimal latency, and hammer it as hard as you want without being throttled or paying for expensive IOPS classes on your favorite cloud provider.
If you want the ultimate in performance and scalability,
try Metal today
.
Juniper patches bug that let Chinese cyberspies backdoor routers
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 17:40:07
Juniper Networks has released emergency security updates to patch a Junos OS vulnerability exploited by Chinese hackers to backdoor routers for stealthy access. [...]...
Juniper Networks has released emergency security updates to patch a Junos OS vulnerability exploited by Chinese hackers to backdoor routers for stealthy access.
This medium severity flaw (
CVE-2025-21590
) was reported by Amazon security engineer Matteo Memelli and is caused by an
improper isolation or compartmentalization
weakness. Successful exploitation lets local attackers with high privileges execute arbitrary code on vulnerable routers to compromise the devices' integrity.
"At least one instance of malicious exploitation (not at Amazon) has been reported to the Juniper SIRT. Customers are encouraged to upgrade to a fixed release as soon as it's available and in the meantime take steps to mitigate this vulnerability," Juniper
warned in an out-of-cycle security advisory
issued on Wednesday,
"While the complete list of resolved platforms is under investigation, it is strongly recommended to mitigate the risk of exploitation by restricting shell access to trusted users only."
The vulnerability impacts NFX-Series, Virtual SRX, SRX-Series Branch, SRX-Series HE, EX-Series, QFX-Series, ACX, and MX-Series devices and was resolved in 21.4R3-S10, 22.2R3-S6, 22.4R3-S6, 23.2R2-S3, 24.2R1-S2, 24.2R2, 24.4R1, and all subsequent releases.
"These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise," the U.S. cybersecurity agency said.
Exploited by Chinese cyberspies
Juniper's advisory was released the same day
as a Mandiant report
revealing that Chinese hackers have exploited the security flaw since 2024 to backdoor vulnerable Juniper routers that reached end-of-life (EoL).
All six backdoors deployed in this campaign had distinct C2 communication methods and used a separate set of hardcoded C2 server addresses.
"In mid 2024, Mandiant discovered threat actors deployed custom backdoors operating on Juniper Networks' Junos OS routers," the cybersecurity company explained. "Mandiant attributed these backdoors to the China-nexus espionage group, UNC3886. Mandiant uncovered several TINYSHELL based backdoors operating on Juniper Networks' Junos OS routers."
UNC3886 is known for orchestrating sophisticated attacks exploiting zero-day vulnerabilities in edge networking devices and virtualization platforms.
Earlier this year, Black Lotus Labs researchers said that unknown threat actors have been targeting Juniper edge devices (many acting as VPN gateways)
with J-magic malware
that opens a reverse shell if it detects a "magic packet" in the network traffic.
The J-magic campaign was active between mid-2023 and at least mid-2024, and its goal was to gain long-term access to the compromised devices while evading detection.
Black Lotus Labs linked this malware with "low confidence" to the
SeaSpy backdoor
. Another Chinese-nexus threat actor (tracked as UNC4841) deployed this malware more than two years ago on Barracuda Email Security Gateways to
breach the email servers
of U.S. government agencies.
Mahmoud Khalil’s Wife Speaks Out on His Unconstitutional Arrest
Portside
portside.org
2025-03-13 17:22:54
Mahmoud Khalil’s Wife Speaks Out on His Unconstitutional Arrest
Protesters demonstrate in support of Palestinian activist Mahmoud Khalil at Washington Square Park, Tuesday, March 11, 2025, in New York. (AP Photo/Yuki Iwamura // Progressive Hub)
My husband, Mahmoud Khalil, is my rock. He is my home and he is my happy place. I am currently 8 months pregnant, and I could not imagine a better father for my child. We’ve been excitedly preparing to welcome our baby, and now Mahmoud has been ripped away from me for no reason at all.
I am pleading with the world to continue to speak up against his unjust and horrific detention by the Trump administration.
This last week has been a nightmare: Six days ago, an intense and targeted doxxing campaign against Mahmoud began. Anti-Palestinian organizations were spreading false claims about my husband that were simply not based in reality. They were making threats against Mahmoud and he was so concerned about his safety that he emailed Columbia University on March 7th. In his email, he begged the university for legal support, “I haven’t been able to sleep, fearing that ICE or a dangerous individual might come to my home. I urgently need legal support and I urge you to intervene,” he said in his email.
Columbia University never responded to that email.
Instead, on March 8th, at around 8:30 pm, as we were returning home from an Iftar dinner, an ICE officer followed us into our building and asked, “Are you Mahmoud Khalil?”
Mahmoud stated, “Yes.”
The officer then proceeded to say, “We are with the police, you have to come with us.”
The officer told Mahmoud to give me the apartment keys and that I could go upstairs. When I refused, afraid to leave my husband, the officer stated, “I will arrest you too.”
The officers later barricaded Mahmoud from me. We were not shown any warrant and the ICE officers hung up the phone on our lawyer. When my husband attempted to give me his phone so I could speak with our lawyer, the officers got increasingly aggressive, despite Mahmoud being fully cooperative.
Everyone who knows Mahmoud knows him to be level-headed even in the most stressful situations. And even in this terrifying situation, he was calm.
Within minutes, they had handcuffed Mahmoud, took him out into the street and forced him into an unmarked car. Watching this play out in front of me was traumatizing: It felt like a scene from a movie I never signed up to watch.
I was born and raised in the Midwest. My parents came here from Syria, carrying their stories of the oppressive regime there that made life unlivable. They believed living in the US would bring a sense of safety and stability. But here I am, 40 years after my parents immigrated here, and just weeks before I’m due to give birth to our first child, and I feel more unsafe and unstable than I have in my entire life.
US immigration ripped my soul from me when they handcuffed my husband and forced him into an unmarked vehicle. Instead of putting together our nursery and washing baby clothes in anticipation of our first child, I am left sitting in our apartment, wondering when Mahmoud will get a chance to call me from a detention center.
I demand the US government release him, reinstate his Green Card, and bring him home.
GitLab released security updates for Community Edition (CE) and Enterprise Edition (EE), fixing nine vulnerabilities, among which two critical severity ruby-saml library authentication bypass flaws. [...]...
GitLab released security updates for Community Edition (CE) and Enterprise Edition (EE), fixing nine vulnerabilities, among which two critical severity ruby-saml library authentication bypass flaws.
All flaws were addressed in GitLab CE/EE versions 17.7.7, 17.8.5, and 17.9.2, while all versions before those are vulnerable.
GitLab.com is already patched, and GitLab Dedicated customers will be updated automatically, but users who maintain self-managed installations on their own infrastructure will need to apply the updates manually.
"We strongly recommend that all installations running a version affected by the issues described below are upgraded to the latest version as soon as possible,"
warns the bulletin
.
The two critical flaws GitLab addressed this time are
CVE-2025-25291
and
CVE-2025-25292
, both in the ruby-saml library, which is used for SAML Single Sign-On (SSO) authentication at the instance or group level.
These vulnerabilities allow an authenticated attacker with access to a valid signed SAML document to impersonate another user within the same SAML Identity Provider (IdP) environment.
This means an attacker could gain unauthorized access to another user's account, leading to potential data breaches, privilege escalation, and other security risks.
GitHub discovered the ruby-saml bugs and has published a technical deep dive into the two flaws, noting that its platform hasn't been impacted as the use of the ruby-saml library stopped in 2014.
"GitHub doesn't currently use ruby-saml for authentication, but began evaluating the use of the library with the intention of using an open source library for SAML authentication once more,"
explains GitHub's writeup
.
"This library is, however, used in other popular projects and products. We discovered an exploitable instance of this vulnerability in GitLab, and have notified their security team so they can take necessary actions to protect their users against potential attacks."
Of the rest of the issues fixed by GitLab, one that stands out is a high-severity remote code execution issue tracked under
CVE-2025-27407
.
The flaw allows an attacker-controlled authenticated user to exploit the Direct Transfer feature, which is disabled by default, to achieve remote code execution.
The remaining issues are low- to medium-severity problems concerning denial of service (DoS), credential exposure, and shell code injection, all exploitable with elevated privileges.
GitLab users who cannot upgrade immediately to a safe version are advised to apply the following mitigations in the meantime:
Ensure all users on the GitLab self-managed instance have 2FA enabled. Note that MFA at the identity provider level does not mitigate the problem.
Request admin approval for auto-created users by setting 'gitlab_rails['omniauth_block_auto_created_users'] = true'
While these steps significantly reduce the risk of exploitation, they should only be treated as temporary mitigation measures until upgrading to GitLab 17.9.2, 17.8.5, or 17.7.7 is practically possible.
Careless People by Sarah Wynn-Williams review – Zuckerberg and me
Guardian
www.theguardian.com
2025-03-13 17:00:01
An eye-opening insider account of Facebook alleges a bizarre office culture and worrying political overreach If Douglas Coupland’s 1995 novel about young tech workers, Microserfs, were a dystopian tragedy, it might read something like Careless People. The author narrates, in a fizzy historic present...
I
f Douglas Coupland’s 1995 novel about young tech workers, Microserfs, were a dystopian tragedy, it might read something like Careless People. The author narrates, in a fizzy historic present, her youthful idealism when she arrives at
Facebook
(now Meta) to work on global affairs in 2011, after a stint as an ambassador for New Zealand. Some years later she finds a female agency worker having a seizure on the office floor, surrounded by bosses who are ignoring her. The scales falling from her eyes become a blizzard. These people, she decides, just “didn’t give a fuck”.
Mark Zuckerberg’s first meeting with a head of state was with the Russian prime minister, Dmitry Medvedev, in 2012. He was sweaty and nervous, but slowly he acquires a taste for the limelight. He asks (unsuccessfully) to be sat next to Fidel Castro at a dinner. In 2015 he asks Xi Jinping if he’ll “do him the honor of naming his unborn child”. (Xi refuses.) He’s friendly with Barack Obama, until the latter gives him a dressing-down about fake news.
In 2016, Facebook embeds staff in Donald Trump’s campaign “alongside Trump campaign programmers, ad copywriters, media buyers, network engineers, and data scientists”, helping him win. This inspires Zuckerberg to consider running for president himself, and he tours US swing states in 2017. Wynn-Williams describes his speeches as sounding “like what a kid thinks a president sounds like”. One goes: “The occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so we must think anew, act anew.”
Meanwhile, in an effort to do business in China, his company has been offering the Chinese Communist party a “white-glove service”, and a genocide has occurred in Myanmar following a flood of false anti-Muslim stories posted on Facebook. In time Facebook abandons internet.org, its idea to give developing countries free access to the internet, or at least Facebook, pivots to the “metaverse”, a bad virtual-reality game populated by people who for a long time did not have legs, and finally pivots again to AI.
Zuckerberg, in short, turns out to be a giant man-baby suffering from a severe case of the Dunning-Kruger effect, whereby people overestimate their own cognitive abilities. His colleagues obsequiously let him win at board games. He calls politicians unfriendly to Facebook “adversaries” and instructs his team to apply pressure to “pull them over to our side”. He blames his assistants when he forgets his own passport.
Floating through the book like a toxic ice queen, meanwhile, is Facebook’s COO, Sheryl Sandberg. Wynn-Williams isn’t buying her “Lean In” talk. In one of two remarkable body-horror interludes in the book (the first is when she is almost eaten by a shark as a child), Wynn-Williams nearly dies in childbirth, but she is still harassed for work updates during her recovery. When she returns to the office her male boss gives her an unflattering performance review. “You weren’t responsive enough,” he says. “In my defense,” she replies, “I was in a coma for some of it.”
This sounds like a job for a famous champion of women in the workplace. “Friends who have fallen for Sheryl’s Lean In schtick,” Wynn-Williams writes, “earnestly recommend going to her with my concerns.” But she is not convinced. She has already been sent to a Zika hotspot while heavily pregnant, and to Japan while pregnant again, to help promote Sandberg’s book.
Wynn-Williams left Facebook in 2018 to work on “unofficial negotiations between the US and China on AI weapons”. Has the company’s office culture improved since then? A clue might be Zuckerberg’s recent appearance on Joe Rogan’s podcast. He complains that corporate culture has become too “neutered” and needs a new injection of “masculine energy”. In February, he visited the White House to talk to Donald Trump about AI.
Editor’s note: since this review was written Meta has responded to Wynn-Williams’ book, calling it “a mix of out-of-date and previously reported claims about the company and false accusations about our executives”.
‘Smokescreen Antisemitism’: How the Trump-Fueled Arrest of Mahmoud Khalil Endangers Jews
Portside
portside.org
2025-03-13 16:57:32
‘Smokescreen Antisemitism’: How the Trump-Fueled Arrest of Mahmoud Khalil Endangers Jews
People demonstrate outside Thurgood Marshall United States Courthouse, on the day of a hearing on the detention of Palestinian activist and Columbia University graduate student Mahmoud Khalil, in New York City. (Photo credit: Shannon Stapleton / Reuters // Haaretz)
"Shalom Mahmoud" are not words I'd ever thought I would see from an official White House social media account – certainly not when used to announce the government's arrest and detention of Mahmoud Khalil, a recent graduate of Columbia University.
Everything about this incident is nightmarish, but the White House's decision to announce his arrest with "Shalom" must not be ignored. The message is abundantly clear: U.S. President Donald Trump wants the world to know that he's
pursued this unconscionable act on behalf of Jews.
This is just one piece of this administration's plans to use the false promise of Jewish safety as a smokescreen to carry out their repressive and dangerous agenda. Indeed, the White House has made it terrifyingly clear that this is only the beginning of the administration's assault on activists – all in the name of fighting antisemitism.
American Jews understand that these rights and institutions are critically important for protecting our community, as well as all marginalized groups. In fascist regimes, these rights and institutions were historically first on the chopping block; that the Trump administration should follow suit is no surprise. What is alarming, however, is the U.S. government's positioning of Jewish safety as somehow a goal for this plan.
Khalil is a U.S. permanent resident. He is married to a U.S. citizen, and his wife is due to give birth to their first son in a month. He is also Palestinian and was a leader of the Columbia University protest encampment following Israel's war in Gaza. The entire manner in which the U.S. government has carried out Khalil's arrest appears aimed at using his Palestinian identity and his beliefs to stoke fear and division.
If Trump actually cared about Jewish people or wanted to end antisemitism, he would be upholding our freedoms to speak, study, and stand up for what we believe, not arresting students and tearing them away from their families. It also wouldn't allow MAGA movement leaders and administration officials to actively promote antisemitic and racist conspiracy theories, use antisemitic messages to win elections, or – lest we forget – perform Nazi salutes from its biggest platforms, and then turn around and claim to be enacting unpopular policies on behalf of Jewish people.
People demonstrate ahead of a hearing on the detention of Palestinian activist and Columbia University graduate student Mahmoud Khalil, in New York City (Photo credit: Shannon Stapleton / Reuters // Haaretz)
Confused? That's the point. This latest episode is a blatant example of how this administration uses "smokescreen antisemitism" to obscure, confuse, and create cover for its despotic plans that harm everyone. Their goal is to generate division and fear to grow their power and wealth, and most importantly, to distract from their own, very real, antisemitism.
They have learned that positioning repressive actions as benefiting Jewish people obfuscates their audience and drives wedges between Jews and their neighbors who might otherwise join together to oppose these actions. And like all expressions of antisemitism, this strategy directly harms Jews.
At Bend the Arc: Jewish Action, the organization I lead, we have devoted much of our organizational time, resources and power toward ending antisemitism – regardless of its source. We do this because we understand how damaging antisemitism is, for Jewish people and for everyone in the United States.
The American Jewish community has a wide range of views on the war in Gaza, and college campuses have been a space of intense conflict, protest, and in some cases incidents of true antisemitism and harassment. Jews in the United States and worldwide are increasingly experiencing antisemitism against the backdrop of the war, and this poses a real threat.
All antisemitism is unacceptable. That includes when antisemitism happens on college campuses and in protests. It also includes the Trump administration's exploitation of Jewish fear, Jewish trauma, and Jewish experience to enact mass deportations, to punish political opponents, and to build a fascist regime.
This is an important moment that will test the American Jewish community's values and resolve. I draw inspiration from the fact that the judge who ruled that Mr. Khalil should not be deported is himself Jewish. We cannot allow ourselves to be used as an excuse to justify the Trump administration's plans to tear families apart – plans which much of the American Jewish community opposes.
Pro-Palestinian protestors rally in support of Mahmoud Khalil outside of the Thurgood Marshall Courthouse, where a hearing is underway regarding Khalil's arrest, in New York City. (Photo credit: Agence France-Presse (AFP) / Charly Triballeau // Haaretz)
Whatever we think about Mahmoud Khalil's beliefs, he is a legal resident of the United States who appears to be acting lawfully within his rights in this country. When a government starts threatening people's legal status because they disagree with the
content of their protected free speech, it is unquestionably dangerous to all
, Jewish people very much included. It's doubly dangerous when they claim to do so in the name of Jewish safety.
There is no path to safety in an increasingly fascist regime that involves destroying public and higher education, stifling free speech, or hunting down protesters. There is no path to safety that involves banning people because of their race or religion. There is no path to safety that involves tearing immigrant families and communities apart. The only way we can truly end antisemitism is by locking arms across religions and races, across genders and generations, and insisting that freedom and safety is for us all – no exceptions.
If we buy into the false promise of safety for some at the expense of others – even others we may disagree with strongly – we may find ourselves in a United States where we are no longer able to speak up at all.
[
Jamie Beran
is the CEO of
Bend The Arc: Jewish Action
, the largest national Jewish organization that works exclusively on U.S. domestic policy.]
It's not cheating if you write the video game solver yourself
My wife and two little boys sometimes go on trips to see friends or family for whom my presence isn’t strictly required. While my wife books the flights I make a show of weighing up my options and asking if it’s really OK if I don’t come. Eventually I’m persuaded that it truly would be the best thing for all of us for me to have five to seven days to myself with no nappies and all Nintendo.
My wife hits the “Pay Now” button and when the confirmation email comes through I call my secretary and tell him to clear my calendar. When no one answers I remember that I have neither a secretary nor all that much going on, so instead I find a prestige TV series, a selection of local takeaway menus, and a short but immersive video game. This time I messaged my buddies and asked them what I should play. Steve gushed about a game called Cocoon; Morris said that he’d played it a year ago and it was “alright.” Sold.
One month later there were hugs, kisses, wave goodbye to taxi, shut the door, where’s the HDMI cable, how did it get under the sink, do I have a Nintendo Account, what’s my password, whatever I’ll make a new one, right let’s do this, estimated download time 45 minutes, start on tax returns, am I relaxed yet?
Eventually I was able to boot up
Cocoon
. I learned that I was going to play as a little bee guy who has to solve artsy puzzles in a lonely, abstract world for no adequately explained reason. Bee guy makes his way through the world using four glowing orbs. He can pick orbs up, carry them around, and put them back down on switches in order to open doors and unfold bridges - standard orb stuff. However, bee guy soon learns that the orbs also contain other worlds, which he can jump in and out of to help with his puzzles. If he jumps inside one orb-world whilst carrying another one then he can put orb-worlds inside each other. He can even - towards the end - put a world inside itself. Conundrums ensue.
By the end of Day 1 of my vacation I was having about three-quarters as much fun as I’d hoped. Cocoon’s atmosphere was absorbing and its puzzles made me smile; I only wish there had been a little bit of a story and not just a fuzzy metaphor for entropy and decay. Still, I kept going, trekking through crumbling ruins and overbearing symbolism. By the end of Day 2 I was 51% through the game. I started the next puzzle and got completely stuck. I was surely just tired, I thought, so I watched an episode of The Sopranos and went to bed. The next morning I got up at 6am, made some coffee, and went into my office to crack on. But I was still stuck. I spent an hour filling a bin bag with toys that I didn’t like, since no one was around to stop me. I tried again. Still stuck.
Then I realised. I didn’t know how to solve the puzzle, but I did know how to write a computer program to solve it for me. That would probably be even more fun, and I could argue that it didn’t actually count as cheating. I didn’t want the solution to reveal itself to me before I’d had a chance to systematically hunt it down, so I dived across the room to turn off the console.
I wanted to have a shower but I was worried that if I did then inspiration might strike and I might figure out the answer myself. So I ran upstairs to my office, hit my Pomodoro timer, scrolled Twitter to warm up my brain, took a break, made a JIRA board, Slacked my wife a status update, no reply, she must be out of signal. Finally I fired up my preferred assistive professional tool. Time to have a real vacation.
How I wrote a Cocoon solver
In order to write a Cocoon solver, I needed to:
Model the game’s logic using a
Finite State Machine
Use this model to work out the sequence of actions that would get me from a puzzle’s start to its end
1. Model the game’s logic
Cocoon’s mechanics are simple but elegant. In the first level you find an orange orb sitting on a stand. You pick it up, obviously, and start to carry it around. You come to a gate with an orb-stand next to it. You put the orb down on the stand and the gate opens. You pick it up again and continue through the gate.
Eventually you come to a small reflecting pool with a special-looking orb-stand in the middle. There’s no obvious path forward, so you put your orange orb down on the stand. You stand next to it and press A, because that’s the only button in the game that does anything. You watch yourself dive into the orb, and land in another world. You’re now inside the orange orb. You start to walk. You find a new, green orb, which you use to keep solving puzzles and opening doors and moving forwards through the orange orb-world. Throughout the course of the game you find a total of four orbs, and the final puzzles require you to lug them all around and dive in and out of them in just the right order to get across the next bridge.
An example: in one puzzle you need to bring your red orb to the top of a vine so that you can use it to reveal a magic walkway to get to the next area. However, you can only climb up vines while holding your green orb, and you can only hold one orb at a time. This means that you can’t carry the red orb up the vine. The solution is therefore to pick up the red orb, dive inside the green orb, drop the red orb, jump out, use the green orb to climb the vine, dive back inside the green orb, retrieve the red orb, and jump back out.
In order for my solver to analyse Cocoon, it would need to be able to programmatically manipulate a copy of it. My solver would need to know how the world was laid out, what actions were allowed, and whether it had finished solving its puzzle yet. The easiest way to do this wasn’t to have the solver interact with the real game, but to instead write a new, stripped-back copy of it that contained all its logic but none of the graphics.
This was made easier by the fact that each of the puzzles that I got stuck on could be represented as a
finite state machine
. A finite state machine is a system that can be in exactly one of a finite number of states at any given time. The system is able to transition between some pairs of states using a set of known rules.
Most games with a large 3-D world can’t be modelled as an FSM because they have too many possible states and too many possible transitions. The player can be in a near-infinite number of slightly different locations; so can their enemies. The player might have a huge number of items they could be carrying and past actions they could have taken, each of which could affect the world in some important way. Some parts of the game may depend on timing and agility, which are hard to represent in an FSM. In most game levels there’s too much going on for a simple model like an FSM.
However, Cocoon’s world is simple. There are no enemies. The only items you can carry are the four orbs. And whilst its 3-D landscape is lovely to look at, it masks a simple topology that can be modelled as a small
graph
- a collection of nodes (important locations like orbs and switches) and edges (pathways that connect nodes that are immediately accessible from each other). Beyond these characteristics, the exact layout of the world rarely matters.
This means that a “state” in Cocoon is easy to define and manage. A state is defined by the combination of:
The node you’re nearest
Which orb you’re holding
Where the other orbs are
(possibly a few other things, depending on the exact puzzle)
Transitions between states are similarly constrained. Players can transition between game states via only a small set of actions, such as:
Walking to an adjacent node
Picking up or putting down an orb
Jumping in or out of an orb
This means that it’s relatively simple to write a program that:
Defines the layout of a puzzle world
Defines the state that the game is currently in
Is able to transition between states
Knows how to identify a goal state
Once I’d written this program, I had a representation of the game that I could programmatically manipulate. I could tell my game to do things like “take such-and-such an action” or “tell me all the possible next actions that can be taken from the current state.” This meant that I could use my simplified game to analyse and solve the real one.
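To make this concrete, here is a minimal Python sketch of how such a state and one of its transition types could be represented. The names and the layout fragment are hypothetical; the author’s actual code is linked further down.

# Hypothetical node names; this layout is a made-up fragment, not a real Cocoon level.
from dataclasses import dataclass

EDGES = {
    "start":    {"door", "corridor"},
    "door":     {"start"},
    "corridor": {"start", "vine"},
    "vine":     {"corridor"},
}

@dataclass(frozen=True)
class State:
    location: str             # the node bee guy is nearest
    holding: str | None       # which orb he is carrying, if any
    orb_positions: frozenset  # (orb, place) pairs for the orbs he isn't carrying

def walk_moves(state):
    """Transitions produced by walking to an adjacent node; other actions work the same way."""
    for nxt in EDGES[state.location]:
        yield (f"walk to {nxt}", State(nxt, state.holding, state.orb_positions))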
2. Work out how to get from the start to the goal state
To solve a puzzle I needed to find a sequence of actions that would take me from the puzzle’s start state to its goal state (for example, opening a door). This was a job for an algorithm called Breadth-First Search (BFS).
BFS starts at an initial state in an FSM and simultaneously takes a step to every new state that can be reached from it. For example, suppose that a puzzle starts with bee guy holding the green orb, next to a door and to a corridor to another section.
My solver takes steps both through the door and down the corridor, and begins keeping track of each path. From each of these new states, it takes another step to each further state that can be reached from them.
It keeps stepping and tracking the expanding number of paths that it’s exploring, until one of its paths reaches a goal state (for example, a state in which a particular orb is on a particular stand, which opens a bridge to the next area). At this point the solver stops, and returns the path that led to the goal state. Because the solver takes a single step down every possible path at once, the first path to the goal state that it finds is guaranteed to be the shortest one possible.
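As a rough illustration, a generic BFS over such a state machine could look like the sketch below. The helpers `next_states` (yielding (action, new_state) pairs, as in the earlier sketch) and `is_goal` are assumed; this is not the author’s actual implementation.

# Breadth-first search: the first path to reach a goal state is the shortest one.
from collections import deque

def solve(start, next_states, is_goal):
    """Return the shortest list of actions from `start` to a goal state, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if is_goal(state):
            return path  # BFS explores level by level, so the first hit is shortest
        for action, nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None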
I can then input the sequence of actions that my solver returned into the real game (for example: pick up red orb, walk to orb holder, put down red orb, etc), and move on to the next level, without having to do any thinking whatsoever.
Here’s my code
.
How hard are these puzzles really?
My solver stops as soon as it has reached the goal state. However, I was also interested in fully mapping out every possible sequence of actions one could take inside a puzzle. I wanted to see how big and hard the puzzles actually are.
To do this I wrote a script that explores every possible state and transition. It keeps stepping through state transitions, even after one of its paths has reached a goal state. It only stops when none of the paths it’s exploring have any available actions that lead to a new state that it hasn’t seen before.
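A sketch of such an exhaustive expansion, assuming the same `next_states` helper as above (again, not the author’s actual script):

# Expand the whole FSM: map every reachable state to all of its outgoing transitions.
def explore_all(start, next_states):
    graph = {}
    frontier = [start]
    while frontier:
        state = frontier.pop()
        if state in graph:
            continue
        graph[state] = list(next_states(state))
        frontier.extend(neighbour for _, neighbour in graph[state])
    return graph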
This script generates a new graph, in which each node is a state and each edge is a transition. The graph shows every single sequence of actions that you can take inside the puzzle. For example, here’s the puzzle that I found hardest, represented as a graph of states and transitions:
Drawing a fully-expanded FSM graph like this shows how small and simple Cocoon’s puzzles really are once you write them down. That path on the right, between the green and red nodes, requires you to put one orb inside another orb, then inside another orb, and then shuffle them around just-so. This is a counter-intuitive strategy and hard to find if you’re playing the game like a normal person.
However, writing everything down makes it obvious. Writing everything down makes all actions visible and strips away all of the misleading heuristics that make humans focus on some strategies and completely miss others. It reduces a puzzle to “find a path from the green dot to one of the red dots,” as you saw above, which can be solved by a moderately gifted two-year-old. To be very facetious, Cocoon is a fluffy layer of graphics and game mechanics on top of an unbelievably simple maze.
This isn’t meant as a knock on the game. Cocoon is a tidy mind-bender that wasn’t designed to withstand exhaustive search. If you write a program to solve your Cocoon puzzles for you then you’re only cheating yourself. Or at least, you would be if writing such a program wasn’t so much fun.
Life lessons
Real life isn’t a finite state machine though. All my solver really proves is that small, low-dimensional problems can easily be solved using exhaustive search. By contrast, the world has an infinite number of states. It’s hard to be sure which properties are important, and the allowed transitions between states are terribly defined. No problem of any importance can be reduced to a tiny graph, which means that even the most determined two-year-old won’t be able to help with any of the big challenges in your life.
However, you don’t need to fully explore all possible solutions in order to benefit from systematic search. As I’d feared, the process of writing my solver forced me to consider the game with so much rigour that I’d solved all 3 challenges that I used it for before I’d finished programming them in. When I came to encode every location and transition on the map in my program I was forced to pay attention to each of them individually for at least a few seconds, even the ones that the game had tricked me into ignoring. This helped me see that some of the nodes and actions that I’d thought were irrelevant were actually the key to the whole thing.
That evening I talked to my family on the phone. I showed them all the diagrams I’d created and all the fun I’d had. They showed me pictures of them inside St. Peter’s Basilica. My son asked if I’d beaten my game. I explained that I’d transcended it by creating a mathematical representation of its entire possibility space. He asked if that meant “no.”
A protester holds up a sign outside the USAID office building in Washington, DC, on Feb. 3, 2025, after the Trump administration announced it would be slashing the agency. (EPA/Yonhap // Hankyoreh)
During the Cold War, the United States waged a war of ideas with the Soviet Union to capture the hearts and minds of the rest of the world. It was in many ways a lop-sided contest, at least in terms of popular appeal. The United States had pop songs, Pop art, Pop-Tarts, not to mention Hollywood films, McDonalds, and democracy. The Soviets had Shostakovich, the Bolshoi Ballet, the collected works of Lenin, Tetris, and not much else.
The two sides also offered various forms of assistance to other countries: disaster relief, humanitarian medical missions, scientific cooperation, and financial loans. Because the U.S. economy was considerably larger than the Soviet one, the advantage here also went to the Americans.
After the Cold War, the United States continued to cultivate what it called “soft power”—the power of ideas and culture—alongside its enormous military. This soft power continued to be a major vector of American influence, for both good and ill. On the positive side, American activists helped construct the world of international law, American aid workers responded to earthquakes and typhoons, American scientists participated in developing new vaccines and other medical breakthroughs, and American consultants were instrumental in building a range of democratic institutions from election monitoring to a free press.
More than
half of U.S. aid
goes to humanitarian assistance (21.7 percent), health (22.3 percent), governance (3.2 percent), education and social services (2 percent), and environment (1.9 percent).
But there has also been a dark side to this soft power. Foreign aid, for instance, is highly politicized, often distributed according to political allegiances rather than actual need. A good chunk of this aid has been security-related (14.2 percent), and both Israel and Egypt have traditionally been the top recipients of military assistance. And aid is often used by corrupt recipients to line their own pockets or hand out to their own patronage networks, though the level of corruption is
nowhere near
what anti-aid campaigners claim (and the United States
has also used its aid programs
to target corruption and organized crime).
Moreover, much of foreign aid is tied to purchases of U.S. goods and services, which means that the U.S. government is basically just giving a lift to U.S. business. Economic development absorbs 27 percent of U.S. foreign aid, much of it spent within the United States. The U.S. Agency for International Development (USAID), which the Trump administration has illegally suspended, was so proud of this fact that it once boasted on its website that “the principal beneficiary of America’s foreign assistance programs has always been the United States. Close to 80 percent of the US Agency for International Development’s contracts and grants go directly to American firms.”
Combine this boosterism with the fact that foreign aid has often included a disproportionate number of loans, not grants, and it becomes clear that a lot of so-called development doesn’t actually help the poorest of the poor. It’s why so many countries, despite decades of foreign assistance, remain locked in poverty.
So, perhaps Donald Trump is right to eliminate USAID by ending most of its programs and reducing its staff from 14,000 to under 300?
But alongside the problematic disbursements, the agency funds a lot of absolutely critical projects around the world. Here is a sample, from Time magazine, of the chaos that Trump’s shut-down order has already caused:
The Trump administration has been spreading considerable misinformation about U.S. soft power. It has suggested that cutting USAID will save the United States a lot of money when foreign aid amounts to a mere 1 percent of the federal budget—and much of that money is either spent in the United States or for purchases of U.S. goods and services. It has claimed that a lot of it has gone to overhead and bureaucracy, not to the intended recipients, but USAID has worked with a number of subcontractors like Catholic Relief Services and the Red Cross to implement programs,
which is standard practice
and ensures a measure of accountability. Trump supporters have claimed that USAID money has gone to celebrities for trips abroad, which is simply not true.
For some people around the world, anything the United States does is malign. When the U.S. government provides military assistance to Israel for its horrifying war in Gaza, when it is pushing American corn or soybeans down the throats of consumers abroad in place of locally grown alternatives, or when it is promoting U.S.-style democracy instead of a more diverse approach to governance, then it is easy to agree that U.S. soft power is indeed problematic.
But much of U.S. foreign assistance comes from a different place: a genuine desire by U.S. civil society to partner with groups around the world to improve medical, educational, agricultural, and political outcomes.
It’s unrealistic to expect that the Trump administration, which is determined to destroy social services in the United States, will help other countries improve their own services.
So, for the next four years, fighting for the best programs of U.S. foreign aid will be part of the resistance to Trump’s overall agenda. Saving the program to create a better AIDS vaccine in Africa is an integral part of saving similar initiatives for medical innovation within the United States. The sad truth is: the U.S. population now needs these exceptional “soft power” programs just like people all around the world.
The Royal Game of Ur, a 4500-year-old mystery, is now solved.
Our new AI has not only defeated the world's best -
it has surpassed the skill of every player to ever live.
We know it has, because we solved the game.
Let me tell you the story of how our group of passionate, but mostly clueless, players went from struggling to get our bots to run without crashing, to unearthing the secrets of the Royal Game of Ur. Oh, and we also created a ruthless little bot called
the Panda
in the process!
Our supercharged Panda bot breaking the Royal Game of Ur in two.
A few mornings before Christmas in 2017, I hunched over the table in my parents' house. My dad was asleep on the couch, and I had a pint of coffee in hand. It was one of those sleepy family days that I cherish. I opened up Eclipse and started writing plain HTML, JavaScript, and CSS. I wanted to make an online version of the Royal Game of Ur.
Not long after, I had something I thought was so cool! I remember the first time I showed it to four new friends I had made at uni. I was hyping it up to them, telling them about how cool this game was that had been around for millennia, and how Irving Finkel had deciphered a cuneiform tablet with the rules. I suggested that we play a game.
The website didn't load. I remember them shuffling around, hands in pockets, as I tried to fix it. A few minutes later I fixed that, and then the online game wouldn't connect... and a little while later after we had managed to play a few moves, the game reset. I remember them wearing the facial expression version of a pat on the back, these little awkward smiles, and saying "No, you know, that's cool that you've made this. Good job on it".
The earliest screenshot I have of the homepage of RoyalUr.net. It might not have been pretty, but I still had a lot of fun with it!
I didn't show anyone the game for a while after that, opting to instead work on it quietly. And eventually, it got better. Little by little it stopped falling over every time someone tried to use it. I even had a few people find the website online and give it a go! From there, everything slowly started to trend upwards.
Six years later, I leaned back in my chair and watched Londoners shuffle past my window in the cold. They were on their way to work, and I was working from home in my pyjamas. There had been 401 games played on RoyalUr.net the day before, a little bit of a slower day than usual. But we were now the #1 website for the Royal Game of Ur in the world. My foot tapping, I would glance outside and then back down to my laptop. I was waiting for the final steps of training to complete.
Eventually, my laptop fans quieted. My foot stopped tapping. I looked down and read a generic message in my terminal: "Write took 1,036ms".
That was the moment we found the perfect move to play from every possible position in the game.
That was the moment the Royal Game of Ur was solved.
...
Creating this website and solving this game have become really meaningful to me. I now spend a lot of my time with friends I've made through the Royal Game of Ur. And solving this ancient game is one of the most significant things I have ever done. I'm also pretty confident that I was the first person to solve it, which is a pretty cool claim!
But I didn't get here on my own, and I definitely don't think of solving the game as only my achievement. No, I may have started the ball rolling and given it its final tap into the net, but there were many more people working to move the ball forward.
You see, when you make a nerdy website like this one, it tends to attract like-minded, motivated people who are perhaps also a little nerdy. And that is how our band of nerds -
ahem,
I mean researchers - came to spend our nights having four-hour-long discussions about the central tile. Or three-hour-long debates on measuring luck. Picking apart the strategy of the Royal Game of Ur became our hobby, our obsession.
We dug into the corners of the Royal Game of Ur, hauled out whatever nuggets of insight we could find, and hammered them into something useful. That is how our bots slowly improved week after week. We spent 3 years working on it together.
Then one day, we solved it.
And just like that, we had changed the face of a ~4500-year-old board game forever. For the last four millennia the optimal strategy for the Royal Game of Ur has been a mystery. Now, for the rest of time, it will be solved and accessible to all. How cool is that!!
In the Royal Game of Ur, you throw dice to see how far you can move. These dice bring a lot of fun to the game. But they also make it hard to objectively evaluate strategy. You could never really tell whether you got lucky, or whether you played really skillfully.
Now you can!
Solving the game has brought your skill in the Royal Game of Ur out of obscurity and into sharp focus.
This is a big deal for a game where luck plays a significant role in who wins each game, especially at higher levels.
I am ecstatic and slightly nervous to finally share what we achieved, what it means for the game, and to provide you with a full report on how you can solve the game yourself! It has been a fun story to live, and hopefully it is fun to read as well.
Watch flawless play
Before we go any further, here's a little live demo. Below, two bots are engaging in a flawless duel - perfect play! I find it mesmerising. How often do you get to watch something and know it is mathematically perfect?
We spent years hiking through the woods to improve our AI for the game. And then we realised there was a highway slightly to our left...
Before solving the game, our bots were simple and search-based. They would simulate millions of moves to figure out the one move they should make. This approach was unsophisticated, and so it took years of fine-tuning to get it to work well.
Throughout the years, we would always give the same codename to our latest and best bot:
the Panda!
We optimised the Panda, updated how it searched, and improved its heuristics. We made a lot of progress! Before solving the game, the Panda was playing with an average accuracy of 87%, whilst also running on most of our players' phones. That's only a few percentage points shy of Arthur, one of the strongest Royal Game of Ur players in the world.
However, just as we thought we might narrowly pass Arthur, a new researcher, Jeroen, joined our Discord. Soon after, we had not only beaten Arthur's skill -
we had smashed it!
We set the Panda on a completely new training regimen. And only a couple of weeks later, we had done something we didn't even realise was possible. We had calculated the
statistically best move
to make from every possible position in the game. 100% accuracy. Perfect play.
The Panda now ruthlessly cares about just one thing:
winning.
It does not care about trivial details such as losing pieces, scoring, or capturing. No, it only cares about how each and every move it makes affects its final chance of winning the game.
If sacrificing a piece gave the Panda a 0.01% improved chance of winning, the Panda would play that move. In this way, the Panda plays the Royal Game of Ur
flawlessly.
There is no strategy that you can use to outplay the Panda. You can only hope to match its skill!
How does it know what is best? It's pretty simple actually. We have calculated your chance of victory from
every single position
in the game. Once you know that, you can just pick the move that leads to your highest chance of winning. Easy!
Your winning chance from all positions in the Royal Game of Ur, grouped by how far each player's pieces have moved.
How did we do it?
The method we used to solve the game is actually rather old, even if it is still a youngster compared to the Royal Game of Ur itself. It's called
value iteration,
and it is one of the most well-known reinforcement learning algorithms around. In fact, similar methods have been used to solve endgames in Backgammon for decades!
But if the algorithm has been around so long, why hasn't someone solved the game sooner?
We were blinded by our incremental progress. Our group had spent so much time looking down the rabbit-hole of search-based AI that we failed to see that there was a simpler solution. We had to take our blinders off!
I had dismissed the idea of solving the game by brute-force for years, because I overestimated its difficulty. The Royal Game of Ur contains hundreds of millions of positions, so calculating your chance of winning from every single one was daunting. Oh boy was I wrong.
Jeroen showed us it was possible
Out of the blue, I received an email. Within it, Jeroen shared how he had solved a two-piece version of the Royal Game of Ur using value iteration. A month later, he joined our Discord and presented his work. It outlined exactly how value iteration could be used to solve the game.
At that point, I knew I had a great week ahead of me. We already had all the libraries we'd need to simulate the game, so all we had left to do was build the training program to solve it.
I spent the next week working in my pyjamas and subsisting on baked beans. Wake up. Classical music. Code. Repeat. One week of that, and the Royal Game of Ur was solved!
It's crazy to me that this was possible during the entire three years we spent working on AI for the game.
The Panda chilling, knowing it is the best.
The final model we produced is very simple. It is just a big table of all the positions in the game, and the likelihood that you would win starting from each one. These simple tables have unlocked countless possibilities for the Royal Game of Ur.
I'll dive further into the
technical challenges
of solving the game later in this article. But first, let me try to illustrate how big of a deal it is that we managed to solve the game!
Our Panda bot now plays flawlessly. We can produce in-depth game reviews, calculate players' accuracy in games, and create a stable competitive rating system. We can even generate pretty graphs of your winning chance throughout a game!
This changes how the Royal Game of Ur is played.
A graph of the light player's winning chance throughout a game.
The game has changed 👑
Solving the game has unlocked
strategic depth
for the Royal Game of Ur. The dice made it hard to determine whether you won or lost due to strategy or luck. No longer.
Using the solved game, we can now perceive the full strategic depth in every move, every decision, and every careful placement of your pieces. You may still be playing the same game that people played four millennia ago. But now we can appreciate the true quality of your play. The dice no longer hide that from us.
This has created a whole new game within a game - a sophisticated and challenging puzzle that asks not merely
"Can you win?"
but rather
"Can you find the perfect answer to each position?"
Many of our best players now aim to play the best possible game, not just to win.
This is a fundamental shift in the game.
If you lost, but you played with an accuracy over 80%,
you still played an outstanding game.
This raises the ceiling on how good players can get.
Now our job is to build the tools to help our players reach these new heights!
The average accuracy of players, grouped by the number of games they have played, and the speculated accuracy of future players.
Let the percentages flow like water 🌊
Let us take a moment, breathe, ... and dive into the depths of the algorithm we used to master the Royal Game of Ur. Value iteration. I know just saying the word "algorithm" can lead to people's eyes glazing over, but please don't doze off just yet! I promise it's worth it.
I find value iteration
breathtakingly elegant,
and I have a great desire to express that here!
I ask you to look at value iteration through my eyes. Imagine a dam. Water up high on one side, and low on the other. Artificial.
Now explode it!
Chaos!
Thrashing water. Rubble. Waves of debris.
But wait,
give the water some more time... The debris will be washed away. The water will start to run clear. Fish will return. The damaged riverbanks will be worn smooth. Birds will start chirping. Eventually, you will have a beautiful and tranquil river.
That is how I see value iteration.
We have set up a dam with your wins on the high side, and your losses on the low side. We then remove the dam wall, and value iteration takes over. All the possible wins and losses in the game cascade through
every possible way the game could play out.
The probabilities will flow, flow, flow. Until they stop. And at that point,
the game is solved.
You can now sail this quiet river of probabilities, and the current will bring you through your best path to victory.
An animation showing the process of calculating the light player's winning chance from every position in the game.
To solve the game, you just have to let the water settle.
...
Perhaps I'm alone in this. But I really find the mathematical perfection and simplicity in value iteration to be
profoundly beautiful.
Value iteration allows probabilities to flow as if they were water. All you have to do is set up the dam, and it does the rest.
This simple method could actually determine the impact of a butterfly flapping its wings on the result of the game. All through the flow of probabilities. That blows my mind! I will explain exactly how it does that later in this article.
What does it mean to solve a game containing chance? 🎲
The waves of chance are fundamental to playing the Royal Game of Ur. Probabilities really play a huge role, and they matter a whole lot when we consider what it means to have solved this game.
Unlike in Chess or Go, when you play Ur you cannot plot a perfect victory in advance. The dice would never allow it. This makes solving the Royal Game of Ur quite different to solving deterministic games like Connect-Four.
In Connect-Four we know that the starting player, if they play perfectly, will always win. This is not the case when you play perfectly in a game where luck is involved.
Playing perfectly in the Royal Game of Ur requires you to constantly adapt to whatever the dice throw your way. Players must balance making the most of good dice rolls, whilst also minimising the downsides of bad dice rolls.
Sometimes a swing for the fences is what you need. Other times, you may be better off playing cautiously. Deciding between these strategies isn't always straightforward, but solving the game allows us to do it perfectly. The solved game tells us exactly when each strategy is better, down to the slimmest of margins. If one move gives you a 0.01% better chance to win over another, we know it!
Playing perfectly in a game like Connect-Four means you'll never lose. Playing perfectly in the Royal Game of Ur means you'll lose fewer games than anyone else. The real challenge of the Royal Game of Ur is not in winning itself - anyone can get lucky. The real challenge is in maximising your odds of winning. And the perfect player is a master of that.
There were fun technical challenges getting here! 🤓
We have actually solved a wide range of rulesets for the Royal Game of Ur, not just the most common Finkel ruleset. This, combined with the difficulty of creating, storing, and updating hundreds of millions of game positions, poses a few fun technical challenges!
We needed to 1) devise an efficient way to store all those positions, and 2) write and optimise the algorithm that calculates your winning chance from each position. That's what we'll get into next.
I dive into a lot of technical detail in this section. If that's not your thing, it might be worth jumping to later when we discuss solving other games.
To everyone else interested in the technical nitty-gritty, you're my type of person! Let's get into it.
We will dive into the map we use to store positions, the binary encoding of positions to use as keys, and all the little map optimisations and quirks we use. We'll get into value iteration itself, and how we can optimise it for the Royal Game of Ur. Then we'll finish it all off with links to all our open-source projects if you want to give solving the game a try yourself!
We made a custom densely-packed map
Our goal was to create a map, or lookup-table, from every position in the game to the likelihood that you would win starting from that position. For the most common Finkel ruleset, this means creating a map of 275,827,872 unique positions. The Masters ruleset is even larger, with over a billion positions to store.
So how do we build these big maps? We decided our best bet was to overcomplicate the process! We were worried about memory, so we built our own map implementation that is optimized for it.
Our maps are made of densely-packed binary keys and values. Instead of the normal approach of hashing keys to find entries, we sort the keys and use binary-search. This allows our maps to be as small as possible, while still being really fast.
Using our overcomplicated map, we are able to keep memory usage down to just 6 bytes per map entry!
A visualisation of the different memory structures of hash maps and our custom map implementation.
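As a rough illustration of the idea (not their actual implementation), a sorted-array map in Python could look like the sketch below. The array type codes give roughly 4-byte keys and 2-byte values, which is where the 6 bytes per entry come from.

import bisect
from array import array

class PackedMap:
    """A read-mostly map from 32-bit keys to 16-bit values, stored as two
    parallel sorted arrays and searched with binary search."""

    def __init__(self, sorted_keys, values):
        # 'I' is a C unsigned int (4 bytes on common platforms),
        # 'H' an unsigned short (2 bytes): about 6 bytes per entry.
        self.keys = array("I", sorted_keys)   # keys must already be sorted
        self.values = array("H", values)

    def get(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        raise KeyError(key)

    def set(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        assert i < len(self.keys) and self.keys[i] == key, "keys are fixed once built"
        self.values[i] = value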
Game states in 32-bits, win percentages in 16
The lookup keys in our map represent each position that you can reach in the game. These positions, which we usually refer to as "states", are snapshots of the game at a fixed point, including whose turn it is, each player's pieces and score, and where all the pieces are on the board. For each of these states, we then store the chance that light wins the game if both players played perfectly from that point onwards.
To use as little memory as possible, we encode the states and their associated win percentages into small binary numbers. The states of the game are encoded into 32-bits, and the win percentages are quantised to 16-bits when we save our final models. The percentages are easy to manage, but getting the game states into 32-bit keys turns out to be a bit tricky…
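The easy half first: one plausible way to quantise the win percentages into 16 bits (the exact scheme used for the real models may differ):

def quantise(prob):
    """Store a winning chance in [0.0, 1.0] as an unsigned 16-bit integer."""
    return round(prob * 0xFFFF)

def dequantise(q):
    """Recover an approximate probability from its 16-bit encoding."""
    return q / 0xFFFF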
If you were to naively pack a Finkel game state into binary, you might end up with 54-bits:
1-bit: Whose turn is it?
1-bit: Has the game been won?
2 x 3-bits: Light player's pieces and score.
2 x 3-bits: Dark player's pieces and score.
20 x 2-bits: Contents of each tile on the board (tiles can be empty, contain a light piece, or contain a dark piece).
A simple, but inefficient, binary format that you could use to encode game states into keys.
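Illustratively, packing the fields listed above into a single integer might look like this in Python; the field order and helper name are mine, not the site's actual format:

def encode_naive(turn, won, light_pieces, light_score,
                 dark_pieces, dark_score, board):
    """Pack a Finkel state into the naive 54-bit layout listed above.
    board is a list of 20 tile values (0 = empty, 1 = light, 2 = dark);
    the other fields are small integers. Purely illustrative."""
    key = turn & 0x1                         # 1 bit: whose turn is it?
    key = (key << 1) | (won & 0x1)           # 1 bit: has the game been won?
    key = (key << 3) | (light_pieces & 0x7)  # 3 bits each for pieces and score
    key = (key << 3) | (light_score & 0x7)
    key = (key << 3) | (dark_pieces & 0x7)
    key = (key << 3) | (dark_score & 0x7)
    for tile in board:                       # 20 tiles x 2 bits = 40 bits
        key = (key << 2) | (tile & 0x3)
    return key                               # 54 bits in total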
54-bit keys aren't so bad. We could use standard long 64-bit numbers to store these keys, and it would still be quite fast and memory efficient. But
wouldn't it be nice if we could fit it all in 32-bits instead!
Luckily, there are a few tricks up our sleeves to reduce the number of bits we need.
The first thing we can tackle is that some information in the keys is
redundant.
The number of pieces players have left to play, and whether the game has been won, can both be calculated from other information we store in the keys. Therefore, we can remove those values and save 7 bits!
Another 12 bits can also be axed by changing how we store tiles that can only be reached by one player. We call these tiles the "safe-zone" tiles. Since they can only be accessed by one player, we only need 1-bit to store each: 0 for empty, and 1 for containing a piece. This halves the space we need to store these tiles, bringing us down to an overall key-size of 35 bits. That's pretty close to our goal of 32!
The safe-zone (S) and war-zone (W) tiles under the Finkel ruleset. The safe-zone tiles are green, and the war-zone tiles are red.
The final 3 bits can be removed by compressing the war-zone tiles. Before compression, each war-zone tile uses 2-bits to store its 3 states (empty, light, or dark). That means there's an extra 4th state that 2-bits can represent that we don't use. In other words, an opportunity for compression!
To compress the war-zone, we list out all legal 16-bit variations that the war-zone could take, and assign them to new, smaller numbers using a little lookup table. This allows us to compress the 16 bits down to 13, bringing us to our 32-bit goal!
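A sketch of how such a lookup table could be built, assuming the Finkel war-zone is the eight shared tiles (my reading of the board; the idea carries over regardless, and the real table may also filter out illegal combinations):

from itertools import product

# Every combination the 8 shared war-zone tiles can take
# (0 = empty, 1 = light, 2 = dark). 3**8 = 6561 combinations,
# which fits comfortably in 13 bits.
WARZONE_CONFIGS = list(product((0, 1, 2), repeat=8))
COMPRESS = {config: index for index, config in enumerate(WARZONE_CONFIGS)}

def compress_warzone(tiles):
    """Map an 8-tuple of war-zone tile contents to a 13-bit index."""
    return COMPRESS[tuple(tiles)]

def expand_warzone(index):
    """Inverse lookup: 13-bit index back to tile contents."""
    return WARZONE_CONFIGS[index]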
Ur is symmetrical - symmetrical is Ur
As a cherry on-top, we can also remove the turn bit, as the Royal Game of Ur is
symmetrical.
This is actually a pretty big deal! It means that if you swap the light and dark players, their chance of winning stays the same. This is really helpful, as it means we only have to store and calculate half the game states. That's a really significant saving!
The binary format we use for encoding Finkel game states into keys.
We now have a 31-bit encoding that we can use for the keys in our map! Theoretically, the minimum number of bits we'd need to represent all 137,913,936 keys would be 28. So we're a bit off, but I'm pretty happy with getting within 3 bits of perfect!
Other rulesets need more than 32-bits
Unfortunately, when we try to move on to different rulesets, we hit a snag. For the Masters and Blitz rulesets the same techniques can only get us down to 34 bits, due to the longer path around the board. That means we have 2 more bits than we wanted...
But our trick bag isn't empty yet!
Instead of doubling our key size to 64-bits, we can instead split the map into 4 sections. The 2 extra bits can then be used to select the section to look in, before we search for the actual entry using the remaining 32-bits. This allows us to fit the half a billion entries for the Masters ruleset into 3 GB instead of 5!
The binary format we use for encoding Blitz and Masters game states into keys.
And that's a map, folks! There's nothing too special in our implementation, but it works well for our purposes of reducing memory usage and allowing fast reads and writes. Now we get to move on to some real work.
Value iteration, for real this time
Enough with the poetical descriptions of this algorithm. We want the nuts and bolts!
Here's the kernel of value iteration in a couple equations. The value we are calculating is the chance of light winning, so naturally light wants to maximise value, and dark wants to minimise it.
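The equations themselves are an image in the original post; transcribed into LaTeX from the description that follows (with V(s) as light's winning chance from state s, P(r) the probability of roll r, and succ(s, m, r) the state reached by playing move m on roll r), the update amounts to roughly:

V(s) \leftarrow \sum_{r} P(r) \cdot \max_{m \in \mathrm{moves}(s, r)} V(\mathrm{succ}(s, m, r)) \quad \text{(light to move in } s\text{)}

V(s) \leftarrow \sum_{r} P(r) \cdot \min_{m \in \mathrm{moves}(s, r)} V(\mathrm{succ}(s, m, r)) \quad \text{(dark to move in } s\text{)}

with V fixed at 1 for states where light has won and 0 for states where dark has won.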
You iteratively run this operation over every non-winning state, updating your state map, until the maximum amount that a state changes in value is below a threshold. You read old values of V(state) from the map, and store new values back to the map. If you try to get recursive, you're going to have a bad time with infinite loops, so make sure to avoid that!
...
Okay... but what are these equations actually doing?
They are updating the estimate of light's winning chance in one state based upon the estimates of light's winning chance in other states. If you do this for all the states enough times, they will eventually converge to light's actual winning chance (assuming both players are playing perfectly). Let me break it down a bit further!
I will walk you through the following steps of value iteration: setting up the state map, estimating your chance of winning in one state based upon other states, doing that lots of times for all the states, and how you consider what moves players would make.
Another detail for those familiar with value iteration:
In our use of value iteration, we will not give value to actions. Instead, we will give value to states where a player has won. This is an important distinction! When you use value iteration by giving value to actions, you will probably also need to use a discount factor and your results will be imperfect. By assigning value to the win states directly, we don't have to worry about this.
The Setup
First, let's make sure to give the win states their values. The initial value of most states in the map doesn't matter that much, but the value of states where light or dark wins is vitally important. These states should be set to 100% if light wins, or 0% if light loses. These are the values that we will propagate through the entire state space.
The values within the state map before running value iteration.
Estimating Win Percentages
Now comes the core idea of value iteration: propagating wins and losses backwards.
The basic idea is this:
If you know how good the next positions you might reach are, then you can calculate how good your current position is. For example, if the positions you can reach after each roll are great, then your current position is also great! And now that you know how good your current position is, you can use that to calculate how good the positions before it are, and the positions before those, and so on.
This "backwards pass" is implemented using a simple weighted average. Your chance of winning in one state is equal to the weighted average of your chance of winning in all the states you could reach after each roll of the dice.
This simple operation acts as the means by which wins and losses cascade through the state space.
Deriving your winning chance based on the states you could reach after each roll.
Now do it again, and again, and again...
So, can we just do one backwards pass through the Royal Game of Ur, calculate your chance of winning from each state, and call it a day? Unfortunately, it's not so simple. One single pass backwards through all the states is not quite enough to solve the Royal Game of Ur. Why? Loops. When you reach a state in the Royal Game of Ur, you can play a few moves and then get back to the same state as before you played them.
These loops cause a single backwards pass to be insufficient, because the win percentage of some states will be incorrect when you try to use them to calculate the winning chance from another state. Their value is incorrect because that state you just calculated can influence the states you used in the calculation (a circular dependency)!
A diagram showing how a capture of a piece can lead to a loop in the states that are visited in a game.
So, how do we solve this problem?
We just do lots of backwards passes!
Every time you do a backwards pass the estimates of the win percentages get better and better until eventually they barely change at all. For our models, we stopped the backwards passes when the maximum change made to any state in the state map was less than 0.0001% after running through all the states.
The animation below is showing these backwards passes as they happen. This animation is particularly chaotic, but it doesn't actually need to be. It is chaotic because we edit the map in-place, and because of the order in which we loop through all the states to update them. If we used a copy of the map, or if we used a different ordering, then value iteration might converge faster and look more calm. But the animation would look much more boring. So, I like this chaotic version better!
An animation showing the process of calculating the light player's winning chance from every position in the game.
But wait, what moves do players make after each roll?
This is a crucial question, but we've only skimmed over it. In reality, the best strategy to win depends heavily on what your opponent does. However, we can't predict every individual player's quirks and tendencies. Instead, we assume that both players play optimally, making the best possible decision at every turn.
By doing this, we can calculate the Nash Equilibrium. That is, we calculate a strategy where neither player can improve their chance of winning by changing their moves. A strategy that is impossible to beat. But also a strategy that does not necessarily notice and exploit the weaknesses in your opponent's play. This is a tradeoff we make.
To calculate the Nash Equilibrium, it turns out that a simple greedy approach is all you need. You just pick the move that would lead to the highest winning chance for the current player. This is the same strategy that the Panda uses to play the game flawlessly. Even though the win percentages you are using to make this decision might be incorrect at first, this is still good enough to let the win percentages converge over time.
Within our value iteration kernel, the selection of the optimal move for each player is represented by the maximum or minimum over the value of their possible moves.
Putting it all together
So that's all there is to it!
We use a simple weighted average to calculate your chance of winning in each state based upon your chance of winning in other states (the sum over rolls, weighted by each roll's probability). We assume that light and dark will make the move that is best for them (the min/max over the available moves). And we do that lots of times until it converges!
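To make that concrete, here is one possible shape of the loop in Python. The game-specific pieces (rolls, moves, apply_move, is_light_turn, and the win tests) are placeholders standing in for a real game implementation, not the actual training program:

def value_iteration(states, rolls, moves, apply_move, is_light_turn,
                    light_won, dark_won, tolerance=1e-6):
    """rolls is a dict of roll -> probability; moves(state, roll) lists legal
    moves; apply_move(state, roll, move) returns the resulting state."""
    V = {}
    for s in states:
        V[s] = 1.0 if light_won(s) else 0.0   # wins seed the map

    while True:
        biggest_change = 0.0
        for s in states:
            if light_won(s) or dark_won(s):
                continue                       # terminal values never change
            pick = max if is_light_turn(s) else min
            total = 0.0
            for roll, prob in rolls.items():
                options = moves(s, roll)
                if options:
                    best = pick(V[apply_move(s, roll, m)] for m in options)
                else:
                    # No legal move for this roll: the turn simply passes.
                    best = V[apply_move(s, roll, None)]
                total += prob * best
            biggest_change = max(biggest_change, abs(total - V[s]))
            V[s] = total
        if biggest_change < tolerance:
            return V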
You just run this one simple kernel over the entire state space a bunch of times and hey presto! A solved game. I find that beautiful.
Now chop the game up for speed!
Beauty is nice, but let's be real. We're often happy to sacrifice it for some
speed!
And in the Royal Game of Ur, there's one big optimisation that we have available to us: chop the game up into many smaller
mini-games.
Performing value iteration on the entire game at once is expensive, and inefficient. During the thousand iterations it takes to train our models, there is really only a narrow boundary of states that are converging to their true value at a time. States closer to the winning states have already converged, and states further away don't have good enough information to converge yet.
We'd really love to only update the states that are close to converging in every iteration, to avoid wasting time updating everything else. Luckily, we have one delicious little trick that lets us do just that!
Once you score a piece, you can never unscore it. This detail gives us one huge benefit: it allows us to split the Royal Game of Ur into many smaller “mini-games”. Once you score a piece, you progress from one mini-game to the next. We get an advantage in training using this, as each of these mini-games is only dependent upon its own states and states of the next mini-games you could reach.
The groups we organise states into, the “mini-games”, and their dependencies. The groups shown here represent a ruleset restricted to 3 starting pieces.
Instead of training the whole game at once, with all its waste, we are able to train each smaller game one-by-one. And since each of these mini-games is much smaller than the whole game, they can each be trained more efficiently, and with much less waste! This reduces the time it takes to solve the Finkel game from 11 hours to less than 5 hours on my M1 MacBook Pro.
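A sketch of that ordering in Python, where total_scored and solve_group are placeholders for the real game logic and for the value-iteration pass sketched above:

def solve_by_minigames(all_states, total_scored, solve_group):
    """Group states by how many pieces have been scored in total, then solve
    the groups from the end of the game backwards. Scoring is irreversible,
    so each group only depends on itself and on later groups."""
    groups = {}
    for s in all_states:
        groups.setdefault(total_scored(s), []).append(s)

    values = {}
    # The most-scored group is closest to the end of the game; solve it
    # first, since earlier groups feed into it but never the other way.
    for scored in sorted(groups, reverse=True):
        values.update(solve_group(groups[scored], values))
    return values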
We open-sourced the solved game
You now have all you need to solve the Royal Game of Ur yourself! Although, if you'd rather experiment with the solved game straight away, we have open-sourced a few different implementations for training and using our models.
Jeroen's
Julia implementation
can really quickly train models that use a different file format.
Raph released the Lut Explorer, which you can use to explore all the different positions in the solved game.
If you do experiment with the solved game, we'd love to hear what you manage to do! If you're interested, I invite you to
join our Discord.
We have a channel dedicated to discussing research topics, just like everything I've talked about above.
Now, let's zoom-out from the technical details and discuss how solving the Royal Game of Ur fits into the larger landscape of AI.
Can we solve more games? 🔮
It's difficult to find a game that aligns as perfectly with value iteration as the Royal Game of Ur. Its element of chance, non-finite gameplay, and limited number of positions make value iteration the ideal choice for solving it. Value iteration is rarely as ideal when applied to other games.
Some games we'd love to solve, such as Backgammon, have too many positions for value iteration. Ur has ~276 million positions, while Backgammon contains
~100 quintillion
... This means that solving Backgammon using value iteration would require so much storage it verges on the unimaginable. You would need nearly 3% of the world's total data storage, or a trillion gigabytes.
Other popular games, such as Connect-Four or Yahtzee, would be inefficient to solve using value iteration. Connect-Four contains no dice, so it is more efficient to solve using search. Yahtzee has its dice, but it is simplified by its inescapable forwards motion. Once you have made an action in Yahtzee, you can never get back to a state before you took that action. This leads to Yahtzee being more efficient to solve by evaluating the game back to front instead.
We estimate that any game with a state-space complexity on the order of 1E+11 or lower could feasibly be solved using value iteration. This cuts out using value iteration to solve most complex games, including Backgammon.
The state-space and game-tree complexity metrics of popular games, including the Royal Game of Ur.
This is why, despite solving Ur being a remarkable achievement for the game, it is not a breakthrough for solving more complex games. There may be some games of chance with perfect information that could be elevated to the title of
strongly solved
using value iteration. But value iteration alone will not allow us to solve complex games like Backgammon.
Yet, we should not discount value iteration entirely! In the tense final moves of a board game, value iteration can still provide a lot of value. Advanced AI systems for games often rely on databases to play flawlessly in endgames, and value iteration is one method that can be used to build those databases. Bearoff databases, for instance, are commonly used for perfect endgame strategies in Backgammon. Similar techniques could also allow flawless play at the end of other games like Ludo or Parcheesi.
You can defeat perfection! 🐼
You can now play the perfect player right here on RoyalUr by selecting
the Panda
as your opponent. The Panda is a ruthless opponent, but with some skill and a lot of luck, it is possible to win! If you'd like to see if you can beat the perfect AI yourself, you can try
playing the Panda here.
Solving the game has also allowed us to release another major feature that we've been working on for a very long time:
game review!
Game review employs the perfect player to analyse your moves, highlight your best moments, and suggest improvements.
In other words, we've created a computer teacher to help you learn the game. I am truly proud of it, and we will continue working to make it the best teacher it can be. If you'd like to give it a try, you can
access an example game review here.
Screenshot from a game review.
This was not a solo effort 🌍
Let me illustrate this with a
vastly shortened list
of the people who contributed to solving the game. There are also many more people who contributed through the research discussions we have held since 2021!
Jeroen Olieslagers.
The problem-solving master. Jeroen was the first to prove the feasibility of solving the Royal Game of Ur using value iteration by solving the 2-piece game. I wouldn't be writing this article if it weren't for Jeroen joining our Discord a year ago!
Raphaël Côté.
The man that gets stuff done. Raph is the reason that you can load the perfect-play maps in Python. He is also the person that brings us all together to research the Royal Game of Ur!
Mr. Carrot.
The catalyst for critical thinking. Mr. Carrot came up with the idea that the game is symmetrical, which halved the memory requirements of our models.
Padraig Lamont.
I created this website, RoyalUr.net, back in 2017, and our Discord community in 2021. This is also me writing this, hello!
Join us! If you are interested in history, probability, AI, or if you enjoy a fun casual game with friends, you will probably like the Royal Game of Ur. Join our community, try your hand against our perfect AI, and who knows? You might just defeat perfection.
Head to our homepage
to start your journey with the Royal Game of Ur. I look forward to playing you some time :D
...
Wow. That was a long one... I have never ever spent so much time on a single piece of writing, not even for my thesis. 100s of hours writing, rewriting, scrapping, and starting again.
A massive thank you to Jeroen for helping to write and correct this post, and also to my family that I constantly pestered to make sure my writing was clear. It is a strange relief and melancholy to finally be done with it.
Now I need to go and find something else to spend all my time on... like maybe Mancala?
Thanks :)
~ Paddy L.
Ask HN: Where Do Seasoned Devs Look for Short-Term Work?
In a short-form question: if you do short-term work, where do you look for projects?
I'd like to put my skill set to use and work on a project; I'm available for 6-9 months. The problem for me is that I cannot find any way of finding such a project.
I'm quite skilled, with 15 years of experience: the first 3 as a system administrator, then I went all in on development - full stack for 2 of those years, then focused fully on the backend - and ended up as a platform data engineer, optimizing the heck out of systems to process data fast and reliably at larger scale.
I already went through UpWork, Toptal and such and to my disappointment, there was no success to be found.
Do you know of any project boards, or feature bounty platforms, that I could use to find a short-term project?
Thank you for your wisdom :)
[$] Warming up to frozen pages for networking
Linux Weekly News
lwn.net
2025-03-13 16:01:58
When the 6.14 kernel is released later this month, it will include the
usual set of internal changes that users should never notice, with the
possible exception of changes that bring performance improvements. One of
those changes is frozen pages, a
memory-management optimization that should fly mos...
ClickFix attack delivers infostealers, RATs in fake Booking.com emails
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 16:00:00
Microsoft is warning that an ongoing phishing campaign impersonating Booking.com is using ClickFix social engineering attacks to infect hospitality workers with various malware, including infostealers and RATs. [...]...
Microsoft is warning that an ongoing phishing campaign impersonating Booking.com is using ClickFix social engineering attacks to infect hospitality workers with various malware, including infostealers and RATs.
The campaign started in December 2024 and continues today, targeting employees at hospitality organizations such as hotels, travel agencies, and other businesses that use Booking.com for reservations.
The threat actors' goal is to hijack employee accounts on the Booking.com platform and then steal customer payment details and personal information, potentially using it to launch further attacks on guests.
Microsoft security researchers who discovered this campaign attribute the activity to a threat group it tracks as 'Storm-1865.'
ClickFix meets Booking.com
ClickFix is a relatively new social engineering attack that displays fake errors on websites or in phishing documents and then prompts users to perform a "fix" or complete a CAPTCHA to view the content.
However, these fake fixes are actually malicious
PowerShell
or other malicious commands that download and install infostealing malware and remote access trojans on Windows and Mac devices.
In the phishing campaign discovered by Microsoft, the threat actors send emails pretending to be guests inquiring about a negative Booking.com review, requests from prospective clients, account verification alerts, and more.
Email sent to targets
Source: Microsoft
These emails contain either a PDF attachment containing a link or an embedded button, both taking the victim to a fake CAPTCHA page.
A
fake CAPTCHA in ClickFix campaigns
has become popular as it adds a false sense of legitimacy to the process, hoping to trick recipients into lowering their guard.
When a victim solves the malicious CAPTCHA, a hidden mshta.exe command is copied to the Windows clipboard as part of the supposed "human verification" process. The target is told to complete this verification by opening the Windows Run dialog, pasting the clipboard's contents into the Run field, and executing it.
Malicious CAPTCHA page
Source: Microsoft
The victims only see keyboard shortcuts, not the content copied to the clipboard, so they have no indication they're about to execute a command on their system. Hence, those with less experience with computers are likely to fall for the trap.
In this campaign, Microsoft says that the copied code is an mshta.exe command that executes a malicious HTML file [
VirusTotal
] on the attacker's server.
Malicious 'mshta' command copied to Windows clipboard
Source: Microsoft
Executing the command downloads and installs a wide variety of remote access trojans and infostealing malware, including XWorm, Lumma stealer, VenomRAT, AsyncRAT, Danabot, and NetSupport RAT.
"Depending on the specific payload, the specific code launched through mshta.exe varies,"
explains Microsoft's report
.
"Some samples have downloaded PowerShell, JavaScript, and portable executable (PE) content."
"All these payloads include capabilities to steal financial data and credentials for fraudulent use, which is a hallmark of Storm-1865 activity."
Overview of the Storm-1865 ClickFix attack
Source: Microsoft
To defend against these attacks, Microsoft recommends always confirming the legitimacy of the sender's address, being extra careful when met with urgent calls to action, and looking for typos that could give away scammers.
It is also advisable to verify the Booking.com account status and pending alerts by logging in on the platform independently instead of following links from emails.
Seven new stable kernels
Linux Weekly News
lwn.net
2025-03-13 15:53:51
Greg Kroah-Hartman has announced the release of the 6.13.7, 6.12.19, 6.6.83, 6.1.131, 5.15.179, 5.10.235, and 5.4.291 stable kernels. They all contain a
relatively large number of important fixes throughout the kernel tree....
Meta puts stop on promotion of tell-all book by former employee
Guardian
www.theguardian.com
2025-03-13 15:51:11
Social media company wins emergency arbitration ruling on book, Careless People by Sarah Wynn-Williams Meta on Wednesday won an emergency arbitration ruling to temporarily stop promotion of the tell-all book Careless People by a former employee, according to a copy of the ruling published by the soc...
Meta
on Wednesday won an emergency arbitration ruling to temporarily stop promotion of the tell-all book Careless People by a former employee, according to a copy of the ruling published by the social media company.
The book, written by a former director of global public policy at Meta, Sarah Wynn-Williams,
was called by the New York Times
book review “an ugly, detailed portrait of one of the most powerful companies in the world,” and its leading executives, including CEO Mark Zuckerberg, former chief operating officer Sheryl Sandberg and chief global affairs officer Joel Kaplan.
Meta will suffer “immediate and irreparable loss” in the absence of an emergency relief, the American Arbitration Association’s emergency arbitrator, Nicholas Gowen, said in a ruling after a hearing, which Wynn-Williams did not attend.
Book publisher Macmillan attended and argued it was not bound by the arbitration agreement, which was part of a severance agreement between the employee and company.
The ruling says that Wynn-Williams should stop promoting the book and, to the extent she could, stop further publication. It did not order any action by the publisher.
Meta spokesman Andy Stone said in a post on Threads: “This ruling affirms that Sarah Wynn Williams’ false and defamatory book should never have been published”. Wynn-Williams and Macmillan did not immediately respond to a request for comment on the ruling.
Security updates for Thursday
Linux Weekly News
lwn.net
2025-03-13 15:47:09
Security updates have been issued by Debian (chromium), Fedora (ffmpeg, qt6-qtwebengine, tigervnc, and xorg-x11-server-Xwayland), Red Hat (fence-agents and libxml2), SUSE (amazon-ssm-agent, ark, chromium, fake-gcs-server, gerbera, google-guest-agent, google-osconfig-agent, grafana, kernel, libtinyxm...
It’s 11 o’clock. Do you know where your variables are pointing?
def shout(obj)
  obj.to_s + "!"
end
It’s hard to tell just looking at the code what type obj is. We assume it has a to_s method, but many classes define methods named to_s. Which to_s method are we calling?
What is the return type of shout? If to_s doesn’t return a String, it’s really hard to say.
Adding type annotations would help… a little. With types, it looks like we have full knowledge about what each thing is but we actually don’t. Ruby, like many other object-oriented languages, has this thing called inheritance, which means that type signatures like Integer and String mean an instance of that class… or an instance of a subclass of that class.
Additionally, gradual type checkers such as Sorbet (for example) have features such as T.unsafe and T.untyped which make it possible to lie to the type checker. These annotations unfortunately render the type system unsound without run-time type checks, which makes it a poor basis for something we would like to use in program optimization. (For more information, see this blog post for how it affects Python in a similar way.)
In order to build an effective compiler for a dynamic language such as Ruby,
the compiler needs precise type information. This means that as compiler
designers, we have to take things into our own hands and track the types
ourselves.
In this post, we show an interprocedural type analysis over a very small Ruby
subset. Such analysis could be used for program optimization by a sufficiently
advanced compiler. This is not something Shopify is working on but we are
sharing this post and attached analysis code because we think you will find it
interesting.
Note that this analysis is not what people usually refer to as a type inference engine or type checker; unlike Hindley-Milner (see also previous writing) or similar constraint-based type systems, this type analysis tracks dataflow across functions.
This analysis might be able to identify all of the callers to shout, determine that the to_s of all the arguments return String, and therefore conclude that the return type is String. All from an un-annotated program.
The examples after the intro in this post will use more parentheses than the typical Ruby
program because we also wrote a mini-Ruby parser and it does not support method
calls without parentheses.
Static analysis
Let’s start from the top. We’ll go over some examples and then continue on into
code and some benchmarks.
Do you know what type this program returns?
That’s right, it’s Integer[1]. Not only is it an Integer, but we have additional information about its exact value available at analysis time. That will come in handy later.
What about this variable? What type is a?
Not a trick question, at least not yet. It’s still Integer[1]. But what if we assign to it twice?
Ah. Tricky. Things get a little complicated. If we split our program into segments based on logical execution “time”, we can say that a starts off as Integer[1] and then becomes String["hello"]. This is not super pleasant because it means that when analyzing the code, you have to carry around some notion of “time” state in your analysis. It would be much nicer if instead something rewrote the input code to look more like this:
Then we could easily tell the two variables apart at any time because they have different names. This is where static single assignment (SSA) form comes in. Automatically converting your input program to SSA introduces some complexity but gives you the guarantee that every variable has a single unchanging type. This is why we analyze SSA instead of some other form of intermediate representation (IR). Assume for the rest of this post that we are working with SSA.
Let’s continue with our analysis.
What types do the variables have in the below program?
We know a and b because they are constants, so can we constant-fold a+b into 3? Kind of. Sort of. In Ruby, without global knowledge that someone has not and will not patch the Integer class or do a variety of other nasty things, no.
But let’s pretend for the duration of this post that we live in a world where it’s not possible to redefine the meaning of existing classes (remember, we’re looking at a Ruby-like language with different semantics but similar syntax) or add new classes at run-time (this is called a closed-world assumption). In that case, it is absolutely possible to fold those constants. So c has type Integer[3].
Let’s complicate things.
if condition
  a = 3
else
  a = 4
end
We said that each variable would only be assigned once, but SSA can represent such a program using Φ (phi) nodes. Phi nodes are special pseudo-instructions that track dataflow when it could come from multiple places. In this case, SSA would place one after the if to merge two differently named variables into a third one.
if condition
  a0 = 3
else
  a1 = 4
end
a2 = phi(a0, a1)
This also happens when using the returned value of an if expression:
a = if condition
  3
else
  4
end
The phi function exists to merge multiple input values.
For our analysis, the phi node does not do anything other than compute the type union of its inputs. We do this because we are treating a type as a set of all possible values that it could represent. For example, Integer[3] is the set {3}. And Integer is the infinite and difficult-to-fit-into-memory set {..., -2, -1, 0, 1, 2, ...}.
This makes the type of a (a2) some type like Integer[3 or 4], but as we saw, that set could grow potentially without bound. In order to use a reasonable amount of memory and make sure our analysis runs in a reasonable amount of time, we have to limit our set size. This is where the notion of a finite-height lattice comes in. Wait, don’t click away! It’s going to be okay!
We’re using a lattice as a set with a little more structure to it. Instead of
having a
union
operation that just expands and expands and expands, we give
each level of set a limited amount of entries before it overflows to the next,
less-specific level. It’s kind of like a finite state machine. This is a
diagram of a subset of our type lattice:
An example lattice similar to the one we use in our demo static
analysis. At the bottom are the more specific types and at the top are the
less specific types. Arrows indicate that results can only become less
precise as more merging happens.
All lattice elements in our program analysis start at
Empty
(unreachable) and incrementally
add elements, following the state transition arrows as they grow. If we see one
constant integer, we can go into the
Integer[N]
state. If we see another,
different
integer, we have to lose some information and transition to the
Integer
state. This state symbolically represents all instances of the
Integer
class. Losing information like this is a tradeoff between precision and analysis time.
To bring this back to our example, this means that
a
(which merges
Integer[3]
and
Integer[4]
) would have the type
Integer
in our lattice.
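To make the merge rule concrete, here is a minimal Rust sketch of a join over this subset of the lattice. It is an illustration only, not the code from our analysis; the enum variants and the union function are our own stand-ins.

#[derive(Clone, Debug, PartialEq)]
enum Type {
    Empty,        // unreachable; no value at run-time
    Const(i64),   // Integer[N]: a known constant integer
    Integer,      // any instance of Integer
    Any,          // top of the lattice: we know nothing
}

fn union(a: &Type, b: &Type) -> Type {
    use Type::*;
    match (a, b) {
        (Empty, x) | (x, Empty) => x.clone(),
        (Const(x), Const(y)) if x == y => Const(*x),
        // Two different constants (or a constant and Integer) overflow to Integer.
        (Const(_) | Integer, Const(_) | Integer) => Integer,
        _ => Any,
    }
}

With this rule, union(&Const(3), &Const(4)) is Integer, while union(&Const(3), &Const(3)) stays Const(3).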
Let’s complicate things further. Let’s say we know somehow at analysis time
that the condition is truthy, perhaps because it’s an inline constant:
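For instance, something like this, with the constant condition written inline:

if true
  a = 3
else
  a = 4
end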
Many analyses looking at this program would see two inputs to
a
with
different values and therefore continue to conclude that the type of
a
is
still
Integer
—even though as humans looking at it, we know that the
else
branch never happens. It’s possible to do a simplification in another analysis
that deletes the
else
branch, but instead we’re going to use some excellent
work by Zadeck and co called
Sparse Conditional Constant Propagation
(SCCP)
.
Unlike many abstract interpretation based analyses, SCCP uses type information
to inform its worklist-based exploration of the control-flow graph (CFG). If it
knows from other information that the condition of a branch instruction is a
constant, it does not explore both branches of the conditional. Instead, it
only pushes the relevant branch onto the worklist.
Because we’re working inside an SSA (CFG), we separate control-flow into basic
blocks as our unit of granularity. These basic blocks are chunks of
instructions where the only control-flow allowed is the last instruction.
fn sctp(prog: &Program) -> AnalysisResult {
    // ...
    while block_worklist.len() > 0 || insn_worklist.len() > 0 {
        // Read an instruction from the instruction worklist
        while let Some(insn_id) = insn_worklist.pop_front() {
            let Insn { op, block_id, .. } = &prog.insns[insn_id.0];
            if let Op::IfTrue { val, then_block, else_block } = op {
                // If we know statically we won't execute a branch, don't
                // analyze it
                match type_of(val) {
                    // Empty represents code that is not (yet) reachable;
                    // it has no value at run-time.
                    Type::Empty => {},
                    Type::Const(Value::Bool(false)) => block_worklist.push_back(*else_block),
                    Type::Const(Value::Bool(true)) => block_worklist.push_back(*then_block),
                    _ => {
                        block_worklist.push_back(*then_block);
                        block_worklist.push_back(*else_block);
                    }
                }
                continue;
            };
        }
        // ...
    }
    // ...
}
This leaves us with a phi node that only sees one input operand,
Integer[3]
,
which gives us more precision to work with in later parts of the program. The
original SCCP paper stops here (papers have page limits, after all) but we took
it a little further. Instead of just reasoning about constants, we use our full
type lattice. And we do it interprocedurally.
Let’s look at a small example of why interprocedural analysis matters before we
move on to trickier snippets. Here we have a function
decisions
with one
visible call site and that call site passes in
true
:
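A sketch of what that looks like (the exact constants here are only illustrative):

def decisions(condition)
  if condition
    3
  else
    4
  end
end

decisions(true)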
If we were just looking at
decisions
in isolation, we would still think the
return type is
Integer
. However, if we let information from all the call
sites flow into the function, we can see that all (one) of the call sites pass
true
to the function… and therefore we should only look at one branch of
the
if
.
Now, a reader familiar with SCCP might be wondering how this works
interprocedurally. SCCP by definition requires knowing in advance what
instructions use what other instructions: if you learn new facts about
the output instruction
A
, you have to propagate this new information to all
of the uses. In a single function’s control-flow graph, this isn’t so bad; we
have full visibility into definitions and uses. It gets harder when we expand
to multiple functions. In this example, we have to mark the
condition
parameter as a use of all of the (currently constant) actual arguments being
passed in.
But how do we know the callers?
Interprocedural SCCP
Let’s start at the entrypoint for an application. That’s normally a
main
function somewhere that allocates some objects and calls a couple of other
functions. These functions might in turn call other functions, and so on and so
forth, until the application terminates.
These calls and returns form a graph, but we don’t know it statically—we
don’t know it at the start of the analysis. Instead, we have to incrementally
build it as we discover call edges.
In the following code snippet, we would begin analysis at the entrypoint, which
in this snippet is the
main
function. In it, we see a direct call to the
foo
function. We mark that
foo
is called by
main
—and not just by
main
, but by the specific call site inside
main
. Then we enqueue the start of the foo function—its entry basic block—onto the block worklist.
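Reconstructed from the step-by-step walkthrough that follows, the program has roughly this shape (the constants 1 through 4 are the ones used below):

def bar(a, b)
  a + b
end

def foo()
  bar(1, 2) + bar(3, 4)
end

def main()
  foo()
end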
At some point, the analysis will pop the entry basic block of
foo
off the
worklist and analyze
foo
. For each of the direct calls to
bar
, it will
create a call edge. In addition, because we are passing arguments, it will wire
up
1
and
3
to the
a
parameter and
2
and
4
to the
b
parameter. It
will enqueue
bar
’s entry block.
At this point, we’re merging
Integer[1]
and
Integer[3]
at the
a
parameter
(and similarly at
b
). This is kind of like an interprocedural phi node and we
have to do the same union operation on our type lattice.
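As a sketch (reusing the toy Type and union from the lattice example above; the function and parameter names here are made up for illustration), each parameter acts like a little monotone cell that argument types flow into:

// Union an incoming argument type into a callee parameter. Returns true if
// the parameter's type actually changed, in which case the callee's blocks
// and the uses of that parameter need to be re-analyzed.
fn flow_into_param(param_types: &mut Vec<Type>, param_index: usize, arg: &Type) -> bool {
    let old = param_types[param_index].clone();
    let new = union(&old, arg);
    if new == old {
        return false;
    }
    param_types[param_index] = new;
    true
}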
This means that we won’t be able to fold
a+b
for either call to
bar
,
unfortunately, but we will still get a return type of
Integer
, because we
know that
Integer+Integer=Integer
.
Now, if there were a third call to
bar
that passed it
String
s, every call
site would lose. We would end up with
ClassUnion[String, Integer]
at each
parameter and, worse,
Any
as the function result. We wouldn’t even get
ClassUnion[String, Integer]
because we don’t keep each call site separate, so
from the perspective of the analysis, we could be looking at
String+Integer
,
which doesn’t have a known type (in fact, it probably raises an exception or
something).
But what if we kept each call site separate?
Sensitivity
This kind of thing is generally called
something
-sensitivity, where the
something
depends on what your strategy is to partition your analysis. One
example of sensitivity is
call-site sensitivity
.
In particular, we might want to extend our current analysis with
1-call-site-sensitivity
. The number, the
k
variable that we can dial for
more precision and slower analysis, is the number of “call frames” we want to
keep track of in the analysis. This stuff is good for very commonly used
library functions such as
to_s
and
each
, where each caller might be quite
different.
In the above very not-representative example, 1-call-site-sensitivity would
allow us to completely constant fold the entire program into
Integer[10]
(as
1 + 2 + 3 + 4 = 10
). Wow! But it would slow down the analysis because it
requires duplicating analysis work. To side-by-side the rough steps:
Without call-site sensitivity / 0-call-site-sensitive (what we currently have):
See call to bar with arguments 1 and 2
Mark bar's parameters as being Integer[1] and Integer[2]
See bar add node with constant left and right operands
Mark bar add result as Integer[3]
Mark bar return as Integer[3]
Mark result of bar(1, 2) as Integer[3]
See call to bar with arguments 3 and 4
Mark bar's parameters as being Integer and Integer (we have to union)
Mark bar add result as Integer (the arguments are not constant)
Mark bar return as Integer
See foo's own add with operands Integer and Integer
Mark foo's add as returning Integer
With 1-call-site-sensitive:
See call to bar from function foo with arguments 1 and 2
Make a new call context from foo
Mark foo0->bar parameters as being Integer[1] and Integer[2]
See foo0->bar add node with constant left and right operands
Mark foo0->bar add result as Integer[3]
Mark foo0->bar return as Integer[3]
Mark result of bar(1, 2) as Integer[3]
See call to bar with arguments 3 and 4
Make a new call context from foo
Mark foo1->bar parameters as being Integer[3] and Integer[4]
Mark foo1->bar add result as Integer[7]
Mark foo1->bar return as Integer[7]
See foo's own add with constant operands Integer[3] and Integer[7]
Mark foo add as returning Integer[10]
See how we had to analyze
bar
once per call-site instead of merging call
inputs and returns and moving up the lattice? That slows the analysis down.
There is also
context sensitivity
, which is about partitioning calls based
on some computed property of a given call site instead of where it is in the
program. Maybe it’s the tuple of argument types, or the tuple of argument types
with any constant values removed, or something else entirely. Ideally it should
be fast to generate and compare across different call sites.
There are other kinds of sensitivity like
object sensitivity
,
field
sensitivity
, and so on—but since this is a bit of a detour in the main
article and we did not implement any of them, we instead leave them as
breadcrumbs for you to follow and read about.
Let’s go back to the main interprocedural SCCP and add some more trickiness
into the mix: objects.
Objects and method lookup
Ruby doesn’t just deal with integers and strings. Those are special cases of a
larger object system where objects have instance variables, methods, etc and
are instances of user-defined classes.
This means that we have to start tracking all classes in our static analysis or
we will have a hard time being precise when answering questions such as “what
type is the variable
p
?”
Knowing the type of
p
is nice—maybe we can fold some
is_a?
branches in
SCCP—but the analysis becomes even more useful if we can keep track of the
types of instance variables on objects. That would let us answer the question
“what type is
p.x
?”
Per
this paper
(PDF), there
are at least two ways to think about how we might store that kind of
information. One, which the paper calls
field-based
, unifies the storage of
field types based on their name. So in this case, all potential writes to any
field
x
might fall into the same bucket and get
union
ed together.
Another, which the paper calls
field-sensitive
, unifies the storage of field
types based on the receiver (object holding the field) class. In this case, we
would differentiate all possible types of
p
at a given program point when
writing to and reading from
p.x
.
We chose to do the latter approach in our static analysis: we made it field
sensitive.
fn sctp(prog: &Program) -> AnalysisResult {
    // ...
    while block_worklist.len() > 0 || insn_worklist.len() > 0 {
        // Read an instruction from the instruction worklist
        while let Some(insn_id) = insn_worklist.pop_front() {
            let Insn { op, block_id, .. } = &prog.insns[insn_id.0];
            // ...
            match op {
                Op::GetIvar { self_val, name } => {
                    let result = match type_of(self_val) {
                        Type::Object(classes) => {
                            // ...
                            classes.iter().fold(Type::Empty, |acc, class_id|
                                union(&acc, &ivar_types[class_id][name]))
                        }
                        ty => panic!("getivar on non-Object type {ty:?}"),
                    };
                    result
                }
            }
        }
    }
}
This means that we have to do two things: 1) keep track of field types for each
instance variable (ivar) of each class and then 2) at a given ivar read, union
all of the field types from all of the potential classes of the receiver.
Unfortunately, it also creates a complicated
uses
relationship: any
GetIvar
instruction is a use of all possible
SetIvar
instructions that could affect
it. This means that if we see a
SetIvar
that writes to
T.X
for some class
T
and field name
X
, we have to go and re-analyze all of the
GetIvar
s that
could read from that class (and propagate this information recursively to the
other uses, as usual).
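A sketch of that bookkeeping, again reusing the toy Type and union from earlier; ClassId, InsnId, and the two maps are stand-ins for whatever the real analysis keeps:

use std::collections::{HashMap, VecDeque};

type ClassId = usize;
type InsnId = usize;

// Record a write to class T's ivar `name` and wake up every GetIvar that
// might read it.
fn set_ivar(
    ivar_types: &mut HashMap<(ClassId, String), Type>,
    ivar_readers: &HashMap<(ClassId, String), Vec<InsnId>>,
    insn_worklist: &mut VecDeque<InsnId>,
    class: ClassId,
    name: &str,
    value: &Type,
) {
    let key = (class, name.to_string());
    let old = ivar_types.get(&key).cloned().unwrap_or(Type::Empty);
    let new = union(&old, value);
    if new != old {
        ivar_types.insert(key.clone(), new);
        // Every GetIvar that could read this (class, ivar) pair is a use of
        // this write, so it gets pushed back onto the worklist.
        if let Some(readers) = ivar_readers.get(&key) {
            insn_worklist.extend(readers.iter().copied());
        }
    }
}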
All of this union-ing and reflowing and graph exploration sounds slow. Even
with pretty efficient data structures, there’s a lot of iteration going on. How
slow is it really? To answer that, we build some “torture tests” to
artificially create some worst-case benchmarks.
Testing how it scales: generating torture tests
One of the big challenges when it comes to anything related to compiler design
is that it’s difficult to find large, representative benchmarks. There are
multiple reasons for this. Large programs tend to come with many dependencies, which makes them hard to distribute, install, and maintain. Some software is closed source or copyrighted. In our case, we're working with a mini language that we created to experiment, so there are simply no real-world programs written in that language. So what can we do?
The first question to ask is: what are we trying to measure? One of our main
concerns in implementing and testing this analysis was to know how well it
performed in terms of execution time. We would like to be confident that the
analysis can cope with large challenging programs. We know
from experience
that YJIT compiles over 9000 methods when running Shopify’s production code.
If YJIT compiles 9000 “hot” methods, then one could guess that the full program
might contain 10 times more code or more, so let’s say 100,000 methods. As such,
we figured that although we don’t have human-crafted programs of that scale
for our mini-language, we could generate some synthetic programs that have
a similar scale. We figure that if our analysis can cope with a “torture test”
that is designed to be large and inherently challenging, that gives us
a good degree of confidence that it could cope with “real” programs.
To generate synthetic test programs, we want to generate a call graph of
functions that call each other. Although this isn’t strictly necessary for
our type analysis to work, we’d like to generate a program that isn’t infinitely
recursive and always terminates. That’s not difficult to achieve
because we can write a piece of code that directly generates a Directed Acyclic
Graph (DAG). See the
random_dag
function in the loupe repository described at the end of this post. This function
generates a directed graph that has a single “root” node with a number of
interconnected child nodes such that there are no cycles between the nodes.
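For illustration, here is a minimal sketch of the idea (not the actual random_dag function from the repo, and it assumes the rand crate): if call edges only ever go from a lower-numbered function to a higher-numbered one, the call graph cannot contain a cycle, so the generated program always terminates.

use rand::Rng; // assumes the rand crate

fn random_dag_sketch(num_nodes: usize, max_callees: usize) -> Vec<Vec<usize>> {
    let mut rng = rand::thread_rng();
    let mut callees = vec![Vec::new(); num_nodes];
    for node in 0..num_nodes.saturating_sub(1) {
        for _ in 0..rng.gen_range(1..=max_callees) {
            // Every callee has a strictly larger index than its caller.
            callees[node].push(rng.gen_range(node + 1..num_nodes));
        }
    }
    callees // node 0 is the root; the last node is always a leaf
}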
For our first torture test (see
gen_torture_test
), we generated a graph of
200,000 functions that call each other. Some functions are leaf nodes, meaning they don't call anybody; these functions directly return a constant
integer or
nil
. The functions that have callees will sum the return value of
their children. If a child returns nil, it will add zero to the sum. This means
that non-leaf functions contain dynamic branches that depend on type information.
As a second torture test (see
gen_torture_test_2
), we wanted to evaluate how
well our analysis could cope with polymorphic and megamorphic call sites. A
polymorphic call site is a function call that has to handle more than one class.
A megamorphic call site is one that has to handle a large number of classes, such
as 5-10 or more. We started by generating a large number of synthetic classes; we went with 5000 classes because that seemed like a realistic figure for the number
of classes that might be contained by a large real-world program. Each class has 10
instance variables and 10 methods with the same name for the sake of convenience
(that makes it easier to generate code).
In order to generate polymorphic and megamorphic call sites, we generate an instance
of each class, and then we sample a random number of class instances from that set.
We use a
Pareto distribution
to
sample the number of classes because we believe this is similar to how real programs
are generally structured. That is, most call sites are monomorphic, but a small number
of call sites are highly megamorphic. We generate 200 random DAGs with 750 nodes each,
and call the root node of each DAG with a random number of class instances. Each DAG
then passes the object it receives from the root node through all of its children. This
creates a large number of polymorphic and megamorphic call sites. Our synthetic program
contains call sites that receive as many as 144 different classes.
The structure of each DAG in the second torture test is similar to the first one, with
the difference that each function calls a randomly selected method of the object it
receives as a parameter, and then calls its child functions in the DAG. Conveniently,
since methods always have the same name for each class, it’s easy to select a random
method that we know by construction is defined on all of our classes. This creates more
polymorphic call sites, which is what we wanted to stress-test. The methods of each
class are all leaf methods that can either return
nil
, a random integer, or a randomly
selected instance variable.
How does it scale really?
Using the torture test generators, we generated two programs: one with classes
and one without.
The program with classes has 175,000 reachable functions of 205,000 total
generated, 3 million instructions, and megamorphic (up to 144 classes) method
lookups. We complete the analysis in 2.5 seconds on a single core.
The program without classes has 200,000 functions and we analyze it in 1.3
seconds on a single core.
Now, these numbers don’t mean much in absolute terms—people have different
hardware, different codebases, etc—but in relative terms they mean that this
kind of analysis is more tractable than not. It doesn’t take
hours
to run.
And our analysis is not even particularly well-optimized.
We were actually surprised at how fast the analysis runs. Our initial hope was
that the analysis could run on a program with 20,000 methods in less than 60
seconds, but we can analyze programs about 10 times that size much faster than
expected. This makes it seem likely that the analysis could work on large
human-crafted software.
Adding object sensitivity or increasing the
k
for
call-site sensitivity
would
probably slow things down quite a bit. However, because we know the analysis is
so fast, it seems possible to imagine that we could selectively split/specialize
call sites of built-in methods to add sensitivity in specific places without
increasing the running time by much. For example, in a language with methods on
an
Array
class, such as Ruby, we could do splitting on all
Array
method calls
to increase precision for those highly polymorphic functions.
Wrapping up
Thanks for reading about our little big static type analysis prototype. We published
the code on GitHub
as a static companion
artifact to go with this article and nothing more; it is an experiment that we built, but not a prelude to a
bigger project nor is it a tool we expect others to contribute to.
If you would like to read more about the big wide world of program analysis, we
recommend searching for terms such as control-flow analysis (CFA) and points-to
analysis. Here is an
excellent
lecture
(PDF) by Ondřej Lhoták that gives a tour of the area.
New York’s skyscrapers soar above a century-old steam network that still warms the city. While the rest of the world moved to hot water, Manhattanites still buy steam by the megapound.
Since 1882, Manhattan has delivered steam into the homes and businesses of its citizens. It is used for: pressing linens at The Waldorf Astoria; cleaning crockery and heating food in restaurants; washing clothes at dry cleaners; sterilizing medical equipment at NewYork-Presbyterian Hospital; by the Metropolitan Museum of Art to control the humidity levels and temperatures around its artwork. And it is used by Manhattanites, in both iconic buildings and regular apartment blocks, to heat their space and water.
Steam functions like any other utility: produced centrally, metered, and delivered into homes and businesses through a 105-mile-long grid of pipework. Like electricity, sewage, and water, it plays an integral part in the daily operation of the city.
Today, the Manhattan steam system is responsible for heating 1.8 billion square feet of residential, 700 million square feet of commercial, and 90 million square feet of industrial floorspace. This represents over three quarters of Manhattan’s total residential footprint.
Steam enabled the growth of Manhattan, providing efficient heating to an increasingly vertical city. Many other cities have adopted district-level heating systems, but very few use steam to distribute that energy. Why does some infrastructure survive while others become obsolete?
A short history of heating
Heating represents around
half of all global end-user energy consumption
. Much of this energy is spent warming homes, offices, and water – for cooking, cleaning, and washing – but roughly half goes toward various industrial uses and a smaller percentage toward agriculture. Keeping spaces warm is amongst the most foundational uses of energy there is.
Before modern central heating, keeping homes warm was inefficient, inconvenient, and sometimes deadly. Most residential buildings would be centered around a fireplace or stove, often burning wood, coal, surplus crops, or dung. Traditional fireplaces are immensely inefficient,
drawing in 300 cubic feet of air per minute
for combustion and expelling up to 85 percent – along with the heat – up the chimney. They are inconvenient because the fuel source needs to be regularly replenished: felling, hauling, and splitting firewood could take
up to two months of human labor
per person per year. And they are deadly because these sorts of solid fuels, full of impurities, are highly polluting.
A recent study found
that a fireplace pumps out 58 milligrams of particles under 2.5 microns in diameter (PM2.5s) per kilogram of firewood burnt. This means that every hour you spend in a room with a fireplace burning wood reduces your lifespan by about 18 minutes, equivalent to smoking 1.5 cigarettes.
Efficiency and pollution concerns motivated the rediscovery and refinement of central heating systems, moving the production of heat away from its consumption. James Watt laid the groundwork in the late 1700s, connecting a system of pipes to a central boiler in his own home.
In 1805, William Strutt heated cold air using coal and distributed it throughout homes using ducts; around the same time, some wealthier homes in France began using similar fire-tube hot air furnaces.
By 1863,
the radiator had taken definitive form
, perhaps the most significant breakthrough in modern household heating. Running steam through pipes that have been folded over many times allows for a large amount of surface area in a compact space. The heat is then transferred from the steam into the ambient environment via convection and by directly radiating into the room. Individual radiators can also be fitted with valves, which allows for heating control room-by-room.
While the radiator allowed for much better distribution of heat within a building, each building needed its own boiler. But on-site boilers, especially those for larger buildings, came with both economic and ergonomic limitations. Steam heating required large boilers, coal storage areas, and extensive piping systems, which take up valuable floor space. As the systems became more advanced, they often also became more complex, necessitating skilled technicians for installation and maintenance.
Coal logistics presented its own set of problems. The regular delivery, storage, and handling of large quantities of coal is labor- and space-intensive: heating the Empire State Building would need more than 45 tons of coal per day, roughly two shipping containers’ worth.
Thomas Edison’s
first electric power station on Pearl Street opened in 1882 and needed upwards of 20 tons of coal each day. All this coal needed to be moved across bridges or via boat onto Manhattan Island for storage in a city where space was increasingly at a premium. The
price of prime parcels of land increased tenfold
between the 1790s and the 1880s, spiking from $400 an acre in 1825 to over $1 million an acre at the end of the century (over $30 million in today’s dollars).
Nor is all coal created equal. Boilers and furnaces need an expensive class of coal to burn hot enough.
Anthracite coal
, with its high carbon content (between 92 and 98 percent) and few impurities, burns hot and clean but is difficult to light: one exasperated user declared that ‘if the world should take fire, the Lehigh coal mine would be the safest retreat, the last place to burn’. (This would have been a bad strategy: in 1962, the anthracite seam under Centralia, PA – 25 miles to the west of Lehigh County – caught on fire and is still burning today.)
The unpleasantness of soot and smoke led US cities to begin to legislate against it in the 1880s, although these restrictions were often ignored. When strikes drove up the price of anthracite in 1902 and some New York City plants switched to bituminous coal, the resulting smoke choked the city, violated local ordinances,
and alarmed residents
. Pollution encouraged invention: from the 1830s to the 1880s, the ‘patent office . . . groaned with inventions to save fuel, abolish smoke, evaporate water and utilize steam . . . still leaving a great need unsupplied’,
according to a contemporary account
.
Overcrowding led to both moral and public health concerns, and anti-slum legislation was introduced to try to improve living conditions. This made space in Manhattan still more scarce. Elevated train lines, such as the Ninth Avenue Line, which opened in 1878, allowed the city’s effective size to increase and the partial suburbanization of the outer city. But fares weren’t cheap enough to allow low-wage laborers to commute regularly to and from the new communities forming uptown. If the city were to continue to grow, it would need to build upward.
Two major innovations allowed the development of ever-taller buildings. Physically climbing up stairs – bringing food, water, and fuel with you – places a practical limit on how high a building can practically be: as high as land prices rose, no Ancient Roman, medieval Byzantine, or Hausmannian residential building went above six storeys. The introduction of the safety elevator, by Elisha Otis in 1852, made taller buildings practical and desirable. Otis’s design incorporated a safety brake that would engage if the hoisting cable broke, alleviating fears of catastrophic falls.
Concurrent with safety elevators was the use of steel building frames. Traditional brick and stone structures face several limitations that prevent them from exceeding around 12 storeys, the most important of which is weight distribution: as a building gets taller, the weight of the upper floors puts enormous pressure on the floors below.
Advancements in steel production, including the Bessemer process – oxidizing impurities in molten iron by blowing air through it – made it both affordable and economical to use steel frames, which allowed for a ten times higher strength-to-weight ratio. Designing the building around a skeleton of vertical steel columns and horizontal I-beams allowed buildings to redistribute their weight into this structure, freeing up tensile pressure from the walls and floors. This meant that the rest of the building could be made from lighter non-load-bearing materials, making it possible to build higher still.
So New York shot up.
In 1890, the tallest building reached 18 storeys. By 1899, the Park Row Building stretched to 31 storeys, and in 1908, the Singer Building soared even higher to 41 storeys. Vertical expansion allowed the city to create more residential and commercial space, but that space needed to be heated, powered, and watered.
New York skyline, 1908.
Image
The New York Steam Company
To solve this, New York turned to district heating, an invention of Birdsill Holly (1820–94), a self-made man from Lockport, New York. Holly
understood the problems of nineteenth century cities deeply
. As urban populations swelled, fire risk became catastrophic: large parts of Manhattan would be destroyed in three great fires between 1776 and 1845. Early hand-pumped fire engines could deliver only 30 gallons of water per minute.
Among Holly’s earliest patents
were water pumps such as an ‘elliptical rotary pump’ – the precursor for his
steam-powered fire engine
in 1856.
His ‘Holly System of Direct Water Supply and Fire Protection for Cities, Towns and Villages’ provided a constant supply of water at pressure to both homes and fire hydrants, lowering costs and removing the need for reservoirs and standpipes. It was so successful that 23 other cities copied it, without crediting Holly, and were later forced by the U.S. Supreme Court to pay him damages. In the 1870s, he began work on a skyscraper project in Niagara Falls, which he then took to Long Island, suggesting that overcrowding could be improved by building upward. Mocked as ‘the farmer from the west’, he abandoned the project altogether and went home to Lockport.
Holly returned to plumbing. Drawing from his research in steam-powered pumps, pipework, and urban density, he began to create plans for a district-level heating system. If he could distribute steam to buildings as he had water, then multiple buildings could be heated by a single boiler, reducing fire risk and centralizing heating logistics. He began by running steam through one and a half inch pipes, later upgraded to three inch pipes supported by wooden trenches insulated with
asbestos
. The steam would be produced in a boiler in his basement at 31 Chestnut Street and passed across his garden into the houses of his neighbors.
By 1877, he launched the first practical district-scale test of his system. Over the course of the year, the network was extended to the surrounding area as Holly ironed out the problems and registered customers. Each stage necessitated new inventions: valves used to draw steam off the mains, regulators to stabilize pressure, steam traps to handle condensation, brackets to handle the thermal expansion of the pipes, and meters to measure the flow rate and bill customers accordingly.
By the time winter had arrived, 20 houses had been registered. By this stage, the network encompassed three miles of pipe, supplied by three boilers, at a pressure of 25 to 30 pounds per square inch.
According to a contemporary account
, ‘all the houses, to which steam had been supplied during the winter, had been most comfortably heated, and the discomforts of the old methods done away with . . . heat could be supplied at a much lower cost.’
The test was a success, so he incorporated the Holly Steam Combination Company, patenting his more than 50 inventions and exporting district steam heating to other cities. By 1882 at least twenty Holly steam systems were in operation, including: Denver, CO; Auburn, NY; Troy, NY; Springfield, MA; Detroit, MI; and Hartford and New Haven, CT. (The oldest, in Denver,
has been running continuously since 1880
.) By far the biggest system, however, was in New York City.
Manhattan had some municipal utilities already: in the 1830s, the city had begun to lay water pipes, bringing fresh water from the Croton River in Westchester County through 41 miles of enclosed aqueduct. In the 1850s, private companies such as the Manhattan Gas Light Company started to run gas mains down the avenues, replacing oil lamps across the city.
But the idea of distributing steam at city scale was a little more audacious, even in the context of Gilded Age America. It was an associate of John D. Rockefeller, an entrepreneur and ‘popular dandy with a flair for equipage and flowered vests’ called Wallace Andrews, along with his engineering partner, Charles E. Emery, who came across Holly’s designs and decided to implement them in New York City. Andrews worked to arrange the financing and convince the New York City council to permit the development of the steam system. (A task made easier by Edison, who was already digging up the streets to install electrical cabling; Emery and Edison
even crossed paths underground
.)
In 1882, its first year of operation,
New York Steam
made $200,000 (about six million dollars adjusted for inflation) in gross revenue. An early steam station was built at 172–176 Greenwich St, on the site of today’s World Trade Center memorial; it contained 64 boilers of 250 horsepower each, distributed across four floors. Coal was brought into the upper levels of the building, where it fed the boilers through a chute; ashes were dropped into the basement of the building in the same fashion. By 1884 the Company had five miles of pipes in active use – feeding steam to 250 consumers, from the Battery in the south up to Murray St, by the modern-day Civic Center – and was profitable.
Coal pulverizing equipment at Kips Bay.
Image
Replacing individual boilers with a central system had significant town planning and public health benefits. In a city as dense as Manhattan, coal logistics was a growing concern –
the New York Steam Corporation estimated
that its central system displaced 1.2 million tons of coal and the smoke from more than 2,500 chimneys, requiring 700 five-ton truckloads per day for 300 days per year to handle the coal and ash – and the risk of dying in a fire was increasing (as were insurance premiums).
Andrews himself would die in a house fire in 1899. But the system he helped create outlived him and proved remarkably flexible, adapting to the city’s rapid growth both uptown and skyward. As New York expanded, numerous new buildings tapped into its growing steam network, saving them the costs and complexity of building their own heating systems. Each new building made the system that much more efficient because the costs of laying down the pipework and moving the coal could be amortized over more customers’ bills.
The first customers were mostly office buildings, such as the First National Bank at Broadway and Wall Street, but users included busy restaurants (one serving ten thousand meals per day), electrical power generators, laundries, and public baths. Network steam was even used to melt snow in the streets. Most of the Art Deco skyscrapers of the 1920s and 1930s were supplied by the New York Steam Company, including the Standard Oil Building, the Ritz Tower Building, the New York Life Insurance Company, the Paramount Building, the Chrysler Building, the Waldorf-Astoria, the Rockefeller Center, and the Empire State.
An advert for the New York Steam Corporation from the early 1930s.
Image
In 1931 – emerging from the financially turbulent 1910s, when the company went into receivership in 1918 –
its annual profits had reached $2 million
. By 1932, it was the largest district heating system in the world, with 65 miles of pipes, bringing heat to more than 2,500 buildings.
The modern New York steam system
Today, the network, now owned by utilities giant Consolidated Edison – ConEd – operates four steam production sites in Manhattan and one in Queens. They also purchase additional steam from a 322 megawatt plant in Brooklyn, which is managed by a separate company. ConEd’s largest facility sits along the East River, taking up the lion’s share of a city block on 14th Street. At this plant, a dozen massive boilers heat water to approximately 177 degrees celsius and push it into the network at 150 pounds per square inch (roughly equivalent to ten atmospheres).
Consolidated Edison plant, E 14th Street.
Image
The scale of all this is considerable. The six boiler sites together have a total capacity of roughly
11 and a half million pounds of steam per hour
; at peak times they use over nine million. Every gallon of water produces just over eight pounds of steam, which means that the system consumes nearly two Olympic swimming pools’ worth of water per hour during the winter.
Once heated, steam is sent out from plants through large pipes called mains, which are usually between two and three feet in diameter, before spreading through a network of 105 miles of smaller pipes latticed under the city’s streets.
The older pipework is made of cast iron, often still coated in the asbestos in which Emery clad them, which can cause problems during regular maintenance and when the pipes crack. (When the last major rupture occurred in 2023, the city shut down seven blocks and washed neighboring buildings and streets out of ‘an abundance of caution’.) New pipes are made from a carbon steel alloy and wrapped in insulation, usually mineral wool, and sit within prefabricated concrete ducts, which both lessens heat loss during transit and also prevents nearby pipes and wiring from being affected.
Most of these pipes
sit deeper than the other utilities, anywhere from four to 15 feet below street level
. The deepest mains pipe runs below the Park Avenue Tunnel, 30 feet underneath the trains shuttling commuters to and from the Bronx and Grand Central. The pipes are accessible through manhole covers in the street, with special telescopic tooling used by workers to open and close the valves that regulate system pressure. Individual sections of pipe are welded together and bound into the concrete with strips of metal called anchors. Metal expands and contracts when heat levels change, so at various points along the mains, the pipes run through expansion joints to absorb this movement safely.
Whenever a new building wants to tap into the network
, it installs a service valve and the corresponding piping, which rises up a few feet and carries the steam into the basement, where its pressure is lowered to 125-ish pounds per square inch. (This is structurally similar to electricity grids, where current is transmitted over longer distances at a higher voltage and then reduced at a local substation for local distribution into homes and offices.) Once the steam has passed through the meter and the pressure-reduction substation, most buildings then pipe it through a heat exchanger for distribution into individual apartments and rooms.
Consolidated Edison most commonly uses an ‘orifice plate’ meter to bill its customers. A thin, flat metal plate with a precisely machined hole in the center, an orifice plate is set within the pipeline, perpendicular to the direction of flow. As the steam is forced through the hole, sensors measure a drop in pressure, and work backward to calculate the flow rate.
Steam is billed by the megapound (one million pounds), calculated from
a fairly baroque hierarchy of service types and tiers
. All apartment building connections pay a standing service charge of $3,555.43 per month. Usage fees are then added on top, paying anywhere from $11.35 per megapound in the summer to $35.013 during peak season.
According to data
from the NYC Department of Buildings, 201 East 62nd Street, a mid-century residential building in Midtown with 70 apartments, consumed approximately 5,962 megapounds of steam in 2021. Its steam service cost the building roughly $16,741.23 per month or $239.16 per apartment per month. Lincoln Towers, at 150 West End Avenue, built around the same time, has 454 apartments and pays no less than $47,370.80 per month for its 19,483 megapounds of steam, or $104.34 per apartment per month; the pricing structure strongly rewards density.
A large part of the engineering challenge in maintaining the network is managing how the pressure changes as steam moves throughout the system, being drained into buildings as it goes. Engineers can control the inbound pressure for any given customer very accurately using valves on either side of the service valve and monitor usage across the system with sensors. When the pressure gets too high, valves open and vent the steam through a grate, delighting tourists and enraging taxi drivers. If the pressure is too high for a sustained period, ConEd shuts off some of the boilers. If the pressure is too low, ConEd can increase the supply by switching on new boilers or closing valves elsewhere. Large parts of this feedback mechanism are now automated, and modern electronic sensors and high-precision valves allow the system as a whole to run at a high level of efficiency:
60 percent, end-to-end
.
As steam cools, it condenses into water droplets called
condensate
, a substance that accumulates at low points in the piping system. Condensate poses several significant problems. It absorbs oxygen and carbon dioxide, which makes it highly corrosive. The difficulty of accessing the underground infrastructure, especially in older parts of the network, can make this corrosion costly to repair and, in some cases, highly dangerous. Condensate left in pipes can freeze during the winter, leading to blockages and bursts.
But the most dramatic failure is called a
waterhammer
.
Sometimes the condensate can form a ‘slug’ of water that fills the entire cross-section of the pipe. When a slug comes into contact with the high-pressure steam, it gets pushed through the piping and accelerated to ridiculous velocities, often reaching over 100 miles per hour (50 meters per second).
When this fast-moving water reaches a closed valve, sharp bend, or other obstruction in the pipe, it crashes into whatever it has found, producing a spike of pressure much higher than the standard operating pressure of the system. This can cause significant damage and in some cases total system failures.
The last total system failure
was in 2007
, when an 82-year-old pipe at 41st and Lexington exploded, showering Midtown in debris. Heavy rainfall had cooled the pipes, producing large amounts of condensate quickly, and a clogged steam trap meant that the system was unable to expel the water. When this build-up hit a critical level, the internal pressure shot up, causing
the explosion
. Almost fifty people were injured, and one woman
died of a heart attack while fleeing
:
The 2007 New York steam explosion.
Image
District heating in the twenty-first century
Steam was a logical choice of heat transfer medium at the turn of the twentieth century, since there was a ready source of it from existing power plants, and its higher temperatures meant it was useful for industrial processes as well as domestic needs. Steam rises, making it a natural choice for tall buildings in dense urban environments. Although steam loses heat as it travels – especially through older systems with poor insulation – its initial high temperature and high energy content mean it can deliver sufficient heat over longer distances.
By the 1930s, designers of new systems had switched from using steam to hot water to carry heat. Hot water is now the preferred medium for several reasons.
Water systems distribute heat via hot water at temperatures as high as 180 degrees celsius (under a high 147.9 pounds per square inch of pressure so that it doesn't turn to steam) and as low as 40. Water produces a much higher operating efficiency, measured as the ratio of usable heat delivered to customers to the total energy used at the heat source. Lower temperatures mean a smaller heat differential between the water and the ambient environment, which means less heat loss overall. Similarly, insulation materials are generally more effective at lower temperatures since thermal conductivity tends to increase with temperature. All in all, steam systems usually lose 10-25 percent of their heat during distribution; low temperature water systems can get this as low as 3-5 percent.
Lower temperatures and pressures have other benefits: they reduce the risk of high-pressure accidents and reduce the risk of leaks, meaning easier, safer – and therefore cheaper – system installation and maintenance.
Managing condensate also adds much complexity. Condensate needs to be separated from the steam and drained away or pumped through a return pipe back to the boilers to be reused. A majority (usually 50–60 percent) of the capital costs of district heating systems are in the pipework and pumps, which can make return pipes far too expensive to use for larger systems. But draining away the condensate causes a loss of 15 percent of the total energy input, which harms operating efficiency.
Besides their natural efficiency and amenability to using a variety of renewable heat sources, water-based systems have another interesting benefit: water has a
thermal capacity
2.08 times greater than steam at 100 degrees, which means that it stores twice as much heat per unit mass. This capacity allows water-based systems to capture energy during periods of low electricity demand or high renewable generation and discharge it as heat later, acting as a form of city-wide energy storage. Using a city’s heating system as a battery is not just useful for the electricity grid, it also enhances the overall efficiency of district heating networks since the networks are able to heat their water when electricity prices are cheaper.
There are now tens of thousands of district heating systems across the world, especially in Northern and Eastern Europe. Helsinki (870 miles), Copenhagen (over 1,000 miles), Stockholm (1,200 miles), and Vienna (over 750 miles) are all heavy users of district heating. Former Soviet states, with their long history of centralized planning and heating, are particularly reliant:
Moscow’s network alone extends over 10,000 miles
. These systems are used by a large percentage of cities’ citizens: Copenhagen provides more than 99 percent of the city’s heating needs via district heating, and the Moscow United Energy Company provides heat to 90 percent of Moscow residents.
In Britain, there are 2,000 heating systems that serve more than one building, more than one third of which are in London. This represents around two percent of UK heat. Perhaps the most famous UK system was a hot-water system in Pimlico, which was opened in 1950 and drew waste heat from Battersea Power Station, providing heat for over 3,000 homes. That system still functions today, although
the City of Westminster is considering
refurbishing or closing it due to its age and carbon intensity.
Some systems
are moving away from steam
, and new systems are likely to run on hot water. But engineering is the science of the art of trade-offs, and steam does have its benefits. Steam moves around by itself, so it doesn’t need to be pumped to where it’s wanted; water systems have to pump the water back. Steam systems that drain their condensate require roughly half the amount of pipework. And many industrial processes require steam directly. Thus, there may still be circumstances where steam is preferred.
Where district heating makes sense
District heating is a truly district-level technology. It needs a high density of heat demand: heat loss and infrastructure costs generally make it unviable below a linear heat density (heat sold per meter of network pipe) of two to three megawatt hours per meter per year. This corresponds to a minimum density of roughly 12 dwellings per acre (about 30 dwellings per hectare),
approximately the average for new suburban developments in the UK
, but a fairly high level of density for American suburban development: R1 residential zoning in Los Angeles, with its minimum lot size of 0.114 acres, wouldn’t make the cut,
ruling out three quarters of all residential land in the city
.
District heating systems therefore target smaller, denser, developments: housing estates, central business districts, government complexes, and campuses. Many systems are anchored around hospitals, universities, or industrial facilities, which give a reliable base load demand helping to nudge viability upward.
But the upfront capital costs are high: according to
one 2014 study
, British district heating networks cost on the order of £1,242 per meter of piping. The dysfunction of infrastructure development in the Anglophone world, with
its high construction costs
and
baroque planning regimes
, only compounds these costs. There are commercial incentives for residents to support these schemes: in Denmark,
94.4 percent of district energy
(admittedly via a regulated tariff) was cheaper than an alternative independent source; and
some UK calculations
suggest annual savings of £350 compared to gas boilers and £950 compared to air-source heat pumps. However, if district heating is to thrive, it will likely need significant policy support and financial incentives to overcome these initial barriers.
New York City’s municipal steam system is an iconic anachronism: a fascinating part of the city’s daily life and visual language, a foundational part of its history, and a system that has been exported and refined worldwide. It was an essential technology for its time as the buildings of Manhattan began to reach up and nudge the sky even higher. Future cities likely won’t be powered by hundreds of miles of steam pipes, but significant parts of them may well be heated and cooled by district systems pumping water through highly insulated pipework, stretching out like tentacles under the streets.
1
This drop is proportional to the square of the flow rate through the orifice, which can be calculated using a modified form of the Bernoulli equation.
2
Calculations here assume a seasonal usage pattern of 80 percent in winter and 20 percent in summer
3
60 percent end-to-end efficiency is impressive for a system designed in the 1890s, but modern systems often achieve
70–90 percent efficiency
. The lion’s share of the difference comes from losses during distribution: losing energy by draining the condensate away.
4
The UK Government
seems broadly supportive
of the technology. It has recently regulated the market explicitly in the Energy Act 2023, and launched a consultation to explore heat network zoning; the consultation summary suggests that up to 20 percent of UK heat could be distributed through a heat network some day.
This is a text about how to write a quine (
https://en.wikipedia.org/wiki/Quine_%28computing%29
), which is a program that prints its own source code without using introspection. I assume the reader has encountered quines before, maybe even tried to write one, and understands why they are difficult and interesting. The text consists of two parts: in the first, I explain through an allegory how it works, and in the second, I describe in detail how to write a quine.
How a Quine Works
Now I'll explain how one could build a machine that can create its own copy. Of course, the simplest way would be for the machine to look at itself, see how it's built, and build something identical. But could a machine be built that could create its own copy without introspection?
First, I would build a machine that can create anything if it has a plan for it. One could say that such a machine is not really a machine until a plan is inserted into it, because if you turn it on, it does nothing, but that doesn't matter. It's still cool because you just need to insert a plan for a kayak, and you'll have a machine that builds kayaks; insert a plan for a wheelbarrow, and you'll have a machine that builds wheelbarrows, etc. So, could we insert a plan of itself and get a machine that builds its own copy? There's one problem: what will be drawn on the plan where the plan holder is depicted? Will the holder be empty? If so, the machine built according to this plan won't be able to build anything (so it won't really be a copy of its parent, because the parent could build a descendant machine, but it can't). Or maybe the plan will show the exact plan of this machine? That's better, but what should be drawn on the plan within the plan where the holder is depicted? Unfortunately, there are two possibilities: either the holder on one of the plans will be empty, meaning the machine won't create its perfect copy, or we will never finish drawing this plan.
I encourage you: stop here for a moment. Imagine all this until you feel comfortable with it. Imagine a machine that can make anything if the plan for that thing is inserted into its plan holder. Imagine inserting a plan for a desk (car, frying pan, headphones, phone), pressing a button, and it makes it. Now imagine drawing a plan for this machine, inserting this plan into the plan holder, and turning on the machine. The parent machine produces a child machine; we turn on the child machine, and what happens then? Now another thought experiment: we draw a plan for this machine, but this time the plan holder holds a plan for a desk, then we turn on the machine, the child machine pops out, we turn on the child machine, and what happens now? And now another thought experiment: we draw a plan for this machine, but this time the plan holder shows the plan of this machine with an empty plan holder. Then we turn on the machine, the child machine pops out, we turn on the child machine... do you see what happens in your mind's eye?
But you can modify this machine to work a bit differently: to produce what's drawn on the plan and then to copy the plan, find the place marked "here" on the produced item, and insert the copy there. And on the plan, we will draw the exact plan of this machine, but with an empty holder marked "here". And now watch what happens when we turn on the machine: it will build its copy without any plan, copy the plan, and insert it into the holder. And this way, it created its exact copy.
If at this point you feel like I'm cheating because making a copy of the plan is a kind of introspection, if you are upset that I didn't clearly define which actions are forbidden introspection and which ones, like copying the plan, are not, then push those thoughts away. This whole story about the machine is not an independent puzzle. It doesn't even have to make perfect sense. It only serves to help you understand better why the quine I'll write in a moment is built the way it is and how it works. This comparison to such a machine helped me a lot with understanding the quine (the one I'll write shortly), so I'm sharing this story with you.
Let’s pay attention to one detail which is insignificant for the machine but will be important when I actually write the quine. This machine, after producing the child machine, cannot immediately hand it to the person who pressed the button — it has to hold it for a moment to insert a copy of the plan.
summary of how this machine works
And now, a summary of how this machine works:
someone presses the button, the machine starts
based on the plan, it produces a machine capable of producing anything based on the plan
it makes a copy of the plan
it inserts this copy of the plan into the machine it made
it gives the machine equipped with the plan to the person who pressed the button
I encourage you: stop here for a moment. Imagine all of this until you feel comfortable with it. Imagine it precisely, step by step, properly, visually, how this machine works, how it produces the child-machine, how it copies the plan, how it inserts the copy into the child-machine, how we start the child-machine, what this child-machine does, how we then start the grandchild-machine... and so on.
writing a quine
easy quine in Python
And now I will write a quine — a program that displays itself (without using introspection). First, I will write a simple quine in Python to show you how to write a quine, and then I will write a few variations in Python and other languages to show what problems might arise and how to solve them.
First, note that a program displaying itself is a special case of a program that displays something. So I will start by writing a program that displays something. In Python, the simplest way to do this is:
print()
This is a program that can display anything — you just need to insert that anything into the parentheses. As long as you don't insert anything, this program is useless (like a machine without a plan), but when, for example, I insert the word
Poland
into these parentheses, I get a program that displays the word
Poland
:
print('Poland')
Next, note that a program displaying itself is a program that displays a program that displays itself. And this, in turn, is a special case of a program that displays a program that displays something. So let's write a program that displays a program that displays something. In Python, the simplest way to do this is:
print('print()')
This is a program that creates a program that can display anything — you just need to insert that anything into the parentheses. It is like a machine that produces a machine that can produce anything, given a plan. As long as nothing is inserted into these parentheses, this program is useless (like a machine producing a machine not equipped with a plan), but when, for example, I insert the word
Poland
into these parentheses, I get a program displaying a program displaying the word
Poland
:
print('print("Poland")')
Let's compare this again with the machines I wrote about in the first part. The program
print()
is like a machine that can make anything, given a plan for that anything. The notation
'print()'
is like the plan for this machine. The program
print('print()')
is like a machine capable of making anything if given a plan, which has in its plan holder the plan of a machine that can make anything if given a plan, whose plan holder is empty. And the program
print('print("Poland")')
is like a machine capable of making anything if given a plan, which has in its plan holder the plan of a machine that can make anything if given a plan, which has in its plan holder the plan of the word
Poland
. Note that this is not a program that prints its own source — though it is a program that prints the source of a program similar to itself, so we are quite close.
Now we need to perform a maneuver similar to what we did with the machines in the first part. Remember how in the plan of the child-machine I wrote
here
in the plan holder? Well, now I will do something similar. Watch:
print('print(here)')
Do you remember how I modified the machine so that before returning the child-machine, it would place a copy of the plan it was made from in its container? Now I will do something similar: I will modify this program so that after creating the string but before displaying it, it places a copy of this string in the spot marked with the word
here
. For now, this is not easy to do because creating the string and displaying it is done in one command, so there is no time to make this substitution. So, I will separate the creation of the string from its display in this program:
string='print(here)'; print(string)
But wait — since I changed the program, I should also change the string literal with the program source so that the parent creates a child similar to itself (do you understand? In analogy with the machine, I would say: when we change the machine, we also have to change the machine's plan so that the parent-machine creates a child-machine similar to itself):
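The program at this point presumably looked like this (reconstructed to match the output shown a few lines below; the exact original may have differed slightly):
string='string=here; print(string)'; print(string)
And then, between creating the string and printing it, comes the code that puts a copy of the string in the place marked by the word here, presumably by slicing, like this:
string='string=here; print(string)'; string=string[:7] + string + string[11:]; print(string)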
Notice that I do this substitution not by finding the string
here
(that is, not by the
find and replace
method, not by the Python method
replace
), but by slicing this string at a given place. I do it this way because it will be the easiest. Later I will show you what would happen if I tried to do it differently.
I run this program and see if it works well. Here is the result of its operation:
string=string=here; print(string); print(string)
Not bad, but there are still two things to fix. First, the string inserted in place of the word
here
should be enclosed in quotation marks. There is a built-in function in Python for this,
repr
. So I use it:
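Presumably like this (only the repr call is new compared to the previous version):
string='string=here; print(string)'; string=string[:7] + repr(string) + string[11:]; print(string)
And the result of running it:
string='string=here; print(string)'; print(string)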
Good, the quotation marks appeared. Now the second thing. Since I changed the program (because I added code that inserts what is needed in place of the string
here
), I also need to change the string literal with the program source. So I change the program to this:
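A plausible form of the finished program, which should print exactly its own source when run:
string='string=here; string=string[:7] + repr(string) + string[11:]; print(string)'; string=string[:7] + repr(string) + string[11:]; print(string)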
When you think about how this quine works, it’s worth comparing it to the machine-making machine I wrote about in the first part. There is only one difficulty: in the case of machines, the difference between the machine and the machine's plan is stark. In the case of programs, the difference between the text and its literal is subtle.
summary of a simple quine in Python
So once again, I’ll summarize in three steps how I wrote this simple quine in Python — because the same steps can be used to write quines in other languages:
I wrote a program that creates and prints the source of a program that prints something
Between the command creating the text and the command printing the text, I added code that places its literal in the appropriate place in the text
Since I changed the program, I also changed the text literal to include the source of this program
what if we used
find and replace
in this Python quine
Do you remember the moment when I was writing a simple quine in Python and had to insert its literal into the text? I mean the moment when I had a program like this:
string='string=here; print(string)'; print(string)
And I changed it to this:
string='string=here; print(string)'; string=string[:7] + string + string[11:]; print(string)
One might think that instead of splitting the text into two parts — one up to the seventh character and the other from the eleventh character — and inserting what is needed between them, it would be simpler to use
find and replace
, i.e., use the
replace
method. So I’ll give it a try. I add the replacement command both to the program and to the program's source literal:
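A sketch of that attempt, assuming the replace command searches for the word here written in quotes:
string='string=here; string=string.replace("here", string); print(string)'; string=string.replace("here", string); print(string)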
The program didn’t work as it should because in the text literal, the word
here
appears twice: once where it marks the place where something needs to be inserted, and a second time where in the
find and replace
command we specify what fragment of the text we’re looking for. Only the first occurrence of the word
here
should be replaced, not the second. This problem can be solved in several ways. For example, in the
find and replace
command, we could try to refer to
here
without using the word
here
. For example, like this:
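One way is to write the word backwards, as 'ereh', and reverse it at runtime, so the word here itself never appears in the replace command. A sketch of that version:
string='string=here; string=string.replace(\'ereh\'[::-1], string); print(string)'; string=string.replace('ereh'[::-1], string); print(string)
And its output:
string=string=here; string=string.replace('ereh'[::-1], string); print(string); string=string.replace('ereh'[::-1], string); print(string)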
Not bad. Now we just need to change the inserted text into a text literal: surround it with quotes and escape any special characters (for example, the quotes around
'ereh'
). Fortunately, Python has a built-in function
repr
for this (but imagine how much work this would be if we were doing it in JavaScript and didn’t have such a function):
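With repr applied to the inserted copy, the program presumably becomes:
string='string=here; string=string.replace(\'ereh\'[::-1], repr(string)); print(string)'; string=string.replace('ereh'[::-1], repr(string)); print(string)
And this is what it prints:
string="string=here; string=string.replace('ereh'[::-1], repr(string)); print(string)"; string=string.replace('ereh'[::-1], repr(string)); print(string)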
Well, almost perfect. Almost the same result. Only when the
repr
function converted the program source text into a text literal, it approached the problem of escaping single quotes around
'ereh'
differently than I did: instead of escaping them, it surrounded the text with double quotes. If I want the program’s output to be exactly the same as its source (not just similar with respect to quotes), I need to either make the
repr
function escape like I do, or I can modify the program to escape like
repr
does. It’s easier to do the latter. Hmm, actually very easy. I don’t need to do anything: the above text that my program printed already has the quotes escaped as
repr
does. It is the quine I wanted to write.
So the quine using
find and replace
looks like this:
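That is, presumably, this line (it should print exactly its own source):
string="string=here; string=string.replace('ereh'[::-1], repr(string)); print(string)"; string=string.replace('ereh'[::-1], repr(string)); print(string)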
And now one more variation: I will write a quine in Python using
find and replace
without using the convenient repr function. I’ll start from the point where I had this program:
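That is, the version with 'ereh' reversed at runtime, but with the inserted copy not yet turned into a literal:
string='string=here; string=string.replace(\'ereh\'[::-1], string); print(string)'; string=string.replace('ereh'[::-1], string); print(string)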
As you can see and might remember, it’s almost a quine. All that’s left is to make the string inserted in place of the word
here
be changed into a string literal: meaning it needs to be enclosed in quotes, and all special characters within it need to be escaped.
Since I added a piece of code to the program, I need to add the same piece of code to the string literal containing the program source. But because I will add this piece in the literal, I need to escape the quotes and backslashes within it. Like this:
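A plausible reconstruction of that attempt, with the quotes and backslashes escaped by two further replace calls:
string='string=here; string=string.replace(\'ereh\'[::-1], "\'" + string.replace("\'", "\\\\\'").replace("\\\\", "\\\\\\\\") + "\'"); print(string)'; string=string.replace('ereh'[::-1], "'" + string.replace("'", "\\'").replace("\\", "\\\\") + "'"); print(string)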
Look closely. Do you see that it didn’t produce what it was supposed to? Do you see that the result of this program is different from the source of this program? Do you see that it’s not even syntactically correct Python code? Something went wrong.
If you’re tough, sit down now and think about where we made a mistake and why this quine doesn’t work. Try to fix it yourself.
There is one thing done wrong in this quine. When escaping the string, you need to escape the backslashes first, and only then escape the quotes. So I need to swap the order of the two replace methods, like this:
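With the two replace calls swapped (backslashes escaped first, then quotes), a reconstruction that should print exactly its own source looks like this:
string='string=here; string=string.replace(\'ereh\'[::-1], "\'" + string.replace("\\\\", "\\\\\\\\").replace("\'", "\\\\\'") + "\'"); print(string)'; string=string.replace('ereh'[::-1], "'" + string.replace("\\", "\\\\").replace("'", "\\'") + "'"); print(string)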
Phew, it works. But it wasn’t easy, was it? That’s why remember my advice: when writing a quine, avoid using quotes. Once you start escaping them, you’ll never get out to go sledding.
At this point, I stop writing the text and go sledding.
quine in JavaScript
Now I will write a quine in JavaScript. This time, I will explain less than I did when writing in Python — because I assume you already understand how quines work.
I am writing a program that displays a program that displays anything:
var text='var text=here; alert(text)'; alert(text)
Between creating the text and displaying it, I insert this text in place of the word
here
:
var text='var text=here; alert(text)'; text = text.substring(0, 9) + text + text.substring(13, text.length); alert(text)
I surround the inserted text with quotes, but without using quotes (to avoid escaping):
var text='var text=here; alert(text)'; text = text.substring(0, 9) + String.fromCharCode(39) + text + String.fromCharCode(39) + text.substring(13, text.length); alert(text)
Since I added some code to the program, I add the same code to the text literal with the source of the program:
var text='var text=here; text = text.substring(0, 9) + String.fromCharCode(39) + text + String.fromCharCode(39) + text.substring(13, text.length); alert(text)'; text = text.substring(0, 9) + String.fromCharCode(39) + text + String.fromCharCode(39) + text.substring(13, text.length); alert(text)
And done. This is a complete quine.
quine in Excel
Alright, so how would one write a quine in Excel (or any other spreadsheet)? It won't be easy since we can't store a string in a variable to use it later, but it is possible. Let's see.
Before getting to the spreadsheet, I'll start with a simple Python quine, a bit — but not much — different from those I’ve discussed before. This will be a quine that inserts the literal of the string into the string by
find and replace
(using the Python method replace), but without using quotes (to avoid the escaping nightmare).
Here's the template for a program that prints a string:
string=here; print(string)
Here's the program that prints the template for a program that prints a string:
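Presumably:
string='string=here; print(string)'; print(string)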
You could also say that it's the template for a program that prints a program that prints a string — it’s the same thing.
Now I want to modify this outer program so that before displaying the string, it performs a
find and replace
— replacing the placeholder
here
with the literal of that string. But, as we remember, it’s worth doing this in such a way that we don’t use quotes in the
find and replace
command. But how to write the word
here
without using quotes? Maybe instead of the word
here
we use some other placeholder that’s easy to write without using quotes? For example, the character
#
, which can be written as
chr(35)
? So, I write it like this:
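A sketch of that quine, assuming the literal of the string is produced with repr, as in the earlier versions:
string='string=#; string=string.replace(chr(35), repr(string)); print(string)'; string=string.replace(chr(35), repr(string)); print(string)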
Okay. Now, using this quine as an example, I want to tell you a different recipe for making a quine. A bit different, a bit not. Different because it consists of different steps, but not different because it leads to the same goal, producing the same quine. Here’s the recipe:
Write a template for a program that takes a string, does
find and replace
in it by inserting the literal of that string in place of the placeholder, and then prints the result:
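Something like this, where the three dots only mark the spot where the string goes (they are not part of the program):
string=...; string=string.replace(chr(35), repr(string)); print(string)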
And now take
the program template
and where the string goes, insert
the literal
(in the code below, I've highlighted the inserted literal in blue so you can see what's happening):
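The result (the inserted literal is the part in single quotes):
string='string=#; string=string.replace(chr(35), repr(string)); print(string)'; string=string.replace(chr(35), repr(string)); print(string)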
Now I'll summarize this new recipe for writing a quine:
Write a program template that takes a string, does
find and replace
in place of the placeholder by inserting the literal of that string, and then prints the result. Remember this program template. I'll call it
the program template
.
Write the literal source of this program template somewhere.
Now, in this literal, where the string to be operated on should be, put the placeholder. Remember this literal. I'll call it
the literal
.
And now take
the program template
and where the string goes, insert
the literal
.
This recipe is — as I remind you — equivalent to the previously discussed recipe (meaning it produces the same quines). It has the drawback that — in my opinion — it’s harder to understand how it works. It has the advantage that — in my opinion — it’s easier to use to write a quine in difficult situations.
And now I'll use this recipe to write a quine in Excel. More precisely: I’ll write it in Libre Office, but what I write should also work in Excel.
First, I'll write a formula template that takes a string, does
find and replace
in it by inserting the literal of that string in place of the placeholder, and then prints the result. I'll need to use the
SUBSTITUTE
function, which is used like this:
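In its basic form it takes the text, the fragment to find, and the text to insert in its place:
=SUBSTITUTE(text, old_text, new_text)
For example, =SUBSTITUTE("Romania","man","MAN") displays RoMANia.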
In the third argument, we need — as we remember — the literal of the first argument. This involves two problems: how (in the third argument) to change the given string into a literal and how to write
the same string as in the first argument
in the third argument.
Changing a string into a string literal is easy. If it doesn't contain special characters (and, as we remember, we don't want them because it will become a nightmare), just surround it with quotes. Like this:
=CHAR(34)&somestring&CHAR(34)
For example:
=CHAR(34)&"Romania"&CHAR(34)
And saying that the third argument should be the same string as in the first argument is also easy. Just write the same string in the third argument as in the first. Like this:
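Schematically, with somestring standing for the string, as before:
=SUBSTITUTE(somestring,CHAR(35),somestring)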
I combine both techniques and now I have a formula template that takes a string, does
find and replace
in place of the placeholder by inserting the literal of that string, and then prints the result. Here it is:
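Presumably, again with somestring standing for the string that goes in both spots:
=SUBSTITUTE(somestring,CHAR(35),CHAR(34)&somestring&CHAR(34))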
So, I have completed the first step of the plan. Now I’ll complete the second step, which says:
write down the literal source of this formula template
. So, I write down this string:
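Presumably, with somestring still marking the spots where the string to be operated on will go:
"=SUBSTITUTE(somestring,CHAR(35),CHAR(34)&somestring&CHAR(34))"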
Now I'll complete the third step of the plan, which says:
now, in this literal, where the string to be operated on should be, put the placeholder
. Here you go:
"=SUBSTITUTE(#,CHAR(35),CHAR(34)&#&CHAR(34))"
Remember this literal. I'll call it
the literal
.
And now I do the last step of the plan. I'll take
the formula template
and where the string goes, I’ll insert
the literal
. It turns out like this (I've highlighted
the literal
in blue to make it easier to see what's happening):
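Written out in full (the inserted literal is each of the two pieces in double quotes), the formula, which should make the cell display exactly the formula typed into it, is:
=SUBSTITUTE("=SUBSTITUTE(#,CHAR(35),CHAR(34)&#&CHAR(34))",CHAR(35),CHAR(34)&"=SUBSTITUTE(#,CHAR(35),CHAR(34)&#&CHAR(34))"&CHAR(34))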
Land value taxes are once again becoming a popular all-purpose solution to housing issues. But implementing them in early 1900s Britain destroyed the then-dominant Liberal Party.
Across the Atlantic from Henry George, the wealthy landowners who dominated British politics predominantly supported the Conservative Party. In response, their opponents, the Liberals and Irish nationalists, increasingly harnessed tenant agitation for political support and with it, support for land taxation.
George’s argument was perfectly timed for the Edwardian left. Local government was in crisis as new infrastructure enabled people to move out of the city center while budgets were burdened with ever more statutory obligations. The existing property tax base could not keep up. By the early 1900s,
Progress and Poverty
was more popular than Shakespeare among Labour MPs.
In 1910, the Liberal government of Herbert Asquith implemented taxes on increases in land value and on undeveloped land with a view to reforming Britain’s system of property taxes.
Asquith’s gambit failed spectacularly. Britain in the early 1900s became a case study in how administrative complexity can derail land value taxation. The tax cost more to administer than it collected, and it was so poorly worded that it ended up becoming a tax on builders’ profits, leading to a crash in the building industry. As a result, David Lloyd George, the man who introduced the taxes as chancellor in 1910, repealed them as prime minister in 1922. The UK has never fully reestablished a working property tax system.
This history serves as a cautionary tale for modern Georgist sympathizers who believe a
land value tax will solve
the world’s housing shortages
. While Georgists argue that land markets suffer from inefficient speculation and hoarding, Britain’s experience reveals more fundamental challenges with both land value taxes and the Georgist worldview. The definition of land value was impossible to ascertain properly and became bogged down in court cases. When it could be collected, it proved so difficult to implement that administration costs were four times greater than the actual tax income. Instead of increasing the efficiency of land use, it became a punitive tax on housebuilders, cratering housing production.
Worst of all, it not only failed to solve the fundamental problem with British local government – that it had responsibilities that it could not afford to cover with its narrow base – but actually contributed to the long-term crumbling of the property tax systems Britain did have.
Not all countries failed as spectacularly as Britain, where the experiment doomed not only the land value tax itself but also the existing property tax system it replaced. Even so, few countries have successfully implemented a land value tax. Most countries that claim to have land value taxes, like Australia and Taiwan, exempt the two biggest uses of land: agriculture and owner-occupied housing.
How Liberals came to love land value taxes
By 1900, 77 percent of Britain’s population lived in cities, and around two thirds of men (but no women) could vote after three Great Reform Acts of 1832, 1867, and 1884. Still, a relatively small number of landowners continued to wield outsized influence in politics. In 1880, 322 of the 652 MPs owned more than 2,000 acres. The House of Lords, which possessed the power to veto all legislation (although by centuries-old convention, it had typically given way when the two houses disagreed), was even more landowner dominated, with 308 of its roughly 800 members being among the 6,000 largest landowners in England.
Henry George had long maintained an interest in Irish affairs. Despite being an evangelical Protestant of English ancestry,
he had in 1869 edited a small Catholic weekly newspaper called
The Monitor
. Later, in 1879, he argued, for the
Sacramento Bee
, that the landlords of Ireland should be expropriated to lift the burden of rents on farmers. He sent 25 copies of his most famous book,
Progress and Poverty
, published that same year (but mostly ignored until later), to leading Irish-Americans in New York. One of them, Patrick Ford, was the founder and editor of
Irish World
, the largest Irish-American paper at the time, for which George regularly wrote.
In October 1881, Henry George arrived in Cork in Ireland, then part of the UK, tasked with covering a country in uproar for
Irish World
. Land issues were central to the turmoil. In 1879, the Irish nationalist MP Charles Stewart Parnell and Michael Davitt, the leader of Ireland’s land reform movement, together advocated for forcibly transferring the country’s land from wealthy Protestant landlords to their poorer, mostly Catholic farmers, mobilizing tens of thousands of tenant farmers in a campaign of mass agitation against landlords known as ‘
the land war
’.
George briefly met Parnell, who was then in prison, and developed a working correspondence with Davitt. He used this as an opportunity to spread his message on the need for a land value tax to the rest of the UK, publishing the British edition of
Progress and Poverty
in 1882 and sending a free copy to every MP. The initial reception in Britain – at that point, the world’s economic and political center – was unenthusiastic. George’s ideas were panned by academics such as Karl Marx and Alfred Marshall and denounced by Liberal MP Samuel Smith as an absurd get-rich-quick scheme like the South Sea Bubble.
But by 1885, the political situation had radically changed. In that year’s elections, Parnell, now referred to as the
Uncrowned King of Ireland
, led his new Irish Parliamentary Party to win 86 out of 101 seats in Ireland and held the balance of power in Parliament. In 1886, Prime Minister William Gladstone offered him autonomy for Ireland (known as Home Rule) in exchange for his support.
This split the Liberal party. The vast majority of landowning Liberals defected, furious that Gladstone would leave Irish landlords at the mercy of a government led by tenant agitators. A Liberal Unionist bloc allied itself with the anti-Home Rule Conservative Party (and would officially merge with them in 1912). In the 1885 election, 54 of the 111 English constituencies with a significant agricultural population returned a Liberal MP; in 1886 this figure slumped to just 16.
Paying for local government
Following landowners’ mass defection from the Liberals in 1886, the Liberal coalition was able to unite around land value taxation. Land value tax fervor became increasingly dominant on the party’s left and among the grassroots, in part because of its potential as a tool to weaken the power of their political enemies, the large landowners, and in part to fix the growing crisis of local government funding. Starting in 1888, party conference attendees passed successive proposals for land taxation, leading to the inclusion of a land value tax proposal in the Liberal Party’s
1891 Newcastle Programme
.
Although Britain already had two property taxes – an income tax, which included rental income, and a property tax with rates for residential and business properties – they appeared to be unable to cover local governments’ obligations.
The first property tax was Schedule A, part of the income tax. At the time, British income tax applied not just to rental income but also to the estimated rental value of houses (aka imputed rent), though only to those earning more than £100, which in 1870 was around 400,000 people out of a population of 26 million. This went to the national government.
The second and more politically significant property tax was ‘the rates’, which went to local governments: domestic rates on residential property and non-domestic rates on business property. (These continue to this day in some form. Domestic rates were abolished in 1990 for the community charge or ‘poll tax’, and then council tax in 1992. Non-domestic rates are now known as business rates.)
British local governments’ remits had expanded with the economy during the Industrial Revolution. Bills enacted in
1848
and
1875
permitted, then mandated, local public health provisions and a basic standard of sewage and water treatment. In
1870, local government was also made to provide
compulsory primary education for children between the ages of 5 and 12. As a result, local government spending rose from less than one third of total government expenditure in 1870 to over half in 1905.
Though their remits had expanded, their funds from the central government had not. This put local governments under tremendous pressure. The central government depended upon the income tax, which was sufficient to pay the interest on the national debt, maintain the military, and employ
a civil service of 14,000
people. It also paid for approximately one quarter of local spending with government grants. But all of the remaining financing for the extended public services had to come from local taxation.
In 1900, three quarters of funding for local government activities – poverty relief, the police, education, and sanitation – came from taxing rental income. Since urban rents added up to about 10 percent of GDP at the time, this meant that one tenth of the economy was responsible for financing almost the entirety of local authority budgets.
Property, and those who occupied or rented it, thus bore the brunt of the tax burden, paying around one third of the nation’s taxes compared to one tenth today.
The problem for British local governments in the twentieth century was that they were also responsible for welfare benefits. Wealthier areas could easily afford the necessary taxes to support their small numbers of poorer residents. But poorer areas in the inner cities found this an increasingly difficult burden.
The situation deteriorated during the 1890s and early 1900s, paradoxically due to a large expansion in the housing supply. The electrification of London’s railways and trams enabled further suburban expansion and a significant increase in housebuilding. Better-off working-class families moved to the suburbs, leaving poorer areas with an even larger relief burden. Rates rose between 30 and 50 percent in all districts of inner London between 1891 and 1901. A typical increase in rates, which was recorded in boroughs like Stepney and Camberwell, was from 6 shillings to 9 shillings per pound.
This coincided with a sharp economic downturn beginning in 1905 that caused unemployment in urban areas to skyrocket to double digits, which only increased the pressure on local governments in cities and the ratepayers who supported them.
George’s utopian promise
After the 1880s, the radical parts of the Liberal Party became increasingly Georgist. The Liberal base comprised nonconformist Christians and business owners, who found George’s message – with its antagonism toward rent but not profit, couched in explicitly Christian terms – extremely persuasive.
Liberal Georgists argued that the root of local government’s problems and poor urban conditions was that taxes were levied on property values, not land values. In part, it was a straightforward economic argument: taxing property meant taxing improvements to land, including building up or replacing dilapidated housing stock, which disincentivized development. Taxing only land, on the other hand, would not discourage improvement. Progressive members of cash-strapped councils like London and Glasgow supported land value taxation on top of property taxation as a source of extra funds.
But these marginal improvements – reducing the disincentive to improve land and providing more funds for urban councils – were only part of the reason Liberal Georgists favored land value taxation.
George had promised his followers nothing short of Utopia
. George argued that since all production needs land, competition would push labor and capital returns down to minimum levels, leaving all remaining economic surplus to accumulate as land rent. Therefore, he concluded taxing land rent alone could fund all government activities since it captured society’s total surplus value.
Intercepting this entire social surplus with the land value tax, he argued, would provide not just all the money the government needed but enough to end poverty and create a harmonious society in which all humans could fully satisfy their innate needs and desires.
As we have seen, George’s dedicated Liberal followers lobbied for decades to convince others in the party to implement the tax – and it worked. By the 1906 election, 68 percent of English Liberal candidates endorsed land reform, and 52 percent mentioned land value taxation in their campaign manifestos.
The loss of landowners, local governments in crisis, and Georgist ideology had together driven land value taxation to the center of the Liberal platform.
Tories ride a wave of ratepayer revolts
In 1903, the Conservative government of
Arthur Balfour
imploded over its handling of the Boer War, including public revulsion at its use of
concentration camps
, and colonial secretary
Joseph Chamberlain
’s resignation to begin a campaign of tariff reform. Chamberlain had demanded that Balfour impose tariffs on goods imported from outside the British Empire to shield industry from growing foreign competition. When Balfour refused, the resulting split fatally weakened the Conservative Party.
The Liberals, now unified in their support of land value taxation, won the 1906 election. The Conservatives suffered their worst-ever electoral defeat (until 2024), with the number of Conservative Members of Parliament falling to 156. The Liberals now held a large majority in the House of Commons.
Still, opponents of land value taxation abounded. While the largest landowners in the House of Lords were unlikely to elicit broad public sympathy on their own, the million or so smaller property owners, including shopkeepers and landlords of working-class housing, were a different matter. Most landlords and shopkeepers outside of London were freeholders, and even those who did lease land often did so with 100-year leases, usually on terms that meant that they paid all taxes (despite not being the freeholders). As a result, they believed that they would ultimately have to contribute the bulk of any land value taxation.
For landowners, taxing ‘excess’ land rent from profitable investments made as little sense as compensating them for losses. From their perspective, any ‘land rent’ was the reward for taking risks and investing personal savings in the uncertain and heavily taxed urban property market. The rates were already too high, in their view, and they saw the prospect of paying a land value tax at even higher rates as an unacceptable burden.
This coalition proved a formidable political force, which the Conservatives actively courted. Following their 1906 defeat, the remaining Tory MPs pressured Balfour to make both new tariffs on imports and the extension of national welfare programs central to Conservative policy. Tariffs, which implied a source of revenue to relieve the burden of rates, won the Conservatives support in local elections from disgruntled ratepayers.
Local Conservatives capitalized on these ‘ratepayer revolts’ to end the progressive parties’ dominance of local politics (including both Liberals and the nascent Labour party). The London borough elections of 1906 and the London County Council in 1907 both returned majorities for the Municipal Reform Party, which was closely allied with the Conservatives. At a national level, the resurgent Conservatives triumphed in eight by-elections between January and September 1908.
The new Liberal government was thrown into crisis. Prime Minister
Henry Campbell-Bannerman
resigned on health grounds and his successor,
Herbert Asquith
, decided that significant welfare reform was necessary to counteract the Conservatives’ newfound appeal.
Liberals shift toward tax and spend
The Liberals had traditionally been the party of low taxation and low central government spending. Apart from the military and empire, they believed that public services could – and should – rely on the rents from local land. The traditional argument was that funding from the central government represented ‘doles for squire and parson’, subsidizing the Tory-voting landlords who paid the rates while encouraging profligate spending by local governments.
The new Conservative challenge forced the Liberals to change their stance. In 1908, the government introduced state-funded old-age pensions and prepared to introduce unemployment and health insurance. But it was unclear how they would fund this nascent welfare state. Meanwhile, the government also needed money to fund a
naval arms race with Germany
.
Chancellor
David Lloyd George
saw an opportunity to tax the Conservative voter base as much as possible. In his
1909 budget
, Lloyd George proposed two major tax reforms. The first increased income taxes for high earners. The second introduced
three
new land taxes, the first new land taxes in Britain since the 1700s: an annual 2.5 percent tax on undeveloped building land, a five percent annual surtax on mining royalties, and a 20 percent capital gains tax on land transfer, known as an increment duty. The taxes were designed with the intention that shopkeepers and landlords of working-class housing were not affected. Owner occupiers were also exempted from the new increment duty.
There was just one problem: since the Liberals had lost the support of large landowners in 1886 (who had broken away to become the Liberal Unionists), their share of the House of Lords had fallen from 41 percent to seven. And the upper chamber still had the legal power to block legislation, although a convention dating back to the 1600s held that the veto power should not be used to block finance bills.
The Lords found themselves unable to obey the centuries-old convention. It made the unprecedented decision to veto the budget, committing to passing it only if the Liberals won another majority.
This prompted a constitutional crisis on the best possible terms for the Liberal Party. They called a new election to obtain a mandate to pass the budget, which they declared one of ‘Peers versus the People’, arguing that unelected Lords wished to replace the Liberal
People’s Budget
with regressive tariffs, including on basic foodstuffs.
Despite this Liberal advantage – that the Tory Lords had effectively violated the constitution of over 200 years – the
resulting election of February 1910
was closely fought. Conservative tariffs were popular in areas threatened by emerging competition from Germany and America. The propertied classes and agricultural interests rallied. In the end, the Liberals won two more seats than the Conservatives, but the Conservatives won three percent more of the vote – a 5.4 percentage point swing away from the Liberals. The Liberals managed to form a majority with the support of the Labour Party and the Irish Nationalists.
Implementation proves difficult for land value taxation
After winning the election, the Liberals passed their finance bill, enacting the new land value taxes. They then held another election to strip the House of Lords of its veto power, which resulted in another
close electoral victory
and enabled the party to pass the
Parliament Act in 1911.
The old property taxes had been simple: tax was based either on the actual rent a property went for or on what the local authority judged a property to be worth based on similar properties nearby. But the new land taxes proved exceedingly difficult to calculate. In theory, the fact that many property owners, like people who owned the homes they lived in (owner-occupiers), were not expected to pay them might have muted this problem. But the government decided to calculate a full set of property values anyway to make future changes to the legislation easier.
The tax on mining royalties was easy enough: it was just an additional tax on the revenue a landowner received from a mining company. But the other two taxes involved complex calculations. The 20 percent tax on unrealized increases to property values, which applied to agricultural, industrial, and residential land, required the government to establish a baseline price for the land against which increases could be judged. This was doubly true for undeveloped building land, which also faced the 2.5 percent annual tax.
There were close to ten million properties in the country that needed valuing, and for the majority of these properties, the land and structure had been traded together, meaning that there was no distinct market valuation of land to draw from. What’s more, in line with Georgist theory, the tax was supposed to credit owners for improvements they made to the land. But this meant calculating several hypotheticals, many of which had never been measured or recorded, including building and structure value and value contributed from plumbing, access to railways, and other infrastructure contributions.
The process was beyond the capacity of the government. In August 1910, the Liberals sent out 10.5 million copies of the notorious ‘Form 4’, which required owners to submit specific details on their income and the use and tenure of their properties. It also required them to estimate the site value themselves.
Failure to return the document carried a fine of £50, about £7,500 in current prices.
A cartoon published in Punch in August 1910 takes aim at Form 4.
Landowners formed an organization known as the Land Union, which began a program of high-profile legal attrition – subjecting every ambiguity of the law to judicial review – to test the complex and hurriedly made legislation in the courts. The campaign was successful, eventually resulting in the
Scrutton judgment
of February 1914, which invalidated all valuations of agricultural land – effectively blocking the 2.5 percent tax on undeveloped land and the collection of the increment tax as it applied to farmland.
A common Georgist view was that property speculators hoard urban land, and land taxes would force them to develop it. In practice, the exact opposite happened. The additional taxes simply reduced builders’ profits and forced many to reduce production. The new taxes also devalued building land, leaving housebuilders with devalued collateral for their loans, which threatened to bankrupt many of them. As a result, instead of rising, building rates cratered, falling from 100,000 in 1909 to 61,000 in 1912.
Land taxes raise less than the cost of implementation
The land taxes did nothing to solve the critical issue of financing local governments. Liberal Georgists had suggested using the revenue from these taxes (which went to the central government as opposed to local authorities) to dole out grants to cities. But by 1914, the failed valuation schemes had cost £2 million to implement, while the taxes had brought in only £500,000. The government had lost money while damaging the urban economy. There was nothing left for cash-strapped local governments. Grants to local authorities increased by just two percent between 1908 and 1913, while spending grew by 18 percent.
By 1913, local governments across the UK were in crisis. There were warnings from local government associations that the education system was on the verge of collapse. The Association of Municipal Councils went so far as to accuse the chancellor of ‘having jeopardized the whole of the local government of England’.
Lloyd George finally made a public commitment to rating reform in February 1914, announcing that the local rates would move from a tax on property to a tax solely on land value paid to the local government. This would finally achieve the goal of the Liberal Georgists. The full weight of the rates would, allegedly, fall upon the landowner rather than on labor or capital. We have already seen how this assumption was questionable. But ultimately whether they were theoretically right was irrelevant. The program was impossible to implement quickly: the Valuation Office predicted that it would take until 1917 to finish the valuation at the earliest.
In the end, the outbreak of the First World War put the final nail in the land value tax coffin. In 1916, Asquith was replaced as prime minister by David Lloyd George, and the Liberal party split, with Lloyd George leading a coalition between the Conservatives and Liberal MPs loyal to him until 1922. The Conservatives were still opposed to the land value taxes and Lloyd George was not prepared to defend their continuation given their abject failure before the war. The valuation, frozen during the war, was formally ended in 1920, and the remaining land taxes were abolished by Lloyd George himself in 1922.
What we can learn
The land value tax debacle teaches us that local government cannot perform its function of investing in local public goods – infrastructure, sanitation, and so on – if it neither has the base to fund it nor the incentives to carry it out. Before the First World War, the British local government was a powerful part of the growth coalition. In the 1890s and 1900s, British councils borrowed money from private firms to build tram, electricity, and gas networks, paying off the loans primarily through new tax revenue generated from the resulting local economic growth. By 1910, London County Council alone had built 120 miles of electric tram routes.
Yet there was an inherent tension between the demands put on British local governments and the narrow base of taxation they were expected to subsist on. Demands for social services continued to increase throughout the early twentieth century, and two world wars coincided with the continuing expansion of the welfare state. Growing numbers of workers in politically assertive trade unions and universal suffrage (for men in 1918 and women in 1928) reinforced this trend.
Altogether, this meant that Britons were no longer obliged or willing to accept poverty or a ‘postcode lottery’ in local services. Starting in 1929, Britain began redistributing local taxation around the country through a system of ‘block grants’, which gave money from the national taxpayer to needy councils in order to fulfill their social obligations. By 1948, about 30 percent of local government funds came from grants. Today, the share has increased to nearly two thirds.
The second lesson is that the pure land value tax is chimerical. Those countries that raise substantial amounts of tax from land, such as Japan and the USA, do so through taxes on property, as Britain did before David Lloyd George. A pure tax on the unimproved value of land has never been successfully implemented anywhere. Land value taxes introduced in Australia and New Zealand have been repealed. Denmark’s land value tax is a minor quirk of the system, comprising less than two percent of total revenues. The land tax in Taiwan exempts owner-occupied housing and agricultural land – the country’s two main land uses – altogether.
From 1906 to 1914, the Liberals wasted the best chance in British history to put property taxes and local government in the UK on a sound footing for the long run. Their failure left local governments with wide responsibilities but no way to support them except through grants, which have increasingly relied on taxes on income and consumption rather than property. It has been the landowner who has benefited the most.
1. Jonathan Rose, The Intellectual Life of the British Working Classes (Yale University Press, 2001).
2. Although Denmark, perhaps alone among countries, does have a land value tax that is close to theoretically pure.
3. Charles Albro Barker, Henry George (New York, 1955), 393.
4. At the time, about four percent of GDP was spent through local authority budgets.
5. Avner Offer, Property and Politics 1870–1914 (Cambridge University Press, 2010), 291.
6. Avner Offer, Property and Politics 1870–1914 (Cambridge University Press, 2010), 317.
7. Not completely dissimilar to E Glen Weyl’s self-assessed Harberger Tax for all property as proposed in Radical Markets.
The Long Flight to Teach an Endangered Ibis Species to Migrate
The birds left Bavaria on the second Tuesday in August. They took off from an airfield, approximated a few sloppy laps, and then, such are miracles, began to follow a microlight aircraft, as though it were one of them. The contraption—as much pendulum as plane—reared and dipped as its pilot, a Tyrolian biologist in an olive-drab flight suit and amber shooting glasses, tugged on the steering levers. Behind him, in the rear seat, a young woman with a blond ponytail called to the birds, in German, through a bullhorn. As the microlight receded west into the haze, the birds chasing behind, an armada of cars and camper vans sped off in pursuit.
The birds, three dozen in all, were members of a species called the northern bald ibis: funny-looking, totemic, nearly extinct. The humans were a team of scientists and volunteers, Austrians and Germans, mostly, who had dedicated the next two months, or in some cases their lives, to the task of reintroducing these birds to the wild in Europe, four centuries after they disappeared from the continent. The woman with the bullhorn was Barbara Steininger, or Babsi, one of the birds’ two foster mothers, who had hand-raised them since their hatching, four months before. The pilot was the project leader, Johannes Fritz, a fifty-seven-year-old scientist and Pied Piper. Almost every August for the past twenty years, he has led a flock of juveniles on a fall migration, to teach them how, and where, to travel. This was this flock’s first day on the move. They had seven weeks and seventeen hundred miles to go until, assuming more miracles, they would reach wintering grounds on the southern coast of Spain.
I caught up with the migration in the Spanish countryside twenty-two days later. I’d been invited along by a group of documentary filmmakers, whose effort to shoot this undertaking was perhaps as cracked as the migration itself, as they tried to film creatures they were forbidden to approach.
The bird team and the filmmakers were camped, at some remove from each other, at an old aviation club, a farm with an airstrip of stubbled grass, near the Catalonian village of Ordis, ninety minutes northeast of Barcelona. I arrived at the camp more than an hour before dawn. Already, the bird team and the film people were bustling around with headlamps on, everyone whispering so as not to disturb the birds. No one but the foster mothers was allowed near the ibises, and the bird team’s vans and mess tent formed a buffer between the birds and the rest of the camp. The first sliver of light in the east revealed their movable aviary, a box of netted scaffolding the size of a barn, with a few dozen birds perched in silhouette, like hieroglyphs.
Fritz rolled back the door of a corrugated-metal hangar and began to wheel the microlight out toward the airstrip. He’s tall and very lean; he works out twice a day. He’d just been out in the darkness running laps around the airstrip in his underwear. Now he’d donned the flight suit. “I’ve had this for twenty years,” he said. “It has the aura. It makes me into the pilot.” Under it, he wore a catheter connected to a drainage bag, in case the day went long.
Out in the field, prepping the vessel for flight, he began to describe the microlight, in Werner Herzog-accented English. “It may not look safe, but it is safe,” he said. “We have had crashes, emergency landings, all of this, but never with injuries.” He’d customized it to include a cage around the propeller, to protect the birds from being cut to bits. With a full payload of eighteen gallons of fuel and two humans, it weighs eight hundred and fifty pounds and can fly for more than four hours, topping out (tailwind not included) at around thirty miles per hour. An extra-large parachute provides the preferred loft and drag. “I love to fly slow,” Fritz said.
He laid out the chute on the grass, more than six hundred square feet of yellow fabric. Tyler Schiffman, the film’s director, explained that the birds had been conditioned to follow the color yellow. The foster moms wore yellow, but no one else was allowed to. “That’s the one major rule,” he said, giving me a once-over. No yellow on me.
Behind him, some ten miles away, you could see the Mediterranean agleam in the rising sun; Schiffman and his director of photography, Campbell Brewer, filmed the preparations, which in such gilded backlight seemed Elysian, or Malickian. They’d installed a camera on the frame of the microlight, and could control it remotely, from a pursuit van.
Helena Wehner, the second foster mom, led the birds out of the aviary, singing to them as she went, like Maria von Trapp. I headed for a helicopter, a few hundred yards away; the filmmakers had rented it for a week. It came with a pilot, and they’d hired an aerial-camera operator, a voluble Englishman named Simon Werry. Werry had a rotating camera mounted under the cockpit, which he manipulated with a joystick. Brewer was in the front seat with another camera. I joined Werry in back. “I got a terrible bollocking from one of the bird mothers yesterday for walking too close to the birds,” he said. “In the heli, we have to stay three hundred metres away.”
A half hour passed, the two aircraft at rest in a field, the birds jabbing at the soil. “This is the light we’ve been waiting for,” Brewer said. “Bummer.” Apparently, the local air-traffic control was withholding permission to fly. Each stage had its myriad complications. In France, a crop duster—bright yellow, of course—had come a few dozen feet from the microlight. And the day before, while attempting to cross the Pyrenees, the helicopter pilot had requested permission over the radio from French air-traffic controllers, who replied, “Fuck off with your ducks.”
Clearance came at 7:49 a.m. Nine minutes later, both the microlight and the helicopter were aloft, as were the birds. Fritz’s voice came over the radio: “More distance for the birds.” The helicopter backed off.
Below us were hayfields and stone barns, copses and creeks. We passed over a complex of villas girding a golf course, each with a pool.
“Fucking Spain,” Werry muttered, then said, “Oh, there go the birds!”
“They’re looking like they really want to fly today,” Brewer said.
Fritz circled the airfield, in an attempt to get the birds to fall in behind. From this distance, it was hard to tell what was happening; everyone watched the monitors instead.
Brewer: “Wait, are the birds with him?”
“I can’t see them.”
“I think he’s lost them.”
“Go on, follow, you little shits,” Werry said. “The buggers have turned back. They’re not so committed today.”
The birds alighted in the field, and, after a few more attempts to get them going, Fritz landed, too. It was 8:55 a.m. In flying terms, the day was done.
Our devastation of nature is so deep and vast that to reverse its effects, on any front, often entails efforts that are so painstaking and quixotic as to border on the ridiculous. Condor or cod, grassland or glacier: we do what we can, but the holes in the dyke outnumber the available thumbs. Fritz’s microlight brings to mind Noah’s ark, except that it has room for only one niche victim of our age of extinction. The commitment, ingenuity, and sacrifice required to try to save just this one species demonstrate how dire the situation has become, and yet the undertaking also reflects a stubborn hope that’s every bit as human as the tendency to destroy. Fervor in the face of futility: what other choice do we have? That’s the idea behind Fritz’s northern-bald-ibis project, anyway. This is what it takes, so let’s get to it.
The species’s past was more distinguished than its present. Millennia ago, the birds dwelled in the cliffs east of the Nile, a place associated with sunrise, and thus with life and rebirth. This is how, some say, they became the source for the Egyptian hieroglyphic for Akh, which can represent the persistence of the spirit of the dead. In Turkey and the Middle East, the birds crop up in folklore as heralds of spring, guides for pilgrims, and the first creatures, along with two doves, to disembark from Noah’s ark. The species once thrived in Central Europe, too. Conrad Gessner, the sixteenth-century Swiss physician, naturalist, and linguist, and reputedly the first European to describe the guinea pig, the tulip, and the pencil, is also said to be the first to write extensively about the northern bald ibis. He described them as quite tasty, with tender flesh and soft bones. The fledglings were savored by the nobility and the clergy, who, according to Fritz’s research, passed decrees to keep commoners from killing them, to preserve them for themselves. But to no avail: pressured by hunters and, most likely, by the harsh climate of the
Little Ice Age
, the bald ibis didn’t stand a chance. By the early part of the seventeenth century, it had disappeared from Europe.
In its absence, it acquired a mythical status; depictions could be found in old drawings and altarpieces. But colonies persisted in Turkey, Syria, Algeria, and Morocco, and after European ornithologists made the connection, in the nineteenth century, they realized that these were the birds they’d seen only in reproductions—and that they had in fact once really existed.
In the nineteen-fifties, several juveniles were imported from Morocco to a zoo in Basel. In the following decades, their offspring proliferated there and in other European zoos. By then, they’d acquired a Latin name,
Geronticus eremita:
elderly hermit. It suited them.
Geronticus eremita
is chicken-legged, oily black, with a macaque’s red face, a long curving beak, and an Einsteinian shock of feathers that spike back from an otherwise bare brow.
But, until recently, the European transplants had little use for flight. They didn’t migrate; they idled away their winters at the zoo. Meanwhile, all but the last traces of wild northern bald ibises, in North Africa and the Middle East, were vanishing, too, as a result of hunting, habitat loss, pesticides, and electrocution. In 2002, an Italian ecologist discovered seven migratory northern bald ibises in Syria. The last one disappeared in 2014. “That’s the year they went extinct as a migratory species,” Fritz said.
In the early nineties, Fritz, who’d grown up on a farm in the mountains near Innsbruck, was working with ravens at a research station in the Alm River valley, under the auspices of the Konrad Lorenz Institute for Evolution and Cognition Research. He hand-raised chicks, performing some of the imprinting techniques that Lorenz, one of the founders of the field of animal behavior, had pioneered. For his Ph.D., he moved on to greylag geese, the species with which Lorenz had done his most famous work. “I trained goslings to open tiny boxes,” Fritz said. “I wanted to learn if they can learn rules.” He was especially interested in the way the skills he’d taught them spread through the population. The idea was that a human could imprint behaviors on animals which they would then pass down to subsequent generations. (His colleagues, he said, are now teaching corvids to use computers, with a touch screen: tiny boxes, of a more insidious kind.)
The research station had acquired a small colony of northern bald ibises. One morning in early August, the researchers went to the roost and found that the juveniles, all of them, had disappeared. An eagle? An owl? But then civilians started calling in sightings. Before long, the researchers were fielding reports from points as far-flung as Poland and the Netherlands.
Perhaps because the birds had flown north, no one thought at first that this juvenile wanderlust might be an expression of a latent migratory instinct. “They went the wrong direction,” Fritz said. But the following year, with a new brood of bald-ibis fledglings, it happened again: one August morning, an empty roost. This time, the reports of itinerant ibises came from as far away as Hungary and St. Petersburg. “We started to understand that these birds were motivated by migratory restlessness,” Fritz said.
This captured Fritz’s imagination. Around 2001, he was commencing his postdoctoral work, on the traditional path of becoming a serious scientist. Yet at the same time he began to wonder if he could train these wandering birds. He started taking flight lessons outside Vienna.
The idea came to him from “
Fly Away Home,
” the 1996 feature film that people of a certain age and disposition might consider a touchstone. A thirteen-year-old girl, played by Anna Paquin, loses her mother in a car accident in New Zealand and goes to live in Ontario with her father, a sculptor and inventor, played by Jeff Daniels. She finds sixteen abandoned goose eggs and secretly raises the goslings. Complications ensue, but the upshot is that father and daughter decide to help the geese migrate to a sanctuary in North Carolina. The father builds a microlight, but, since the birds have been imprinted with the idea that Paquin’s character is their mother, they won’t follow anyone but her, so he teaches her to fly.
The film was based on the story of a Canadian artist and inventor named Bill Lishman, who lived on a farm near Lake Scugog, in Ontario. He built a home there on top of a hill but into the ground, a series of interconnecting domed circular structures, to reduce the need for heat and air-conditioning. Lishman was, among many other things, a pioneer of microlight flight, and he had a lifelong dream of flying with birds. In the late eighties, when he was in his fifties, he and his kids, under the tutelage of a Lorenz acolyte, raised some goslings by hand and trained them to follow a motorcycle and then a microlight. In 1993, he led his first goose migration, from the farm to Virginia. He made a documentary (“C’mon Geese!”), published a memoir (“
Father Goose
”), and even appeared as Daniels’s flight double in “Fly Away Home.” After a “20/20” segment about him, on ABC, the co-host Hugh Downs said, “I think it’s the most beautiful story that we’ve had on ‘20/20’ in its fifteen years.”
“What a joy,” Barbara Walters said. “Ah, if we could all fly.”
Lishman also co-founded Operation Migration, an effort to establish migratory habits and self-sustaining populations in species such as whooping cranes. It didn’t really take.
In 2001, at the age of thirty-four, Fritz started hand-raising bald ibises for the purpose of training them to fly with him. “From an objective point of view, it was not a logical decision, or a very good one,” he told me. “I thought I’d do it for one or two years maybe and then get back to serious work. Twenty-five years later, I’m still the one who tries to fly with the crazy birds.”
There were no historic records of where the European northern bald ibises might have migrated to, so Fritz settled on Tuscany; it was the closest suitably southern climate to Austria, and there were popular wintering sites there for migratory birds.
“At the time, we didn’t account for the Alps,” Fritz said.
Between 2004 and 2022, Fritz led juveniles south from Bavaria into Italy fifteen times, plotting a passage clear of the higher peaks and the more treacherous winds. Eventually, many of the birds, as hoped, then began making the annual return trip back north. Last year, fourth-generation descendants of birds he taught to migrate made the journey to Tuscany on their own. (In the first fifteen migrations, Fritz took two hundred and eighty birds to Italy, the first group of which began mating and migrating in 2011; those have so far produced three hundred and eighty-three fledglings in the wild.) It wasn’t just for sentimental purposes. Migratory ibises have a survival rate of between two and three fledglings per female, which is almost three times that of some nonmigratory populations.
And yet, each year, more and more birds were leaving Germany or Austria and failing to make it over the Alps. Warmer autumns, owing to climate change, were causing them to stick around longer; then they hit the mountains and got turned around or killed by wintry weather. Last year, forty-two birds from the Tuscany years failed to make it over the Alps, the most ever.
In 2022, while Fritz was leading twenty-eight birds through the mountains of South Tyrol, one of them disappeared. Its name was Ingrid. (Ingrid was a male. The birds get names before anyone knows their sex.) Outfitted with a G.P.S. tracker, Ingrid embarked on an alternative route, across the northern part of Switzerland and into France, south along the Rhône to the Mediterranean, then over the Pyrenees and all the way down to Málaga, on the southern coast of Spain. It was an astounding journey, a long and perilous solo improvisation. There was a population of nonmigratory northern bald ibises nearby, in Cádiz, monitored by an outfit called Proyecto Eremita, which took Ingrid in.
Maybe this ibis knew something. Certainly, it would be better for Fritz, as a pilot, to avoid crossing the Alps. Plus, as he later learned, paleontologists had found evidence of the presence of northern bald ibises in Gibraltar, twenty-five thousand years ago, and near Valencia, 2.5 million years ago. From this, Fritz concluded that the northern bald ibis may well have been migrating along this corridor for millions of years. “And so Ingrid was the first to revive this tradition after four hundred years,” Fritz said. “This is incredible, no?”
In 2023, Fritz, for the first time, followed the Ingrid route, and led the ibises to southern Spain, instead of Tuscany. It was a longer, harder, and more expensive trip, but, he reckoned, better for the species’s long-term survival.
That summer, Schiffman, who had recently made a documentary for National Geographic about relocating giraffes, read a story in the Times about Fritz, the Waldrapp project (Waldrapp is the German term for the species), and the first migration to Spain. Schiffman’s father was an I.T./A.V. technician who’d done some work for Matt Damon. Damon and Ben Affleck have a production company called Artists Equity, which agreed to fund a trip to Spain. Schiffman caught up with the migration outside Roquetes. His mind was blown: “Are you kidding me? I can’t believe this works!”
Eventually, Fritz agreed to collaborate with Schiffman, Artists Equity, and a co-producer, Insignia Films. (Later, Sandbox Films and Fifth Season signed on, too.) Now Schiffman had to figure out how to shoot a subject he couldn’t get close to. For the chicks’ nursery, in a shipping container, Schiffman and his team devised a wall with one-way soundproofed glass—each nest in its own stage-lit box—and found an Austrian glassmaker to fabricate it. The mirrored side faced in, but the birds apparently do not recognize themselves. There were nine nests, each holding as many as five birds. On the outside, the crew laid tracks along which the camera would slide from one nest to the next. Flight was another matter.
The 2024 cycle commenced in the mountains of Carinthia, in southern Austria, where Prince Emanuel of Liechtenstein owns and maintains an open-air zoo and game park on the grounds of an old castle ruin. A colony of about thirty northern bald ibises nests in an aviary, accessed through an open window. They are considered “wild”—free-flying, except in winter—but mostly sedentary, meaning they don’t migrate.
In early April, these birds hatched chicks. Within a week, the zookeeper, Lynne Hafner, had gone from nest to nest to select the fittest ones. They were removed from the parents’ nests—a heartrending process, but it’s for the species’s own good, the humans tell themselves—and transferred to the shipping container, where they came under the care of the foster moms. The foster moms basically live inside the container. They go from nest to nest, sitting with the birds, singing and talking to them, even spitting on their food, to give the birds a digestive enzyme that’s in saliva. (For this, the foster moms must forgo caffeine, tobacco, and alcohol.)
Five weeks later, with the birds on the verge of fledging, the ibis team put them in crates and trucked them to a new site north of the Austrian border, an organic farm in the Bavarian countryside. The ibises’ first flight outside the aviary was chaos. They bumbled up in three dozen directions, as kestrels swooped in to attack. Some ibises crash-landed, while a couple of others flew off and went missing for hours. Already, the foster moms had introduced the birds to the sound of the microlight—from a Bluetooth speaker—and then to the vehicle itself. Later, they were shown the parachute. Finally, the moms turned on the engine and drove the microlight around a field, as the birds flew clumsily behind.
Of course, the day before my arrival had been the best one, for the birds and the cameras. The camp was still buzzing about it. Schiffman, over coffee, set the scene.
The first week, the birds knocked off three hefty flight segments, each several hours long, to reach the border between Germany and France. But then the weather turned sour, and they were grounded for three days. On the fourth day, as they entered France, a quarter of the flock turned around and returned to the takeoff spot. The foster mom in the van following on the ground—the two women take turns driving and flying—had to go back to the start, and then spend hours on the airfield, in the blistering sun, herding the birds into crates, before driving hours to rejoin the others. The next two weeks, as they made their way south through France, were a grind. Since the ibises couldn’t be exposed to another human voice, Schiffman, once, riding along with them, had to remain silent for seven hours, raw-dogging it in the van. It didn’t exactly make for great cinema, either.
By the time they reached Narbonne, on the French side of the Pyrenees, everyone was feeling beaten down. Morale in (and between) the film and bird crews was low, to say nothing of the mood of the birds themselves, which no one could surmise. The birds could not be crated across the Pyrenees. By Fritz’s logic, it was the only section they absolutely needed to fly, to train their inner compass for the return journey. “They need to know how to get across the mountains,” he said.
After three days and one failed attempt, despite a threat of storms, and even though the helicopter was running late, they decided to go for it again. They set off at 7:30 a.m. The birds fell in behind the microlight, and they flew toward the Mediterranean, to avoid towns and air-traffic-control zones.
“Please tell me there’s a plan to cross the Pyrenees,” Schiffman radioed to Fritz.
“There’s a five-per-cent chance,” Fritz replied. “I’ll make the decision in the air.”
It started to rain. Droplets splattered the lens of the chopper cam.
“Johannes, are you going to cross the Pyrenees?”
“I’m thinking about it.”
Dark clouds pressed in over the peaks. The ground team was making its own way toward a tunnel that ran under the pass Fritz intended to fly over. “I’m taking the shot,” Fritz said over the radio. “We’re going through the Pyrenees.”
As Schiffman recounted all this to me, the following day, his eyes began to well up. He’s a bright-eyed, youthful Californian, poised and irrepressible; you could imagine him as the network’s preferred front-runner on a wilderness-survival reality show. “So then the rain gets worse,” he went on. “The lens is getting doused. We’re so close to the pass! I’m, like, ‘You can’t do this to me, lens!’ We break off the route, and the pilot goes full speed to air-dry the lens.”
As the helicopter rejoined the route, Fritz flew by with the birds trailing in perfect formation, backlit, and then crested the pass. “I got my money shot,” Schiffman said.
On the Spanish side, the flock cycled on the thermals and then followed Fritz to the landing strip in Ordis, where we were now.
“In the twenty years I’ve done this, this was the most unexpected flight I’ve ever had,” Fritz said. The foster moms chattered and laughed with the birds, plying them with grubs. The aviary was erected, and the birds, exhausted and—who knows—perhaps even gratified, didn’t dither for hours, as they typically did. The crew laid out a spread of wine and cheese. The airfield had a saltwater swimming pool, and Fritz, stripping down to his skivvies, jumped in, and then, towelling off, said, “Are we flying tomorrow?”
We were, and then we weren’t. It turns out that migrating is a lot like filmmaking: hurry up and wait.
“Every theory about why they fly or not gets discredited,” Schiffman told me. “The latest was that they’d gotten too acclimated to this or that stop. Well, that theory’s shot.”
Other theories had to do with the time of day, or fluctuations in the composition of their feed, or various other distractions. Were “the rebel birds,” as Fritz called the recalcitrant ones, always the same, in this group, or did recalcitrance move through the flock like a virus? Fritz and his team, like coaches deep in a slump, tried to discern patterns of noncompliance. They wondered if one foster mom was having more success than the other, a touchy line of inquiry.
“I could’ve been a professor, but now instead I fly with some crazy birds,” Fritz said. He was sitting in a folding chair in the shade, a hundred yards from the aviary. “But I’m convinced that this method can be of value and will be of value for other conservation needs.”
For the program’s first dozen years, it relied on money from institutions and private donors. For the past ten, it has run on grants from the European Union. The latest one provides about a million euros a year, covering some sixty per cent of the costs. The Spanish migration alone is roughly three hundred and fifty thousand euros. A lot of effort, and money, goes toward preventing poaching, which is responsible for a third of bald-ibis deaths in Italy.
Barbara Steininger and Helena Wehner, the birds’ foster moms. Photograph by Mathias Depardon for The New Yorker.
The idea of imprinting, and of intervening in nature generally, has become more fashionable in conservation circles. “Ten years ago, this was seen as too interventionist, but now it’s ‘whatever it takes,’ ” Schiffman said. To Fritz, scientists who insisted on habitat preservation alone were stuck in the past. “It’s too late to just preserve land as is,” he said. “Now we need to save the animals in a way that enables them to live with us.”
Fritz toggled between optimism and pessimism. “Being pessimistic is often just an excuse for doing nothing,” he said. “But the other side of the coin is that after twenty years of work for this one species it still is in danger—increasingly in danger—because of climate change. If you only communicate that there is reason for hope and everyone is happy, I think this is a naïve vision. It’s not the full truth.”
Overhead, there came a sudden eruption of birdsong. “Ooh, bee-eaters!” Fritz exclaimed. “Very colorful! They are migrating, too. They join us every year.” We sat and listened. Later, a flock of storks came wheeling in high on the thermals, and the camp gathered to watch.
“It’s such a great feeling to be on the way of this migration and know that millions and millions of birds migrate together with you, ja, so we are exactly where we should be, ja?” he said. “But maybe it would be better to be with birds that have the motivation to migrate!”
The film crew and a lot of the ibis team slept in vans or cars, but a dozen or so volunteers had pitched their tents in the yard of a five-hundred-year-old stone chapel on the grounds of the old farm—an aviary of their own—and I found a gravel patch there for my tent, without taking into account the concerto of zippers, snores, and farts that would serenade me through the night. On the other side of the chapel was the swimming pool, surrounded by fig and plum trees and a wire fence vined with grapes, and a kind of galilee that looked out over the foothills of the Pyrenees. Glamigrating, Catalonian style.
The film crew had a chef from Barcelona, Francesca Baixas, who sourced her ingredients fresh most days at reasonable cost in local markets. (Europe!) They took their meals at a picnic table by the clubhouse, and Baixas never repeated a meal. (That night was butifarra sausage and garbanzo beans, with a cabbage-and-potato pie.) Fifty yards away, the ibis team, accustomed or philosophically inclined to plainer living, kept to a vegan and gluten-free regimen. Their devotion to the cause, and their flinty brand of Central European bohemianism, could come off as cultish, in a benign way. Occasionally, the smell of grilled meat wafted over from the film table to the ibis table, and some people from the latter would sneak over. One evening, Fritz came by. “Good evening,” he said. “This table gets bigger and bigger. We worry you start to dominate us.”
As for the birds, a big portion of their migratory diet is meat, prepared by the Vienna Zoo, which helps run the project. “They put dead rats in a blender, tails up,” a team member told me. “Or else chicken, cow’s hearts, chopped and mixed with white cheese.” Far be it from a visiting humanist, in the company of trained biologists, to attempt to understand the brain of a bird, but one might speculate that such a diet might now and then discourage the motivation to fly.
The birds also ate mealworms. The foster moms scattered the worms out on the airfield, whenever the ibises completed a flight, to dissuade them from flying away again. During the day, Christine Schachenmeier, who owns the organic farm in Germany where the fledglings learned to fly, and Gunnar Hartmann brought out crates of mealworms, each containing thousands of grubs on a bed of sawdust and egg-carton cardboard. They culled them by hand, removing the dead ones and the cocoons.
Hartmann was the route coördinator, calculating distances, finding and arranging landings and campsites, and managing the patchwork of permissions necessary to fly through or around the gantlets of restricted airspace along the way. “Last year was much wilder,” he said. “We landed in fields, had to find natural springs to get water. We had no sanitary services.”
Schachenmeier was the camp elder, granite-faced with a seraphic air. For almost forty years, she had been a doctor at a hospital in Rosenheim, treating cancer patients and working in emergency care, but during the Covid pandemic she retired and devoted herself full time, with her husband, Frank, to the organic farm. In addition to cattle and chickens, she raised bats, and had learned to hand-nurse orphaned bats with artificial milk. “This migration of Johannes, it is a very courageous enterprise,” she said. “The birds are very vulnerable even if the world stays as it is, but I don’t think the world is getting any better.” She went on, “It costs a lot of sacrifice. Even for the birds. They are more secure if they are in the zoo.”
By 5 a.m. the next day, the camp was astir. The foster moms, Babsi and Helena, tumbled out of their van. Helena combed her hair. Babsi brushed her teeth. Babsi and Helena wore shorts and yellow hoodies; both were very tan. Babsi took down the electric fence around the aviary, set at night to protect against predators. (In 2017, a fox got in and took two birds.) Then she began hauling bags of feed to the entrance. Helena joined her, limping in a knee brace: she had recently torn her meniscus. Some ibises flapped down to greet her.
Babsi is a carpenter’s daughter from Lower Austria, with a sharp sense of humor and, on her left biceps, a tattoo of an ibis. She had studied at the Konrad Lorenz Research Center, hand-raising greylag geese. “I came across the bald ibis when they visited me there in the woods with my goose family,” she said. It was her second year as a foster mom, and Helena’s fifth.
Babsi listed off the names of the ibises, which mostly fell into groupings corresponding to their adoptive nests. There were Voldemort, Fluffy, Aragog, and Grindelwald (“our Harry Potter nest”); Queenie, Genti, Diva (after Helena’s horse stable); Catan, Canasta, Uno, Dixit (games); Levante, Poniente, Fernanda, Marisma (wind systems, Spanish friends); Tarifa, Conil, Achilles, Meniscus, Optimas (“That is my favorite wine”). Schnapsi was the flock’s schlimazel. “In the beginning, you could always tell Schnapsi from the others, a white bird covered in shit,” Babsi said.
Babsi and Helena spend six months straight with the brood. “My grandmother doesn’t like it,” Babsi said. “She says, ‘You are never here for apricot season. You’re always with those ugly birds.’ ”
“When they’re flying, they are beautiful,” she went on. “I get why people say they’re ugly, in the cage. It’s O.K. when they call them ugly. But don’t call them stupid.” She went on, “I think they sense when we are under pressure. We have a very strong connection with them. These birds are trained but not domesticated. They follow a plane. I think it’s a miracle they do that. But everyone watching them, rooting for them but having no control—I finally understand why people like sports.”
“It is kind of magical, the love and trust from another species,” Helena said. “Other birds seem to trust us more, too. They sense the trust of the ibis. You’re kind of being invited to the bird world.”
The two of them spent most of each day inside the aviary, occasionally taking solo shifts while the other prepared food. Helena, who was pursuing a Ph.D. in applied earth observation, occasionally read to them from “Harry Potter.” They sat on mats, surrounded by their birds, which spattered them with jets of excrement. No one else was allowed in the aviary, or even near it—“The King of England could come and he’s not going in there,” Helena said—except under cover of a blind they set up outside the structure, some twenty yards away, which you could access only with permission and stealth. From there, I got to observe the moms hand-feeding the birds from Tupperware containers, a few at a time. The moms chattered and laughed and sang, while the birds made a croaking sound.
Fritz emerged from his van, fresh off a 6 a.m. radio interview. “I can’t escape,” he said. He pointed west with a jug of water, toward a dale where a skein of fog skulked in the early light; as though to ward it off, he said nothing.
On the airfield, he began prepping his microlight. Schiffman and Brewer futzed with the cameras. The foster moms had opened the aviary and were letting out the birds.
I retreated with one of the producers to a patch of scrub grass, out of sight of the birds and the cameras. We lay down in the stubble, at the edge of a sunflower plot, the fallen heads strewn in the arid soil like abandoned hornet’s nests. A light breeze kicked up. The sunflower stalks rattled. As the sun warmed the field, the flies got to work.
There were cameras in place. The birds waddled out into the field, as Fritz’s engine sputtered up. The atmosphere of anticipation and reverence, this collective yearning, a combination of hope and deference, seemed liturgical in a way that connected all the human figures scattered across the acres.
The birds began to fly, as Helena called out to them. “Here she comes,” the producer said. Helena began running across the field, toward the microlight. She took big but uneven strides, on account of the knee. Fritz, in the microlight, was waving his arms like a bird. Helena reached the microlight, adjusted her ponytail, and then climbed into the back seat, as the birds flew in ragged circles nearby. Fritz revved the engine, a desperate, needling whine, and the vessel lurched down the airstrip, the chute billowing awake behind him. And then, just like that, the craft was airborne, and Fritz throttled down, and for a moment it hung there, almost ludicrously slow, appearing to swing like a plumb beneath the chute, before turning toward the east, where the rising sun flashed off the sea. You could hear Helena’s keening singsong through the megaphone, a kind of Teutonic muezzin. “Komme, komme, Waldi!” Come, come, Baldies. Two tones, up-down up-down, like a crowd chanting, “Let’s go, Rang-ers!”
The birds wheeled over the aviary while Fritz circled. Komme, komme, Waldi: the song receded as the microlight got farther away and then swelled as it neared. This rise and fall, its approaching and distancing, was at once a cheer, a prayer, and a lament, and it induced in me—and, I somehow believed, in everyone else, too—a kind of heartache, like the longing for loved ones or the pain of their aging away. The microlight’s distant motor echoing off the hangar’s corrugated shell sounded like a deranged string section. An old sailboat was propped against the tin. Swallows darted around, feeding on the flies. A commercial jet passed soundlessly overhead.
The birds wobbled toward the airfield and then landed where the microlight had been. You might not be allowed to call them stupid, but they were certainly stubborn. Fritz swung back low toward the birds and rousted them by nearly flying through them.
Lying in the desiccated grass, amid the dead sunflower stalks and the barrage of flies, the heat rising—the scorching of the sun, the wheedling of the engine—I had an uncommonly intense sense of our implacable need to bend nature to our will, for both good and ill. The air stank of fertilizer, of the excrement we spread to grow food for ourselves. The miracle of flight, the cycle of poop and protein, our elaborate efforts to undo harm: what creatures we are. Fritz made laps, orbits passing like days.
The birds fell in again behind the microlight. They all started south, receding. Komme, komme, Waldi—it was happening. The heart leaped. But within moments they were back, without their escort. The microlight, at the edge of earshot, tracked toward the sea. Maybe Fritz was giving up, and fleeing with Helena for Ibiza. Fuck off with your ducks. But after a few moments he turned back and landed. Helena got out and began herding the birds back into the aviary. They ducked right in, eager for the comfort of the cage, like dogs in a thunderstorm.
The theorizing resumed. Had they started too late? Was it Helena? To go by the recriminations coming across the radios, the foster mothers were miffed about several camera placements and an incursion by a photographer from a local birding club. The film crew convened at the edge of the camp, aware that perhaps they were under some fire. Schiffman, grinning in the way he did when he was especially anxious, motioned me over and pointed toward the camp, at a fence line. “What the fuck?” he said. “There’s one rule!” There was a tent hanging in the sun to dry. My tent. One side of it was bright yellow.
Out by the hangar, still in his suit, Fritz did his debrief. “This was really strange,” he said. “Don’t ask me about the reason why they behave like that. We need a seminar.”
The team convened in the mess tent. The foster moms, the Waldi oracles, were rattled, and, in spite of their better judgment, inclined to find blame.
One of the producers apologized for the positioning of a camera: “It was a total mistake. I’m so sorry.”
“It’s not the reason, but it’s not not a factor,” Helena said. “They really get easily distracted.” No one mentioned my yellow tent.
“No, no, don’t worry,” Fritz told the film crew. He refused to blame anyone, except maybe this batch of birds. He told me, “It’s not monocausal. There’s been a complex change in the psychology of the birds. The birds we had in Narbonne seemed to be other birds. Now they are switched again. It’s a very fascinating example of a group dynamic in a social context.” He reckoned that in the birds, as in humans, the causes could be “endocrinological.”
The migration was still six days ahead of the previous year’s pace, and so Fritz decreed that they would pass up the good weather to give the birds time to reset their psychology. “We must allow them to change. We must pay attention to nutritional aspects. In the past, we have increased the number of insects. More crickets. They need calcium and vitamins.”
Helena said, “We have to order more vitamins.”
That night, with no flight the next day and therefore no predawn call time, some members of the film crew had a little party, with Aperol spritzes procured at a supermarket in the nearby town of Figueres.
Schiffman, in a sparkly mood, insisted on putting on “Fly Away Home”—“This song! I love this song! Here it is. Yes! We’re all gonna be O.K.!”—and, after a while, Helena, finished in the aviary and on her way to the shower, dropped by and started doing her ibis sound, a wet intake of the breath, like a high-register slurp. They cued up a shot of Helena in a gale, huddled in an open field with dozens of birds pressed up against her, as though she were a boulder or a tree.
Weeks later, after a slog through Spain, days of aborted attempts and reluctant cratings brightened by moments of providence and triumph—all told, fifty-one days of migration, with nineteen stages covering seventeen hundred miles, Fritz’s longest ever, with the greatest number of birds—the ibises arrived in Andalusia. Some of them assented to a ceremonial final flight, a last twirl for the cameras and dignitaries, like the Parisian leg of the Tour de France. And then they went to meet their predecessors and their nonmigratory cousins in Cádiz. They’d made it.
Ingrid, their pioneer, their Brigham Young, was not there to greet them. In the spring, he had set off north, motivated apparently by that old migratory restlessness. A couple of Fritz’s other transplants had persuaded some of the nonmigratory ibises of Cádiz to fly north with them, which is what he’d hoped, but Ingrid flew solo. A tracker showed Ingrid travelling more than two hundred miles a day. And then, on the fourth day, he stopped. His carcass was found in the Pyrenees. The forensics indicated death by predation. Avian, not human. An eagle, most likely. A natural end. ♦
Being able to apply statistics is like having a secret superpower.
Where most people see averages, you see confidence intervals.
When someone says “7 is greater than 5”, you declare that they’re really the same.
In a cacophony of noise, you hear a cry for help.
Unfortunately, not enough programmers have this superpower. That’s a shame, because the application of statistics can almost always enhance the display and interpretation of data.
As my modest contribution to developer-kind, I’ve collected together the statistical formulas that I find to be most useful; this page presents them all in one place, a sort of statistical cheat-sheet for the practicing programmer.
Most of these formulas can be found in Wikipedia, but others are buried in journal articles or in professors’ web pages. They are all classical (not Bayesian), and to motivate them I have added concise commentary. I’ve also added links and references, so that even if you’re unfamiliar with the underlying concepts, you can go out and learn more. Wearing a red cape is optional.
Send suggestions and corrections to emmiller@gmail.com
One of the first programming lessons in any language is to compute an average. But rarely does anyone stop to ask: what does the average actually tell us about the underlying data?
1.1 Corrected Standard Deviation
The standard deviation is a single number that reflects how spread out the data actually is. It should be reported alongside the average (unless the user will be confused).
\[
s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N(x_i - \bar{x})^2}
\]
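As a minimal sketch of this formula in Python (assuming NumPy is available), note that the N-1 denominator corresponds to ddof=1:

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Corrected (sample) standard deviation: divide by N-1 rather than N
s_manual = np.sqrt(np.sum((x - x.mean()) ** 2) / (len(x) - 1))
s_numpy = x.std(ddof=1)  # ddof=1 applies the same N-1 correction

print(s_manual, s_numpy)  # both print the same value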
From a statistical point of view, the "average" is really just an estimate of an underlying population mean. That estimate has uncertainty that is summarized by the standard error:
\[
SE = \frac{s}{\sqrt{N}}
\]
A confidence interval reflects the set of statistical hypotheses that won’t be rejected at a given significance level. So the confidence interval around the mean reflects all possible values of the mean that can’t be rejected by the data. It is a multiple of the standard error added to and subtracted from the mean.
\[
CI = \bar{x} \pm t_{\alpha/2} SE
\]
Where:
\(\alpha\) is the significance level, typically 5% (one minus the confidence level)
\(t_{\alpha/2}\) is the \(1-\alpha/2\) quantile of a t-distribution with \(N-1\) degrees of freedom
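Putting the pieces together, here is a short sketch (assuming SciPy is available for the t quantile) that computes the mean, the corrected standard deviation, the standard error, and the resulting confidence interval:

import numpy as np
from scipy import stats

x = np.array([4.1, 5.2, 6.3, 5.8, 4.9, 5.5, 6.1, 5.0])
alpha = 0.05  # 95% confidence

xbar = x.mean()
s = x.std(ddof=1)            # corrected standard deviation
se = s / np.sqrt(len(x))     # standard error of the mean
t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)

ci = (xbar - t_crit * se, xbar + t_crit * se)
print(xbar, ci)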
It’s common to report the relative proportions of binary outcomes or categorical data, but in general these are meaningless without confidence intervals and tests of independence.
2.1 Confidence Interval of a Bernoulli Parameter
A Bernoulli parameter is the proportion underlying a binary-outcome event (for example, the percent of the time a coin comes up heads). The confidence interval is given by:
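The formula itself is not reproduced above, so as a hedged sketch here is one common choice, the normal-approximation (Wald) interval; the function name wald_interval is just for illustration, and other intervals (such as the Wilson score interval) behave better for small samples:

import math
from scipy import stats

def wald_interval(successes, n, alpha=0.05):
    # Normal-approximation (Wald) interval for a Bernoulli proportion.
    p_hat = successes / n
    z = stats.norm.ppf(1 - alpha / 2)
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

print(wald_interval(27, 100))  # e.g. 27 heads out of 100 flips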
If you have more than two categories, a multinomial confidence interval supplies upper and lower confidence limits on all of the category proportions at once. The formula is nearly identical to the preceding one.
Pearson’s chi-squared test can detect whether the distribution of row counts seem to differ across columns (or vice versa). It is useful when comparing two or more sets of category proportions.
The test statistic, called \(X^2\), is computed as:
\[
X^2 = \sum_{i=1}^{m}\sum_{j=1}^{n} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}, \qquad E_{ij} = \frac{R_i C_j}{N}
\]
Where:
\(O_{ij}\) is the observed count in row \(i\) and column \(j\), and \(E_{ij}\) is the count expected if rows and columns were independent
\(R_i\) and \(C_j\) are the row and column totals
\(N\) is the sum of all the cells, i.e., the total number of observations
A statistical dependence exists if \(X^2\) is greater than the (\(1-\alpha\)) quantile of a \(\chi^2\) distribution with \((m-1)\times(n-1)\) degrees of freedom.
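As a sketch (assuming NumPy and SciPy), here is the test applied to a small contingency table, comparing the statistic against the \(1-\alpha\) quantile described above; scipy.stats.chi2_contingency(observed, correction=False) should give the same statistic:

import numpy as np
from scipy import stats

# Rows are outcome categories, columns are the groups being compared
observed = np.array([[30, 45],
                     [70, 55]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
N = observed.sum()

expected = row_totals * col_totals / N              # counts expected under independence
X2 = ((observed - expected) ** 2 / expected).sum()

alpha = 0.05
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
critical = stats.chi2.ppf(1 - alpha, dof)

print(X2, critical, X2 > critical)                  # True suggests a statistical dependence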
If the incoming events are independent, their counts are well-described by a Poisson distribution. A Poisson distribution takes a parameter \(\lambda\), which is the distribution’s mean — that is, the average arrival rate of events per unit time.
3.1. Standard Deviation of a Poisson Distribution
The standard deviation of Poisson data usually doesn’t need to be explicitly calculated. Instead it can be inferred from the Poisson parameter:
\[
\sigma = \sqrt{\lambda}
\]
3.2. Confidence Interval Around the Poisson Parameter
The confidence interval around the Poisson parameter represents the set of arrival rates that can’t be rejected by the data. It can be inferred from a single data point of \(c\) events observed over \(t\) time periods with the following formula:
\[
CI = \left(\frac{\gamma^{-1}(\alpha/2, c)}{t}, \frac{\gamma^{-1}(1-\alpha/2, c+1)}{t}\right)
\]
Where:
\(\gamma^{-1}(p, c)\) is the inverse of the lower incomplete gamma function
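A sketch of this interval in Python, assuming SciPy and treating \(\gamma^{-1}\) as the inverse of the regularized lower incomplete gamma function (scipy.special.gammaincinv); the function name poisson_rate_ci is just for illustration:

from scipy.special import gammaincinv

def poisson_rate_ci(c, t, alpha=0.05):
    # Interval for the arrival rate lambda, given c events over t time periods.
    # The lower bound is conventionally 0 when no events have been observed.
    lower = gammaincinv(c, alpha / 2) / t if c > 0 else 0.0
    upper = gammaincinv(c + 1, 1 - alpha / 2) / t
    return (lower, upper)

print(poisson_rate_ci(c=12, t=4.0))  # e.g. 12 events observed over 4 time periods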
From a statistical point of view, 5 events is indistinguishable from 7 events. Before reporting in bright red text that one count is greater than another, it’s best to perform a test of the two Poisson means.
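The test itself isn’t spelled out above; one common approach, sketched here under the assumption of equal observation times, is the conditional binomial test: under the null hypothesis of equal rates, the first count, given the total, is Binomial with n = c1 + c2 and p = 1/2. The helper name compare_poisson_counts is just for illustration:

from scipy.stats import binomtest

def compare_poisson_counts(c1, c2, alpha=0.05):
    # Conditional test of two Poisson means (equal observation windows assumed):
    # under H0, c1 given (c1 + c2) ~ Binomial(n = c1 + c2, p = 0.5).
    result = binomtest(c1, n=c1 + c2, p=0.5)
    return result.pvalue, result.pvalue < alpha

p_value, significant = compare_poisson_counts(7, 5)
print(p_value, significant)  # 7 events vs. 5 events: not a significant difference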
If you want to test whether groups of observations come from the same (unknown) distribution, or if a single group of observations comes from a known distribution, you’ll need a Kolmogorov-Smirnov test. A K-S test will test the entire distribution for equality, not just the distribution mean.
4.1. Comparing An Empirical Distribution to a Known Distribution
The simplest version is a one-sample K-S test, which compares a sample of \(n\) points having an observed cumulative distribution function \(F\) to a known distribution function having a c.d.f. of \(G\). The test statistic is:
\[
D_n = \sup_x|F(x) - G(x)|
\]
In plain English, \(D_n\) is the absolute value of the largest difference in the two c.d.f.s for any value of \(x\).
The critical value of \(D_n\) at significance level \(\alpha\) is given by \(K_\alpha/\sqrt{n}\), where \(K_\alpha\) is the value of \(x\) that solves:
\[
\alpha = 2\sum_{i=1}^{\infty} (-1)^{i-1} e^{-2 i^2 x^2}
\]
The critical value must be solved for iteratively, e.g. by Newton’s method. If only the p-value is needed, it can be computed directly by solving the above for \(\alpha\).
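For example (assuming SciPy), here is the one-sample statistic computed by hand against a standard normal, alongside scipy.stats.kstest, which returns the same \(D_n\) together with its p-value:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)

x = np.sort(sample)
n = len(x)
cdf_vals = stats.norm.cdf(x)

# The empirical c.d.f. jumps at each point, so check the gap on both sides of each jump
d_plus = np.max(np.arange(1, n + 1) / n - cdf_vals)
d_minus = np.max(cdf_vals - np.arange(0, n) / n)
d_manual = max(d_plus, d_minus)

res = stats.kstest(sample, "norm")
print(d_manual, res.statistic, res.pvalue)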
The two-sample version is similar, except the test statistic is given by:
\[
D_{n_1,n_2} = \sup_x|F_1(x) - F_2(x)|
\]
Where \(F_1\) and \(F_2\) are the empirical c.d.f.s of the two samples, having \(n_1\) and \(n_2\) observations, respectively. The critical value of the test statistic is \(K_\alpha/\sqrt{n_1 n_2/(n_1 + n_2)}\) with the same value of \(K_\alpha\) above.
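And the two-sample case (assuming SciPy); scipy.stats.ks_2samp computes \(D_{n_1,n_2}\) over the two empirical c.d.f.s and handles the scaling by the effective sample size \(n_1 n_2/(n_1 + n_2)\) internally when producing a p-value:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_1 = rng.normal(loc=0.0, scale=1.0, size=150)   # n1 observations
sample_2 = rng.normal(loc=0.3, scale=1.0, size=120)   # n2 observations, shifted mean

res = stats.ks_2samp(sample_1, sample_2)
print(res.statistic, res.pvalue)  # a small p-value suggests the distributions differ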
\(J_{h/2}\) is a Bessel function of the first kind with order \(h/2\)
\(\gamma_{(h-2)/2,n}\) is the \(n\)th zero of \(J_{(h-2)/2}\)
To compute the critical value, this equation must also be solved iteratively. When \(k=2\), the equation reduces to a two-sample Kolmogorov-Smirnov test. The case of \(k=4\) can also be reduced to a simpler form, but for other values of \(k\), the equation cannot be reduced.
Want to look for statistical patterns in your MySQL, PostgreSQL, or SQLite database? My desktop statistics software Wizard can help you analyze more data in less time and communicate discoveries visually without spending days struggling with pointless command syntax. Check it out!
Red Report 2025: Unmasking a 3X Spike in Credential Theft and Debunking the AI Hype
Bleeping Computer
www.bleepingcomputer.com
2025-03-13 15:01:11
Credential theft surged 3× in a year—but AI-powered malware? More hype than reality. The Red Report 2025 by Picus Labs reveals attackers still rely on proven tactics like stealth & automation to execute the "perfect heist." [...]...
Cybercriminals have turned password theft into a booming enterprise: malware targeting credential stores jumped from 8% of samples in 2023 to 25% in 2024, a threefold increase.
This alarming surge is one of many insights from the newly released Red Report 2025 by Picus Labs, which analyzed over 1 million malware samples to identify the tactics hackers rely on most.
The findings read like a blueprint for a “perfect heist,” revealing how modern attackers combine stealth, automation, and persistence to infiltrate systems and plunder data without detection.
And while the media buzzes about AI-driven attacks, our analysis reveals that the dark allure of AI in malware remains more myth than reality.
Credentials Under Siege: 3× Increase in Theft Attempts
According to the report, credential theft has become a top priority for threat actors. For the first time, stealing credentials from password stores (MITRE ATT&CK technique T1555) broke into the top 10 most-used attacker techniques.
Attackers are aggressively going after password managers, browser-stored logins, and cached credentials, essentially “handing over the keys to the kingdom.”
With those stolen passwords, attackers can quietly escalate privileges and move laterally through networks, making credential theft an incredibly lucrative stage in the cyber kill chain.
Top 10 ATT&CK Techniques Dominate (93% of Attacks)
Another key finding is just how concentrated attacker behavior has become. Among over 200 MITRE ATT&CK techniques, 93% of malware includes at least one of the top ten techniques. In other words, most hackers are relying on a core playbook of tried-and-true tactics.
Chief among them are techniques for stealth and abuse of legitimate tools. For example, process injection (T1055) – hiding malicious code by injecting it into legitimate processes – appeared in 31% of malware samples analyzed.
The “Perfect Heist”: Rise of SneakThief Infostealers
If 2024’s attacks could be summed up in a metaphor, it’s The Perfect Heist. Picus Labs researchers describe a new breed of information-stealing malware – dubbed “SneakThief” – that executes multi-stage, precision attacks resembling a meticulously planned robbery.
These advanced infostealers blend into networks with stealth, employ automation to speed up tasks, and establish persistence to stick around. In a SneakThief-style operation, malware might quietly inject itself into trusted processes, use encrypted channels (HTTPS, DNS-over-HTTPS) for communication, and even abuse boot-level autoruns to survive reboots.
All of this happens while the attackers methodically search for valuable data to exfiltrate, often before anyone even knows they’re there.
The Red Report shows that such multi-stage “heist-style” campaigns became increasingly common in 2024, with most malware now performing over a dozen discrete malicious actions to reach its goal. In some cases, threat actors combined the data theft of infostealers with the extortion tactics of ransomware.
Instead of immediately deploying encryption, attackers first steal sensitive files and passwords. This evolution underlines how blurred the lines have become between classic infostealers and ransomware crews: both are after sensitive data, and both excel at staying hidden until the payoff is in hand.
AI Threats: Separating Hype from Reality
Amid the buzz about artificial intelligence being used in cyberattacks, Red Report 2025 offers a reality check.
Despite widespread hype, Picus Labs found no evidence that cybercriminals deployed novel AI-driven malware in 2024. Attackers certainly took advantage of AI for productivity (e.g. automating phishing email creation or debugging code) but AI hasn’t revolutionized the core tactics of attacks.
In fact, the top malicious techniques remained largely “human” in origin (credential theft, injection, etc.), with no new AI-born attack methods appearing in the wild.
This doesn’t mean attackers will never weaponize AI, but as of now it’s more of an efficiency booster than a game-changer for them. The report suggests that while defenders should keep an eye on AI developments, the real-world threats still center on conventional techniques that we already understand.
It’s a telling insight: fancy AI malware might grab headlines, but an unpatched server or a stolen password remains a far likelier entry point than a rogue machine-learning algorithm.
Staying Ahead of Attackers: Proactive Defense and Validation
All these findings reinforce a clear message: staying ahead of modern threats requires a proactive, threat-informed defense. The organizations best positioned to thwart attacks are those continuously testing and aligning their security controls to the tactics attackers are using right now.
For example, given that just ten techniques cover the vast majority of malicious behavior, security teams should regularly validate that their defenses can detect and block those top 10 ATT&CK techniques across their environment.
The Red Report 2025 underscores that only a proactive strategy, one that continuously assesses security controls with adversarial exposure validation, will enable true cyber resilience. This means going beyond basic patching and occasional audits.
Techniques like breach and attack simulation, rigorous threat hunting, and aligning incident response playbooks to prevalent attacker behaviors are now table stakes.
Don’t Wait for the Cyber Heist – Prepare Now
The data-driven insights from Red Report 2025 paint a vivid picture of the cyber threat landscape: credential thieves roaming unchecked, a handful of techniques enabling the vast majority of breaches, and new “heist-style” attack sequences that stress-test any organization’s defense.
The good news is these are battles we know how to fight – if we’re prepared. Security leaders should take these findings as a call to arms to reinforce fundamentals, focus on the highest-impact threats, and implement security validation. By doing so, you can turn the tables on adversaries and stop the next “perfect heist” before it even begins.
For readers interested in the full deep dive into these trends and the complete list of recommendations, download the complete Picus Red Report 2025 to explore all the findings firsthand.
The report offers a wealth of actionable data and guidance to help you align your defenses with the threats that matter most. Don’t wait for attackers to expose your weaknesses; take a proactive stance and arm yourself with insights that can drive effective, resilient cybersecurity.
Since workers unionized at six Nestle-owned Blue Bottle coffee shops in Massachusetts in 2024, they have been in the midst of a pitched struggle to secure a first contract for their members. Their landslide victory against the multinational corporation has been a source of optimism for the coffee industry, and the union has enjoyed broad support from their customers, other unions in Massachusetts, and even workers along the international supply chain. Now, months into bargaining, frustrations mount as the company seems determined to drag things out as long as possible.
Bringing in the Union Busters
As with past union campaigns at Nestle-owned companies, the corporation brought in Ogletree Deakins to handle the union campaign and negotiations at Blue Bottle. According to watchdog organization LaborLab, Ogletree Deakins is the nation’s “second largest management-side law firm specializing in union avoidance.”
Over the past 40 years, Ogletree has played a leading role in keeping many multinational corporations operating in the US union-free—one of at least four major union avoidance law firms that have taken their union-busting tactics into an international arena in recent years.
Workers at Blue Bottle understand the stakes as they continue to push for their demands at the bargaining table, and have been frustrated by the company’s attempts to drag bargaining out. “[It’s] certainly frustrating,” said Alex Pine, vice president of Blue Bottle Independent Union (BBIU). “I think that their entire bargaining strategy, and certainly Ogletree Deakins’s, is to delay bargaining to demoralize membership.”
Despite these frustrations, bargaining continues. In the last bargaining session, held on Feb. 21, the union secured tentative agreements for a number of their noneconomic proposals but has seen no movement on key economic issues, including wages and holidays. The union faces an uphill battle in continuing to secure neutral meeting places—of which there are precious few. They have been able to meet in city hall locations, which are free to use, but scheduling difficulties at Cambridge City Hall have delayed bargaining even further. The company has repeatedly pushed to meet in conference halls, but the union is unable to afford the costs associated with renting those spaces. Other alternatives for bargaining, including Zoom, have been roundly rejected by the company. “The company certainly could afford to cover the cost of a bargaining space, they just don’t want to,” Pine said in an email. “They understand that the more time we have to spend looking for a location to meet means less time to organize.”
The union’s demands form a comprehensive package that would vastly improve the conditions that their baristas and other staff labor under. Chief among those demands are wages that are comparable with the cost of living in Massachusetts, democratic control in the workplace, and protection from harassment. To that end, they have asked for $30 an hour for their baristas, which would meet the minimum threshold for the high cost of living in the Boston area, as well as fairer scheduling, better PTO and holiday schedules, a more comprehensive healthcare plan, and the ability to accrue sick time for their employees.
Perhaps more important, they have asked for a “just cause” clause to be included in their contract, which would restrict management from issuing what the union alleges are retaliatory write-ups. Since the union took their campaign public last year, multiple workers have been terminated without recourse–something that the union is working diligently to fix. Additionally, the union alleges that the company continues to create a hostile work environment for its employees.
In January, the union staged a walkout to protest the company’s closing of its Prudential Center location without guaranteeing hours or a tip differential to workers who needed to be transferred to other locations.
In a Jan. 25 statement, BBIU noted that they had filed 16 unfair labor practice complaints against the company, saying that Blue Bottle “engaged in union busting by writing up members for petty infractions, cutting hours of vocal supporters, unilaterally changing store operating hours without bargaining with the union, and more. In another unforced error by management, in September Blue Bottle fired union organizer Remy Roskin without any prior discipline. Even with the company agreeing to bargain over Roskin’s termination, workers say that Blue Bottle has unnecessarily strained the relationship between management and employees.”
Taking on the megacorp
Just as with union campaigns at Starbucks, Amazon, and other multinational corporations, the workers of BBIU have no illusions about the monumental task ahead of them. A megacorporation like Nestle, which posted profits of over $10 billion in 2024 and projected continued growth in its coffee portfolio for the foreseeable future, seems to tower like Goliath over the organizing efforts of its coffee shops in Massachusetts. Against these odds, BBIU remains committed to fighting for better conditions in their workplaces, no matter how incremental it may seem.
“It feels really good. I’ll tell people [at school] like, ‘Oh, I’m in a union [organizing] against a company owned by Nestle,’ and they’re immediately like, ‘hell yeah.’ The fact that we’ve already, in a very real sense, won so much, like we had this landslide union victory,” said Abby Sato, barista and BBIU organizer. “Even though at the table it doesn’t feel like these huge wins in the larger schemes of things, we are kind of tipping the scale, so it does feel really good, and it does feel like when we come together, we can make real change,” they added.
This sense of victory has helped bargaining committee members stay positive, even as the company drags things out. “This is the thing that gets me kind of excited when thinking about what we’re up against is all of the possibilities that exist,” Pine said. Since the union won their election, members of BBIU have been in contact with members of Sinaltrainal in Colombia, the union representing coffee workers farther down the supply chain. Workers in Colombia have been in a nearly year-long labor dispute with Nestle over mass layoffs–including of sick employees. Last month, bargaining sessions were meant to begin, but have since been suspended.
For Pine, the regular messages of international solidarity from their union siblings along the supply chain have had a buoying effect. “Although the conditions of our workplaces are very different, it means a lot to me that we’re able to send messages of support to each other, talk about issues that we have with the company, and to have that kind of shared sense of international solidarity,” Pine said. That solidarity has given hope to Pine that they and their fellow workers can join a global movement to organize Nestle. “I think that there is a very real chance that we can begin to organize across the supply chain.”
For now, members are working on keeping morale up as bargaining stretches into yet another month. The union has worked hard to build up a strong union culture within their bargaining unit by continuing to hold social events and other gatherings. Pine believes that in the absence of any really meaningful social institutions or third spaces, the union is a source of community and shared power for their membership and supporters. “Even completely new members that don’t really understand what a union is already have positive feelings about it, because they understand that this can be a source or a space of a different way of life, really,” Pine said. “This is something collectively focused that gives people a sense of autonomy in their lives.”
Mel Buer is a staff reporter for The Real News Network, covering U.S. politics, labor, and movements and the host of The Real News Network Podcast. Her work has been featured in The New York Times, The Washington Post, The Nation, and others. Prior to joining TRNN, she worked as a freelance reporter covering Midwest labor struggles, including reporting on the 2021 Kellogg's strike and the 2022 railroad workers struggle. In the past she has reported extensively on Midwest protests and movements during the 2020 uprising and is currently researching and writing a book on radical media for Or Books. Follow her on Twitter or send her a message at mel@therealnews.com.
The Real News Network (TRNN) makes media connecting you to the movements, people, and perspectives that are advancing the cause of a more just, equal, and livable planet.
We broaden your understanding of the issues, contexts, and voices behind the news headlines. We are rigorous in our journalism and dedicated to the facts, but unafraid to engage alongside movements for change, because we believe journalism and media making has a critical role to play in illuminating pathways for collective action.
TRNN is a nonprofit media organization. Financial support from our viewers and readers is critical to our ability to provide authentic and engaged journalism.
WebAssembly from the Ground Up – learn WASM by building a compiler
Lobsters
lobste.rs
2025-03-13 14:40:48
Hi!
We (pdubroy & marianoguerra) just launched an online book called WebAssembly from the Ground Up. It’s an online book to learn Wasm by building a simple compiler in JavaScript.
This is the book we wish we’d had 3 years ago. Unlike many WebAssembly resources that focus on use cases and tooling, we wanted a deep dive into how Wasm actually works.
We focus on the core of WebAssembly: the module format and the instruction set. We think the low-level details — the “virtual ISA” — are the most interesting part, and we had the crazy idea that writing a compiler is the best way to learn it.
Over the course of the book, you’ll build up two things:
1: An “assembler library” that can be used to produce WebAssembly modules. Here’s an example:
2: A very simple compiler for a language called Wafer, which looks like this:
extern func setPixel(x, y, r, g, b, a);

func draw(width, height, t) {
  let y = 0;
  while y < height {
    let x = 0;
    while x < width {
      let r = t;
      let g = x;
      let b = y;
      let a = 255;
      setPixel(x, y, r, g, b, a);
      x := x + 1;
    }
    y := y + 1;
  }
  0
}
In case you’re not a fan of JavaScript, we’ve already heard of readers who’ve worked through the book in F# and Haskell. :-D
You can check out a sample here: https://wasmgroundup.com/book/contents-sample/ — we’d love to hear what you think! There’s also a 25% discount for Lobsters readers — just use the code LOBSTERS25 at checkout.
Last month I published Engineering Ubuntu For The Next 20 Years, which outlines four key themes for how I intend to evolve Ubuntu in the coming years. In this post, I’ll focus on “Modernisation”. There are many areas we could look to modernise in Ubuntu: we could focus on the graphical shell experience, the virtualisation stack, core system utilities, default shell utilities, etc.
Over the years, projects like GNU Coreutils have been instrumental in shaping the Unix-like experience that Ubuntu and other Linux distributions ship to millions of users. According to the GNU website:
The GNU Core Utilities are the basic file, shell and text manipulation utilities of the GNU operating system. These are the core utilities which are expected to exist on every operating system.
This package provides utilities which have become synonymous with Linux to many - the likes of ls, cp, and mv. In recent years, there has been an effort to reimplement this suite of tools in Rust, with the goal of reaching 100% compatibility with the existing tools. Similar projects, like sudo-rs, aim to replace key security-critical utilities with more modern, memory-safe alternatives.
Starting with Ubuntu 25.10, I intend to adopt some of these modern implementations as the default. My immediate goal is to make uutils’ coreutils implementation the default in Ubuntu 25.10, and subsequently in our next Long Term Support (LTS) release, Ubuntu 26.04 LTS, if the conditions are right.
But… why?
Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.
The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?
I recently read an article about targeting foundational software with Rust in 2025. Among other things, the article asserts that “foundational software needs performance, reliability — and productivity”. If foundational software fails, so do all of the other layers built on top. If foundational packages have performance bottlenecks, they limit the performance achievable by the layers above.
Ubuntu powers millions of devices around the world, from servers in your data centre to safety-critical autonomous systems, so it behooves us to be absolutely certain we’re shipping the most resilient and trustworthy software we can.
That’s not to throw shade on the existing implementations, of course. Many of these tools have been stable for many years, quietly improving performance and fixing bugs. A lovely side benefit of working on newer implementations is that it sometimes facilitates improvements in the original upstream projects, too!
I’ve written about my desire to increase the number of Ubuntu contributors, and I think projects like this will help. Rust may present a steeper learning curve than C in some ways, but by providing such a strong framework around the use of memory it also lowers the chances that a contributor accidentally commits potentially unsafe code.
Introducing oxidizr
I did my homework before writing this post. I wanted to see how easy it was for me to live with these newer implementations and get a sense of their readiness for prime-time within the distribution. I also wanted a means of toggling between implementations so that I could easily switch back should I run into incompatibilities - and so oxidizr was born!
oxidizr is a command-line utility for managing system experiments that replace traditional Unix utilities with modern Rust-based alternatives on Ubuntu systems.
The oxidizr utility enables you to quickly swap in and out newer implementations of certain packages with relatively low risk. It has the notion of Experiments, where each experiment is a package that already exists in the archive that can be swapped in as an alternative to the default.
Each experiment is subtly different since the paths of the utilities being replaced vary, but the process for enabling an experiment is generally:
Install the alternative package (e.g.
apt install rust-coreutils
)
For each binary shipped in the new package:
Lookup the default path for that utility (e.g
which date
)
Back up that file (e.g.
cp /usr/bin/date /usr/bin/.date.oxidizr.bak
)
Symlink the new implementation in place (e.g.
ln -s /usr/bin/coreutils /usr/bin/date
)
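The real tool does this programmatically for every binary in the package, but pieced together as a rough shell sketch (using the same example utility and paths as above), enabling the coreutils experiment for a single utility looks something like this:
# Rough sketch of enabling the coreutils experiment for one utility (date)
sudo apt install rust-coreutils                              # install the alternative package
target="$(which date)"                                       # default path, e.g. /usr/bin/date
sudo cp "$target" "$(dirname "$target")/.date.oxidizr.bak"   # back up the original binary
sudo ln -sf /usr/bin/coreutils "$target"                     # point it at the multi-call Rust binary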
There is also the facility to “disable” an experiment, which does the reverse of the sequence above (again sketched below):
For each binary shipped in the new package:
Lookup the default path for the utility (e.g
which date
)
Check for and restore any backed up versions (e.g
cp /usr/bin/.date.oxidizr.bak /usr/bin/date
)
Uninstall the package (e.g.
apt remove rust-coreutils
)
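And the reverse, again as a rough sketch for a single utility:
# Rough sketch of disabling the experiment for one utility (date)
target="$(which date)"                                       # e.g. /usr/bin/date
backup="$(dirname "$target")/.date.oxidizr.bak"
if [ -f "$backup" ]; then
  sudo cp --remove-destination "$backup" "$target"           # replace the symlink with the original
fi
sudo apt remove rust-coreutils                               # remove the alternative package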
This returns the system to its original state! The tool is covered by a suite of integration tests that illustrate this behaviour, which you can find
on Github
.
Get started
WARNING
:
oxidizr
is an experimental tool to play with alternatives to foundational system utilities. It may cause a loss of data, or prevent your system from booting, so use with caution!
There are a couple of ways to get
oxidizr
on your system. If you already use
cargo
, you can do the following:
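# One way to do this is to build and install straight from the GitHub repository
# (assumed from the project's repo; the exact instructions may differ in its README):
cargo install --git https://github.com/jnsgruk/oxidizr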
Otherwise, you can download and install binary releases from
Github
:
# Download version 1.0.0 and extract to /usr/bin/oxidizr
curl -sL "https://github.com/jnsgruk/oxidizr/releases/download/v1.0.0/oxidizr_Linux_$(uname -m).tar.gz" | sudo tar -xvzf - -C /usr/bin oxidizr
Once installed you can invoke
oxidizr
to selectively enable/disable experiments. The default set of experiments in
v1.0.0
is
rust-coreutils
and
sudo-rs
:
# Enable default experiments
sudo oxidizr enable
# Disable default experiments
sudo oxidizr disable
# Enable just coreutils
sudo oxidizr enable --experiments coreutils
# Enable all experiments without prompting with debug logging enabled
sudo oxidizr enable --all --yes -
# Disable all experiments without prompting
sudo oxidizr disable --all --yes
The tool should work on all versions of Ubuntu after 24.04 LTS - though the
diffutils
experiment is only available from Ubuntu 24.10 onward.
The tool itself is stable and well covered with unit and integration tests, but nonetheless I’d urge you to start with a test virtual machine or a machine that
isn’t
your production workstation or server! I’ve been running the
coreutils
and
sudo-rs
experiments for around 2 weeks now on my Ubuntu 24.10 machines and haven’t had many issues (more on that below…).
How to Help
If you’re interested in helping out on this mission, then I’d encourage you to play with the packages, either by installing them yourself or using
oxidizr
. Reply to the Discourse post with your experiences, file bugs and perhaps even dedicate some time to the relevant upstream projects to help with resolving bugs, implementing features or improving documentation, depending on your skill set.
Earlier this week, I met with
@sylvestre
to discuss my proposal to make uutils coreutils the default in Ubuntu 25.10. I was pleased to hear that he feels the project is ready for that level of exposure, so now we just need to work out the specifics. The Ubuntu Foundations team is already working up a plan for next cycle.
There will certainly be a few rough edges we’ll need to work out. In my testing, for example, the only incompatibility I’ve come across is that the
update-initramfs
script for Ubuntu uses
cp -Z
to preserve
SELinux
labels when copying files. The
cp
,
mv
and
ls
commands from uutils
don’t yet support
the
-Z
flag, but I think we’ve worked out a way to unblock that work going forward, both in the upstream and in the next release of Ubuntu.
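For illustration, the shape of the call that trips this up looks something like the following (the path here is made up, not the exact one update-initramfs uses); GNU cp accepts -Z to reset the destination's SELinux security context to the default type, while the uutils cp currently rejects the flag:
mkdir -p /tmp/initramfs-staging
# Works with GNU cp; fails with uutils cp while -Z is unimplemented (illustrative path only)
cp -Z /boot/config-"$(uname -r)" /tmp/initramfs-staging/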
I’m going to do some more digging on
sudo-rs
over the coming weeks, with a view to assessing a similar transition.
Summary
I’m really excited to see so much investment in the foundational utilities behind Linux. The uutils project seems to be picking up speed after their recent
appearance at FOSDEM 2025
, with efforts ongoing to rework
procps
,
util-linux
and more.
With Ubuntu, we’re in a position to drive awareness and adoption of these modern equivalents by making them either trivially available, or the default implementation for the world’s most deployed Linux distribution.
We will need to do so carefully, and be willing to scale back on the ambition where appropriate to avoid diluting the promise of stability and reliability that the Ubuntu LTS releases have become known for, but I’m confident that we can make progress on these topics over the coming months.
Here is a confession: I am a very strong proponent of a robust test suite being perhaps the single most important asset of a codebase, but when it comes to testing auxiliary services like admin sites or CLIs I tend to ask for forgiveness more than I ask for permission. Django's admin site is no different: and, because Django's admin DSL is very magic-string-y, there's a lot of stuff that never gets caught by CI or mypy until a lovely CS agent informs me that something is blowing up in their face.
Take the example that bites me more often than I care to admit: a model field gets renamed, a stale magic string survives somewhere in an admin definition, mypy and CI stay green, and a changelist page starts blowing up in production.
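A hypothetical sketch of that failure mode (the model and field names here are made up for illustration):
from django.contrib import admin
from django.db.models import QuerySet
from django.http import HttpRequest

from emails.models import Event  # hypothetical model


@admin.register(Event)
class EventAdmin(admin.ModelAdmin):
    list_display = ("id", "creation_date")

    def get_queryset(self, request: HttpRequest) -> QuerySet[Event]:
        # "subscriber" was renamed to "recipient" on the model; nothing static
        # notices the stale string, but every changelist request now raises FieldError.
        return super().get_queryset(request).select_related("subscriber")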
One thing that has made my life slightly easier in this respect is a parametric test that just makes sure we can render the empty state for every single admin view. Code snippet first, explanation after:
from typing import Any, Iterator, NamedTuple

import pytest
from django.contrib.auth.models import User  # or the project's custom user model
from django.test import Client
from django.urls import URLResolver, get_resolver, reverse


class Route(NamedTuple):
    name: str
    route: str


def extract_routes(resolver: URLResolver) -> Iterator[Route]:
    keys = [key for key in resolver.reverse_dict.keys() if isinstance(key, str)]
    key_to_route = {key: resolver.reverse_dict.getlist(key)[0][0][0] for key in keys}
    for key in keys:
        yield Route(name=key, route=key_to_route[key])
    for key, (prefix, subresolver) in resolver.namespace_dict.items():
        for route in extract_routes(subresolver):
            yield Route(name=f"{key}:{route.name}", route=f"{prefix}{route.route}")


def is_django_admin_route(route_name: str) -> bool:
    # Matches, e.g., `admin:emails_event_changelist`.
    return route_name.split(":")[-1].endswith("changelist")


ADMIN_URL_ROUTE = "buttondown.urls.admin"

DJANGO_ADMIN_CHANGELIST_ROUTES = [
    route.name
    for route in extract_routes(get_resolver(ADMIN_URL_ROUTE))
    if is_django_admin_route(route.name)
]


# The fixture is overkill for this example, but I'm copying this from the actual codebase.
# (It assumes a `superuser` fixture is defined elsewhere in the suite.)
@pytest.fixture
def superuser_client(superuser: User, client: Client) -> Client:
    client.force_login(superuser)
    return client


@pytest.mark.parametrize("url", DJANGO_ADMIN_CHANGELIST_ROUTES)
def test_can_render_route(superuser_client: Any, url: str) -> None:
    url = reverse(url, args=[])
    response = superuser_client.get(url)
    assert response.status_code == 200
Okay, a bit of a mouthful, but the final test itself is very clean and tidy and catches a lot of stuff.
That
extract_routes
implementation looks scary and magical, and it is — I use a
more robust implementation in
django-typescript-routes
, which itself we gratefully purloined from
django-js-reverse
. Lots of scary indexing, but it's held up well for a while.
The fixture and
parametrize
assume usage of
pytest
(you should use pytest!) but it's trivially rewritable to use
subTest
instead.
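For reference, here is a minimal unittest-flavoured sketch of the same test, assuming the DJANGO_ADMIN_CHANGELIST_ROUTES list from above and some way of creating a superuser (a hypothetical make_superuser() helper here):
from django.test import TestCase
from django.urls import reverse


class AdminChangelistTests(TestCase):
    def test_can_render_all_changelists(self) -> None:
        self.client.force_login(make_superuser())  # hypothetical helper
        for route in DJANGO_ADMIN_CHANGELIST_ROUTES:
            # subTest keeps the loop going and reports each failing route separately.
            with self.subTest(route=route):
                response = self.client.get(reverse(route, args=[]))
                self.assertEqual(response.status_code, 200)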
Are There Opportunities to Use OODA Loops in Your Software Project?
The world is full of good ideas misapplied, so I want to be careful when discussing OODA loops. The OODA loop was created as an idea in the study of war. War is endemic to the way humans are (at least, so far), but it is always, in part or in whole, a tragedy. And characterizing software engineering, business operations or entrepreneurship as a conflict in that vein can create an overly aggressive view of the world. But, I believe, this concept can be adapted to a more civilian setting. The key difference is that, in war, the main conflict is with some adversary, and it is brutal. But in a software project, the conflict is mostly with ourselves and our environment. And if it feels brutal, that’s certainly a red flag!
Observe, Orient, Decide, and Act
OODA stands for
Observe
,
Orient
,
Decide,
and
Act
. It’s a model of how individuals and organizations take in the world (Observe), form an understanding of it (Orient), pick a strategy to pursue (Decide), and affect their environment (Act). And, as a result of their actions and the passage of time, new observations are available, and the cycle can begin again.
In the case of war, speeding up this cycle for yourself and slowing it down for your adversary allows you a more accurate model of your environment and more optimal decisions than the other side. In the software and product world, our adversaries are mainly ourselves and our environment. We’d typically focus on managing the pace at which our team and its members can run this cycle. We may have adversaries in the form of market competitors, but it’s a lot tougher to affect them directly, especially if you’re a smaller organization.
Col. John Boyd
invented the OODA loop and perfected it over many decades. He was a skilled fighter pilot. And, although he didn’t get to do much combat, he did dominate training exercises. He understood how to control his aircraft, and he could out-turn his opponents and surprise them. He helped design the F-15 and, later, as part of the “Fighter Mafia,” created the F-16. Over time, he formalized his ideas of warfare in his classic “
Patterns of Conflict
” lectures that inspired the Maneuver Warfare doctrine of the Marine Corps. Some say he even went on to influence the “shock and awe” strategy of the first Iraq war. All of his ideas had a common thread, an emphasis on maneuverability and pace of decision making.
OODA Loops Applied
Taking appropriate action quickly is desirable in any domain, so we can keep extending his ideas elsewhere. Clearly, the pace of decision-making also plays a big role in modern agile practices. Practices like two-week cycles, retrospectives, Kanban boards, and so on can be viewed as improving the pace of one or more parts of the OODA loop. And many others have drawn those parallels and written about it.
But, how did Boyd view it? The OODA loop is an interesting model, but how do we apply it? Let’s see how Boyd’s ideas apply to software development. In his lectures, he would focus on four core principles to disrupt an opponent’s decision-making cycle and preserve your own: initiative, variety, rapidity, and harmony. Here is a quote to that effect from his slides for Patterns of Conflict:
“He who is willing and able to take the initiative to exploit variety, rapidity, and harmony as the basis to create as well as adapt to the more indistinct – more irregular – quicker changes of rhythm and pattern, yet shape the focus and direction of effort -— survives and dominates, or contrariwise”
Again, we don’t typically have an opponent in a software development lifecycle. But our decision-making processes are still challenged just by the nature of developing software. Anyone who has tried to develop software can attest to the myriad of ways the cycle can stretch out and produce incorrect orientation and bad decisions. We are often, sadly, our own worst enemies and most potent opponents. So, let’s see how to apply these.
Variety
“Plans are nothing; planning is everything” — Dwight D. Eisenhower
Planning is typically viewed fairly linearly. Imagine a timeline of milestones when you kick off a project. Even a Gantt chart, with its task dependencies, is still fairly linear. Everything is expected to happen in a known order, with minimal disruption. But that’s not a reflection of reality at all. A more accurate plan might be a branching tree of actions that depends on our model of the world at each step. But, I admit, that’s not exactly easy to create or explain to others.
Still, a linear plan is an unlikely path through that infinite tree of possibilities. So, being married to a plan, especially a long one, subjects you to friction. Quickly, your plan and reality diverge. As you observe and orient yourself, you either keep going with a plan that doesn’t fit the situation you’re in or start making decisions and actions that reflect reality but have no long term view. This is especially chaotic when members of your team randomly choose from those two approaches.
Boyd espoused a better approach. That is, to have a greater variety of plans for the most likely scenarios, update your plans frequently, and avoid making overly-detailed plans over a long time. And, of course, make your plans clear and understandable to everyone. Like water running down a hill, your team has to have a general direction but be able to deal with obstacles as they come up. As you’ll see, this dovetails nicely with the other principles.
Rapidity
“Everyone has a plan until they get punched in the face” — Mike Tyson
Boyd thought about rapidity as another piece of the puzzle. You execute quickly, take in the new environment, and plan again. A sort of jagged path towards the goal, where at each step you adjust your course and see how far you can get. From his point of view, if you are iterating fast enough, it doesn’t matter if you’re right at any one point in time.
Bogged Down in Execution
When we create software, we often focus on quality and whether a feature fully meets customer needs. And no doubt quality is of utmost importance. Quality has a benefit and a cost, though. The exact level of quality necessary to release a feature is a topic of great debate, but getting bogged down in execution is a huge and sometimes hidden cost.
Often, the way we view a feature and the way a user or stakeholder views it are quite different. Until a feature is released, we tend to focus on the wrong elements. We get stuck thinking “this one thing” will make or break the feature, and we can’t release without it. We think we know, but we don’t really know. Boyd would say, if you can iterate fast enough, the significance of any one change goes way down. So, with that mindset, some of the most important changes you can release aren’t the features themselves, but the development of tooling and safety nets that make you comfortable releasing faster.
Getting There Faster
This is reminiscent of a lot of DevOps practices. Testing, logging, and monitoring are there to let us observe and orient ourselves. They give us signals that something is not working, and we need to act. Including feature creators as a huge part of the operation and deployment of the service gets the observations to the right place fast.
Additionally, deployment pipelines, feature flags, and other such levers let us take a variety of actions when releases go wrong. Yet, that tooling is often neglected, slow, or buggy in many projects. Or, it’s the opposite. We try to maximize test coverage with “junk tests” or maximize the amount of metrics collected. In turn, this can create a brittle pipeline, noise, and busywork that slow us down instead of speeding us up. We forget that the goal is the safety and visibility needed so the team can rapidly iterate on the product.
Harmony
“How is it they live in such harmony, the billions of stars, when most men can barely go a minute without declaring war in their minds?” — Thomas Aquinas
Boyd saw a benefit to “pushing down” the decision-making to smaller groups. Organizations intrinsically have a much slower OODA loop than the smaller groups and individuals within them. Coupling all decision cycles to some main cycle slows all cycles down to the pace of the slowest cycle.
Centralized decision-making is, in a way, a form of distrust. Distrust in the ability of others to make decisions that are equal or better than some central authority. And distrust is a force that tears groups apart.
A Common Outlook
But there is a crucial other element to this decoupling: teams need a common outlook. A “common outlook” is an implicit shared perspective about the world and a common understanding of how things are done. In the military, this is achieved with training and studying common doctrine and tactics. In the software development world, this can come through good hiring and consistent communication of expectations. This kind of communication can be explicit via an official vision statement and company values. This kind of communication can be implicit by the decisions that are made. And, of course, when communication is lacking, inconsistent, or conflicting, that just makes the lack of a common outlook more evident, and dysfunction creeps back in.
When a team has a “common outlook”, they need less communication to achieve a task. With a common outlook, they can be guided by “commander’s intent”, a directive that states what is to be done and why, rather than the precise steps to achieve it. By not being prescriptive about how things are done, leadership allows the executing team to adapt to the situation as it changes.
Building Trust
This is a highly effective way to lead talented people, too. They’re closer to the action, and they can undoubtedly make better decisions given the right context. Furthermore, organizations that deny that freedom to their members will often lose their top talent.
When trust creates great decisions, then great decisions create trust. This is a virtuous cycle that improves the whole organization. And a vicious destructive cycle when each decision goes poorly and destroys trust. Creating the kind of cycle that pulls apart the enemy’s harmony was a big part of Boyd’s war-fighting doctrine, so avoid waging that sort of war on your own team.
Initiative
“Have taken Trier with two divisions. Do you want me to give it back?” — George S. Patton (on receiving orders to bypass Trier because it will take four divisions to take)
John Boyd is often referred to as a “maverick,” which I believe is code for “pain in the butt”. Boyd was an agitator who had almost as many enemies as he had friends. He fought for what he thought was right. The right way to pilot, the right way to design a jet fighter, the right way to fight a war. And he had enough energy to wear down the people around him and leave a legacy of great ideas and improvements. He embodied a certain form of initiative. I’m sure many of you have had an irritating person like this on your team. And sometimes their initiatives weren’t too good, so what’s up?
What Doesn’t Work
Boyd would say that initiative requires all three elements of variety, rapidity, and harmony to be in place. You need to understand the general direction and contingency plans, processes that enable moving fast, and certainly the trust and understanding of others in your organization. Forcing initiative on an organization that can’t or won’t support it typically generates conflict. Then, one side wins (typically the status quo), and changes are made to prevent further disruptive initiatives.
Taking action when an opportunity presents itself is critical. As a software product team, this could mean taking action with an imperfect plan or design. At the individual level, this could mean keeping your code organized and tested, making sure user experience is clear and fast, and adding necessary monitoring as you see fit without input from others. Some of the best initiatives I’ve experienced have been unsolicited proof-of-concept demos of features or tools that open up a whole class of future possibilities.
Organizations that lack initiative have slow OODA loops. And those where individuals have to clear every action with the whole organization are an order of magnitude slower, because they couple their loops to the organization’s. Initiative at all levels forces your team to close their loop faster by taking more actions and providing observations for the next go-around.
Encouraging Initiative
In lean product design, a related idea exists with MVPs (Minimum Viable Products), where a stripped-down version of the final product or feature is released to understand how users will react to it. In design and engineering, we have the concept of “spikes” that give us a time-boxed amount of time to test an idea or answer a question. These all
encourage initiative
in some way.
But, perhaps the most important to draw from, is the concept of “psychological safety”. Organizations that amplify fear of having an idea being ridiculed, fear of being wrong, and fear of failure, destroy initiative. Rejecting a few bad ideas in a supportive way and enduring a dozen just okay initiatives is worth that one great one.
OODA Loops: A Simple Model with Sophisticated Implications
Developing software always feels like a somewhat unique engineering activity. Unlike other engineering disciplines, our constraints are a lot less grounded in physical reality and more in the abstract. Creating a software product is a lot more about structuring information, dealing with the psychology of the user, and adapting quickly to changing expectations.
Being so abstract and flexible, it’s easy to pull in and adapt ideas from other domains. Many other fields have a strong informational component. And military strategy has a surprisingly similar set of challenges and solutions. John Boyd was not the first to focus so much on the information-action dynamic, but this focus was especially appropriate in the late 20th century. With increasingly sophisticated war machines, military strategists were more and more limited by information than by their weapons. In parallel, software was already well on its way to “eating the world” (and that included its militaries).
The OODA loop is a simple model of gathering information and making decisions, but it has some sophisticated implications. One thing it distills better than many other “agile” or “lean” frameworks is the importance of taking action, making and re-making plans, and gathering information. It creates clarity that cuts through the noise of all the ceremony that can go along with running and being part of a project. Try applying it to your project and see if it creates some insights for you.
Tesla Takedown: Protests Grow Across the U.S. as Trump & Musk Brand Activists as Terrorists
Democracy Now!
www.democracynow.org
2025-03-13 13:47:13
We speak with Valerie Costa, an organizer behind the grassroots Tesla Takedown movement peacefully protesting outside Tesla showrooms to oppose billionaire owner Elon Musk’s role in government. Since Donald Trump’s return to the White House, Musk and his so-called Department of Governmen...
This is a rush transcript. Copy may not be in its final form.
AMY
GOODMAN
:
This is
Democracy Now!
, democracynow.org. I’m Amy Goodman.
“Elon Musk targeted me over Tesla protests. That proves our movement is working.” That’s a new article — a headline of a new
article
in
The Guardian
by our next guest, who’s been helping to organize the Tesla Takedown protests at Tesla showrooms to protest Elon Musk, President Trump’s billionaire adviser, the richest man in the world and his top campaign donor.
While Musk has been helping to dismantle numerous government agencies, or leading the charge to dismantle them, stock prices for his company Tesla have fallen so much that President Trump felt compelled this week to announce he’ll personally buy a Tesla. On Tuesday, Trump and Musk held what was compared to an infomercial for Tesla outside the White House. During the event, Trump told a reporter that he considers attacks on Tesla dealerships to be “domestic terrorism.”
REPORTER
:
Mr. President, you talked about some of the violence that’s been going on around the country at dealerships. Some say they should be labeled domestic terrorists because —
PRESIDENT
DONALD
TRUMP
:
I will do that. We’ve got a lot of cameras up. We already know who some of them are. We’re going to catch them. And they’re bad guys. Let me tell you, you do it to Tesla and you do it to any company, we’re going to catch you, and you’re going to — you’re going to go through hell.
AMY
GOODMAN
:
We’re joined now by Valerie Costa, a Seattle-based activist who’s helping to organize the Tesla Takedown movement with the local group Troublemakers and others.
Valerie, can you talk about Elon Musk singling you out? And what are you doing in organizing — what is your goal — these protests around the country, from Washington state, Seattle, where you are, to here in New York, the Tesla dealership here?
VALERIE
COSTA
:
Yeah. Hi.
So, this past weekend, I woke up first Saturday to a tweet where Elon Musk had tweeted that Troublemakers, the group I organize with, along with four other organizations, were ActBlue-funded, which, at least for Troublemakers, is outright not true. And then, Sunday morning, I received — a friend sent me the most chilling text for me, which was Elon Musk had retweeted another tweet where this person had taken a clip from a podcast interview I did about the Tesla Takedown protests and made allegations that I was promoting vandalism and such. And Musk said, “Costa is committing crimes.” So, that was terrifying to receive the news of that tweet.
And the fallout from that has just been, there’s been a lot of people online trying to smear me, make connections, make up stories about who I am and where I get money from — all sorts of just harassment. And I’ve received threats. It’s been scary. It’s been scary, in response to a tweet from a person who essentially bought his way into the White House, as the largest online social media platform, and is spreading outright lies about me and the movement as a whole, the Tesla Takedown movement as a whole.
So, I am just an organizer with the Seattle-based group Troublemakers. There’s a group of us here. We’re working with a number of organizations and individuals who have been planning protests at Tesla showrooms in the area for the last month or so. They’re peaceful, nonviolent protests. They’re First Amendment-protected, free speech, peaceful assembly. And we’ve done that, and we’ll continue to do that as this movement is gaining momentum.
AMY
GOODMAN
:
Can you talk about the fake buyers, actions at Tesla dealerships where there have been reports of protesters faking interest in buying a Tesla to waste time?
VALERIE
COSTA
:
I can’t — I’m not sure — I know that people are using different strategies and tactics to engage with Tesla, but I can’t speak to that particular strategy that people are using. I think there’s different tools in the protester’s toolbox.
I mean, essentially, protesting Tesla, this protest, is ultimately about hitting Elon Musk’s bottom line. You know, every dollar that goes into Tesla, every Tesla that’s purchased increases his — essentially, his profits. And the more money he has, it’s clear, the more influence he has on the government and in the world. He was able to, like I said, buy his way into the White House with his $250 million contribution to Donald Trump’s campaign. And he bought Twitter, giving himself sort of a platform to spread lies. And this campaign is giving people a way to show their outrage about what he’s doing to dismantle critical services and government institutions that people rely on for their health and well-being in this country.
AMY
GOODMAN
:
And can you talk about the strategy of Tesla Takedown urging people to dump Tesla stock?
VALERIE
COSTA
:
I think it’s just related to what I just said. It’s really about, you know, you as a consumer, you as a person that may have stocks and investments. You can choose where to put your money. And I think right now, given the, essentially, fascist takeover of our government, one thing that you can easily do is to move your money out of Tesla into other companies, into other things in your community — move it out of the market — where your money will be used for better, for good.
AMY
GOODMAN
:
And can you address Elon Musk’s claims that — about violence at Tesla dealerships? You’re in Seattle, where on Sunday someone set fire to four Tesla Cybertrucks in a Tesla lot in Seattle’s Industrial District.
VALERIE
COSTA
:
For sure, there’s been vandalism at Tesla dealerships. I’ve seen it on the news. I want to make it really clear that I don’t, and the Tesla Takedown movement, Troublemakers, none of us have ever supported violence or called for violence. We do not believe in that as a strategy of nonviolent resistance. I think the vandalism really shows the outright anger and rage that people are feeling about what Elon Musk is doing right now in our government.
AMY
GOODMAN
:
Clearly, the movement has had an effect, even a Fox reporter questioning Elon Musk and President Trump at that infomercial at the White House, where you could see President Trump had a piece of paper that talked about the costs of various Tesla cars, you know, over $100,000, and he asked about pushing these expensive cars at a time when so many people are suffering in the United States. I was wondering if you could respond to that. Now, that was a Fox reporter asking that question. But also —
VALERIE
COSTA
:
I know. It’s wild.
AMY
GOODMAN
:
Also, Elon Musk talking about violence at dealerships as domestic terrorism?
VALERIE
COSTA
:
Yeah, I’ll first speak to the first part. I mean, that infomercial on the White House lawn was surreal. I’ve been an activist in different forms for years, and I’ve never been part of something like this that’s gained such quick momentum and sort of gotten such national attention, for better or worse. And you can really tell, with, you know, the — OK, the president of the United States of America is buying a car from the world’s richest man’s company. Why? In order to make him feel better? In order to try to prop up the stock? I think the stock barely went up that day and has only gone up a little since then.
I think it really shows that this movement is working, that it’s gained momentum. It’s not just in the United States. We’ve, I think, had over 300 protests so far throughout the country, that are known of, that have been put on the Tesla Takedown map. And there’s 95 planned for this weekend. And that’s all decentralized. Like, I’m just based in Seattle. I’ve just been involved in the Seattle-based protests. This is just — Tesla Takedown is essentially a website with a map where you can put your protest on the map and encourage people in your community to show up and speak out and defend their civil liberties, as well as, you know, their upset about what Elon Musk is doing.
The allegations or the claims of —
AMY
GOODMAN
:
Valerie, we have 30 seconds.
VALERIE
COSTA
:
Yep, OK. Thank you. The allegations and the rhetoric right now of conflating the Tesla Takedown protests with violence and domestic terrorism is chilling, but I think, more than ever, this is the time that we all need to show up, because our civil liberties are getting attacked, as we’ve seen throughout even just watching your program this morning. And I just encourage folks to step up, because the more of us who do, it will be safer for all of us.
AMY
GOODMAN
:
Valerie Costa, I want to thank you for being with us, an activist with the Tesla Takedown movement, speaking to us from Seattle, Washington.
The original content of this program is licensed under a
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License
. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Ex-Education Dept. Official: Trump Is Dismantling Agency as He Weaponizes It Against Gaza Protesters
Democracy Now!
www.democracynow.org
2025-03-13 13:40:00
Education Secretary Linda McMahon has issued a warning to 60 universities that they’re being investigated for antisemitism and could face penalties. Her warning came days after the Trump administration withdrew $400 million in federal grants and contracts to Columbia University and as the admi...
This is a rush transcript. Copy may not be in its final form.
AMY
GOODMAN
:
This is
Democracy Now!
, democracynow.org,
The War and Peace Report
. I’m Amy Goodman. We’ve still lost touch with Gaza. We’ll see if we can get the two American doctors back who are volunteering there.
We turn now, though, to look at two stories at the U.S. Department of Education. On Monday, Education Secretary Linda McMahon issued a warning to 60 universities that they’re being investigated for antisemitism on their campuses and could face penalties. Her warning came days after the Trump administration withdrew $400 million in federal grants and contracts to Columbia University.
This comes as Linda McMahon is moving ahead with plans to gut her own agency. On Tuesday, plans were announced to fire an additional 1,300 workers, effectively cutting the agency’s workforce in half since Trump took office.
We’re joined now by Tariq Habash, who served as a policy adviser in the Department of Education’s Office of Planning, Evaluation and Policy Development until he resigned 14 months ago to protest President Biden’s support of Israel’s war on Gaza. He was the Education Department’s only Palestinian American appointee. He’s co-founder and director of A New Policy, a lobbying group focused on U.S. policy towards Israel and Palestine.
So, Tariq Habash, we brought you on to talk about two issues. One is what Linda McMahon, the new education secretary, has just announced, warning 60 universities equating antisemitism with the pro-Palestinian protests on campus, where many of those who are protesting are Jewish students, and then also the gutting of the department, what it means. Why don’t you start with the warning?
TARIQ
HABASH
:
Thank you so much for having me back, Amy.
I think it’s pretty clear what this administration is trying to do, and it has nothing to do with antisemitism, unfortunately. It has everything to do with Donald Trump’s vision for America, which is consolidating his authoritarian power and pressuring any political dissidents who disagree with his policies and his approach and leveraging our civil rights laws and our institutions to be able to repress any sort of dissent and active speech on college campuses or across the country. And unfortunately, this isn’t something that is just happening under Donald Trump, but this was long-standing under the Biden administration and one of the reasons that I left.
Unfortunately, this idea that student protests in solidarity with Palestinian rights, this idea that antiwar protesters are somehow hateful or discriminating against their peers, particularly when Jewish students are disproportionately represented in these protests, is really concerning. And Trump is escalating that threat against college campuses right now through the announcement that you said Linda McMahon made, 60 colleges now essentially being put on notice for investigations.
The problem with that is also that we are simultaneously seeing the Department of Education get gutted. And a huge part of the gutting that is happening, just a couple days ago, is specifically targeting the Office for Civil Rights and all of the experts and lawyers that are in charge and tasked with doing the investigations, that take a pretty substantial amount of time to do. What is happening here is not real investigations into actual discrimination. This is a political ploy. This is the weaponization of our institutions, and it’s extremely dangerous. And colleges have an obligation to stand up and protect not only their students and faculty, but our democratic institutions as we know them.
AMY
GOODMAN
:
And so, what does it mean to — I mean, it’s clear from Project 2025, what President Trump has said, and Linda McMahon even, that they’re actually trying to just shut down the Department of Education. But to lose half its staff, what this means? They say, “Just give money to the states. The Department of Education is just a bureaucracy.”
TARIQ
HABASH
:
Yeah, unfortunately, it is patently illegal for Donald Trump and the executive branch to just eliminate a federal agency. And they know that, which is why they’re doing the next best thing, in their eyes, although still illegal, is completely gutting the entire civil service and workforce without any rationale or reason for that to happen.
The Department of Education does numerous things that they can’t just wave away with money to states and money to schools and that be the end of it. There’s a lot of oversight. There’s a lot of security. There’s a lot of student privacy and data and information that the department is responsible and obligated to protect. And when you gut the people that are doing that day-to-day work, when you’re gutting the people who are doing the program compliance to make sure that schools are following the rules and protecting their students and providing equal opportunity for education, it creates a really, really dangerous circumstance, where money is not going where it needs to go, and people are misusing the money.
And when you talk about that in the context of what
DOGE
is supposed to be doing, this idea of government efficiency and ending waste, fraud and abuse, this is the exact opposite. The Department of Education is the smallest Cabinet agency. They had just over 4,200 employees just a few months ago. This is not a bloated institution. This is an institution that is working for the American people and some of our most vulnerable, our students, our disabled students, our students who are international who are here to study and to learn, to be able to contribute to society, to serve their communities. And it’s horrifying to see that the Trump administration doesn’t value any of that. And the reality is, they don’t value education. They value what they can use the Department of Education for, which is turning it into a propaganda machine, a political tool to be able to consolidate Donald Trump’s power.
AMY
GOODMAN
:
Well, Tariq Habash, I want to thank you for being with us, co-founder and director of A New Policy, lobbying organization focused on U.S. policy towards Israel and Palestine. He’s a former Department of Education official who resigned in response to President Biden’s support of Israel’s war on Gaza, was the department’s only Palestinian American appointee.
The original content of this program is licensed under a
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License
. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Carl Lundstrom,
the co-founder and early financial backer
of file-sharing website The Pirate Bay, died after a small plane he was flying crashed in the Slovenian mountains.
Lundstrom, who was also a member of the far-right Alternative for Sweden party, was travelling from the Croatian capital of Zagreb to Zurich in Switzerland when his plane crashed
.
The 64-year-old businessman was flying his Piper Mooney Ovation M20R, the party said, confirming reports of his death in a Facebook post on Tuesday. Lundstrom was alone in the plane, they added.
The propeller plane split into two after crashing into a wooden cabin in the Velika Planina mountain in the north of Slovenia on Monday. Bad weather prevented rescue teams from discovering the body and parts of the plane inside the cabin until Tuesday, AFP reported.
“Lundstrom, a legend and veteran of Swedish nationalism, died in a
plane crash
on Monday,” the far-right party wrote.
The plane likely crashed due to spatial disorientation in bad weather, local media website 24.ur reported. It added that the aircraft started spiralling downward from an altitude of 2.5km.
He was one of the early financial backers of
The Pirate Bay
, which was founded in 2003 to allow users to dodge copyright fees and share music and other files. His company Rix Telecom provided services and equipment to The Pirate Bay until 2005.
The Pirate Bay hit one million unique users in May 2006, the same month that Swedish police first raided its servers.
Lundstrom was one of the defendants charged with being an "accessory to breaching copyright law" when the website was dragged to court for promoting copyright infringement. He was sentenced to prison in 2012 and handed heavy fines along with the site's other founders.
However, following an appeal, his prison sentence was shortened to four months instead of one year.
Lundstrom also financed the Swedish Progress Party in 1991, which later merged with the Sweden Democrats. The Alternative for Sweden said Lundstrom joined the far-right party in 2018 and contested the 2021 Assembly election, which he lost.
Lundström was also the heir to a Swedish ‘crispbread’ brand known as Wasabröd, the Metro reported.
Revealed: How the UK tech secretary uses ChatGPT for policy advice
Peter Kyle, the UK’s secretary of state for science, innovation and technology, has said he uses ChatGPT to understand difficult concepts
The UK’s technology secretary, Peter Kyle, has asked ChatGPT for advice on why the adoption of artificial intelligence is so slow in the UK business community – and which podcasts he should appear on.
This week, Prime Minister Keir Starmer said that the UK government should be making far more use of AI in an effort to increase efficiency. “No person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard,”
he said
.
Now,
New Scientist
has obtained records of Kyle’s ChatGPT use under the Freedom of Information (FOI) Act, in what is believed to be a world-first test of whether chatbot interactions are subject to such laws.
These records show that Kyle asked ChatGPT to explain why the UK’s small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on “Limited Awareness and Understanding”, “Regulatory and Ethical Concerns” and “Lack of Government or Institutional Support”.
The chatbot advised Kyle: “While the UK government has launched initiatives to encourage AI adoption, many SMBs are unaware of these programs or find them difficult to navigate. Limited access to funding or incentives to de-risk AI investment can also deter adoption.” It also said, concerning regulatory and ethical concerns: “Compliance with data protection laws, such as GDPR [a data privacy law], can be a significant hurdle. SMBs may worry about legal and ethical issues associated with using AI.”
“As the Cabinet Minister responsible for AI, the Secretary of State does make use of this technology. This does not substitute comprehensive advice he routinely receives from officials,” says a spokesperson for the Department for Science, Innovation and Technology (DSIT), which Kyle leads. “The Government is using AI as a labour-saving tool – supported by clear guidance on how to quickly and safely make use of the technology.”
Kyle also used the chatbot to canvass ideas for media appearances, asking: “I’m Secretary of State for science, innovation and technology in the United Kingdom. What would be the best podcasts for me to appear on to reach a wide audience that’s appropriate for my ministerial responsibilities?” ChatGPT suggested
The Infinite Monkey Cage
and
The Naked Scientists
, based on their number of listeners.
As well as seeking this advice, Kyle asked ChatGPT to define various terms relevant to his department: antimatter, quantum and digital inclusion. Two experts
New Scientist
spoke to said they were surprised by the quality of the responses when it came to ChatGPT’s definitions of quantum. “This is surprisingly good, in my opinion,” says
Peter Knight
at Imperial College London. “I think it’s not bad at all,” says
Cristian Bonato
at Heriot-Watt University in Edinburgh, UK.
New Scientist
made the request for Kyle’s data following his recent
interview with PoliticsHome
, in which the politician was described as “often” using ChatGPT. He said that he used it “to try and understand the broader context where an innovation came from, the people who developed it, the organisations behind them” and that “ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor for it”.
DSIT initially refused
New Scientist
‘s FOI request, stating: “Peter Kyle’s ChatGPT history includes prompts and responses made in both a personal capacity, and in an official capacity”. A refined request, for only the prompts and responses made in an official capacity, was granted.
The fact the data was provided at all is a shock, says Tim Turner, a data protection expert based in Manchester, UK, who thinks it may be the first case of chatbot interactions being released under FOI. “I’m surprised that you got them,” he says. “I would have thought they’d be keen to avoid a precedent.”
This, in turn, poses questions for governments with similar FOI laws, such as the US. For example, is ChatGPT more like an email or WhatsApp conversation – both of which have historically been covered by FOI based on past precedent – or the results of a search engine query, which traditionally have been easier for organisations to reject? Experts disagree on the answer.
“In principle, provided they could be extracted from the department’s systems, a minister’s Google search history would also be covered,” says Jon Baines at UK law firm Mishcon de Reya.
“Personally, I wouldn’t see ChatGPT as being the same as a Google search,” says
John Slater
, an FOI expert. That is because Google searches don’t create new information, he says. “ChatGPT, on the other hand, does ‘create’ something based on the input from the user.”
With this uncertainty, politicians might want to avoid using privately developed commercial AI tools like ChatGPT, says Turner. “It’s a real can of worms,” he says. “To cover their own backs, politicians should definitely use public tools, provided by their own departments, as if the public might end up being the audience.”
Mahmoud Khalil's Lawyers Press for Release of Columbia Univ. Activist Jailed in Free Speech Case
Democracy Now!
www.democracynow.org
2025-03-13 13:20:41
We get an update on the Trump administration’s attempt to deport Palestinian activist Mahmoud Khalil, whose case has alarmed immigrant advocates and civil rights groups. Khalil, a legal permanent resident who is married to a U.S. citizen, was arrested by Immigration and Customs Enforcement age...
This is a rush transcript. Copy may not be in its final form.
AMY
GOODMAN
:
We’re going to go to news here at home, which is clearly directly connected. And that is Mahmoud Khalil, the recent Columbia University graduate, Palestine activist, detained by the Trump administration last week. Khalil, who is a Palestinian green card holder, has finally been able to speak to his lawyers, after a federal judge on Wednesday ordered the Department of Homeland Security to allow him to have private calls with his legal team. Prior to this, Khalil had been unable to speak to his lawyers since his arrest on Saturday night, except for a brief call Monday to confirm he had been transferred to a detention facility in Jena, Louisiana, where he is still being held.
At a hearing on Wednesday here in New York’s Manhattan federal court, U.S. District Judge Jesse Furman, who temporarily blocked Khalil’s deportation earlier this week, extended his order while he considers whether Khalil’s arrest was unconstitutional. The government’s charging document says the Secretary of State Marco Rubio has determined that Khalil’s presence or activities in the United States would have, quote, “serious adverse foreign policy consequences for the United States,” unquote.
Outside the courthouse, Khalil’s lawyer, Ramzi Kassem, told reporters what’s happening to Mahmoud Khalil is shocking and outrageous.
RAMZI
KASSEM
:
It is absolutely unprecedented. The grounds that are being cited by the government for purportedly claiming to revoke his green card and putting him in removal proceedings — this is a person who is a lawful permanent resident; this is a U.S. person — those grounds are vague, they are rarely used, and, you know, essentially, they are a form of punishment and retaliation for the exercise of free speech.
AMY
GOODMAN
:
The Trump administration has accused Khalil, without evidence, of being a so-called terrorist sympathizer for participating in Palestinian solidarity protests at Columbia University. In a social media post, President Trump celebrated Khalil’s arrest, saying it was the first of many to come. Secretary of State Marco Rubio told reporters Wednesday, “This is not about free speech.” Rubio added, quote, “This is about people who don’t have a right to be in the United States to begin with,” end-quote.
In her first media interview, Mahmoud Khalil’s wife Noor Abdalla, who is eight months pregnant, who is a U.S. citizen, told Reuters she had been naive to think Khalil was safe from arrest as a permanent legal resident. She added no one from Columbia University’s administration had contacted her to offer help, although Mahmoud had written to the president of Columbia University a few days before he was arrested, expressing concern and fear and asking for some kind of protection. He was arrested by
ICE
agents in his Columbia housing.
For more on Mahmoud Khalil’s legal case, we’re joined now by his lawyer, Ramzi Kassem, professor of law at
CUNY
— that’s the City University of New York — where he founded the legal clinic
CLEAR
. The acronym stands for Creating Law Enforcement Accountability and Responsibility.
Ramzi, thank you for joining us again. You were just in court yesterday. Explain what the judge ruled.
RAMZI
KASSEM
:
Thank you, Amy.
Yeah, yesterday was an important day in court. As you mentioned, the judge has prohibited the government from deporting Khalil until further order of the court. So, that’s important. The judge also agreed with us, the lawyers for Khalil, that it was imperative for the government to respond to our motion to bring Khalil back to New York. And so the government has to respond to that motion by tomorrow.
And the government also last night moved to change venue, meaning that the government believes Khalil’s
habeas corpus
case, the case that he brought in federal court, should not be in federal court in New York, but it should be in federal court in Louisiana. And, you know, that’s not really a surprise to us, because, of course, it’s been very clear from the beginning that the government’s strategy is to cut Khalil off not just from access to the court here in New York, not just from access to his legal team, but from access to his support base, to his community, to his family, including his expecting wife. And that’s the only conceivable reason why the government would not have held him in New York or New Jersey as they typically do, and move him a thousand miles away merely hours after he files a lawsuit in federal court challenging the constitutionality of his detention.
AMY
GOODMAN
:
Now, Ramzi Kassem, were you able to speak with Mahmoud yesterday after the judge’s ruling?
RAMZI
KASSEM
:
Yeah, Amy, we were. So, when we were in court yesterday, we asked the judge to order the government to allow us to have a privileged conversation with our client in Louisiana. Since we brought suit on his behalf and filed the motion to have him brought back home to New York City, we haven’t been able to speak to him. One of the lawyers on the team was able to have an unprivileged call. But as most of your listeners and viewers know, you know, those calls, the unprivileged calls, the regular calls and visits, can be monitored or recorded by the government. So the judge ordered the government to allow us to have a privileged conversation. We had the first privileged conversation with Mahmoud last night.
And, you know, of course, I’m limited in what I can say about that privileged conversation, but I can tell you that, thankfully, he is in good spirits, he is in fighting spirits. He appreciates all the support that he’s received, the fact that there were thousands of people, certainly hundreds, I would say over a thousand people, outside the courthouse yesterday massed to protest in solidarity with him and in solidarity with the Palestinian people.
AMY
GOODMAN
:
The government is trying to deport Mahmoud based on a rarely used provision of federal immigration law that allows the secretary of state to remove individuals whose presence it believes has “potentially serious adverse foreign policy consequences for the United States,” unquote. Can you explain this extremely rare provision? The only previous time it was used before was in 1995, according to recent press reports. According to
The New York Times
, that was an attempt to deport a former Mexican government official. Ironically, the judge who struck it down as unconstitutional in 1996 was Maryanne Trump Barry, President Trump’s sister, who was a federal judge in New Jersey. Can you talk about this?
RAMZI
KASSEM
:
Yeah. So, that provision has an interesting history, to say the least. It’s a provision that is now understood to have been enacted primarily to target Eastern European Jewish Holocaust survivors who came to the United States and were suspected, often wrongly suspected, of being Soviet agents. So, that’s the origin story of this particular piece of federal legislation.
And that origin story is the reason why Congress came back in and amended it, saying, “Well, you can’t use this law to punish people for their views and their speech. And if you want to do that, then it requires a personal certification by the secretary of state for why that person’s presence and their speech poses a compelling risk to U.S. foreign policy interests.” And it also requires the secretary of state to notify Congress with supporting evidence for why they’re making this move. So, it is an exceedingly rarely used provision. And it certainly does not trump the Constitution and Mr. Khalil’s First Amendment rights.
AMY
GOODMAN
:
So, what are you calling for now, as we wrap up?
RAMZI
KASSEM
:
Well, we’re calling for what we’ve always called for, and what Mahmoud and his family and his supporters and their supporters have always called for. Any day Mahmoud spends in detention, whether it’s in Louisiana or elsewhere, is a day too long. We want to bring him back to New York. We want to bring him back to his family. And so, we want him back here and freed as soon as possible. And we’re pursuing that through every available avenue, with the support of hundreds of thousands of people.
This idea that the movement in support of Palestinian lives and dignity, that criticism of U.S. foreign policy in Palestine and support for Israel is this fringe movement that is driven by foreign actors is a falsehood. Of course, there are noncitizens and green card holders, like Khalil, involved in the movement, but this is an American movement. And the government is in denial about that. And that’s going to backfire, possibly in court, but definitely in the court of public opinion.
AMY
GOODMAN
:
As we wrap up, Ramzi Kassem, you’re also an expert on Guantánamo. It is very interesting that in our headlines we just reported that the Trump administration has just removed the migrants it sent to Guantánamo, and sent them to Louisiana, the state that Mahmoud is also in.
RAMZI
KASSEM
:
Yeah, and, you know, this is just one example — right? — of this administration overreaching and having to back off because of the intense backlash through organizing, through legal means, in the media. And we hope that the same will happen in Mahmoud’s case soon.
AMY
GOODMAN
:
Ramzi Kassem, I want to thank you for being with us, professor of law at City University of New York — that’s
CUNY
Law School. He founded and co-directs
CLEAR
, a legal clinic.
CLEAR
joined with the Center for Constitutional Rights to file a federal
habeas
petition in the Southern District of New York challenging Mahmoud Khalil’s detention. We’ll keep you updated. Stay with us.
As U.N. Accuses Israel of Genocidal Acts, Doctors in Gaza Denounce Israeli Blockade on Aid, Food & Fuel
Democracy Now!
www.democracynow.org
2025-03-13 13:14:42
A new report by United Nations experts says Israel has carried out “genocidal acts” against Palestinians in Gaza, including the destruction of women’s healthcare facilities, intended to prevent births, and the use of sexual violence as a strategy of war. This comes as talks on resu...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN:
This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.
The United Nations has released a report finding that Israel carried out genocidal acts by systematically attacking and destroying sexual and reproductive healthcare facilities in Gaza, including a fertility center, maternity wards and maternity hospitals, and by preventing access to necessary medicines and equipment to ensure safe pregnancies, deliveries and neonatal care.
The Geneva-based Independent International Commission of Inquiry on the Occupied Palestinian Territory, including East Jerusalem, and Israel said, quote, “Israeli authorities have destroyed in part the reproductive capacity of the Palestinians in Gaza as a group, including by imposing measures intended to prevent births,” unquote.
The report describes Israel’s shelling of the Al-Basma IVF Center in Gaza City in December 2023, which destroyed 4,000 embryos, as intentional and, quote, “a measure intended to prevent births among Palestinians in Gaza, which is a genocidal act under the Rome Statute and the Genocide Convention,” unquote.
Describing the harm to pregnant, lactating and new mothers as “unprecedented,” the commission said Israeli security forces, quote, “deliberately inflicted conditions of life … calculated to bring about the physical destruction of Palestinians in Gaza as a group, … [one of the] categories of genocidal acts in the Rome Statute and the Genocide Convention,” unquote.
The U.N. report also accuses Israel’s security forces of using forced public stripping and sexual assault as part of their standard operating procedures to punish Palestinians since the beginning of the war.
Israel’s Permanent Mission to the U.N. in Geneva rejected the allegations in the report as unfounded and biased.
For more on the attack on Gaza’s fragile healthcare system, and as Israel’s total blockade of the Gaza Strip enters its 12th day, with no food, medicine or fuel allowed in, we go now to Gaza, where we’re joined by two volunteer doctors. Dr. Feroze Sidhwa is a trauma surgeon volunteering at Nasser Medical Complex in Khan Younis, Gaza, with MedGlobal. He is usually based in California. And Dr. Mark Perlmutter is an orthopedics hand surgery specialist volunteering also at Nasser Medical Complex, with the charity Humanity Auxilium. Last year, they both volunteered at the European Hospital in Khan Younis during Israel’s active assault on Gaza.
We welcome you both to Democracy Now! Dr. Feroze Sidhwa, let’s begin with you. You’ve just heard this breaking news of the U.N. report calling Israel guilty of genocidal acts, particularly targeting maternity hospitals, the IVF clinic and, overall, the healthcare system in Gaza. If you can respond to, coming back to Gaza now, how you found the hospital, and your response to what they’ve found?
DR. FEROZE SIDHWA:
Yes, the hospital in Gaza is — we’re at Nasser Medical Complex on the western side of Khan Younis, and it’s certainly better than when we were at European Hospital last year. The whole hospital isn’t full of refugees and displaced people, so there’s some — the hospital can actually function.
But this hospital was actually attacked in February and March of last year. The dialysis ward was destroyed. A lot of equipment for new operating rooms was destroyed. And right now the blockade is really affecting the availability of instruments and disposables in the operating room, and it’s making life very difficult for people here.
You know, beyond the healthcare system, the prices in the market are going up again. The availability of certain types of food is going down. Meat and other proteins are very hard to come by now. Eggs are about a dollar apiece in the market just outside the hospital that I saw yesterday. So, it’s a very difficult situation. And the attacks on the healthcare system that were done in the past 16 months have really undermined its ability to help people.
AMY GOODMAN:
And can you talk about the Israeli energy minister just cutting off electricity to Gaza?
DR. FEROZE SIDHWA:
It hasn’t been an obvious problem in the hospital, because we have a generator. But the electricity is now clicking on and off, I don’t know, three times a day, four times a day, whereas before it was not doing that. So, yeah, no, cutting off electricity to a whole population captive in an open-air prison is going to have —
AMY GOODMAN:
It looks like we just lost the video to Gaza, to Khan Younis. It’s remarkable that we were even able to make that connection. …
We just reconnected with Gaza — we’ll see how long we can keep them on — to return to our conversation with two American doctors who are volunteering at hospitals in Khan Younis. This is now the 12th day since Israel has completely blockaded the Strip, cutting off all food, medicine and fuel.
Dr. Feroze Sidhwa and Dr. Mark Perlmutter, welcome back to Democracy Now! Dr. Perlmutter, we last spoke to you in North Carolina after you had returned from Gaza. You are back. Can you talk about what’s happening, and particularly about the medical staff, the doctors, the nurses, the staff, those, for example, who have been arrested by Israel, even one, I understand, who you’ve talked to, who was recently released?
DR. MARK PERLMUTTER:
Right. I have a colleague, an orthopedic surgeon, specializes in pediatric orthopedics, who was abducted from the hospital, from the operating room, in February of 2024. He was just released. When he was arrested, he was strip-searched, marched naked in front of his colleagues, beaten on a daily basis in an Israeli prison, beaten more intensely when he asked to speak to a lawyer, and even more intensely after he spoke to his lawyer. He had his fingers broken and his hand. He had, while handcuffs on, guards step on the handcuffs to torque them into the nerves of his hand, ruining the nerves in his operative hands.
And if the physical daily beatings and the loss of his hand function wasn’t enough, the psychologic torture, which is their specialty, according to almost everybody who’s been released — and there are still approximately 350 healthcare workers in Israeli prisons, all without charges — the psychologic torture is immense. They showed him photographs, surveillance photographs, of his wife, and said, “We will bring her here and gang rape her in front of you if you don’t admit that you’re Hamas,” which is what their request was when they were breaking individual finger bones. And then they showed him a GPS photo of his house and said that they would send a drone through his bedroom window, killing his wife and kids, if he didn’t admit that he was Hamas. So, the physical and mental torture, which is supported by many healthcare workers that were released, and is still obviously going on in Israeli prisons, presents a tremendous weight on the healthcare system here.
AMY GOODMAN:
Dr. Mark Perlmutter, you are a Jewish American. You are urging him, this orthopedic surgeon, to sue American Israeli — you know, American IDF soldiers in a civil suit, making them personally financially responsible for what has taken place? Again, you are also currently president of the World Surgical Foundation, immediate past president of the International College of Surgeons. A civil lawsuit?
DR. MARK PERLMUTTER:
Well, it’s something that’s developing now. You know, we’ve all been to D.C. and spoke to our feckless senators and congressmen. We find them — I personally find them to be spineless and, along with our past two presidents, directly responsible for the carnage that we’ve witnessed here. I think soon that American 2,000-pound bombs will be dropping here again when the ceasefire ends. And Israel has ruined every ceasefire that has ever been declared in their history with Palestine. And I think that when that carnage restarts, decades from now, history will remember especially our past two presidents and our current spineless Senate and Congress as the murderers that they truly are by not having the guts to stand up to Israel.
And it’s not an anti-Jewish stance. It’s an anti-Zionism stance that they need to take. And every time somebody protests in favor of a Palestinian, just like this story that you just broadcast, it’s interpreted as an antisemitic stance and not an anti-Zionism stance. And the two are completely different. In fact, Zionism is the major cause for the destruction of Judaism.
AMY GOODMAN:
Dr. Sidhwa, you’re meeting with the U.N. secretary-general and demanding to know or pushing for information on where Dr. Abu Safiya is, the head of the Kamal Adwan pediatric hospital?
DR. FEROZE SIDHWA:
Yes. Dr. Abu Safiya was taken after Kamal Adwan had been laid siege to for 80 or 90 days. The entire hospital has been completely destroyed at this point. And yeah, he’s just a pediatrician, nothing special about him. He was actually taken by the Israelis earlier in the war and let go, precisely because he’s not a threat to anybody. But then they decided to arrest him again, decided to torment him, as well, just like they have with so many others.
But yes, we were able to meet with António Guterres, the secretary-general of the U.N., as well as Tom Fletcher, the under-secretary-general for OCHA. And Secretary-General Guterres is very interested in the case. He had asked Mr. Fletcher to look into it, which he did. Mr. Fletcher raised the issue with the Israelis, met with Dr. Abu Safiya’s family, quite publicly, actually. And still no word on Dr. Abu Safiya’s release date. The Israelis keep pushing back his court dates and things like that. But we’re very much hoping that he will be released soon. This territory needs every doctor, every nurse that it can possibly muster.
It’s true that the bombs have stopped falling, and most of the shooting has stopped. The Israelis are still killing about half a dozen people a day, but the bombing has, thank God, stopped for the most part. But everyone here is still homeless. Everyone here is still hungry. Everyone here is still thirsty. And as you mentioned at the beginning of the hour, the Israelis have cut off all entry and exit to Gaza again. This is — you’ve got 1.8, 1.9, 2 million people locked in a cage, and they can’t access anything from the outside world. They can’t survive like this.
People are turning on each other here. Palestinian society is very communal and very — there’s a lot of solidarity in it, but there’s limits to everything. Just last night, we actually had a mass shooting between two different families. Over what? Who knows? But people are just at their breaking point with the necessities of life just being unavailable, not for a week or a month or a year, but for 16 months now. It has to end. And what the United States is doing about this is just shameful. It’s backing Israel all the way. And it’s got to stop. It’s got to stop.
AMY GOODMAN:
And the complete cutting off of aid now into Gaza, which has been condemned by humanitarian groups around the world, Dr. Sidhwa?
DR. FEROZE SIDHWA:
Yeah. I mean, you know, like, I’m here with MedGlobal. It’s a group that brings doctors in. But even though I was able to get in, half of my own group was denied entry. So, only two out of the five doctors that were scheduled to come are here now. Dr. Mark Perlmutter, your hand surgery colleague was also denied entry. It’s just — you know, what is being allowed in and not being allowed in, who is being allowed in and not allowed in, it just seems to be —
DR. MARK PERLMUTTER:
Arbitrary.
DR. FEROZE SIDHWA:
— completely random and arbitrary. And it’s wreaking havoc on the whole society. I mean, again, the whole — people can’t survive with nothing. That’s not how human life works. And the United States needs to stop backing this insane isolation of 2 million people, half of whom are children, most of whom are refugees, that didn’t do anything to anybody. Just stop treating them this way.
AMY GOODMAN:
I want to thank you both for being with us, Dr. Feroze Sidhwa and Dr. Mark Perlmutter, both U.S.-based surgeons back in Gaza to volunteer. They’re currently volunteering at the Nasser Medical Complex in Khan Younis. To see Dr. Sidhwa’s op-ed in The New York Times, we’ll link to it at democracynow.org.
Shadeform (YC S23) is hiring a senior founding engineer
At Shadeform, we're building the cleanest and easiest way to rent and use GPUs anywhere. Our GPU Cloud marketplace is used by the Fortune 100, startups, and more to find and deploy affordable, reliable compute. We're building a multi-cloud distributed compute platform to run inference and training workloads anywhere.
We're a capital efficient organization, profitable, and focus on scaling by automating processes rather than growing headcount. We're looking for engineers who are excited to build in this space and leverage AI as much as they can.
Role
This role includes building out new core orchestrations and managed services on top of the Shadeform distributed compute layer along with enhancing and improving our existing infrastructure layer.
Responsibilities
Build out core orchestration capabilities leveraging existing and new tooling
Develop APIs and add new services and resources into the Shadeform platform
Work with design and platform devs to surface new user experiences through the Shadeform console
Manage cloud resources across 20+ environments
Ideal Requirements
Go programming experience
Orchestration development experience (Kubernetes, Nomad, etc.)
Cloud experience (AWS and GCP)
GPU and ML infrastructure experience
What We Offer
Competitive compensation with significant equity
San Francisco based (remote possible)
Off-sites and conferences
Career development opportunities
About Shadeform
Shadeform is a cloud GPU marketplace that provides a unified platform and API to access cloud GPUs across 15+ cloud providers.
Founded: 2023
Team Size: 5
Status: Active
Headlines for March 13, 2025
Democracy Now!
www.democracynow.org
2025-03-13 13:00:00
Mahmoud Khalil Speaks to Lawyers After NY Hearing, Judge Extends Block on Deportation, U.N. Experts: Israel Has Committed Genocide Through Attack on Women’s Health, EPA Moves to Undo Decades of Regulations on Air, Water, Polluting Industries, NOAA Faces More Layoffs; DOJ Guts Oversight Unit fo...
Mahmoud Khalil Speaks to Lawyers After NY Hearing, Judge Extends Block on Deportation
Mar 13, 2025
Image Credit: Right: Jane Rosenberg
A federal judge has extended an order blocking the deportation of detained Palestinian rights activist and recent Columbia graduate Mahmoud Khalil. Khalil is a U.S. permanent resident who was arrested for leading Gaza solidarity protests at Columbia last year. His legal team spoke after a hearing in front of a Manhattan courthouse Wednesday. This is Baher Azmy of the Center for Constitutional Rights.
Baher Azmy: “Mr. Khalil’s detention has nothing to do with security. It is only about repression. The United States government has taken the position that it can arrest, detain and seek to deport a lawful permanent resident exclusively because of his peaceful, constitutionally protected activism — in this case, activism in support of Palestinian human rights and an end to the genocide in Gaza.”
Mahmoud Khalil was finally able to speak privately with his legal team Wednesday following an order by the New York court to the Department of Homeland Security.
Mahmoud Khalil’s wife, who is a U.S. citizen and eight months pregnant, issued a statement on Tuesday, “pleading with the world to continue to speak up against his unjust and horrific detention by the Trump administration.” Protests have erupted across the country following Khalil’s arrest Saturday, including in front of the ICE detention center in Jena, Louisiana, where he is being held. We’ll have the latest on this case with one of Mahmoud Khalil’s lawyers, Ramzi Kassem, after headlines.
U.N. Experts: Israel Has Committed Genocide Through Attack on Women’s Health
Mar 13, 2025
A panel of U.N. experts has accused Israel of “genocidal acts” against Palestinians in Gaza through attacks on women’s healthcare facilities, deliberately blocking medicine for pregnancies and neonatal care, and using sexual violence as a war strategy. In a new report, the Independent International Commission of Inquiry on the Occupied Palestinian Territory writes, “Israeli authorities have destroyed in part the reproductive capacity of the Palestinians in Gaza as a group, including by imposing measures intended to prevent births, one of the categories of genocidal acts in the Rome Statute and the Genocide Convention.” The commission also documented the use of forced public stripping and nudity, sexual harassment including threats of rape, as well as sexual assault, as part of Israeli forces’ standard operating procedure.
Mediated talks between Israel and Hamas continue today in Doha, as Palestinians in Gaza endured a 12th straight day of Israel’s total blockade of the Palestinian territory.
EPA Moves to Undo Decades of Regulations on Air, Water, Polluting Industries
Mar 13, 2025
The Environmental Protection Agency has announced it’s undertaking the largest deregulation action in U.S. history. This includes rolling back dozens of crucial water, air quality and industry measures, as well as cutting billions of dollars in funding from green energy and local climate groups. At threat is the landmark “endangerment finding,” which classifies greenhouse gases as a public health threat and allows for climate regulations. This is EPA Administrator Lee Zeldin.
Lee Zeldin: “I’ve been told the endangerment finding is considered the holy grail of the climate change religion. For me, the U.S. Constitution and the laws of this nation will be strictly interpreted and followed. No exceptions. Today, the green new scam ends.”
The head of Sunrise Movement called the EPA’s move “a f— you to anyone who wants to breathe clean air, drink clean water, or live past 2030.”
NOAA Faces More Layoffs; DOJ Guts Oversight Unit for Corrupt Public Officials
Mar 13, 2025
The National Oceanic and Atmospheric Administration is bracing for another round of mass firings after the Trump administration signaled plans to lay off more than 1,000 workers. NOAA’s workforce is already down by about 2,000 people since Trump took office, following a first round of cuts and a buyout offer by Elon Musk and his DOGE operation.
The Trump administration has begun gutting the Justice Department unit that oversees prosecutions of public officials accused of corruption. Cuts to the Public Integrity Section come after several of its officials resigned last month when acting Deputy Attorney General Emil Bove ordered them to drop federal corruption charges against New York Mayor Eric Adams.
USAID Destroys Classified Documents After Judge Orders Trump Admin to Pay Out Foreign Aid
Mar 13, 2025
A federal judge in Washington, D.C., has ordered the Trump administration to pay nearly $2 billion in foreign aid funds owed to grant recipients and contractors of the State Department and the U.S. Agency for International Development. In Monday’s ruling, U.S. District Judge Amir H. Ali said the administration’s freeze on the foreign aid funds likely violates the separation of powers, writing, “The constitutional power over whether to spend foreign aid is not the President’s own — and it is Congress’s own.”
Meanwhile, the Trump administration has ordered the wholesale destruction of classified documents and other records at USAID, which President Trump and Elon Musk are working to dismantle. An email from USAID’s acting executive secretary, Erica Carr, reads, “Shred as many documents first, and reserve the burn bags for when the shredder becomes unavailable or needs a break.” It’s not clear whether any USAID official got permission from the National Archives to destroy the documents, a necessary step under the Federal Records Act of 1950.
Trump Administration Flies 40 Remaining Immigrants from Guantánamo to Louisiana
Mar 13, 2025
The Trump administration has flown all 40 remaining immigrants from the Guantánamo Bay military prison back to the U.S., to Louisiana. The move follows a congressional delegation’s visit to Guantánamo over the weekend and amid mounting legal challenges. Nearly 300 immigrants have been locked up by the Trump administration in Guantánamo.
Russia Recaptures Ukraine-Occupied Territory as U.S. Envoy Arrives in Moscow for Ceasefire Talks
Mar 13, 2025
Russia continued attacks on multiple Ukrainian cities overnight. In Kherson, a 42-year-old woman was killed by Russian artillery fire. Meanwhile, Russia’s top general says his forces have reclaimed nearly 90% of the territory that Ukraine seized from Russia’s Kursk border region during a surprise counteroffensive in August. On Wednesday, Russian President Vladimir Putin toured a command post near the frontlines in Kursk, dressed in military fatigues. Russian forces made rapid gains in recent days after the Trump administration cut off intelligence sharing and the flow of weapons to Ukraine. The administration restored the assistance on Tuesday, after Ukraine agreed to a 30-day ceasefire proposal.
Trump’s special envoy Steve Witkoff arrived in Moscow earlier today for talks at the Kremlin over the ceasefire plan. Reuters reports Russia presented the U.S. with a list of demands similar to its previous ultimatums: no NATO membership for Ukraine, an agreement not to deploy foreign troops in Ukraine and international recognition of Putin’s claim to Crimea and four Ukrainian provinces.
Meanwhile, Poland’s President Andrzej Duda has repeated his request for the U.S. to deploy nuclear weapons on Polish soil, calling it a deterrent to Russian aggression.
Canada Hits Back at Trump With 25% Reciprocal Tariffs on Steel, Aluminum and Other Goods
Mar 13, 2025
As Trump’s steel and aluminum tariffs went into effect Wednesday, Canada imposed its own 25% retaliatory tariffs on steel and aluminum, as well as on computers, sports equipment and other goods totaling over $20 billion. This is on top of previously announced reciprocal tariffs on U.S. goods. This is Canadian Foreign Affairs Minister Mélanie Joly.
Mélanie Joly: “Canada is not the one driving up the cost of your groceries or of your gasoline or any of your construction. Canada is not the one putting your jobs at risk. Canada is not the one that is ultimately starting this war. President Trump’s tariffs against you are causing that. And there are no winners in a trade war.”
As the trade war between Trump and Canada escalates, the U.S. has paused talks with Ottawa on a key water-sharing treaty over the Columbia River. Trump has repeatedly called for the U.S. annexation of Canada, for Canada to become the 51st state.
Greenland Center-Right Party Wins Surprise Election Victory Amid Trump’s Threats to Take Over
Mar 13, 2025
In Greenland, the opposition, center-right Demokraatit Party pulled off a surprise victory in parliamentary elections. All of Greenland’s main parties have called for full independence from Denmark, though the Demokraatit Party favors a more gradual transition. The election was called last month as Greenland came under a major international spotlight amid Trump’s threats to take over the territory, which is home to rare earth minerals. Greenlanders have forcefully rejected the idea. The population of Greenland is just 56,000 and is 90% Inuit.
Ugandan Forces Deploy to South Sudan as Power-Sharing Deal That Ended Civil War Unravels
Mar 13, 2025
Image Credit: X/@stateHouse_J1 // X/@RiekMachar
Uganda has deployed special forces to South Sudan amid escalating tensions between South Sudan’s President Salva Kiir and First Vice President Riek Machar over a collapsing power-sharing deal. That 2018 agreement ended a five-year civil war that killed some 400,000 people. Deadly clashes by armed groups have been mounting, many centered in Upper Nile state. The U.N. has urged South Sudan’s leaders “to resolve tensions through dialogue.”
Yale Suspends Legal Scholar and Palestinian Rights Supporter Helyeh Doutaghi
Mar 13, 2025
Yale has suspended an international law scholar and outspoken defender of Palestinian rights, after an AI-powered, far-right website called Jewish Onliner accused her of ties to a group on a U.S. sanctions list, an apparent reference to the advocacy organization Samidoun. Helyeh Doutaghi wrote in a statement, “[Yale Law School] actions constitute a blatant act of retaliation against Palestinian solidarity … I am being targeted for one reason alone: for speaking the truth about the genocide of the Palestinian people that Yale University is complicit in.” Doutaghi added the move from Yale is “the last refuge of a crumbling order — an empire in decline, resorting to brute repression to stifle and crush those who expose its unraveling hegemony.”
After I had repeatedly wondered how all those "ash" variants might be related, and had found hardly any information, I had a closer look at the variants I know. Source is available for all variants, except for BSD/OS.
Thanks to TUHS for archiving the traditional BSDs and 386BSD, to Kirk McKusick for his CSRG archive, and to Peter Seebach for allowing me to learn all BSD/OS variants.
This page documents relationships. And for the variants without changelogs (the traditional BSDs, 386BSD, BSD/OS and Minix) this page aims at being a complete log concerning source code changes. But for the other, later variants this was certainly not the goal and just a few, arbitrarily chosen changes or bugfixes are listed.
Why no commented diffs? I won't manage to do that work. And if you're interested at that level, then you are looking at the source already and these comments are a good start.
It was written by Kenneth Almquist as a replacement for the traditional "System V Release 4" Bourne shell due to the license war between AT&T and Berkeley. Berkeley distributed it first with "BSD 4.3-Net/2". The source was posted to comp.sources.unix, "A reimplementation of the System V shell", on 30-05-1989. Shortly after, Kenneth published a job control patch to fix the problem that more than 4 jobs were not handled properly.
First, a few summaries for orientation on what these almquist shells are about.
Differences between the ash family and the SVR4 shell:
ash implements "$(...)" command substitution
accepts "export VAR=value"
echo accepts option -e to interpret escape sequences (undocumented)
the file DIFFERENCES from the original distribution lists most remaining differences to both the Berkeley shell (a variant of the V7 shell) and the SVR4 shell. A few differences are documented but not listed in this file, e.g., the built-ins "bltin" and "setvar"
only mentioned in the manual, but not explained: built-ins "catf", "expr", "line", "nlecho"
a characteristic feature of almquist shells in general: a PATH component flagged with a trailing "%func" is recognized as a directory containing function definitions. Read more about this.
another characteristic feature of almquist shells: "local -" makes $- local in functions (thanks to Jilles Tjoelker)
Common to ash variants and the SVR4 shell only, in contrast to other bourne compatible shells:
"notexistent_cmd 2>/dev/null" doesn't redirect the error "not found" (exception: dash since 0.4.17 and FreeBSD 9.0)
"echo x > file*" doesn't expand file*
Details which are only found in early ash variants:
(only) the original release knew an undocumented version variable: SHELLVERS="ash 0.2". Later variants can only be recognized and distinguished by characteristic features. (A release "ash-0.2" of the later linux port is not related to this variable.)
like SVR4 sh: no command line editing and history (by intention)
like SVR4 sh: no # and % form of parameter expansion
like SVR4 sh: no arithmetic expansion
like SVR4 sh: no multi-digit parameter expansion like ${10}
like SVR4 sh: "set --" doesn't unset the parameter list
by intention, there is no backslash escaping in the back-tick form of command substitution (`...`), side effect: this form cannot be nested [changed in 4.4BSD]
undocumented: no "type" built-in (probably accidentally) [fixed in NetBSD1.3, FreeBSD2.3]
read by default behaves as if the POSIX flag -r is active
PS1 is "@"
with "sh -c cmd arg0 arg1", $1 is set to arg0 [fixed relatively late: all traditional BSDs, and early Free-/NetBSDs and early Minix behave as well]
bug: "VAR=set eval 'echo $VAR'" gives no output, that is, you can't call eval with local variables
"for i do" not accepted (but "for i; do" or "for i<newline>do")
flag -v not implemented
reads SHINIT at start (except if a login shell or called with "sh file")
"trap" doesn't accept symbolic signal names
"expr" built-in, merged with "test"
negation of patternmatching with "!!pattern"
flag -z (don't collapse filename pattern if it doesn't match)
rejects "case x in (x)" [1] [Fixed soon. As exceptions, NetBSD and Minix fixed this quite late]
bug: segmentation violation on empty command substitution "$( )" (but not on "` `")
bug: segmentation violation on "${var=`cmd`}"
compile time option (off by default) for a "cleaner" eval (not concerning the above problem, though)
shell recognizes #! (if the kernel fails), if not compiled with BSD support. This compile time macro is explained with "running 4.2 BSD or later". This code still exists in some modern variants, but is de-facto inactive.
[1] In fact, no ash variant needs "case x in (x)" because the parser is robust about case constructs in $(...) command substitution. However, several other shells are not robust and they have to work around this way; so this is a script portability issue. It was, sooner or later, fixed in most implementations: BSD/OS 3.0 (06/'96), dash-0.3.5-4 (08/'99) (and thus Busybox and Slackware 8.1 ff.), FreeBSD 4.8/5.1 (10/'02), NetBSD 4.0 (06/'06).
Details which are found in early and in some later ash variants:
bug: built-ins in command substitution are wrongly optimized, the shell exits upon foo=`exit 1` [fixed in NetBSD 1.4, dash-0.3.5 and thus busybox, FreeBSD 9.0]
A detail which is found in ash, in the SVR4 shell, and in most bourne compatible shells, but usually not documented:
accepts "{ ...; }" as body in a for loop (instead of "do ...; done")
renamed flag -j (jobcontrol) to -m (like SVR4 shell),
added flags -C (noclobber), -a (allexport), -b (notify asynchronous jobs),
implemented and documented flag -v (formerly accepted but not implemented),
removed flag -z (don't collapse filename pattern if it doesn't match),
accepted (documented as not implemented) flag -u (error on unset)
documents multi-digit parameter substitution, like ${10}, but doesn't implement it
warning "You have stopped jobs."
"shift" built-in knows "can't shift that many"
warning "running as root with dot in PATH" in case
negation of patternmatching with "!!pattern" removed
The 386BSD distribution was derived from 4.3BSD-Net/2 and aimed at maintaining a runnable system, that is, completing Net2 by replacing the pieces which had to be removed after the license war.
386BSD 0.0: initial release, sh is identical to 4.3BSD-Net/2 (02/'92)
This started as a Linux port for Debian, by Herbert Xu, derived from NetBSD. Debian stable release 2.0 ("Hamm", 03/'99) then replaced the early Linux port ash-0.2 with this variant. Two high priorities of this project are restricting itself to POSIX conformance and a slim implementation.
The dash manual documents the command line history and editing capabilities (gained with 4.4BSD-Alpha). These features are disabled at compile time, though. You can enable them by recompiling: have the "BSD editline and history library" available (package libedit-dev on Debian) and remove -DSMALL from src/Makefile.
Some changes:
fork from NetBSD-current-19970715 (~ release 1.3). It starts as "ash" release with release number "0.3-1". It gets re-synced with NetBSD later several times.
Sync with NetBSD on 19970715, 19980209.
[Debian stable release 2.0 ("Hamm") 03/'99 with 0.3.4-2]
[Debian stable release 2.1 ("Slink") 08/'99 with 0.3.4-6]
[Debian stable release 2.2 ("Potato") 08/'00 with 0.3.5-11]
substituted the embedded pmatch() with fnmatch(3) as of 0.3.7-1, but switched this several times later (due to problems with buggy glibc versions). This also determines if negation with [^...] and extended character classes like [:print:] are recognized (see the short fnmatch(3) illustration after this list)
previous exit status preserved when entering a function, 12/'00, 0.3.7-11
unknown escape sequences are printed literally by echo, 03/'01, 0.3.7-15
"printf" built-in added, 04/'01, 0.3.8-1
doesn't recognize "{ ... }" any longer as body in a for loop (that is, as "do ... done"), 04/'01, 0.3.8-1
Sync with NetBSD on 20000326, 20000729, 20010316 and patches from NetBSD on 20010502, 20011103, 20020302.
"fnmatch" used again (and thus [^...]) 11/'01 with 0.3.80-30
"jobid" built-in removed, 02/'02 with 0.3.8-36
[Debian stable release 3.0 ("Woody") 07/'02 with 0.3.8-37]
renamed to "dash" (influenced by Debian), 09/'02, with release 0.4.1.
[Debian stable release 3.1 ("Sarge") 06/'05 with 0.5.2-5]
Sync with NetBSD 1.6 on 20021019.
"notexistent_cmd 2>/dev/null" redirects the error "not found" since 12/'02, 0.4.7.
Sync with NetBSD on 20030123.
"setvar" built-in removed 04/'03 with 0.4.14.
"fnmatch" not used anymore (and thus no [^...]) 04/'04 with 0.4.26
As of 0.4.26-1 (2004-05-28) it's not a native debian package anymore but is maintained independently (and debian maintenance goes to Gerrit Pape).
added extended character classes due to still existing problems with glibc fnmatch(), 06/'04, 0.4.26.
"exp" aka "let" built-in removed 02/'05 with 0.5.3
[Debian stable release 4.0 ("Etch") 04/'09 with 0.5.3-7]
arithmetic expansion accepts variable names without dollar, 01/'09, 0.5.5.1.
[Debian stable release 5.0 ("Lenny") 09/'09 with 0.5.4-12]
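A small stand-alone illustration may help here (this is ordinary C calling the C library, not dash source): the practical effect of switching between the embedded pmatch() and fnmatch(3) is whether patterns with "[^...]" negation and character classes such as "[[:digit:]]" are understood. The helper name try() below is made up for the example.

/* Illustration only: patterns that glibc's fnmatch(3) accepts, which the
   embedded pmatch() historically did not. */
#include <fnmatch.h>
#include <stdio.h>

static void try(const char *pattern, const char *string)
{
    int r = fnmatch(pattern, string, 0);
    printf("fnmatch(\"%s\", \"%s\") -> %s\n",
           pattern, string, r == 0 ? "match" : "no match");
}

int main(void)
{
    try("[^0-9]*", "abc");       /* negated bracket: matches               */
    try("[^0-9]*", "1abc");      /* first character is a digit: no match   */
    try("[[:digit:]]*", "42");   /* character class: matches               */
    return 0;
}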
Slackware 8.1 (09/'06) then switched to a variant similar to dash 0.3.7-14. It is based on a NetBSD ash tarball named ash-0.4.0 and a collection of 21 debian patches (with the debian specific stuff taken out). With later releases, these patches were not modified anymore (until Slackware 13.1, at the time of this writing). The included debian changelog stops at 0.3.7-14; a fix from 0.3.7-11 is in (preserve previous exit status upon entering a function), and a fix from 0.3.7-15 is not in (unknown escape codes are printed literally by echo). Some other smaller distributions emphasizing a slim shell also use the Slackware shell variant.
Originally, the system shell on Android (/system/bin/sh) was an ash variant. The Android core consists of a Linux kernel and, for license reasons, NetBSD utilities instead of GNU. The initial release (android-1.0) in the public part of the Android core source repository dates from Oct 21 2007. It's a checkout from the NetBSD repository, made some time after 13 June 2005, which is the latest rcs time stamp in the code. NetBSD was going to be released as 3.0 soon, in December 2005.
The NetBSD ash source was checked out only once, early in the alpha phase of the Android project, and was just minimally modified so that it compiled on Android. From then on it was changed almost only for compiling reasons.
Android 2.3.1_r1: command line editing and history was added, using the "linenoise" library, Dec 2010.
Since Android 4, Oct 2011 (apparently rather some 3.x, which is closed source though), mksh is also available and has become the default shell for many apps, e.g. the terminal emulator.
The BusyBox distribution aims for small implementations. Apart from that, emphasis is both on standards compliance and on user convenience, where bash plays a role. Although further features were added, there's no reference documentation at the time of this writing (05/'11). VMware ESX (6.7.0) has it as system shell.
The debian ash variant 0.3.8-5 was incorporated with busybox release 0.52 (07/'01). (This release also added msh, the minix shell, and lush, "a work in progress".) The code was accumulated into one file and notably modified.
compile time options to activate math, to deactivate jobcontrol and aliases, to activate these built-ins: type (early releases), getopts, command, chdir, some flags for read (later releases); and to integrate these commands as built-ins: true+false (early releases), echo, test (later releases)
ash became the default shell with busybox release 0.60.3 (04/'02).
compile time option ("ASH_BASH_COMPAT", active by default) for several bash compatibility features: option pipefail, substring and replacement parameter expansion ${x:y:z} and ${x/y/z}, [[, source, $'...', &>
undocumented: calls fnmatch(3) instead of the embedded pmatch(), which has further issues due to bugs in some glibc versions. This also introduces [^...] as a synonym for [!...].
With Minix 2.0.3, an ash variant replaced the former "minix shell" as /bin/sh: This ash was derived from 4.3BSD-Net/2, apparently with the 386bsd patchkit 2.4 applied. Additionally:
knows tilde expansion
"-" is also end of options
"getopts" fixed ($OPTIND correct after an "option requires an argument")
accepts empty command, e.g. "if ; [...]"
"false" built-in added (true was always a synonym for the colon command)
Minix specific: Command line editing and history added by means of Minix readline()
- a blog about
xit
RSS
██╗ ██╗██╗████████╗██╗ ██████╗ ██████╗
╚██╗██╔╝██║╚══██╔══╝██║ ██╔═══██╗██╔════╝
╚███╔╝ ██║ ██║ ██║ ██║ ██║██║ ███╗
██╔██╗ ██║ ██║ ██║ ██║ ██║██║ ██║
██╔╝ ██╗██║ ██║ ███████╗╚██████╔╝╚██████╔╝
╚═╝ ╚═╝╚═╝ ╚═╝ ╚══════╝ ╚═════╝ ╚═════╝
DEVLOG - OPTIONAL PATCHES, FORCE PUSH, SYMLINKS AND MORE - March 13, 2025
I got my first couple bug reports...mama, we made it! In #2 I forgot that in xitdb, data made in a transaction is temporarily mutable (an important perf optimization). It's fun forgetting how my own database works. In #4 I forgot that when you merge, the base commit might not actually contain the conflicting file. Oops.
Shortly after that, I made patch-based merging optional. It no longer generates patches during the clone, so cloning repos with large histories won't outlast the heat death of the universe. It still takes more time than git, because it is decompressing and chunking each object...I'm working on making it faster.
The next morning I woke up feeling particularly masochistic, so I decided to work on the pack file code. I noticed that some fetches were failing because the pack file from the server contained a REF_DELTA object, which is a deltified object whose base object is not in the pack and is expected to already be in the object store. I fixed the pack object reader to read these objects correctly, and then told my therapist all about it later that day.
After recovering from that eldritch horror I implemented --abort for merge and cherry-pick. This would normally be easy because it's essentially just `git reset --hard`, which xit already has in the form of `xit reset-dir`, but it wasn't correctly cleaning up the merge state. Now it is, and you can --abort all you want.
The next day I turned to `push`, because there was an issue that I doubt people would enjoy: all pushes were force pushes. This is because the server does not check if you're about to obliterate any commits...that check is done client-side by git! I added the check to xit, along with a -f flag to bypass it.
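For what it's worth, the rule behind that check is just an ancestry test: a push without -f should only be allowed when the remote's current head is an ancestor of the commit being pushed, so nothing gets obliterated. The toy sketch below is neither xit's nor git's actual code (single-parent commits only, made-up names); it only demonstrates the rule.

/* Toy fast-forward check: a push is fine without -f when the remote head
   is an ancestor of the commit we want to push. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct commit { const char *id; const struct commit *parent; };

static bool is_ancestor(const struct commit *ancestor, const struct commit *head)
{
    for (; head != NULL; head = head->parent)
        if (head == ancestor)
            return true;
    return false;
}

int main(void)
{
    struct commit a = { "a", NULL };
    struct commit b = { "b", &a };   /* local work built on top of a */

    /* pushing b while the remote is at a: fast-forward, allowed */
    printf("push b over a: %s\n", is_ancestor(&a, &b) ? "ok" : "needs -f");
    /* pushing a while the remote is at b would discard b: force push */
    printf("push a over b: %s\n", is_ancestor(&b, &a) ? "ok" : "needs -f");
    return 0;
}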
I had to get away from networking for a while so the next day I turned to the TUI. I added an obvious optimization that I should've added long ago: buffered writing. The TUI on windows was particularly slow but now it's actually usable. I also made the TUI exit cleanly when ctrl+c is entered.
And that brings me to yesterday, when I added support for symlinks. That was WAY more annoying than I thought it would be. Fuck symlinks. Apparently windows agrees, because it doesn't even let you make them without admin privileges...so xit is forced to just make a normal file, just like git does. Speaking of windows, I also added a nice fix to prevent windows users from accidentally overwriting the file mode in a repo.
That was just one week of work! Oh...and this blog is no longer wrapping every single character in span tags. That really triggered a lot of web developers. This is what happens when you put a Zig programmer in charge of HTML. I hope you all learned a valuable lesson.
Wanderstop review – a wonderful break from the pressure to win
Guardian
www.theguardian.com
2025-03-13 12:04:56
Ivy Road/Annapurna Interactive; PC, PS5, Xbox A fallen warrior trades in fighting for putting the kettle on, wandering round gardens and watering plants. Much more than cosy – it’s healing The term “cosy game” typically inspires one of two responses in those of us who play video games regularly. It ...
The term “cosy game” typically inspires one of two responses in those of us who play video games regularly. It will either call you in with the promise of soft, resource-management oriented gameplay whose slower pace offers a gentle escape and a bucolic alternative to gunslinging and high-stress adventure. Or it will repel you – as admittedly, it repels me. Cosy is often a kind of code for twee, low-stakes domestic adventures where drama is eschewed in favour of repetitive tasks intended to generate comfort, or imitate lightning-in-a-bottle resource management sims such as Stardew Valley or Animal Crossing.
So when faced with Wanderstop, a colourful game in which a fallen warrior trades in her fighting life for managing a tea shop, I was hesitant. However, this is Davey Wreden’s third project, after The Stanley Parable and The Beginner’s Guide, which means that, if it is anything like its predecessors, it will be full of surprises and made with deep attention to detail and artistic vision. Wreden is an auteur – one of his trademarks being a knowing, wry postmodernism. His work bends what the medium of the video game is capable of, and thankfully this latest offering is no exception.
Urn your living … Wanderstop.
Photograph: Annapurna Interactive
Wanderstop excels at combining the nature of the gameplay with a narrative about the perils of burnout. The game practises what it preaches – it does not simply gesture towards a quiet life; rather, it erects one around the player whether they like it or not.
Alta, our fallen warrior protagonist, distinctly does not like it. She has lost too many battles in a row, and on a quest to study under her hero, finds herself collapsing in the woods. Boro, a benevolent gentleman who runs the Wanderstop teashop, takes Alta in and coaxes her into participating in some tea-making and light chores until she recovers. Characters come and go while the player makes them tea and maintains the colourful and dangerously Ghibli-esque gardens. Aside from making tea, you can take care of some fine creatures that look like puffins. Take photos. Sweep. Gather trinkets. Read some of the books that are lying around. Grow plants, pick seeds, grow bigger plants, pick fruit. Use the fruit to make tea. Give the tea away to your guests, or drink it yourself, staring into the lovely landscape.
This is not an unfamiliar vignette. Wytchwood, Spiritfarer, Spirittea, Moonstone Island – there are plenty of games that deal in combining botanical ingredients to fulfil the wishes of whimsical creatures. What makes Wanderstop different is that it refuses to hand you progress or resolution. There is no way to optimise, no way to tick off boxes, no pressure. No winning. The game refuses to give you the satisfaction of the grind, of boxes ticking up, neat little rows of plants. To explain how the story manages this would be to spoil the great magic trick of the game – but suffice to say I was awestruck at real ludonarrative harmony in action. It is one thing to talk, within the dialogue and story of a game, about burnout. About working so hard you can no longer even think about working. It is one thing to extol the virtues of rest. It is another altogether to actively demonstrate what surrender and healing look and feel like.
On a technical level, the game offers no resistance – the controls are immaculate, straightforward and cleanly executed. The music is pleasant and unobtrusive, the voice acting is used in small doses and is very strong. The mechanical aspects of the game are perfectly tuned and the dialogue and incidental text are funny, surprising, and shockingly poignant when they need to be. There are no snags to trip up on, nothing in the way of a deeply transporting experience.
It takes around 12 hours to “complete” Wanderstop, but for me the game demanded a replay immediately. I was eager to return to the gardens around the teashop, to look for secrets, to talk to Boro as much as possible. To linger for a little while more. To slow down, and to consider what exactly it is that is rushing all of us. If the slippery and infallible nature of the gameplay is frustrating to seasoned resource management fans, I would argue that that is exactly the point. To play while letting go.
Wanderstop’s cosy and cute exterior belies something much richer and much cleverer than I have seen in quite some time. It is a masterpiece in a cute disguise – offering the player a place worth visiting, staying and paying attention to.
"For it is true that astronomy, from a popular standpoint, is handicapped by the inability of the average workman to
own an expensive astronomical telescope. It is also true that if an amateur starts out to build a telescope just for
fun he will find, before his labors are over, that he has become seriously interested in the wonderful mechanism of
our universe. And finally there is understandably the stimulus of being able to unlock the mysteries of the
heavens by a tool fashioned by one's own hand."
Russell W. Porter
Founder of Stellafane, March 1923
Introduction
A lot has changed since Russell Porter wrote those
words - today the "average workman" can afford to buy an already
made telescope and Dobsonian mountings are very popular. Much
is also unchanged - mirror grinding techniques are very similar to
those written up by Porter and Ingalls in the '20s and '30s. Many
amateur astronomers still choose to fabricate their own instruments,
for the pride of accomplishment, the gaining of knowledge and the
insurance of quality. Telescope making is at the heart of the
Springfield Telescope Makers - after all it is two thirds of our
club's name - and on these pages we hope to show you that you too
can make your own telescope - and it can be an excellent performer!
A note on ATM techniques:
There are almost as many ways to
make mirrors and telescopes as there are telescope makers. On these
pages we present one or more ways that have worked for us, but that
doesn't mean there aren't many other valid approaches. In fact, many
of us enjoy ATMing because we can experiment with different
techniques and sometimes find better ways of making or building a
telescope. And even if our new technique isn't better, we usually
learn something valuable in the process. On these pages, however, we
have tried to stick with simple and proven techniques that are most
appropriate for novice mirror and telescope makers, and are
generally what we teach beginners at the
Stellafane Mirror Class
.
Provides basic information about telescopes for beginners, and briefly discusses important factors that should be considered in Selecting a First Telescope.
There are a lot of misconceptions about making a mirror - read this even if you don't plan to make a mirror, but just want to know how it is done - with your bare hands and a few simple tools, you can grind and figure a fine mirror with a surface accurate to a few millionths of an inch!
This page has formulas for many telescope and mirror
parameters. It will calculate parameters for two different
telescope designs and compare the results for you. It provides
data to help in selecting a telescope, and also provides
information necessary for mirror making and testing.
This isn't your modern C compiler booting on a recent processor. Instead, you'll read about how I bootstrapped a tiny C compiler on a 1987 transputer processor, while providing it with a basic operating system, a text editor, and an assembler! This happened in 1995, when I was age 16.
This article is possible because I was so proud of my achievement that I saved most of the original files and source code on a single floppy disk. In fact, I had completely forgotten about it, except for my fond memories of porting the C compiler and of my 32-bit operating system that grew fairly sophisticated, until recently, when I made a transputer emulator and was looking through my archives. To my complete surprise, I found a pretty early version of my toolset. But let's go through it in parts.
C is very hard
Nowadays a common Reddit question. I learned the hard way that jumping from BASIC to a more modern language wasn't as easy as I thought! I had great trouble going from the line-oriented BASIC language to structured programming with the Pascal and C languages.
Accustomed to reading source code in a top-to-bottom fashion, it took me a lot of time to understand that Pascal starts at the end, while in C the main() function can appear anywhere. Worse, the free-form style of both languages drove me nuts because I was too fixated on having a full IF statement on a single line.
I tried many times to read The C Programming Language book by Brian Kernighan and Dennis Ritchie, maybe as early as 1989, when I was barely age 10. Although I could understand the "Hello World" program, the next program they chose to showcase the language was absolutely terrible for learning (the conversion table from Fahrenheit to Celsius), just because it was too much info for my young mind to process. It didn't help that the next example put the same program together in only two lines, an early example of obfuscated C.
It wasn't until I understood Pascal that I could get back to C and finally got enlightened. The curly braces are Pascal's BEGIN and END, and the syntax diagrams don't need to be in neat boxes (like Pascal's) but can be lines neatly ordered with recursive references (in turn this made wonderful sense of appendix A of The C Programming Language).
The Small-C compiler
My father in the eighties basically bought every single good technical and programming book available at our local American Book Store in Ciudad Satelite, a town just outside the north of Mexico City. They brought USA technical books into Mexico, even though their main business line was English student texts for local schools. Remember, there was no Internet at the time; knowledge dissemination was exclusively through books, magazines, and bulletins. If you wanted to learn something about programming, there was a chance there was a book in my father's library with enough information to point you in the right direction. Once I knew that English was a requirement, I taught myself, reading entire articles of Compute! Magazine (available at Sanborns chain stores) and using an English-Spanish dictionary for reference.
Around the same time I was discovering the similarities between Pascal and C, I found the article "A Small C compiler for the 8080s" in a book curiously named Dr. Dobb's Programming Journal volume 5. Who was Dr. Dobb? No idea, but I found it fun that a guy called Dr. Dobb would publish his programming developments, yet curiously there was no published article by Dr. Dobb. Instead the C compiler was signed by Ron Cain (who wrote a great article about his Small-C inspiration). Of course, Dr. Dobb was a running joke of the magazine.
My mind was blown. This was a C compiler written in the C language, and it was able to compile itself! It wasn't anything like the BASIC language. BASIC interpreters are written in assembly language, and to this day I haven't heard of BASIC compilers written in BASIC. Furthermore, there was the idea of portability: writing a program in C and running it on multiple machines. It was like someone telling me that all my programs could be written to run on all the world's computers, an authentic holy grail (but of course, pretty misleading).
It threw me into a crusade to run this C compiler on the homebrew Z280 computer built by my father. This computer had just been brought into working order around 1991. At this point in time, I had a primitive operating system written in Z280 machine code, providing a directory and an allocation table, and it could read and write whole files.
Being more capable at programming, my first attempt was typing the whole of the C compiler into an old Televideo computer running MS-DOS and Turbo C. It took me days because the font used for the source code was unreadable at times. Once it was working, I proceeded to write a text editor for the Z280 computer in machine code, and also a Z280 assembler to process the Small-C generated code.
At some point in early 1992, I managed to get the Small-C compiler running on the Z280 computer, complete with editor and assembler. I was kind of disappointed, because the unoptimized generated code bloated the output, and I needed to write support routines for everything. Worse, it had barely any compatibility with other C code because it was missing a lot of the syntax of the real C language.
Transputer Summer
It was 1992 when the 32-bit transputer board add-on was built by my father. I ported the Tiny Pascal compiler to it, and inspired by Small-C, I rewrote the compiler in Pascal, and went so far as to make an almost full Pascal compiler. The transputer had the big advantage of having a linear memory space, easing the development of compilers. Furthermore, I had just mastered expression tree generation, so for the first time I could produce reasonably optimized code. You can read here a complete article about this Pascal compiler made in 1993.
So with Pascal fully mastered, I decided I could provide the transputer with a C compiler, so I turned back again to Small-C. I decided I had already lost too much time (it was 1995 already), so I went for a quick&dirty conversion to transputer assembler language. Given my previous experience, I made this in two days, again using Turbo C on MS-DOS as the base.
I changed all the int word sizes to 4 bytes, and replaced all the 8080 assembler code with transputer assembler code. Here is a brief example of how I changed the assembler target:
/* Add the primary and secondary registers */
/* (results in primary) */
zadd()
{ ol("add");
}
/* Subtract the primary register from the secondary */
/* (results in primary) */
zsub()
{
ol("rev");
ol("sub");
}
/* Multiply the primary and secondary registers */
/* (results in primary */
mult()
{ ol("mul");
}
/* Divide the secondary register by the primary */
/* (quotient in primary, remainder in secondary) */
div()
{ ol("rev");
ol("div");
}
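For readers who have not seen Small-C's back end before, a tiny sketch may make the excerpt above concrete. This is not the original source: it only assumes that ol() writes one line of assembler text to the output, which is all these emitters need. The rev mnemonic swaps the top two entries of the transputer's evaluation stack, which is presumably why the non-commutative operations emit it before sub and div.

/* Minimal sketch (not the original compiler): ol() just prints one line of
   assembler output, so calling the emitters shows exactly what text the
   back end produces. */
#include <stdio.h>

static void ol(const char *s)      /* emit one assembler line */
{
    printf("\t%s\n", s);
}

static void zadd(void) { ol("add"); }              /* as in the excerpt */
static void zsub(void) { ol("rev"); ol("sub"); }   /* as in the excerpt */

int main(void)
{
    zadd();   /* prints:  add       */
    zsub();   /* prints:  rev, sub  */
    return 0;
}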
The next step was running it on the transputer. This first version was big and slow (although 21 kilobytes of code wasn't so big, thanks to the optimized RISC instructions of the transputer), but I wasn't bothered, because I knew I could implement an expression tree generator and optimizer similar to the one I had developed for my Pascal compiler to reduce the final compiler size. It felt wonderful when I managed to get the C compiler down to 16 kilobytes.
This article could have ended here: I could have used the Small-C compiler the same way as my Pascal compiler, feeding it source code through one transputer channel and receiving the assembler code through the other. However, this story takes a very different path here.
A Dutch Operating System
Flashback to 1993: I was talking with a friend about how my C compiler for the Z280 was too basic, how I didn't understand how to implement the bigger features like struct and union, and how I was looking for a book to help me learn to write C compilers.
He brought me
Operating Systems: Design & Implementation
, by Andrew Tanenbaum, and... I was disappointed: the book didn't contain a C compiler. But after reading it, I found something so awesome it opened a whole window in my mind: a full operating system written in the C language. The code shown for MINIX was way beyond what I could understand at that moment, but it taught me how operating systems worked, their basic concepts, and the difference between monolithic and microkernel designs. Tanenbaum is a great teacher.
At the time, my transputer development revolved around a driver program running on the Z280 host machine. I was getting tired of redoing this driver program, as each time I needed to adapt it to the program I was going to run (similar to how the ENIAC had to be rewired for each new task). I had a driver for the ray tracing program, another for the Pascal compiler, and another for compiled programs.
I was also writing a CP/M emulator around 1994. CP/M has a concept that I liked very much: the operating system itself was always the same; only the basic input/output routines (the BIOS) changed, and these were provided by the computer manufacturer. This was instrumental in CP/M's early success, as the separation made it possible for Digital Research to sell CP/M unchanged to every manufacturer, charging $70 USD per copy, while the manufacturers were in charge of writing the BIOS at no extra cost to Digital Research.
These three things came together in my mind in 1995. I was writing a C compiler running on the transputer board, but I could also build an operating system with my current subset of the C language: it would invoke the basic input/output routines through the transputer link channels, and the host computer would provide everything. The BIOS wouldn't merely be separated from the operating system, but instead
running on another computer
.
This way, the host computer (the Z280) would provide disk access, video output, and graphics output (everything I had previously handled in separate driver programs), and the transputer would run the operating system.
Of course, many people around the world had this idea to bootstrap systems, but for me, it was an absolute revelation!
Bootstrapping an operating system
I started coding a new driver program to provide the host services through the transputer links, and I wrote my first 32-bit operating system using the adapted Small-C compiler. This operating system would handle memory management and its own File Allocation Table system (file names still using the 8.3 syntax), and provide file handling services to user programs. Technically, I was simply making yet another monolithic operating system, but I didn't stop to think about it.
As I was creating my operating system, I also built a new text editor written in C, and settled on an ANSI screen driver to allow colors and cursor movement on the screen. Furthermore, I recoded the transputer assembler in C; as a testament to that work, the assembler is almost unchanged from that time.
It took me a few weeks, but I was delighted when I successfully booted the first iteration of this standalone operating system: one where you could edit your source code, compile it, and update the operating system without resorting to compiling things on the host machine.
Archeology on your own disk
I have a floppy disk dump with the files for this early iteration of my 32-bit operating system. There is barely any documentation, so some archeology work is required, because I had forgotten how it was built. These are the files on the floppy disk:
Jun/09 to Jul/12: barely one month to do all this. To boot my first operating system disk, I probably wrote the first sectors of the 1.44 MB floppy disk myself using the monitor program I had on the Z280 machine. Anyway, these are the steps I planned for bringing this back to life:
The transputer emulator will be running
MAESTRO.CMG
(MAnejo de Entrada y Salida de Transputer por Oscar, or simply input/output handling for transputer by Oscar)
Set up a 1.44 MB disk image file:
dd if=/dev/zero of=disk.img bs=1 count=1474560
Copy
ARRANQUE.CMG
(the boot sector) into the first 512 bytes of the disk image.
Build an initial FAT system in sector 2 with 160 bytes (one for each track, 80 tracks x 2 sides).
Build an initial directory image of ten sectors, with each entry measuring 32 bytes.
Add an initial entry
SOM32.BIN
that contains the basic operating system (SOM stands for Sistema Operativo Mexicano, or Mexican Operating System).
Add a second entry
INTERFAZ.CMG
containing the command-line processor. Problem here: The compiled version is missing from my archive.
Add the files
EDITOR.CMG
,
ENSG10.CMG
and
TC2.CMG
to complete the environment. Again I'm missing the
EDITOR.CMG
file, but I have the C source code.
Add all the source files so these can compile from the inside of the environment.
It turns out it isn't so easy. I started by developing a
buildboot.c
program to create disk image files in the format of my operating system, because I would be rebuilding several of them as I progressed in getting everything to run again.
First three sectors of a working disk image for my transputer operating system. Notice my boot signature $12 $34 at offset 510.
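To make the layout concrete, here is a minimal sketch of what a buildboot-style tool can look like, reconstructed only from the steps listed above. The sector placement and the idea that zeroed FAT/directory bytes mean "free" are my assumptions for illustration; the real buildboot.c surely differs in the details.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DISK_SIZE   1474560   /* 1.44 MB floppy image                    */
#define SECTOR_SIZE 512
#define FAT_BYTES   160       /* one byte per track: 80 tracks x 2 sides */
#define DIR_SECTORS 10        /* directory entries are 32 bytes each     */

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s bootsector.bin disk.img\n", argv[0]);
        return 1;
    }
    unsigned char *image = calloc(1, DISK_SIZE);
    unsigned char boot[SECTOR_SIZE] = {0};
    FILE *in = fopen(argv[1], "rb");
    if (image == NULL || in == NULL || fread(boot, 1, SECTOR_SIZE, in) == 0) {
        fprintf(stderr, "cannot read boot sector\n");
        return 1;
    }
    fclose(in);

    memcpy(image, boot, SECTOR_SIZE);   /* boot code goes in the first sector */
    image[510] = 0x12;                  /* boot signature at offset 510       */
    image[511] = 0x34;

    /* The FAT (one sector) and the ten-sector directory follow the boot
       sector; leaving them zeroed stands for "everything free/unused" in
       this sketch. */
    memset(image + SECTOR_SIZE, 0, FAT_BYTES);
    memset(image + 2 * SECTOR_SIZE, 0, DIR_SECTORS * SECTOR_SIZE);

    FILE *out = fopen(argv[2], "wb");
    if (out == NULL) {
        fprintf(stderr, "cannot write %s\n", argv[2]);
        return 1;
    }
    fwrite(image, 1, DISK_SIZE, out);
    fclose(out);
    free(image);
    return 0;
}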
The internal structure of the disk was deduced from the source code of my operating system. Here is the current state of things:
I need to modify my transputer emulator to provide access to the disk image file.
I need to add several transputer instructions and some "parallelism", because it depends on the transputer switching between processes.
I need to rebuild the missing
INTERFAZ.CMG
file to provide the command-line interface for the operating system.
I couldn't start with step 1, because it required step 2 to be working before step 1 could be tested: a kind of chicken-and-egg situation. It happens that process management in the transputer is one of its best guarded secrets, which is why I hadn't emulated my operating system earlier. However, Michael Brüstle, maintainer of
transputer.net
, was kind enough to publish the T414 internal documents, where I could see how the internal structures for processes were handled.
Process is a somewhat misleading word; in the transputer these are more like threads. Anyway, thanks to these documents I could emulate the process handling (namely the instructions
startp
,
endp
,
runp
, and
stopp
), along with the
tin
instruction in a simplified implementation. The transputer has very complicated logic for sorting its timers, and I implemented only the bare minimum because I use just two timers in my operating system.
After a quick glance at the source code, I saw that there are four processes in this basic operating system:
Screen refresh: Takes a buffer, checks for changes, sends line updates to the host screen.
Middle-level BIOS: processes the BIOS calls, and sends back a slew of messages to the host system, probably because I wanted to isolate changes in the driver program.
Main execution: This boots the operating system and calls it.
Sleeper: A low level process that simply sleeps.
Is it easy? Not really. I was stuck for three days trying to get the basics to boot in my transputer emulator. For something so primitive, it is amazingly complicated. I managed to get it to read the boot sector and then get stuck, and that only after discovering that the
startp
instruction doesn't actually start a process; instead, it adds the process to the list of processes to run, so it can be executed later.
I was still stuck until I noticed this debug sequence:
It read the first byte of the boot signature to check it against $12, but instead it got $34. The sector data was off by one byte, because the code expects the first byte to be the status of the disk read. I had misunderstood my own protocol! Once that was solved, it jumped correctly into the boot sector, and then it proceeded to read sector -1... Another bug? Not quite: it turns out I had inserted that myself to reset the internal floppy disk cache on my host system.
Also, there was a striking difference between
MAESTRO.LEN
and
ARRANQUE.LEN
: one was based on track/head/sector, while the other was based on logical sectors. However, the
SOM32.c
source code I have still follows the track/head/sector convention, and since I wanted to compile the operating system unchanged, I found that
MAESTRO.LEN
contained the
LEESECTOR
(read sector) subroutine that I needed for doing the conversion to track/head/sector.
After modifying and reassembling
ARRANQUE.CMG
with the rolled-back code (and in the process modifying my transputer assembler to support strings between single quotation marks, and making yet another disk image), it was able to read sector 2 (containing the directory). I then realized the first file I had added was called
SOM32.CMG
not
SOM32.BIN
as required, so I had to make yet another disk image.
I was delighted to see it reading the sectors composing the operating system, until the emulation ended because it asked for another service code I hadn't yet implemented: 08. It just sets the cursor shape, very common in the age of CGA/EGA/VGA graphics cards, where you could set the text cursor to be a big rectangle or just a blinking underline. I also implemented service 07 (setting the cursor position), and after doing these I expected to see something on the screen (because the command-line processor file
INTERFAZ.CMG
wasn't yet in the disk image), but instead I noticed it crashed while trying to load the operating system again, and then tried to read sector -12.
The error could wait, but I knew something should have appeared on the screen: the screen refresh process wasn't working. I noticed the timer comparison used
unsigned int
, but in the C language an
unsigned int
comparison can never be less than 0. This was solved using a typecast to
int
. And once the timer was working, everything else ceased to work.
I thought I had made a function for reading 32-bit memory (read_memory_32) in the emulator. Instead, it was a macro, and it wasn't parenthesized. So it executed
v | v << 8 | v << 16 | v << 24 - value
. Can you see the mistake? I added parentheses around the macro result, and I was greeted with this message:
I?serte u? disc? c?? siste?a ?perativ?? y pu?se u?a tec?a???
Et voilà! It was finally working. I was mystified by the wrong letters, but then I discovered that my hexadecimal routine from 1995 was never a real hexadecimal routine: it only converted values from 0 to 15 to ASCII 48-63, while my emulator driver software was expecting real ASCII hexadecimal. The next test ran right!
Inserte un disco con sistema operativo, y pulse una tecla...
Or "Insert a disk with operating system and press a key". As I feed it a zero'ed disk image file, the code was expecting the boot signature. Now I could feed the proper operating system, and it worked! But it expected the missing
INTERFAZ.CMG
for the command-line interpreter. How to compile it?
With a little trick. I built a disk image with
INTERFAZ.C
and the compiler
TC2.CMG
renamed as
INTERFAZ.CMG
, along an actual copy of
TC2.CMG
, and
ENSG10.CMG
. This way the operating system would run the C compiler, allowing me to generate
INTERFAZ.LEN
(the compiled transputer assembler file).
However, the operating system refused altogether to load
INTERFAZ.CMG
. After reading more dumps, I discovered the source file I had for the operating system was old and the binary was slightly more recent: it took the directory from the second block of the disk instead of sector 3. I adjusted my disk image generator accordingly, and after reading two sectors of
INTERFAZ.CMG
it got stuck. After peering at some thousands of debug lines, I found that executing an "in" instruction incorrectly triggered another "in" instruction; I was able to solve this by counting the internal timers more carefully, as the service routine caused a race condition.
Finally, the transputer emulator generated a 143 MB file of debug lines, loaded the complete C compiler into memory, and crashed because of a mishandled service.
The big surprise
It was a bucket of cold water over my head when I noticed the compiler was designed to run just like the Pascal compiler: without an operating system.
It needed another driver program to handle opening files; that's why it was sending a different protocol over the link channels, and why the operating system crashed. Incidentally, it also explained why I had three
STDIO
libraries: one was for running with the host computer using TC.CMG, the second was the same but for TC2.CMG (because the label prefix was changed to
q
instead of
qz
), and the third one is the bootstrapped one to run programs inside my transputer operating system.
After I wrote the driver for the C compiler in my transputer emulator, I was able to recompile everything, including
SOM32.c
because the binary apparently had trouble running
Interfaz.c
. This is the way to invoke the C compiler from the host side:
It will ask for the input and output file names. The assembly is done with tasm. Still, it loaded
interfaz.cmg
and crashed. But now I had a disk image file with everything ready! I could now reconstruct how I had bootstrapped the transputer C compiler into my own transputer operating system:
Create the service handler to interface with the host machine (
MAESTRO.LEN
)
Create the boot sector in transputer assembler code (
ARRANQUE.LEN
)
Compile the C compiler using Turbo-C on MS-DOS.
Use it to compile itself under MS-DOS and assemble the generated file (
TC.CMG
) with the
STDIO.LEN
library.
Create a driver program in the host Z280 machine to handle C files for the transputer board.
Enhance the C compiler to use tree expression generator (
TC2.CMG
), and assemble it with
STDIO2.LEN
.
Compile
SOM32.c
and assemble with
MENSAJES.LEN
. Use this as
SOM32.BIN
inside the OS.
Compile
Interfaz.c
and assemble with
STDIO3.LEN
. Use this as
INTERFAZ.CMG
inside the OS.
Compile
Editor.c
and assemble with
STDIO3.LEN
. Use this as
EDITOR.CMG
inside the OS.
Compile
TC2.c
and assemble with
STDIO3.LEN
. Use this as
CC.CMG
inside the OS.
Compile
ENSG10.c
and assemble with
STDIO3.LEN
. Use this as
ENSG10.CMG
inside the OS.
Now you can do development inside the transputer OS.
The main question is: why didn't I have all these files ready to rebuild a floppy? The answer is easy: I probably overwrote the files while I was bootstrapping the operating system. Once you get to the last step, you no longer need the host files, because you have just found out how to fly free.
I'm pretty impressed by how my younger self figured out that
STDIO.LEN
was a basic input/output system in its own right, and could be changed from directing the driver in the host machine to calling services from the operating system inside the transputer.
Again I found a difference of opinion between the boot sector code, the disk image file, and the operating system. A small correction, and I was so glad to read this on the screen:
Sistema Operativo Multitarea de 32 bits. v1.00
>>>>> (c) Copyright 1995 Oscar Toledo G. <<<<<
A>
I was only missing the keyboard input code. I coded it in the emulator as fast as I could, tried to type D (for DIR), and it crashed after displaying the letter. Faced with a 182.5 MB debug file, you can say *sigh*, or you can keep working while singing Save Your Tears (Remix).
I saw you across the room, one hour later (I mean the bug). The timer interrupt could happen in the middle of an instruction with prefixes. I had to add a
complete
variable to convey this information before accepting timer interrupts, and the command line came to life, but no command was accepted: everything was interpreted as a file name to execute.
While I was looking into this, I ran a regression test in the emulator to make sure everything was still working properly, and I tried to compile the Pascal compiler again. It didn't work... I noticed I had added time slicing to the
j
and
lend
instructions, and the emulator worked again when I disabled the time slicing. After a careful look, once again a C macro was the culprit. I had invoked the macro
write_memory_32
, which expanded into four separate memory assignments, but the macro invocation wasn't inside curly braces, so only the first byte was updated where intended, and the next three assignments overwrote the process list. This is the code before fixing:
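The screenshot of the offending code isn't reproduced here, but the failure mode is the classic unbraced multi-statement macro. Below is a minimal reconstruction of the same trap in C; the names, the guarding condition, and the do/while fix are illustrative, not the emulator's actual source.

#include <stdint.h>

static uint8_t memory[1 << 16];   /* stand-in for the emulated RAM */

/* Hypothetical reconstruction: a macro that expands to four statements. */
#define WRITE_MEMORY_32(addr, v)                 \
    memory[(addr)]     = (uint8_t)(v);           \
    memory[(addr) + 1] = (uint8_t)((v) >> 8);    \
    memory[(addr) + 2] = (uint8_t)((v) >> 16);   \
    memory[(addr) + 3] = (uint8_t)((v) >> 24)

void store_guarded_buggy(uint32_t addr, uint32_t v)
{
    if (addr + 3 < sizeof memory)
        WRITE_MEMORY_32(addr, v);   /* bug: only the first store is guarded;
                                       the other three always execute */
}

/* The usual cure: wrap the body in do { ... } while (0), or put braces
   around the invocation, so the whole write behaves as one statement. */
#define WRITE_MEMORY_32_FIXED(addr, v)               \
    do {                                             \
        memory[(addr)]     = (uint8_t)(v);           \
        memory[(addr) + 1] = (uint8_t)((v) >> 8);    \
        memory[(addr) + 2] = (uint8_t)((v) >> 16);   \
        memory[(addr) + 3] = (uint8_t)((v) >> 24);   \
    } while (0)

void store_guarded_fixed(uint32_t addr, uint32_t v)
{
    if (addr + 3 < sizeof memory)
        WRITE_MEMORY_32_FIXED(addr, v);   /* now all four stores are guarded */
}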
Once the emulator was recompiled, this time I could do
DIR
in my operating system:
Sistema Operativo Multitarea de 32 bits. v1.00
>>>>> (c) Copyright 1995 Oscar Toledo G. <<<<<
A>dir
SOM32 .BIN 4,143 23-ene-19<5 10:59:00
INTERFAZ.CMG 2,369 23-ene-19<5 10:59:00
CC .CMG 16,952 23-ene-19<5 10:59:00
INTERFAZ.C 5,885 23-ene-19<5 10:59:00
EDITOR .C 17,093 23-ene-19<5 10:59:00
ENSG10 .C 28,617 23-ene-19<5 10:59:00
EDITOR .CMG 5,295 23-ene-19<5 10:59:00
ENSG10 .CMG 10,876 23-ene-19<5 10:59:00
8 archivos.
1,336,320 bytes libres.
A>
Yes! Yes! Yes! The operating system I developed 30 years ago was working again. Of course, my command-line directory listing was suffering from the year 2000 problem, and the month was incorrectly generated by my disk image tool.
Some polish required
The transputer emulator takes the color screens from my operating system and replicates them using ANSI escape sequences on the terminal. You'll see my text editor is an homage to Turbo C. You can edit the source files, compile them inside the operating system, assemble them, and re-run the programs. For easier usage, I've translated the macOS terminal keys for function and arrow keys into the internal codes used by my operating system.
Alternatively, you can compile the source files using the transputer emulator, and get the output directly into your directory.
Using the programs
You can boot the predesigned disk using this command line or
run_os.sh
:
./tem -os maestro.cmg disk.img
Alternatively, the script
build_disk.sh
is provided to create a disk with your preferred file setup.
None of the programs use command-line arguments. The command-line interpreter supports a few built-in commands like DIR, VER, MEM, and FIN, and invokes executables simply by typing their name.
Text editor running on transputer
The editor
EDITOR.CMG
is completely visual; if you are using macOS you'll be able to edit files easily, and you can press Fn+F1 to see the help. I haven't checked it yet on Linux or Windows.
For the C compiler
TC2.CMG
, you should say N two times (no error stop, and no C source code displayed), and then enter the input file name (for example, TC2.C), then the output file name (for example, TC2.LEN), and then press Enter to exit back to the operating system.
For the assembler
ENSG10.CMG
, you should enter the input file name (for example, TC2.LEN), then the library file name (STDIO3.LEN), press Enter alone, and now the output file (for example, TC2.CMG).
Instead of typing all these file names (as I did in 1995), you can use the file
assemble_os.sh
that will assemble all the files using my tasm assembler.
Final thoughts
This early version of my operating system was incredibly small. The operating system was composed of 793 lines, the command-line interface barely 298 lines, the editor 830 lines, the assembler 1473 lines, and the C compiler the biggest with 3018 lines.
It was my biggest achievement of that year: a pure 32-bit operating system working through service communication, with self-sustaining integrated applications. Far beyond merely hosting a Pascal compiler and a ray tracer.
Of course, the story continued: my operating system grew bigger with support for CD-ROM access using the High Sierra and ISO 9660 standards, my C compiler grew to full K&R C (I could even compile some obfuscated C!), and my editor got syntax coloring, but that will be another story.
Chinese tech giant Huawei is at the centre of a new corruption case in Europe’s capital. On Thursday, Belgian police raided the homes of its lobbyists, Follow the Money and its media partners Le Soir and Knack can reveal.
Young people: what rules around smartphone use should be put in place for children?
Guardian
www.theguardian.com
2025-03-13 10:09:11
We would like to hear from people aged 18-24 about their childhood experiences with smartphones – positive or negative We would like to hear from people aged 18-24 about their childhood experiences with smartphones and social media, positive or negative. What approach would you take with your own ch...
We would like to hear from people aged 18-24 about their childhood experiences with smartphones and social media, positive or negative.
What approach would you take with your own children, as a result of lessons learned from your experience? Would you let them have access to a smartphone or social media, and how often? What rules would you put in place?
Share your experience
You can tell us about your childhood experiences with smartphones using this form.
Please share your story if you are 18 or over, anonymously if you wish. For more information please see our
terms of service
and
privacy policy
.
Your responses, which can be anonymous, are secure as the form is encrypted and only the Guardian has access to your contributions. We will only use the data you provide us for the purpose of the feature and we will delete any personal data when we no longer require it for this purpose. For true anonymity please use our
SecureDrop
service instead.
If you’re having trouble using the form click
here
. Read terms of service
here
and privacy policy
here
.
Game Boy Advance Architecture – A Practical Analysis
The internal design of the Game Boy Advance is quite impressive for a portable console that runs on two AA batteries.
This console will carry on using Nintendo’s
signature
GPU. Additionally, it will introduce a relatively new CPU from a British company that will surge in popularity in the years to come.
CPU
Most of the components are combined into a single package called
CPU AGB
. This package contains two completely different CPUs:
A
Sharp SM83
running at either
8.4 or 4.2 MHz
:
If it isn’t the same CPU found on the Game Boy!
It’s effectively used to run Game Boy (
DMG
) and Game Boy Color (
CGB
) games. Here’s
my previous article
if you want to know more about these consoles.
An
ARM7TDMI
running at
16.78 MHz
: This is the new processor we'll focus on; it's the one that runs Game Boy Advance games.
Note that both CPUs will
never run at the same time
or do any fancy co-processing. The
only
reason for including the
very
old Sharp is for
backwards compatibility
.
That being said, before I describe the ARM chip, I find it handy to start with the history behind this brand of CPUs.
The Cambridge miracle
The story about the origins of the ARM CPU and its subsequent rise to fame is riveting. Here we find a combination of public investment, exponential growth, ill-fated decisions and long-distance partnerships.
Photo of the BBC Micro with a box of 5¼ disks on top
[1]
, the first disk is Elite.
The late 70s were a tumultuous time for the United Kingdom populace. The interventionist economy once built under post-war ideals had reached its course, and the pendulum soon swung towards free-market reforms. Amid this storm, Cambridge-based ventures such as
Acorn Computers
, along with Sinclair and the like, were selling computer kits to laboratories and hobbyists. Similarly to American and Japanese enterprises, Acorn’s computers relied on the
6502 CPU
and a proprietary BASIC dialect.
Entering the 80s, ministerial interests within the new British government led to the creation of a project to uplift computer literacy at schools
[2]
. Thanks to Acorn’s upcoming ‘Proton’ home computer, the company was awarded the contract to build an affordable computer that fulfils the government’s vision. The result was the
BBC Micro
(nicknamed the ‘Beeb’), which enjoyed significant success among schools, teachers and students. Within the Micro, Acorn incorporated an avant-garde
‘Tube’ interface
that could expand the computer with a
second processor
. This would pave the way for Acorn’s next big investment.
During the development of their next product, this time enterprise-focused, Acorn did not find a suitable CPU to succeed the 6502. Pressure to innovate against Japanese and American competition, combined with unfortunate planning, placed Acorn in a troubled financial state. Thus, a new division in Acorn was tasked to produce a compelling CPU. To work around Acorn’s recent constraints, the CPU team based their architecture on the teachings of a research paper called
The Case for the Reduced Instruction Set Computer
[3]
and its prototype, the
RISC CPU
[4]
. Finally, in 1985, Acorn delivered the
ARM1 CPU
as a Tube module for the BBC Micro, but was only marketed for R&D purposes. It won’t be until 1987, with the introduction of the first
Acorn Archimedes
computer, that ARM chips (by then, the ARM2 CPU) would take a central role.
A new CPU venture
A late Newton model… after I played with it.
During the commercialisation of the Acorn Archimedes, Apple became captivated by Acorn’s energy-efficient CPUs, but the American company was still unconvinced that Acorn’s latest ARM3 would be suitable for Apple’s new pet project, the
Newton
. However, rather than walking away (after all, Acorn was a competitor), both discussed the possibility of evolving the ARM3 to deliver Apple’s requirements
[5]
, namely
flexible clock frequency
,
integrated
MMU
and
complete 32-bit addressing
.
This collaboration soon turned into a partnership where Acorn, Apple and VLSI (ARM chips manufacturer) set up a new company solely focused on developing ARM CPUs. Apple provided the investment (obtaining 43% of the stake), Acorn shared its staff and VLSI took care of manufacturing. In 1990,
Advanced RISC Machines (ARM) Ltd
came into existence, with Robin Saxby as its executive chairman.
Years after, Apple finally shipped the
Newton MessagePad
powered by an
ARM610
, one of the next generation of ARM chips incorporating Apple’s input. Meanwhile, Acorn also released the
RiscPC
using the new CPUs.
Now, while Acorn and Apple lingered on the computer/handheld market, ARM devised a radical business model. Keeping away from manufacturing, Saxby’s vision consisted of
licensing ARM’s intellectual property
, in the form of
CPU designs
and its
instruction set
[6]
. This granted ARM with clients beyond the computer realm, such as Texas Instruments
[7]
, who later connected the company with the emerging mobile market (culminating in the Nokia 6110) and set-top boxes. The follow-up years will see ARM’s technology being bundled in billions of mobile devices
[8]
.
The Nintendo partnership
Back in Japan, and thanks to the
Game Boy analysis
, we learnt that Nintendo’s hardware strategy for portable systems favours a
System On a Chip (SoC)
model. This has allowed the company to obfuscate affordable off-the-shelf technology and combine it with in-house developments. In doing so, the new console could be unique and competitive.
CPU AGB, housing the ARM7TDMI CPU (among many other components).
Fortunately, ARM’s licensing model fitted just right for those needs. Both companies held talks since 1994 (a year before the
Virtual Boy
’s launch) despite nothing materialising until many years later
[9]
. The reason was simple: the Japanese considered ARM's code density and its need for 32 data wires unworkable (something the
Virtual Boy’s CPU
already managed to escape). Nevertheless, ARM’s new CPU designer - Dave Jaggar - quickly answered with the
ARM7TDMI
, a new CPU that focused on maximising performance under power and storage constraints.
This was a turning point for ARM
, as this new product not only pleased Nintendo, but also got the attention of
Texas Instruments
,
Nokia
and the rest of the competitors in the cellphone arena.
Unsurprisingly, when Nintendo started working on the successor of the
Game Boy Color
, their CPU pick became the ARM7TDMI.
The ARM7TDMI
Let’s now dive into what this chip offers.
Commanding the CPU
To begin with, the ARM7TDMI implements the
ARMv4
instruction set, the successor of the ARMv3. This implies:
A
RISC-based design
: As explained before, ARM CPUs have been influenced by a paper from the University of California, Berkeley called ‘The Case for the Reduced Instruction Set Computer’
[10]
. Its research outlines a series of guidelines for scalable processor design and defends the use of a
load-store architecture
, a fixed instruction size and a large register file. Many of these features were absent from the crowded CPU market of the time (i.e.
Intel 8086
,
MOS 6502
,
Zilog Z80
and the
Motorola 68000
), but would influence the design of new CPU lines throughout the 80s and 90s.
Conditional execution
: A peculiar feature of the ARM ISA. Essentially, almost every instruction embeds a condition that states whether it should be executed. Typically, other CPUs follow the 'compare and jump' process (also called 'branching') to control which instructions the CPU must execute. By contrast, ARM programmers may insert the condition in the instruction itself. This is possible because the first four bits of ARM's opcodes are reserved for a condition (i.e.
equal
,
not equal
, etc). All in all, this reduces the complexity of ARM code, as conditional execution provides a cleaner design for routines, unlike branching and subroutine splitting. Additionally, this also serves as a workaround for
control hazards
(explained in more detail later).
A
flexible second operand
, also known as ‘Operand2’
[11]
. Typically, operations are made of two operands (like
add 2 and 2
). However, ARM instructions also allow embedding an extra
shift
operation into the second operand. For example, you can compute
shift 2 by four bits and then add it to 2
in a single instruction.
Bit shifting is also a cheap shortcut to perform division or multiplication by powers of 2, which leads to many optimisation techniques (see the sketch after this list).
32-bit
and
64-bit multiplication
instructions: An addition of the ARM v4. Furthermore, 64-bit operations output the result in two registers.
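To make the 'Operand2' idea concrete, here is a small C sketch. The C itself is ordinary; the interesting part is in the comments, which describe how an ARM compiler can typically fold the shift into a single data-processing instruction thanks to the barrel shifter (the exact instruction chosen is, of course, up to the compiler):

#include <stdint.h>

/* Conceptually two operations: a shift and an add. */
uint32_t scale_add(uint32_t acc, uint32_t x)
{
    /* On ARM this can compile to a single instruction, roughly
       "ADD acc, acc, x, LSL #4": the second operand passes through
       the barrel shifter before the ALU sees it. */
    return acc + (x << 4);
}

/* Multiplying or dividing by a power of two is likewise just a shift. */
uint32_t times_eight(uint32_t x)
{
    return x << 3;   /* same result as x * 8, without a multiply */
}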
The package
Now that we know how developers talk to this chip, let’s check what’s inside the silicon.
In terms of circuitry, the ARM7TDMI is a cut-down version of the ARM710 with interesting additions. The core includes
[12]
[13]
:
16 general-purpose 32-bit registers
: While it’s a big step compared to the seven
8-bit registers of the SM83/Game Boy
, this is a compromise of the RISC guidelines, which required thirty-two 32-bit registers instead. This is because ARM favoured maintaining a small silicon size
[14]
.
32-bit data bus and ALU
: Meaning it can move and operate 32-bit values without consuming extra cycles.
Clean 32-bit addressing
: This is part of Apple’s input. The first three ARM CPUs used 26-bit memory addresses to optimise performance (the combined Program Counter and Status Register could fit in a single 32-bit word) in exchange for memory addressability (up to 64 MB of memory could be accessed). The follow-up ARM6 series (with its ARMv3 ISA) implemented 32-bit addressing logic, but kept a backwards-compatible mode for old code. Now, the ARM7TDMI (being mobile-focused) scrapped the 26-bit mode and only houses logic for 32-bit addresses (reducing the amount of silicon needed).
No
Memory Management Unit (MMU)
: Ever since the ARM1, ARM provided an MMU solution. First as the ‘MEMC’ co-processor, and then integrated with the ARM610. Now, the ARM7TDMI seems to be the only one in its series to provide none, potentially due to the lack of interest (early mobile devices didn’t require sophisticated virtual memory).
No cache
: Another cost-reduction of this chip, as previous ARM chips bundled some cache.
Finally, all of this can operate with a
3 Volt
power supply
[15]
. This is an evident step towards mobile computing, as earlier cores required a 5 V supply.
The pipeline
Since its first iteration, ARM has implemented a
three-stage pipeline
to run code. In other words, the execution of instructions is divided into three steps or
stages
. The CPU will fetch, decode and execute up to three instructions concurrently. This enables maximum use of the CPU’s resources (which reduces idle silicon) while also increasing the number of instructions executed per unit of time.
Like two very
similar
contemporaries
, ARM CPUs are susceptible to
data hazards
. Nevertheless, neither the programmer nor compiler will notice it as, in this case, the CPU will automatically stall the pipeline whenever it’s needed.
Control hazards
are also present, but ARM tackled them with an efficient approach called
conditional annulment
: Whenever a branch instruction is at the second stage (
Decode
), the CPU will calculate the condition of the branch
[16]
. Based on the result, if the branch must be executed, the CPU will automatically nullify the follow-up instruction (turning it into a filler). Now, this may look inefficient when compared to
MIPS’ approach
(as a MIPS compiler can insert useful instructions, not just fillers). Hence, apart from branching, ARM provides
conditional execution
. The latter turns this pipeline design into an advantage, since ARM can decode an instruction and calculate its embedded condition at the same stage. Thus, in this case, no fillers will be added. That’s why conditional execution is preferred over branching when programming for ARM CPUs
[17]
.
Squeezing performance
One of the drawbacks of a load-store architecture is that ARM's code ended up being very
sparse
. Competitors like x86 could perform the same tasks using smaller amounts of code, requiring less storage. Consequently, when Nintendo took a look at ARM's latest design, the ARM7, they weren't pleased with it. The size of ARM's instructions meant that hypothetical gadgets built around 16-bit buses with limited memory and storage - all to save cost and energy - would leave the CPU inefficient and bottlenecked. Luckily, Dave Jaggar had just finished designing the ARM7 and wouldn't give up yet. During his commute after meeting Nintendo, he came up with a solution: The
Thumb instruction set
[18]
.
Thumb is a subset of the ARM instruction set whose instructions are encoded into
16-bit words
(as opposed to 32-bit)
[19]
. Being 16-bit, Thumb instructions require
half the bus width
and occupy
half the memory
. To achieve this, it compromises in the following ways:
Thumb doesn’t offer conditional execution
, relying on branching instead.
Its data processing opcodes only use a
two-address format
(i.e.
add R1 to R3
), rather than a three-address one (i.e.
add R1 and R2 and store the result in R3
).
It only has access to the bottom half of the register file. Thus,
only eight general-purpose registers
are available.
All in all, since Thumb instructions offer only a functional subset of ARM, developers may have to write more instructions to achieve the same effect.
In practice, Thumb uses 70% of the space of ARM code. For 16-bit wide memory, Thumb runs
faster
than ARM. If required, ARM and Thumb instructions can be mixed in the same program (called
interworking
) so developers can choose when and where to use each mode.
The extensions
The ARM7TDMI is, in essence, an ARMv4-compliant core with extras. These extras are referenced in its name (
TDMI
), meaning:
T
→
Thumb
: The inclusion of the Thumb instruction set.
D
→
Debug Extensions
: Provide JTAG debugging.
M
→
Enhanced Multiplier
: Previous ARM cores required many cycles to compute a full 32-bit multiplication; this enhancement reduces that to just a few.
I
→
EmbeddedICE macrocell
: Enables hardware breakpoints, watchpoints and allows the system to be halted while debugging code. This facilitates the development of programs for this CPU.
Overall, this made the ARM7TDMI an attractive solution for mobile and embedded devices.
Memory locations
The inclusion of Thumb in particular had a strong influence on the final design of this console. Nintendo mixed 16-bit and 32-bit buses between its different modules to reduce costs, all while providing programmers with the necessary resources to optimise their code.
Memory architecture of this system.
The Game Boy Advance’s usable memory is distributed across the following locations (ordered from fastest to slowest)
[20]
:
IWRAM
(Internal WRAM) → 32-bit with 32 KB: Useful for storing ARM instructions.
VRAM
(Video RAM) → 16-bit with 96 KB: While this block is dedicated to graphics data (explained in the next section of this article), it’s still found in the CPU’s memory map, so programmers can store other data if IWRAM is not enough.
EWRAM
(External WRAM) → 16-bit with 256 KB: A separate chip next to CPU AGB. It's optimal for storing Thumb-only instructions and data in small chunks. On the other hand, the chip can be up to six times slower to access compared to IWRAM.
Game PAK ROM
→ 16-bit with variable size: This is the place where the cartridge ROM is accessed. While it may provide one of the slowest rates, it’s also mirrored in the memory map to manage different access speeds. Additionally, Nintendo fitted a
Prefetch Buffer
that interfaces with the cartridge to alleviate excessive stalling. This component independently caches consecutive addresses while the CPU is not accessing the cartridge, and it can hold up to eight 16-bit words.
In practice, however, the CPU will rarely let the Prefetch Buffer do its job, since by default it will keep fetching instructions from the cartridge to continue execution
[21]
(hence why IWRAM and EWRAM are so critical).
Game PAK RAM
→ 8-bit with variable size: This is the place where the cartridge RAM (SRAM or Flash Memory) is accessed.
This is strictly an 8-bit bus (the CPU will see ‘garbage’ in the unused bits) and for this reason, Nintendo states that it can only be operated through their libraries.
Although this console was marketed as a 32-bit system, the majority of its memory is only accessible through a 16-bit bus, meaning games will mostly use the Thumb instruction set to avoid spending two cycles per instruction fetch. Only in exceptional circumstances (i.e. when instructions not found in Thumb are needed, and the code can be kept in IWRAM) will programmers benefit from the ARM instruction set.
Becoming a Game Boy Color
Apart from the inclusion of
GBC hardware
(Sharp SM83, original BIOS, audio and video modes, compatible cartridge slot and so forth), there are two extra functions required to make backwards compatibility work.
From the hardware side, the console relies on switches to detect if a Game Boy or Game Boy Color cartridge is inserted. A
shape detector
in the cartridge slot effectively identifies the type of cartridge and allows the CPU to read its state. It is assumed that some component of CPU AGB reads that value and automatically powers off the hardware not needed in GBC mode.
From the software side, there is a special 16-bit register called
REG_DISPCNT
which can alter many properties of the display, but one of its bits sets the console to ‘GBC mode’
[22]
. At first, I struggled to understand exactly when the GBA tries to update this register. Luckily, some developers helped to clarify this:
I think what happens during GBC boot is that it checks the switch (readable at REG_WAITCNT 0x4000204), does the fade (a very fast fade, hard to notice), then finally switches to GBC mode (BIOS writes to REG_DISPCNT 0x4000000), stopping the ARM7.
The only missing piece of the puzzle is what would happen if you were to remove a portion of the GBC cartridge shell so the switch isn’t pressed anymore, then did a software mode-switch to GBC mode. Multi-boot mode could help here. I’m not sure if the switch needs to be pressed down for the GBC cartridge bus to work properly, or if it just works. I’m willing to guess that the switch is necessary for the bus to function, but that’s just a guess.
–
Dan Weiss (aka Dwedit, current maintainer of PocketNES and Goomba Color)
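For reference, GBA homebrew conventionally treats this register as a memory-mapped 16-bit value. The sketch below uses the 0x4000000 address from the quote; the bit position for 'GBC mode' (bit 3 in commonly available documentation such as GBATEK) is an assumption of this sketch, and in any case only the BIOS can effectively perform the switch:

#include <stdint.h>

#define REG_DISPCNT       (*(volatile uint16_t *)0x04000000)
#define DISPCNT_GBC_MODE  (1u << 3)   /* assumed bit position of the GBC flag */

/* Reading the flag is harmless; writing it is the BIOS's job. */
static int running_in_gbc_mode(void)
{
    return (REG_DISPCNT & DISPCNT_GBC_MODE) != 0;
}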
Graphics
Before we begin, you’ll find the system a mix between the
SNES
and the
Game Boy
. In fact, the graphics core is still called
PPU
. Thus, I recommend reading those articles first as I’ll be revisiting lots of previously-explained concepts.
Compared to previous
Game Boys
, we now have an LCD screen that can display up to
32,768 colours
(15-bit). It has a resolution of
240 x 160 pixels
and a refresh rate of
~60 Hz
.
Organising the content
Memory architecture of the PPU.
Graphics are distributed across these regions of memory:
96 KB 16-bit
VRAM
(Video RAM): Where 64 KB store backgrounds and 32 KB store sprites.
1 KB 32-bit
OAM
(Object Attribute Memory): Stores up to 128 sprite entries (not the graphics, just the indices and attributes). As noted by its large width, the OAM’s bus is exclusively optimised for fast access.
1 KB 16-bit
PAL RAM
(Palette RAM): Stores two palettes, one for backgrounds and the other for sprites. Each palette contains 256 entries of 15-bit colours each, where colour
0
means
transparent
.
Constructing the frame
If you’ve read the previous articles you’ll find the GBA familiar, although there is additional functionality that may surprise you. Regardless, the fact the new system runs only on two AA batteries makes this study even more enthralling.
I’m going to borrow the graphics of Sega’s
Sonic Advance 3
to show how a frame is composed.
These two blocks are made of 4 bpp Tiles.
You may notice some weird vertical patterns in here; these are not graphics but 'Tile Maps' (see the next section).
These two blocks are reserved for sprites.
Pairs of charblocks found in VRAM.
GBA’s tiles are strictly
8x8 pixel bitmaps
; they can use 16 colours (4 bpp) or 256 colours (8 bpp). 4 bpp tiles consume 32 bytes, while 8 bpp ones take 64 bytes.
Tiles can be stored anywhere in VRAM. However, the PPU wants them grouped into
charblocks
: A continuous region of
16 KB
. Each charblock is reserved for a specific type of layer (either background or sprites) and programmers decide where each charblock starts. This can result in some overlapping which, as a consequence, enables two charblocks to share the same tiles.
Due to the size of a charblock, up to 256 8 bpp tiles or 512 4 bpp tiles can be stored per block. Overall,
up to six charblocks can be allocated
, which combined require 96 KB of memory: The exact amount of VRAM this console has.
Only four charblocks can be used for backgrounds and two may be used for sprites.
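The figures above fall out of simple arithmetic. As a sanity check, expressed in C (these constants merely restate the numbers in the text; they are not any official header):

#define TILE_W           8
#define TILE_H           8
#define TILE_BYTES_4BPP  (TILE_W * TILE_H / 2)   /* 32 bytes */
#define TILE_BYTES_8BPP  (TILE_W * TILE_H)       /* 64 bytes */
#define CHARBLOCK_BYTES  (16 * 1024)             /* 16 KB    */

enum {
    TILES_PER_BLOCK_4BPP = CHARBLOCK_BYTES / TILE_BYTES_4BPP,  /* 512 tiles */
    TILES_PER_BLOCK_8BPP = CHARBLOCK_BYTES / TILE_BYTES_8BPP,  /* 256 tiles */
    TOTAL_VRAM_BYTES     = 6 * CHARBLOCK_BYTES                 /* 96 KB     */
};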
Background Layer 0 (BG0).
Background Layer 2 (BG2).
Background Layer 3 (BG3).
This particular layer will be shifted horizontally at certain scan-lines to simulate water effects.
Static background layers in use.
The background layer of this system has improved significantly since the Game Boy Color. It finally includes some features found in the
Super Nintendo
(remember the
affine transformations
?).
The PPU can draw up to four background layers. The capabilities of each one will depend on the selected mode of operation
[23]
:
Mode 0
: Provides four static layers.
Mode 1
: Only three layers are available, although one of them is
affine
(can be rotated and/or scaled).
Mode 2
: Supplies two affine layers.
Each layer has a dimension of up to 512x512 pixels. If it’s an affine one then it will be up to 1024x1024 pixels.
The piece of data that defines the background layer is called
Tile Map
. Now, this information is encoded in the form of
screenblocks
: a structure that defines portions of the background layer (32x32 tiles). A screenblock occupies just 2 KB, but more than one will be needed to construct the whole layer. Programmers may place screenblocks anywhere in VRAM, which may overlap background charblocks. This means that not all tile entries will contain graphics!
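Since a screenblock packs 32x32 entries into 2 KB, each tile map entry must be 16 bits. A small sketch of the resulting addressing; the VRAM base address (0x06000000) is the commonly documented one and is an assumption here, not something stated in the text:

#include <stdint.h>

#define VRAM_BASE          0x06000000u
#define SCREENBLOCK_BYTES  2048u
#define MAP_W              32u

/* Returns a pointer to the 16-bit map entry (x, y) of a given screenblock. */
static volatile uint16_t *map_entry(unsigned screenblock, unsigned x, unsigned y)
{
    return (volatile uint16_t *)(VRAM_BASE + screenblock * SCREENBLOCK_BYTES)
           + (y * MAP_W + x);
}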
Sprites
Rendered Sprite layer
A sprite can be up to 64x64 pixels in size. Yet, on such a small screen, sprites will end up occupying a big part of it.
If that wasn’t enough, the PPU can now apply
affine transformations
to sprites!
Sprite entries are 32-bit wide and their values can be divided into two groups:
Attributes
: Contains x/y position, horizontal/vertical flipping, size, shape (square or rectangle), sprite type (affine or regular) and location of the first tile.
Affine data
: Only used if the sprite is affine. They specify scaling and rotation.
Result
All layers merged (
Tada!
).
As always, the PPU will combine all layers automatically, but it’s not over yet! The system has a couple of effects available to apply over these layers:
Mosaic
: Makes tiles look more
blocky
.
Alpha blending
: Combines colours of two overlapping layers, resulting in transparency effects.
Windowing
: Divides the screen into two different
windows
where each one can have its own separate graphics and effects, the outer zone of both windows can also be rendered with tiles.
On the other side, to update the frame, there are multiple options available:
Command the
CPU
: The processor now has full access to VRAM whenever it wants. However, it can produce unwanted artefacts if it alters some data mid-frame, so waiting for VBlank/HBlank (
traditional way
) remains the safest option in most cases.
Use the
DMA Controller
: DMA provides transfer rates ~10x faster and can be scheduled during VBlank and HBlank. This console provides four DMA channels (two reserved for sound, one for critical operations and the other for general purpose). Bear in mind that the controller will halt the CPU during the operation (although it may hardly notice it!).
Beyond Tiles
Sometimes we may want to compose a background for which the tile engine won't be able to draw all the required graphics. Now, modern consoles address this by implementing a
frame-buffer
architecture, enabling programmers to alter each pixel individually. However, this is not possible when there's very little RAM… Well, the GBA happens to have 96 KB of VRAM. This is enough to allocate a
bitmap
with the dimensions of our LCD screen.
The good news is that the PPU actually implemented this functionality by including three extra modes, these are called
bitmap modes
[24]
:
Mode 3
: Allocates a single fully-coloured (16 bpp, 32,768 colours) frame.
Mode 4
: Provides two frames with half the colours (8 bpp, 256 colours) each.
Mode 5
: There’re two fully-coloured frames with half the size each (160x128 pixels).
The reason for having two bitmaps is to enable
page-flipping
: Drawing over a displayed bitmap can expose some weird artefacts during the process. If we instead manipulate another one, then none of the glitches will be shown to the user. Once the second bitmap is finished, the PPU can be updated to point to it, effectively swapping the displayed frame.
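In code, page flipping boils down to drawing into whichever frame is hidden and then toggling one bit. The register and frame addresses below follow commonly available GBA documentation (DISPCNT at 0x04000000, the two Mode 4/5 pages at 0x06000000 and 0x0600A000, frame select on bit 4) and should be read as assumptions of this sketch:

#include <stdint.h>

#define REG_DISPCNT   (*(volatile uint16_t *)0x04000000)
#define DISPCNT_PAGE  0x0010u                           /* frame select bit */
#define PAGE0         ((volatile uint16_t *)0x06000000)
#define PAGE1         ((volatile uint16_t *)0x0600A000)

/* The page the PPU is NOT displaying, i.e. the one safe to draw into. */
static volatile uint16_t *back_page(void)
{
    return (REG_DISPCNT & DISPCNT_PAGE) ? PAGE0 : PAGE1;
}

/* Once drawing is done, swap the displayed frame. */
static void flip_page(void)
{
    REG_DISPCNT ^= DISPCNT_PAGE;
}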
Super Monkey Ball Jr. (2002).
Bitmap mode allowed the CPU to provide some rudimentary 3D graphics for the scenery.
Foreground objects are sprites (separate layer).
Tonc’s demo.
Rendered bitmap with some primitives.
Notice the screen doesn’t show significant patterns produced by tile engines.
Nickelodeon’s SpongeBob SquarePants.
Episode distributed as a
GBA Video
cartridge (it suffered a lot of compression, of course).
Examples of programs using bitmap modes.
Overall it sounds like a cutting-edge feature; however, most games held on to the tile engine. Why? Because in practice it
costs a lot of CPU resources
.
You see, the tile engine enables the CPU to delegate most of the computations to the graphics chip. By contrast, the frame-buffer system that the PPU provides is limited to only displaying that segment of memory as a
single background layer
, which means no more individual affine transformations, layering or effects unless the CPU computes them. Also, the frame buffer occupies 80 KB of memory, so only 16 KB (half of the usual sprite area) are available to store sprite tiles.
For this reason, these new modes were predominantly useful for exceptional use cases, such as for playing motion video (the
Game Boy Advance Video
series completely relied on this) or for displaying
3D geometry
(rendered by the CPU). In any case, the results were impressive, to say the least.
Audio
The GBA features a
2-channel sample player
which works in combination with the legacy Game Boy sound system.
Functionality
Here is a breakdown of each audio component using
Sonic Advance 2
as an example:
The new sound system can now play PCM samples; it provides two channels called
Direct Sound
where it receives samples using a
FIFO queue
(implemented as a 16-byte buffer).
Samples are
8-bit
and
signed
(encoded in values from -128 to 127). The default sampling rate is 32 kHz, although this depends on each game: since a higher rate means a larger size and more CPU cycles, not every game will spend the same amount of resources to feed the audio chip.
Consequently,
DMA
is essential to avoid clogging CPU cycles.
Timers
are also available to keep in sync with the queue.
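Putting those three pieces together (the FIFO, DMA in its 'special' start mode, and a timer driving the sample rate) looks roughly like the sketch below. The register addresses and bit values follow commonly available documentation such as GBATEK and are assumptions of this sketch; the master sound-enable and Direct Sound routing registers are omitted for brevity:

#include <stdint.h>

#define REG_DMA1SAD    (*(volatile uint32_t *)0x040000BC)
#define REG_DMA1DAD    (*(volatile uint32_t *)0x040000C0)
#define REG_DMA1CNT_H  (*(volatile uint16_t *)0x040000C6)
#define REG_TM0CNT_L   (*(volatile uint16_t *)0x04000100)
#define REG_TM0CNT_H   (*(volatile uint16_t *)0x04000102)
#define FIFO_A_ADDR    0x040000A0u

/* enable | start on FIFO request | 32-bit | repeat | fixed destination */
#define DMA_SOUND_CTRL 0xB640u

static void start_pcm(const int8_t *samples, uint32_t sample_rate)
{
    REG_DMA1SAD   = (uint32_t)(uintptr_t)samples;  /* signed 8-bit samples  */
    REG_DMA1DAD   = FIFO_A_ADDR;                   /* destination: FIFO A   */
    REG_DMA1CNT_H = DMA_SOUND_CTRL;                /* refill FIFO on demand */

    /* Timer 0 overflows once per sample: 16,777,216 ticks per second. */
    REG_TM0CNT_L = (uint16_t)(65536 - (16777216 / sample_rate));
    REG_TM0CNT_H = 0x0080;                         /* enable, no prescaler  */
}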
PSG
PSG-only channels.
While the Game Boy subsystem won’t share its CPU, it does give out access to its PSG.
For compatibility reasons, this is the same design found on the original Game Boy. I’ve previously written
this article
that goes into detail about each channel in particular.
The majority of GBA games used it for accompaniment or effects. Later ones would optimise their music for PCM and often leave the PSG unused.
Combined
Tada!
Finally, everything is automatically mixed and output through the speaker/headphone jack.
Even though the GBA only exhibits two PCM channels, you may notice that games magically play more than two samples concurrently. How is this possible? Well, while only having two channels may seem a bit weak on paper, the main CPU can lend some of its cycles to sequence and mix samples in a single channel
[25]
(that should give you an idea of how powerful the ARM7TDMI is!). This wasn’t an afterthought, however: in the ‘Operating System’ section, you will find that the
BIOS ROM already provides an audio sequencer
to assist developers with this task.
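The mixing itself is nothing exotic: the CPU sums several 8-bit signed streams into a single buffer (clamping to avoid wrap-around) and hands the result to one Direct Sound channel. A generic illustration, not Nintendo's BIOS sequencer:

#include <stdint.h>

static void mix_voices(int8_t *out, const int8_t *a, const int8_t *b, int len)
{
    for (int i = 0; i < len; i++) {
        int s = a[i] + b[i];
        if (s > 127)  s = 127;    /* clamp to the signed 8-bit range */
        if (s < -128) s = -128;
        out[i] = (int8_t)s;
    }
}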
Best of both worlds
Some games took the PCM-PSG duality further and ‘alternated’ the leading chip depending on the context.
In this game (
Mother 3
), the player can enter two different rooms, one
relatively normal
and the other with a
nostalgic
setting. Depending on the room the character is in, the same score will sound
modern-ish
or
8bit-ish
.
Normal room, only uses PCM.
Nostalgic room, PSG leads the tune.
Operating System
ARM7’s reset vector is at
0x00000000
, which points to a
16 KB BIOS ROM
. That means the Game Boy Advance first boots from the BIOS, which in turn displays the iconic splash screen and then decides whether to load the game or not.
Splash animation halfway through.
Splash animation at the end.
The GBA features a new splash animation where it now shows off its extended colour palette and sprite scaling capabilities.
The BIOS ROM also stores software routines that games may call to simplify certain operations and reduce cartridge size
[26]
. These include:
Arithmetic functions
: Routines to carry out Division, Square Root and Arc Tangent.
Affine matrix calculation
: Given a ‘zoom’ value and angle, it calculates the affine matrix that will be input to the PPU in order to scale/rotate a background or sprite.
There are two functions, one for sprites and the other for backgrounds. Their parameters are slightly different but the idea is the same.
Decompression functions
: Implements decompression algorithms including Run-Length, LZ77 and Huffman. It also provides bit unpacking and sequential difference.
Memory copy
: Two functions that move memory around. The first one copies 32-byte blocks using a specialised opcode for this type of transfer (
LDMIA
to load and
STMIA
to store) only once. The second one copies 2-byte or 4-byte blocks using repeated
LDRH/STRH
or
LDMIA/STMIA
opcodes, respectively. Thus, the second function is more flexible but not as fast.
Sound
: Implements a complete MIDI sequencer! It includes many functions to control it.
Power interface
: Shortcuts for resetting, clearing most of the RAM, halting the CPU until a certain event hits (V-blank or custom one) or switching to ‘low-power mode’.
Multi-boot
: Uploads a program to another GBA and kickstarts it. More details are in the ‘Game’ section.
The BIOS is connected through a 32-bit bus and it's implemented using a combination of ARM and Thumb instructions, though the latter are the most prominent.
Also, remember that all of this will only run on the ARM7. In other words, there isn’t any hardware acceleration available to speed up these operations. Hence, Nintendo provided all of this functionality through software.
Games
Games are distributed in a new proprietary cartridge format; it's still called
Game Pak
but features a smaller design.
GBA programs are mostly written in
C
with performance-critical sections in
assembly
(ARM and Thumb) to save cycles. Nintendo supplied an SDK with libraries and compilers.
Programming for the GBA shared some methodologies with the
Super Nintendo
but also inherited all the advancements of the early 2000s: Standardised high-level languages, reliable compilers, debuggable RISC CPUs, non-proprietary workstations for development, comparatively better documentation and… Internet access!
Accessing cartridge data
While the ARM7 has a 32-bit address bus, there are
only 24 address lines connected to the cartridge
.
This means, in theory, that up to 16 MB can be accessed on the cartridge without needing a mapper. However, the memory map shows that
32 MB of cartridge ROM are accessible
. So, what’s happening here? The truth is, the Gamepak uses
25-bit addresses
(which explains that 32 MB block), but its bottommost bit is fixed at zero. Thus, only the remaining 24 bits are actually wired up. That's how Game Pak addressing works.
Representation of the Game Pak addressing model. Notice how the last bit of the 25-bit address (named ‘A0’) is always zero. I must also point out that in reality, the address and data pins are also shared/multiplexed.
Now, does this mean that data located at odd addresses (with its least significant bit at
1
) will be inaccessible? No, because the data bus is 16-bit: For every transfer, the CPU/DMA will fetch the located byte plus the next one, enabling it to read both even and odd addresses. As you can see, this is just another work of engineering that makes full use of hardware capabilities while reducing costs.
Curiously enough, 26-bit ARM CPUs also resorted to the same technique. These housed a 24-bit Program Counter, as addresses had to be multiples of four (i.e. word-aligned), so the last two bits of the 26-bit address were always zero. However, since these CPUs fetch 32 bits (the first byte plus the next three), the whole 26-bit address space can be accessed.
Cartridge RAM space
To hold saves, Game Paks could include one of the following
[28]
:
SRAM
: These need a battery to keep their content and can size up to 64 KB (although commercial games did not exceed 32 KB). It’s accessed through the GBA’s memory map.
Flash ROM
: Similar to SRAM without the need for a battery, can size up to 128 KB.
EEPROM
: These require a serial connection and can theoretically size up to anything (often found up to 8 KB).
Accessories
The previous
Game Boy Link socket
is included to provide multi-player capabilities or extra content, though there's no IR sensor anymore, for some reason (perhaps it was too unreliable for large transfers).
The GameCube-Game Boy Advance link cable
[29]
, this one connects to the GameCube’s controller port (handled by the
Serial interface
) for data transfers.
Additionally, the GBA’s BIOS implemented a special feature internally known as
Multi-boot
: Another console (either GBA or GameCube) can send a functional game to the receiver’s EWRAM and then, the latter would boot from there (instead of the Game Pak).
Anti-Piracy & Homebrew
In general terms, the usage of proprietary cartridges was a big barrier compared to the constant cat-and-mouse game that other console manufacturers had to battle while using the CD-ROM.
To combat against
bootleg
cartridges (unauthorised reproductions), the GBA’s BIOS incorporated
the same boot checks
found in the original Game Boy.
Flashcarts
As solid-state storage became more affordable, a new type of cartridge appeared on the market.
Flashcarts
looked like ordinary Game Paks but had the addition of a re-writable memory or a card slot. This enabled users to play game ROM files on the console. The concept is not new, actually: developers have internally used similar tools to test their games on a real console (and manufacturers provided the hardware to enable this).
Earlier solutions included a burnable NOR Flash memory (up to 32 MB) and some battery-backed SRAM. To upload binaries to the cartridge, the cart shipped with a Link-to-USB cable that was used with a GBA and a PC running Windows XP. With the use of proprietary flasher software and drivers, the computer uploaded a multi-boot program to the GBA, which in turn was used to transfer a game binary from the PC to the Flashcart (inserted in the GBA). Overall, the whole task of uploading a game was deemed
too sluggish
. Later Flashcarts (like the ‘EZ-Flash’) offered larger storage and the ability to be programmed without requiring the GBA as an intermediate
[30]
. The last ones relied on removable storage (SD, MiniSD, MicroSD or whatever).
Commercial availability of these cards proved to be a
grey legal area
: Nintendo condemned their use for enabling piracy, whereas some users argued that they were the only method for running
Homebrew
(programs made outside game studios and consequently without the approval of Nintendo). Nintendo’s argument was backed by the fact that flashers like the EZ-Writer assisted users in patching game ROMs so they could run on EZ-Flash carts without issues. After Nintendo’s legal efforts, these cartridges were banned in some countries (like the UK). Nonetheless, they persisted worldwide.
That’s all folks
My GBA and a couple of games.
Too bad it doesn’t have a backlight!
OpenZFS has some extremely nice tools and you can do a lot with them, but they start to struggle once you need to do more complicated things with your storage, or scale your OpenZFS installation out to tens or hundreds of pools, or use it as a component of a larger product. That’s usually when people turn up looking for better ways to “program” OpenZFS, and it’s usually not long before they’re disappointed, horrified or both.
Recently in the weekly
OpenZFS Production User Call
we’ve been trying to pin down some common use cases and work out how to get from where we are to the point where someone can turn up looking for a useful programmatic interface to OpenZFS. That conversation is ongoing but we all agree the bare minimum should be to make sure that what we have already is usable, even if not particularly useful. At least, that makes it easier for people to experiment and get a feel for the shape of the problem, and help us figure out what’s next.
BSD Fund
have generously sponsored me to do a little bit of exploratory work around
libzfs
and its younger sibling
libzfs_core
and make some progress. This fits in well with my personal mission of making OpenZFS more accessible to developers of all kinds (which I will try to write about soon). There’s some nice low-hanging fruit to be plucked here!
This post will go over what we’re trying to achieve, why it’s difficult and what I’m currently looking at.
There’s a lot here, and I’m not sure how coherent it is, so there’s every chance you’ll give up half way through. So I’ll say the most important part up front: if you have an application you’d like to integrate more tightly with OpenZFS, please do drop in on the production user call and introduce yourself. Or, if you’re the quiet type, feel free to email me directly. And if you’re the type that has money to spare and would just be happy to see nice things, please throw some towards BSD Fund. There’s a lot of really good and important ideas, wants and needs floating around, and more than anything they just need a bit of time for someone to sit down and think about them a bit. You can help!
Most of the time, OpenZFS lives inside the kernel. As with anything in the kernel, a program talks to it through system calls. And, as with any syscall that isn’t a “core” function of the kernel, that’s done by creating a special “control node” in
/dev
, then sending it “custom” commands via the catch-all
ioctl()
syscall.
If you haven’t seen
ioctl()
before, the idea is that sometimes you need to send commands that don’t “fit” into the old “everything is a file” model, where everything is representable as operations on a data stream. Sometimes you just want to say “turn off the computer” or “eject the CD” or “hang up the modem”. Over time, we realised that actually, most things
aren’t
data streams, and we also have this small problem that the kernel has
opinions
about how the “core” syscalls should work.
ioctl()
though is
mostly
passed straight through to the “device driver”, because the kernel has no idea that we aren’t actually a CD drive. So we can generally do anything we want through
ioctl()
.
ioctl()
has basically no structure. A more conventional syscall like
read()
has a set number of arguments, with known sizes and meanings:
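For reference, the prototypes on a typical Linux system look roughly like this:

/* a "conventional" syscall: fixed arguments, known types */
ssize_t read(int fd, void *buf, size_t count);

/* the catch-all: a handle, an arbitrary request number, an arbitrary payload */
int ioctl(int fd, unsigned long request, ...);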
That is, an open handle on some device node, some arbitrary request number or ID, and some arbitrary payload. The in-kernel receiver does something, and can return an error code if it wants. That’s all of it.
OpenZFS has around 100 possible request numbers
which are all used to instruct OpenZFS to do things outside of its normal storage operations (which are handled through the regular data syscalls). Some of them are very simple and obviously map to
zpool
or
zfs
commands (eg
ZFS_IOC_HOLD
). Some are simple commands but need a very complicated payload (eg
ZFS_IOC_SEND
). Some have complicated functions under them (
ZFS_IOC_DIFF
), and some are a small component of a larger function (eg
ZFS_IOC_POOL_IMPORT
,
ZFS_IOC_POOL_CONFIGS
,
ZFS_IOC_POOL_TRYIMPORT
and
ZFS_IOC_POOL_STATS
, which are all used during import), and some are tiny utility functions that are used by many functions (eg
ZFS_IOC_OBJ_TO_PATH
).
The payload format,
zfs_cmd_t
, is kinda wild. Traditionally
ioctl()
payloads are some small binary blob that both sides know how to interpret. As more and more commands were added, this structure grew to accommodate all the possible arguments they could take, making it big and messy and weird. More recent commands have taken steps to address this by passing an “nvlist” of args to the kernel, and receiving results back in another nvlist.
What’s an nvlist? Broadly, it’s an in-memory serialised key-value or dictionary structure, a bit like CBOR or MessagePack. It was used all over the place in old Solaris to pass structured data around, including onto disk and over the network, and so OpenZFS inherited it.
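For a feel of what building one looks like, here’s a minimal sketch using the standard libnvpair calls (the keys and values are made up for illustration, and error handling is omitted):

#include <libnvpair.h>

nvlist_t *args;
nvlist_alloc(&args, NV_UNIQUE_NAME, 0);                      /* an empty list */
nvlist_add_string(args, "snapname", "crayon/home@backup");   /* illustrative keys only */
nvlist_add_uint64(args, "limit", 1024);
nvlist_add_boolean_value(args, "recursive", B_TRUE);
/* ... hand it to the kernel via the ioctl(), then ... */
nvlist_free(args);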
It’s about what you’d expect
. I don’t love it, but it’s fit for purpose and I don’t think about it most of the time. It’s
fine
.
Anyway, this whole edifice is kind of what you get when something is good-enough for 20 years - it quietly accumulates cruft but doesn’t really get in anyone’s way. It’s not supposed to matter though, because this is not the application interface to OpenZFS. That’s something else, called
libzfs
.
libzfs
is the bridge between userspace applications and the
ioctl()
interface into the kernel module. Its two biggest and most important consumers are the
zpool
and
zfs
tools that OpenZFS ships with … and it shows.
It’s certainly true that
libzfs
obviates the need to use
ioctl()
directly, and it definitely provides lots of useful facilities. But, it probably provides too many facilities, with some being too high-level, and others not high-level enough.
I wanted to put some examples in here but frankly, even simple things are intensely complicated to do; I’m not sure I’ve got them right, and it’s really hard to know what I’ve missed. And this is my entire point!
Just now, I was looking at the code that implements
zfs list
. It’s a maze of callbacks, but the rough shape is:
initialise
libzfs
create a property list for the wanted properties (including the
name
)
open a handle to the root filesystem
call the filesystem iterator on that handle, which loops over the immediate children of the handle and calls a callback function for each
in the callback function, “expand” the property list for the filesystem in question
loop over each property in the list, and depending on its type and source (property inheritance), call the right function to get its value
print the property name and value
call the filesystem iterator on this filesystem, to recurse into the child filesystems
close the root filesystem handle
destroy the property list
shutdown
libzfs
Now that isn’t
too
bad considering that this needs to handle tens of thousands of datasets and all sorts of combinations of properties, sort order, filtering, etc. It’s certainly more than a casual programmer turning up wanting a list of datasets should have to deal with though.
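To give a feel for it, here’s a heavily stripped-down sketch of that flow against libzfs (no sorting, no property expansion, no error handling; signatures have shifted between OpenZFS releases, so treat this as an approximation rather than a reference):

#include <stdio.h>
#include <libzfs.h>

/* called once per child filesystem: print it, recurse, close the handle */
static int
print_cb(zfs_handle_t *zhp, void *arg)
{
    printf("%s\n", zfs_get_name(zhp));
    zfs_iter_filesystems(zhp, print_cb, arg);
    zfs_close(zhp);
    return (0);
}

int
main(void)
{
    libzfs_handle_t *hdl = libzfs_init();
    zfs_handle_t *root = zfs_open(hdl, "crayon", ZFS_TYPE_FILESYSTEM);

    printf("%s\n", zfs_get_name(root));
    zfs_iter_filesystems(root, print_cb, NULL);

    zfs_close(root);
    libzfs_fini(hdl);
    return (0);
}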
The thing is,
libzfs
also has lots of functions for sorting by field names, helping with table layouts (including property display widths!), managing certain kinds of user interactions (including signals), doing localisation, building error strings, and so on. To the extent that it’s “designed”, it’s designed with the needs of
zpool
and
zfs
in mind.
To be clear, that’s ok! I mean yes, the abstractions are a mess and it’s hard to use, but if it’s just a utility library for programs that OpenZFS ships, well, we can deal with that, and slowly clean it up over time. Unfortunately, we’re here as outsiders looking for something to use in our own program, and it’s really hard for that.
It has deeper problems though. One is documentation. Consider this prototype from the header:
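(Quoted from memory, so the exact spelling may differ a little between versions:)

extern int zpool_import(libzfs_handle_t *hdl, nvlist_t *config,
    const char *newname, char *altroot);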
hdl
is obvious, and
newname
and
altroot
are understandable if you’ve ever used
zpool import
before.
config
however is the tricky one that shows the whole issue.
This is a pool configuration, and is (roughly) what
zpool_import()
is expecting to receive in its second arg:
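Heavily abbreviated, and with the values made up, it looks something like this when pretty-printed:

version: 5000
name: 'crayon'
state: 0
pool_guid: 9306809604482073180
hostname: 'example'
vdev_children: 1
vdev_tree:
    type: 'root'
    id: 0
    guid: 9306809604482073180
    children[0]:
        type: 'disk'
        id: 0
        guid: 13840357822536650176
        path: '/dev/sda1'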
If you’ve been around OpenZFS long enough, you’ll roughly recognise this format. This is the “pretty-print” output of an
nvlist_t
structure, the key/value or dictionary thing we mentioned above.
Where do we get one of these? Well, there’s a few possible places. The famous
zpool.cache
is just this nvlist structure dumped to a file. This config is also stored on all the drives in the pool (in the “label” areas) and is how
zpool import
finds pools if you don’t pass it a cachefile. Or you could just create one if you knew the right things to put in it.
How to find those? Mostly reading the source code and trying things.
Now I’ll admit I’m sort of being a dick here by choosing an import function, because import is maybe the most complicated single process anywhere in OpenZFS and it’s possibly unfair to expect any API to it to be simple. However I do think that this mostly demonstrates that
libzfs
is probably not the general-purpose library that people hope it would be.
There are other difficulties.
libzfs
has a lot of compatibility code to make sure it can continue to work against older kernel modules, at least for a short time, so that one can upgrade the OpenZFS userspace first, then reload the kernel module afterwards. It also has the problem that its header files are quite deeply entangled with OpenZFS internal headers, which can make it difficult or impossible to actually link an out-of-tree program with it (on Linux you basically can’t right now due to mismatches in
struct stat64
, while on FreeBSD
libzfs.h
is in base but the additional headers are in the system development package, so even if you have a program you can’t build it). There are some things we can do to make it at least possible to mess with it (see way back at the start), but it’s still not likely to be a good general-purpose API, or at least not for many years.
And so our curious programmer rapidly discards
libzfs
as an option, and if they stick around and dig a little deeper, they find
libzfs_core
.
libzfs_core
was started back in 2013 as an answer to all of these problems with
libzfs
. Matt Ahrens did a nice presentation at an Illumos meetup in 2012 [
video
,
slides
,
text
] outlining what was wrong with
libzfs
and what
libzfs_core
would try to bring to the table.
Basically, its purpose is to be a light shim over
ioctl()
. Each function within it should simply massage its args into the right shape, call the relevant
ioctl()
, and return the results, and that’s all. In addition, it would be a committed interface.
libzfs
has to change for all the reasons above, but since (in theory) the kernel interfaces would never change, only be added to,
libzfs_core
’s API and even ABI would never need to change.
I think the ideas behind it are right, but I think it misses the mark in a couple of important ways.
It doesn’t implement quite a few useful
ioctl()
commands. In particular, it’s almost all aimed at dataset management; there’s precious little in there about pool management. For example, there’s no general way to access pool properties. I think at least part of this is that it’s geared towards the newer nvlist-based commands; the older ones (like pool management) still operate on the old binary data. Of course, it wouldn’t be any issue to implement wrappers over these, but I suspect it was assumed they would be converted to the newer structure before doing that.
The real problem is more fundamental, in that I think it still doesn’t draw the lines in the right place. A 1:1 mapping between API calls and
ioctl()
seems nice, and gives nice properties like easy atomicity, but as I’ve said above, some of those calls do too little, and this just shifts the burden onto the application programmer.
It also means that too many implementation details leak through when things get complicated, in particular falling back to nvlists:
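(The function being discussed below is, as best I can tell, lzc_snapshot(); its prototype, give or take version differences, is:)

int lzc_snapshot(nvlist_t *snaps, nvlist_t *props, nvlist_t **errlist);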
Now don’t get me wrong, this looks like a very useful function. It can take multiple snapshots in the same call.
props
is a set of properties to set on each snapshot, and
errlist
has the result of each snapshot. Batching is generally pretty nice!
My main criticism here though, and again, is the overuse of nvlists. It’s not entirely bad; if you want a batching interface, you need some notion of a “batch” to add things to. But these are generic dictionary-style objects. They can’t check types, enforce args, or do much of anything really, and they’re hard to debug when you get them wrong and the kernel rejects them.
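To make that concrete, here’s a hedged sketch of a call (using the fnvlist_* convenience wrappers; note that nothing in the types tells you the keys of snaps must be full snapshot names, that the values are ignored, or that every snapshot must live in the same pool):

#include <stdio.h>
#include <libnvpair.h>
#include <libzfs_core.h>

int
main(void)
{
    libzfs_core_init();

    nvlist_t *snaps = fnvlist_alloc();
    fnvlist_add_boolean(snaps, "crayon/home@backup");    /* key is the full snapshot name */
    fnvlist_add_boolean(snaps, "crayon/media@backup");

    nvlist_t *errlist = NULL;
    int err = lzc_snapshot(snaps, NULL, &errlist);       /* NULL: no extra properties */
    if (err != 0 && errlist != NULL) {
        nvlist_print(stdout, errlist);                   /* per-snapshot errors, as yet another nvlist */
        nvlist_free(errlist);
    }

    fnvlist_free(snaps);
    libzfs_core_fini();
    return (err);
}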
Of the two, I’d take
libzfs_core
over
libzfs
most days, because it’s small and only has two or three core concepts. But honestly, I don’t love either of them. The specifics are different but the gist is that they make me think too hard and worry that I’m holding them wrong, and when I’m busy thinking about the actual problem I’m working on, I don’t want to be worried that I’m using my storage platform incorrectly.
There’s a handful of other programmatic interfaces to OpenZFS, all of them interesting and with ideas to bring to the table, but for now I think they’re off to the side.
pyzfs
OpenZFS ships with Python bindings to
libzfs_core
. I have no idea how it works, or if it works. I do know that normal programmers get excited when they see it, and then disappointed when they find out it can’t do anything that
libzfs_core
can’t do, which rules out many things.
channel programs
OpenZFS has a Lua interpreter inside it, with Lua APIs to a handful of internal functions. The main advantage of this mechanism is being able to avoid the round-trip penalty when doing bulk dataset operations, as they can all be committed in the same transaction. I am fascinated by what might be possible under this model in the future, but in their current form they’re barely usable, and most people who find them don’t get very far with them. I have a lot of thoughts about channel programs, but that’s for another time. See
zfs-program(8)
for more info.
JSON output
Since 2.3 many of the
zpool
and
zfs
commands can produce output in JSON instead of the usual text/tab format. This isn’t enough for what we’re talking about here, but I’ll definitely agree that it makes a lot of things that were previously awkward much easier, with a uniform output format that can be more easily transformed and consumed by other programs. If it had existed when I was putting in my first major OpenZFS installation in 2020, it’s quite plausible that I would not have gone very far into the code and possibly never have switched careers to do OpenZFS dev full time. I’ll let you decide what kind of missed opportunity that is.
If we were starting from scratch, I at least know what I’d prototype. It would be a small, standalone C library (no
libzfs
or
libzfs_core
dependency) with a conventional noun+verb style API, because I think those are the two biggest problems we have: it’s not clear how to set up the thing, and it’s not clear how to actually use it. (If I’m honest, that’s a whole-of-OpenZFS problem, but I’m not going into that whole thing today either!)
So a little off the top of my head, maybe a
zfs list
-alike might look like:
/* open the pool */
zest_pool_t *pool = zest_pool_open("crayon");

/* loop over the datasets and print out their names and usage */
zest_dataset_list_t *dslist = zest_pool_dataset_list(pool);
for (zest_dataset_t *ds = zest_dataset_list_first(dslist); ds != NULL;
    ds = zest_dataset_list_next(dslist, ds)) {
        const char *name = zest_dataset_name(ds);
        uint64_t referenced = zest_dataset_prop_value_u64(ds, "referenced");
        printf("%s\t%" PRIu64 "\n", name, referenced);
}
Or maybe a part of some provisioning service would create new project datasets from a snapshot template:
/* create a new user project from the latest approved template */
zest_dataset_t *ds = zest_pool_dataset_find(pool, "product/template/nice@latest");
zest_snapshot_t *snap = zest_snapshot_from_dataset(ds);
zest_dataset_t *project = zest_snapshot_clone(snap, "product/users/sam/project");
I’m not tied to the name “zest”, so don’t bikeshed on that too hard. The better questions to ask are why I would use C at all, and why is this particular C so verbose?
As we’ve seen above, actually talking to OpenZFS at any level is difficult. Writing and maintaining a library of this kind is going to be a reasonable amount of work for someone, and not likely to be repeated for every language that anyone might want to use. The answer to that then is bindings, and C is a lowest common denominator that every language can hook up to. More importantly, using simple and uncomplicated C with clearly named types, explicit conversions, and understandable lifetimes, it’s relatively easy to create bindings without needing to jump through too many hoops.
If I did this, I’d be thinking about maintaining it outside of OpenZFS proper, tracking new versions as they come out but providing some amount of API (if not ABI) stability across updates. I’d want it to be somewhat decoupled from changes to OpenZFS proper, and to work properly across all the platforms OpenZFS supports. I don’t see anything here that isn’t achievable; most of it isn’t even that hard.
However, we’re
not
starting from scratch.
libzfs_core
was born seeking to replace
libzfs
, and now we have both of them. Channel programs can do some things that
libzfs_core
might have once imagined, and now we have those too. Starting a new thing feels like a big commitment because the worst thing would be leaving yet another thing in a half-working state.
For the moment, I’m working on shoring up
libzfs_core
. OpenZFS Production User Call regular Jan B has an actual application he’s working on (a FreeBSD jail manager) that would benefit from being able to manage snapshots and clones programmatically, and has spent a little time working out the smallest number of headers and defines required to get it to compile. I’m trying to either remove or make those dependencies explicit, and once that’s done, ensure that “base” OpenZFS packaging includes everything needed.
#17066
was recently merged and is the first small step along this road.
I’d like to then get
libzfs_core.h
documented. Even if it can’t do a lot of things, it should at least be not too much effort to get something simple up and running from just reading the documentation. I would also like to write a couple of sample programs, just to show how to do basic things.
Beyond that, I’m not sure. I mean, I have a thousand great ideas I’d like to play with, and never enough time. What I’d really love is for someone else to start building something and talk about what’s working and not working for them. Maybe that will help us see how to improve what we have further, or maybe it’ll be the thing that forces a decision on making a new thing. Or maybe something else, who knows!
There we go, 3500+ words to tell you “I am fixing some headers” 😅
But seriously. Maybe this is the opening move for a new generation of OpenZFS-enabled applications. Or maybe it’ll just make life a bit easier for the storage operators responsible for the ever-increasing amount of data in their care.
So again, if you’ve got an idea, or a use case, or a whole project proposal, or just like to pay for nice people trying to do nice things, you’d be very welcome at the weekly
OpenZFS Production User Call
, or you could say hello to the nice folk at
BSD Fund
, or just
hit me up for a chat
. Cheers!
Cursor told me I should learn coding instead of asking it to generate it
Yesterday I installed Cursor and I’m currently on the Pro Trial. After coding a bit I found out that it can’t go past 750-800 lines of code, and when I asked why, I got this message:
Not sure if LLMs know what they are for (lol), but that doesn’t matter as much as the fact that I can’t get past 800 lines of code. Has anyone had a similar issue? It’s really limiting at this point, and I got here after just 1h of vibe coding
Lol yes the message is actually funny. Not sure why it would write that in reality, never saw it happen.
So, in general it’s a bad idea to have huge code files.
Not just because of the AI context limit, but also because they’re harder for humans to handle.
Files that are too big are often a sign that a project is not well structured and that the concerns of each file/class/function etc. are not separated from each other.
It also seems you are not using the Chat window with the integrated ‘Agent’, which would create that file for you more easily than the ‘editor’ part of Cursor.
oh, I didn’t know about the Agent part - I just started out and jumped straight in. Maybe I should actually read the docs on how to start lol
So as you are starting with Cursor, I highly recommend going through the docs to learn what it can do… and how to use each part.
Yes, it would help to ask it to split things into parts; depending on what language you use (looks like JS?), the AI can then use import statements to include those separate files in your ‘start file’.
Usually it’s a good idea to do modular programming (split functionality into modules or classes or functions, depending on the language or framework).
If you tell the AI to use, for example, the Single Responsibility Principle as a guideline when coding, it will not mix different features in one file. You may also create rules (see docs) that tell the AI, for example, to keep files under a 500-line limit (a bit over is not tragic, but the more lines, the harder it will be for the AI)…
Hey
@T1000
. Thanks for your advice the other day. My post got deleted for an unknown reason, but I decided to go ahead and purchase a one-month Cursor Pro subscription as you advised. I didn’t experience any major problems with the new releases and I’m enjoying it.
lol I think it’s awesome haha! I never saw something like that; I have 3 files with 1500+ LOC in my codebase (still waiting for a refactoring) and never experienced such a thing.
could it be related to some extended inference from your rule set?
‘Project 2025 in Action’: Trump Administration Fires Half of Education Department Staff
Portside
portside.org
2025-03-13 08:02:11
The Trump administration on Tuesday took a major step toward dismantling the U.S. Department of Education by firing
roughly half
of the agency's workforce, a decision that teachers' unions and other champions of public education said would have devastating consequences for the nation's school system.
The department, now led by billionaire Linda McMahon, moved swiftly, terminating
more than 1,300 federal workers
on Tuesday including employees at the agency's student aid and civil rights offices.
Sheria Smith, president of AFGE Local 252, which represents Education Department workers, pledged in a statement to "fight these draconian cuts." The union
told
NPR
minutes after the statement was issued that Smith, an attorney with the Education Department's Office for Civil Rights, was laid off.
The Education Department said the mass staffing cuts would affect "nearly 50%" of the agency's workforce and that those impacted "will be placed on administrative leave beginning Friday, March 21st."
In a
press release
, McMahon declared that the workforce cuts reflect the department's "commitment to efficiency, accountability, and ensuring that resources are directed where they matter most: to students, parents, and teachers."
But critics, including a union that represents more than 3 million education workers nationwide, said the firings underscore the Trump administration's commitment to gutting public education in the interest of billionaires pushing tax cuts and
school privatization
.
"Trump and
Elon Musk
have aimed their wrecking ball at public schools and the futures of the 50 million students in rural, suburban, and urban communities across America by dismantling public education to pay for tax handouts for billionaires," said Becky Pringle, president of the National Education Association.
"The real victims will be our most vulnerable students," Pringle added. "Gutting the Department of Education will send class sizes soaring, cut job training programs, make higher education more expensive and out of reach for middle-class families, take away special education services for students with disabilities, and gut student civil rights protections."
"We will not sit by while billionaires like Elon Musk and Linda McMahon tear apart public services piece by piece."
Randi Weingarten, president of the American Federation of Teachers, said in a statement that "denuding an agency so it cannot function effectively is the most cowardly way of dismantling it."
"The massive reduction in force at the Education Department is an attack on opportunity that will gut the agency and its ability to support students, throwing federal education programs into chaos across the country," she continued. "This move will directly impact the 90% of students who attend public schools by denying them the resources they need to thrive. That's why Americans
squarely oppose
eliminating the Education Department. We are urging Congress—and the courts—to step in to ensure all students can maintain access to a high-quality public education."
The Education Department purge came days after
news broke
that President
Donald Trump
was preparing an executive order aimed at completely shuttering the agency—a move that would legally require congressional approval.
Lee Saunders, president of the American Federation of State, County and Municipal Employees, said late Tuesday that the Education Department firings "are Project 2025 in action, and they have one goal—to make it easier for billionaires and anti-union extremists to give themselves massive tax breaks at the expense of working people."
"Today's announcement from the Department of Education is just the beginning of what's to come," Saunders warned. "These layoffs threaten the well-being and educational opportunities for millions of children across the country and those seeking higher education. The dedicated public service workers at public schools, colleges, and universities deserve better. Elections may have consequences, but we will not sit by while billionaires like Elon Musk and Linda McMahon tear apart public services piece by piece. We will keep speaking out and finding ways to fight back."
An Inequality Tale of Two Capital Cities
Portside
portside.org
2025-03-13 07:52:35
Nine of the world’s
ten wealthiest
billionaires now call the United States home. The remaining one? He lives in France. And that one — Bernard Arnault, the 76-year-old who
owns
just about half the world’s largest maker of luxury goods — is now feeling some heat.
What has Arnault and his fellow French deep pockets beginning to sweat? Lawmakers in France’s National Assembly have just given a green light to the world’s first significant tax on billionaire wealth.
“The tax impunity of billionaires,” the measure’s prime sponsor, the Ecologist Party’s Eva Sas,
exulted
last month, “is over.”
Sas had good reason for exulting. In the French National Assembly debate over whether to start levying a 2 percent annual tax on wealth over 100 million euros — the equivalent of $108 million — the leader of the chamber’s hands-off-our-rich lawmakers introduced 26 amendments designed to undercut this landmark tax-the-rich initiative. All 26 of these amendments failed.
But France’s 4,000 or so deep pockets worth over 100 million euros — the nation’s richest 0.01 percent — don’t have to open up their checkbooks just yet. The French Senate’s right-wing majority has no intention of backing the National Assembly’s new levy, and, even if the Senate did, France’s highest court would most likely dismiss the measure.
French president Emmanuel Macron, for his part, has spent most of the last decade cutting corporate tax rates and axing taxes on investment assets. And his budget minister has
blasted
last month’s National Assembly tax-the-rich move as both “confiscatory and ineffective.”
None of this opposition, believes the French economist who inspired the National Assembly’s new tax move, should give us cause to doubt that move’s significance. The annual tax on grand fortune that the Assembly’s lawmakers have passed, says the UC-Berkeley analyst Gabriel Zucman,
represents
“amazing progress” that has the potential to set a bold new global precedent.
What makes the National Assembly’s tax legislation even more significant? That tax-the-rich vote has come at a time when the most powerful nation on Earth — the United States — is moving in the exact opposite direction. The new Trump administration, with the help of the world’s single richest individual, is now busily hollowing out the tax-the-rich capacity of the Internal Revenue Service.
Donald Trump’s predecessor, Joe Biden, had actually made some serious moves to enhance that IRS capacity, hiring — before he left office — thousands of new tax staffers. But those new hires,
notes
a
ProPublica
analysis, have now started going through Elon Musk’s “DOGE” meat-grinder.
Team Trump’s ultimate goal at the tax agency? To use layoffs, attrition, and buyouts to cut the overall IRS workforce “by as much as half,” the Associated Press
reports
. A reduction in force that severe, charges former IRS commissioner John Koskinen, would render the IRS “dysfunctional.”
The prime target of the ongoing IRS cutbacks: the agency’s Large Business and International office, the IRS division that specializes in auditing America’s highest-income individuals and the companies they run.
On average, researchers have
concluded
over recent years, every dollar the IRS spends auditing America’s richest ends up returning as much as $12 in new tax revenue. The current gutting of the agency’s most skilled staffers, tax analysts have told
ProPublica
, “will mean corporations and wealthy individuals face far less scrutiny when they file their tax returns, leading to more risk-taking and less money flowing into the U.S. treasury.”
Moves to “hamstring the IRS,” sums up former IRS commissioner Koskinen, amount to “just a tax cut for tax cheats.”
Donald Trump,
agrees
the Institute on Taxation and Economic Policy’s Amy Hanauer, “is waging economic war on the vast majority of Americans, pushing to further slash taxes on the wealthiest and corporations, while sapping the public services that keep our communities strong.”
Public services like Social Security. Elon Musk has lately taken to
deriding
America’s most beloved federal program as a “Ponzi scheme,” and the Social Security Administration’s new leadership team, suitably inspired, has just
announced
plans to trim some 7,000 jobs from an agency “already at a 50-year staffing low.”
A vicious economic squeeze on America’s seniors. A massive tax-time giveaway for America’s richest. How can we start reversing those sorts of inequality-inducing dynamics? The veteran retirement analyst Teresa Ghilarducci has one fascinating suggestion.
Any individual’s annual earnings over $176,100 will this year, Ghilarducci
points out
, face not a dime of Social Security tax. A CEO making millions of dollars a year will pay no more in Social Security tax than a civil engineer making a mere $176,100.
If lawmakers removed that arbitrary $176,100 Social Security tax cap and subjected more categories of income — like capital gains — to Social Security tax, Ghilarducci reflects, we could ensure Social Security’s viability for decades to come and even make giant strides to totally ending poverty among all Social Security recipients.
And if we had just merely eliminated the Social Security tax cap on annual earnings in 2023, the most recent stats show, America’s 229 top earners would have paid more into Social Security that year than the 77 percent of American workers who took home under $57,000.
We could also apply Ghilarducci’s zesty tax-the-rich spirit to the broader global economy, as the inspiration behind France’s recent tax-the-rich moves, the economist Gabriel Zucman, has just
observed
in a piece that cleverly suggests “tariffs for oligarchs.”
The fortunes of our super rich, Zucman reminds us, “depend on access to global markets,” a reality that could leave these rich vulnerable at tax time. Nations subject to Trump’s new tariffs, he goes on to explain, could retaliate by taking an imaginative approach to taxing Corporate America’s super rich.
“In other words,” Zucman notes, “if Tesla wants to sell cars in Canada and Mexico, Elon Musk — Tesla’s primary shareholder — should be required to pay taxes in those jurisdictions.”
Taking that approach “could trigger a virtuous cycle.” The super rich would soon find relocating either their firms or their fortunes to low-tax jurisdictions a pointless endeavor. Any savings they might reap from such moves would get offset by the higher taxes they would owe in nations with major markets.
The current economic “race to the bottom,” Zucman quips, could essentially become “a race to the top” that “neutralizes tax competition, fights inequality, and protects our planet.”
Lawmakers in France have just shown they’re willing to start racing in that top-oriented direction. May their inspiration spread.
From the decimation of welfare programs, to attacks on public sector unions, extreme gerrymandering and the mass incarceration of people of color, Wisconsin has been a petri dish for right-wing causes for more than three decades. This is perhaps most clear in the state legislature’s growing use of tax dollars to pay tuition at private, religious schools while straightjacketing funds for public schools.
Public funding of private school tuition in Wisconsin — known as “voucher” programs and sometimes disingenuously referred to as “school choice” — was originally sold to the state legislature in 1990 as a small program to help a few non-religious community-based schools serving low-income kids in Milwaukee. Step by step, the right wing has marched toward their strategic goal of taxpayer-funded vouchers for all students at all private schools.
Today, there are almost 400 private, mostly religious schools throughout Wisconsin, annually receiving roughly half a billion dollars in taxpayer money.
We live in a world where, in so many ways, the once-unthinkable is becoming reality. The message from Wisconsin to other states considering so-called voucher or private school “choice” programs: Beware. Well-funded right-wing forces have their eyes on dismantling our public schools.
As Ann Bastian, co-author of
Choosing Equality: The Case for Democratic Schooling
, foresaw a quarter century ago, “Privatizing public education is the centerpiece, the grand prize, of the right wing’s overall agenda to dismantle social entitlements and government responsibility for social needs.”
In the 1990s, Wisconsin (along with Ohio and Indiana) was in the forefront of a handful of states finding ways to use public dollars to support private schools. By 2023, 30 states and Puerto Rico and Washington, D.C., had 42 private school voucher programs, and more states are considering similar legislation, according to EdChoice, a pro-voucher foundation in Indiana. The programs go by various innocuous, democratic-sounding names — “Educational Savings Accounts,” “Education Freedom Accounts,” “Invest in Kids,” “Opportunity Scholarships,” “Empowerment Scholarships.” Some programs include homeschooling. They all transfer public tax dollars to private schools, often religious schools, with little or no accountability.
The people of Wisconsin never voted in favor of public tax dollars going to private religious schools — instead, the state’s Republican-dominated legislature has done the work. In fact, throughout the country, any time vouchers have been put to a popular vote (such as statewide referendums in Michigan, Utah, and California), they have been soundly defeated.
Undermining the right to a free and public education — enshrined in every state constitution — attacks democracy. As Indian novelist/activist Arundhati Roy has so eloquently noted: “Privatization of essential infrastructure is essentially undemocratic.”
Public Dollars for Private Schools
Looking over the more than three decades of Wisconsin’s voucher program, the right wing’s strategy has been clear: gain a foothold and advance whenever possible toward the goal of universal vouchers — i.e., public tax dollars paying students’ tuition at private schools, with few if any strings attached. The initial legislation was sold to reluctant lawmakers as a pilot program with a five-year sunset provision, targeted at families at or below 175 percent of the poverty level. Equally important, only 49 percent of a school’s students could receive vouchers, to try to ensure the schools were good enough to attract families privately paying tuition and therefore could be deemed “private.”
One by one, the legislature eliminated or significantly reduced those restrictions. Today, for instance, 33 of the state’s voucher schools do not have a single student privately paying tuition. Another 38 have fewer than 10 percent of the students privately paying tuition.
The right wing labels these private voucher schools as just another type of public school. Don’t be fooled. To argue that a private school is “public” merely because it receives public tax dollars is like arguing that your local Walmart is a public store because it accepts food stamps.
Even when a voucher school does not have a single student paying private tuition, the state legally defines the school as private. This legal designation allows these private voucher schools to evade regulations that public schools must adhere to. Here are just a few of the important legal differences between public and private schools in Wisconsin:
Public schools are prohibited from discriminating against students on the basis of sex, pregnancy, marital or parental status, or sexual orientation. Private voucher schools are allowed to circumvent these anti-discrimination measures. (The only restrictions are that voucher schools must adhere to federal anti-discrimination guidelines based on race, color, or national origin.)
Public schools must honor constitutional rights of free speech and association, and due process when a student is suspended or expelled. Private voucher schools do not.
Public schools must follow Wisconsin’s open meetings and record laws. Private voucher schools do not.
Public schools are controlled by publicly elected school boards. Private voucher schools are not.
Wisconsin school board meetings are open to the public. Private school board meetings are not.
Publicly Funded Homophobia
The problems are particularly acute when tax dollars subsidize religious education at voucher schools — and 95 percent of voucher schools in Wisconsin are religious. Take what happened to two female students in 2022 at Fox Valley Lutheran High School.
The two students were called into the dean’s office a few months before graduation, according to an investigative report last May in Wisconsin Watch, a nonprofit, nonpartisan investigative news service. One student was the cheerleading captain, the other a basketball player, homecoming queen, and student council member. In separate meetings, the two were told they faced expulsion because it was suspected that the two young women were dating each other. The school’s handbook prohibits any “homosexual behavior,” on or off campus.
The cheerleading captain told Wisconsin Watch that the dean said they could graduate — on the condition they break up and speak to a pastor. The dean, meanwhile, “outed” the students to their parents.
In 2019, Sheboygan Lutheran High School canceled the valedictorian speech of Nat Werth after he came out as gay. According to Werth, the school’s handbook was subsequently expanded to include anti-transgender policies.
Wisconsin has long been in the forefront of protecting LGBTQ+ students and employees, and in 1982 it became the first state to ban discrimination in public and private sector employment on the basis of sexual orientation. Yet today, because of public funding of private religious schools, Wisconsin taxpayers fund discrimination.
Gus Ramirez, whose family foundation runs the St. Augustine Prep voucher school in Milwaukee and plans to open another school, is clear that the school serves a select student body and promotes conservative religious beliefs.
“We hold firm to the biblical description of family at this school,” he told the
Milwaukee Journal Sentinel
in August. “That doesn’t mean all teachers and all staff are part of a nuclear family, but we strongly believe that nuclear family generates a lot better student outcomes. . . . Some will say that we’re too conservative, but for the most part our teachings are just aligned with Scripture.”
Skirting Special Education Law
Special education is another area with disturbing differences between public schools and private voucher schools. Public schools must adhere to all federal special education laws and regulations. Not so for private voucher schools.
An official from Wisconsin’s Department of Public Instruction (DPI), which oversees the voucher programs, told Wisconsin Watch that the DPI was “fully committed” to ensuring nondiscrimination of students with disabilities, but did not believe it had the authority to require the private schools to adhere to these federal requirements.
“DPI has significant concerns about the DPI’s authority to ensure that Choice schools do not discriminate against students with disabilities,” the agency’s chief legal counsel wrote in a letter to Wisconsin Watch.
Nicholas Kelly, president of the voucher organization School Choice Wisconsin, disputed the allegations of discrimination and sent a letter to Wisconsin Watch that read, in part: “Fundamentally, parental choice and educational freedom provide accountability. If parents or students are not satisfied with the education they receive they can choose another school.”
Discrimination Regardless of the State
As voucher programs have expanded across the country, pro-public school activists have documented various forms of discrimination.
In December 2023 the Education Voters of Pennsylvania issued “Pennsylvania Voucher Schools Use Tax Dollars to Advance Discrimination.” The report focuses on two of the four “scholarship programs” put in place in 2001 with annual funding of $340 million. The study highlights exclusionary and discriminatory policies and practices, found on the websites of private and religious voucher schools that participate in the Opportunity Schools Tax Credit program.
For example, the Dayspring Christian Academy “retains the right to refuse enrollment to or to expel any student who engages in sexual immorality, including any student who professes to be homosexual/bisexual/transgender or is practicing homosexual/bisexual/transgender, as well as any student who condones, supports, or otherwise promotes such practices.”
A similar study was influential in December 2023, when the Illinois legislature refused to renew the “Invest in Kids” voucher program, which allowed individuals and businesses to redirect the state taxes they owed to support private schools. (The policy was also criticized for potentially creating a $75 million hole in the state budget.)
The nonprofit organization Illinois Families for Public Schools found that nearly 20 percent of the schools that would have benefited from the renewed Invest in Kids Program had anti-LGBTQ+ policies. It also found that only 13 percent of the private schools in the Invest in Kids program served any special education students in 2022.
Improve, Don’t Destroy, Public Education
For every dollar that goes to a private voucher school, that money is unavailable for funding public schools. Those of us who work in public education are acutely aware of its shortcomings and challenges; too often our public schools reinforce social, racial, and gender inequality, especially in areas already segregated by class and race. But they are also battlegrounds to defend and promote progressive social policies, and are obligated to serve the needs of all children.
Establishing two school systems — one public and one private, yet both supported with tax dollars — only expands the ability of private schools to pick and choose their students, reinforce inequality, and teach a biased curriculum. The public should not be forced to subsidize educational programs that promote religious beliefs that might be antagonistic to one’s own religious views.
Milwaukee Congresswoman Gwen Moore (D) was a state representative when vouchers first passed the Wisconsin legislature, and voted for the program because it was small, secular, and experimental. “Of course, this is a vote I deeply regret,” she later said. “I never was the kind of voucher person who wanted to destroy public education.”
Public schools are the only educational institutions in our communities that have the capacity, commitment, and the legal obligation to serve all students. We must fight to improve our public schools and defeat the threat of private school vouchers.
Bob Peterson (
bob.e.peterson@gmail.com
) is a founding editor of Rethinking Schools and was a member of the Milwaukee School Board from 2019 to 2023, and board president for the final two years. He was a classroom teacher in public schools for more than 25 years, and president of the Milwaukee teachers’ union from 2011 to 2015. He would like to thank Barbara Miner for help with this article.
If you're familiar with C, you'll notice this is some sort of weird, bad C. You're mistaken, however. This is valid Prolog, using some
non-standard features of SWI-Prolog
for the curly braces.
This Prolog is read as a list of terms, and converted to valid C code. So this:
int func main(int var argc, char:ptr:array var argv)
Translates to this:
int main(int argc, char *(argv[]))
Beyond some obvious changes like
var
and
func
operators, which make the syntax expressible as Prolog, there are some unexpected quirks:
Symbols starting with capital letters or underscores are for use with Prolog, and won't translate to C unless wrapped in single quotes:
int:ptr var string = 'NULL';
'_Bool' var v = true;
As a result of the above, character literals start with
#
:
Operator precedence was maintained for existing Prolog operators. For example,
=
and comparison operators like
==
,
<
, and
>
have the same precedence, so you'll need parentheses where you wouldn't in C:
This is also why the arrow operator
->
was replaced with
@
, since it's used in a completely different way in Prolog and has a completely different precedence.
C+P:
C:
Complex type declarations are different. C-style declarations are perfectly expressible in Prolog; I just don't like them.
Becomes
It may also help you remember the difference between
const char *
and
char const *
:
const(char):ptr,
char:const(ptr)
The examples provide a more complete picture of the syntax.
In the end, it's C but more verbose and particular about semicolons. Not exactly a silver bullet.
Let's introduce the
*=>
operator.
The
*=>
Operator
Take this snippet from example 04:
max(A, B) *=> A > B then A else B.
I said C+P does no processing beyond translating terms to C, but it does one small step. The compiler will gather all the terms defined with
*=>
, and then replace the left-hand side with the right-hand side throughout the rest of the code until no more rules apply.
Because this operates on Prolog terms, not text, we don't need any extra parentheses in our
max(A, B)
macro to prevent mishaps with operator precedence. It's inserted into code as a single term, and looks exactly like a function call in use:
We have full access to Prolog at compile time using
:-
, allowing us to do just about anything.
This macro gets any instance of
println(Format, Args...)
with a string literal
Format
, and converts it to
printf
with a newline appended to
Format
.
We could probably do this automatically by scanning the code for any usage of
list[T]
and instantiating the template right above it, but I'll leave that as an exercise for the reader.
We also have some syntax to get a function name with
list[T]:Method
:
We have a template with a name
Name
, type parameters
T
, and the body
Body
.
The macro removes this code and inserts a comment. Everything else is handled in the Prolog world.
assertz(
declare(Name[Z]) *=> NewBody
By God. Our macro
assert
s another macro,
declare(Name[Z])
. It also has conditions:
:- (
ground(Z),
T=Z,
Body=NewBody,
Those three lines are the bulk of the macro. It unifies the template type
T
with the real (ground) type
Z
, then returns the body of the template. This is what turns
declare(list[int])
into the code for the type.
But that's not all it does: the macro we're asserting itself asserts more macros:
This generates the C names for things like
list[int]
and
list[int]:function
.
'*mangled_name'
is just another macro, but this example is long enough and it's all in example 05. The
*
and single quotes are nothing special, they just prevent this macro from accidentally colliding with a user-defined
mangled_name
function.
C+P provides no type information we could use for method syntax, to do something like
my_list.append(...)
, instead of
list[int]:append(my_list, ...)
.
We could, of course, use macros to gather the types of every variable in a function body and swap out method calls for the appropriate function, but at a certain point I'm just writing a full-on XLang-to-C compiler in the macro system, which is an interesting idea, but I'm employed.
I've provided several other examples of the wonders of C Plus Prolog:
Example 06 overloads the
struct
keyword to add compile-time reflection.
Example 08a compiles different code if the target file ends in
.h
or
.c
, to declare a typical header and implementation in a single file.
I find it strangely compelling to write in C+P, adding features that sound impossible. It's like a puzzle game, which is why I reach for Prolog so frequently.
Installation and Usage
C Plus Prolog is easy to install. All you need is a C compiler, SWI-Prolog, and this repository. You can figure it out.
Then you can run
cpp.pl
from this repository with the following syntax:
swipl -s cpp.pl -- <input file> <output file>
By convention, C Plus Prolog files end in the extension
.c+p
.
Check
test.sh
and
test.ps1
for more example usage, plus a fast way to run all the tests (which are probably not rootkits).
What is the point of this?
C Plus Prolog is a half-serious exploration of macros in a systems programming language.
In the process of making this, it became clear I prefer the compile-time evaluation and reflection offered by languages like D and Zig over syntactic macros.
Sure, with enough work you can do everything and more with the macro system, but what does it actually
add
over a separate code generator, or a DSL? A nicer interface, maybe.
In Common Lisp, you have the full codebase at compile time, but the real value is recompiling code at runtime, not manipulating a list of lists that sort of looks like code, if you squint through all the backquotes and commas.
Rust's procedural macros save maybe 40 lines of code, since you don't have to write your own tokenizer, but everything else is up to you and whatever libraries you want to bring in.
Most of my metaprogramming wants involve reflecting on information the compiler already knows, like the types and annotations in the code, not the raw syntax tree. For a language-within-a-language, which syntactic macros are best at, I'd usually rather go the extra mile and make an entirely separate DSL with a purpose-built syntax, rather than contort the problem to, say, S expressions or Prolog terms.
Despite that, C+P is dangerously close to being useful.
The biggest advantage it has is the fact it generates plain C by default, allowing for performant cross-platform builds where less popular languages don't have support, or have to generate much more complicated C due to different abstractions.
Prolog's been around for 50 years, as well. If the SWI-Prolog extensions were removed, swapping
func F {Body}
with
func F => Body
, for example, it could be built on a huge swath of hardware.
But I wouldn't want to use it. There's absolutely no validation or error messaging, so any mistakes lead to broken C or silent failures. I thought using
var
as an operator would be nice, but it's a lot of visual noise. And if you thought C's semicolons were annoying, in C Plus Prolog, semicolons are operators. Having two in a row breaks things. Having one at the start breaks things. Having one at the
end
breaks things.
If someone wanted to use C+P, there are many better alternatives.
The
Nim
and
Haxe
languages can compile to C and/or C++, and they offer a good user experience out of the box, though I can't say how directly their code translates to C.
cmacro
offers a similar level of syntax manipulation to C+P in a nicer format, powered by Common Lisp.
The D language
has compilers backed by GCC and LLVM, so it should work in most places C will.
I don't know what the conclusion is.
Democrats Must Defend Mahmoud Khalil
Portside
portside.org
2025-03-13 07:26:06
Protesters demonstrate in support of Palestinian activist Mahmoud Khalil on Tuesday in Washington Square Park in New York. (AP Photo/Yuki Iwamura)
Top Democratic officials spent the better part of the last decade warning that Donald Trump must not become president, because he would become a dictator, act like a dangerous authoritarian, and be Adolf Hitler reincarnate. Now, as outrage builds over Trump’s attempt to strip a permanent resident of his green card and unlawfully deport him over his antiwar activism, many Democratic leaders have either been silent or offered only the weakest of objections.
It’s fair to say that the overall Democratic response so far to what has roundly and correctly been called the most
serious assault on the First Amendment
in years has been a mixed bag. The case is the exact kind of authoritarian overreach that high-ranking Democratic officials have claimed to be fighting the last eight years.
On Saturday, Immigration and Customs Enforcement (ICE) agents arrested Mahmoud Khalil, a leader of the Columbia University student protests against the war in Gaza who is a permanent resident and whose US citizen wife is eight months pregnant. They then spirited him a thousand miles away to a
scandal-ridden
detention facility in Louisiana, while Trump officials announced they had summarily revoked his green card — something government officials can’t actually do — and were getting ready to deport him. The administration has not only failed to show evidence that he committed any crime to justify this, they are
being
quite explicit that he
hasn’t
committed one, but that he has simply been targeted because of his political views.
It’s not that all Democrats have been MIA on the matter. The Senate Judiciary Committee’s official Twitter/X account
tweeted out
“Free Mahmoud Khalil” on Monday, the same day that New York attorney general Letitia James
said
she was “extremely concerned” about his arrest and was keeping an eye on it. Just yesterday, Sen. Chris Murphy (D-CT) put out a statement straightforwardly
defending
Khalil’s rights, hitting Trump on his hypocrisy here, and explaining why his targeting of Khalil is a threat to
all
Americans’ basic civil liberties.
“Once a citizen or a resident of America can be locked away with no charges against them simply because they protested, there is no going back for America,” Murphy said. “Even if you disagree with Khalil’s views, this practice should cross the line for you.”
Rep. Jamie Raskin, the top Democrat on the House Judiciary Committee,
condemned
it in similarly stark terms. “We cannot allow this nation to slide into a system of presidential authoritarianism, where people are seized at their homes, arrested, and detained simply for expressing disfavored political viewpoints,” he said. “The detention of Mahmoud Khalil is ripped straight from the authoritarian playbook.”
But most high-ranking Democrats couldn’t muster anything like this. Sen. Chuck Schumer (D-NY), the Democratic leader in the Senate and one of two senators from New York, put out a mealymouthed
statement
condemning Khalil and justifying
some
kind of punishment against him, stating that he “abhor[s] many of the opinions and policies that Mahmoud Khalil holds and supports,” and that he had “encouraged [Columbia] to be much more robust” in cracking down on protests. The
statement
of Hakeem Jeffries, another New York Democrat and the party’s leader in the House, similarly opened by advising that “to the extent his actions . . . created an unacceptable hostile academic environment for Jewish students and others, there is a serious university disciplinary process that can handle the matter.”
In fact, New York, a solid blue state and Khalil’s home, has generally been a bleak landscape for Democratic officials willing to robustly defend free speech. Rep. Adriano Espaillat, whose district Khalil lives in, didn’t say anything for days, eventually only
commenting
that he “expect[s] the Department of Justice to work within the confines of the law and that due process is guaranteed.”
“Did he break the law? Not break the law? Or is it just political punishment? I don’t know that answer right now,” Gov. Kathy Hochul said.
“If he has a gun, he needs to go,”
said
New York City mayor Eric Adams. (It was unclear what he was referring to.) Andrew Cuomo, the disgraced former governor who is currently the front-runner for Adams’s position, simply
dodged
the question.
In equivocating themselves to death, these Democrats have somehow ended up with less political clarity than, of all people,
Ann Coulter
, who reacted to Khalil’s detention by
writing
simply: “There’s almost no one I don’t want to deport, but, unless they’ve committed a crime, isn’t this a violation of the first amendment?”
While thirty New York elected officials
signed on
to a letter to the Department of Homeland Security demanding Khalil’s release, they were made up largely of progressives. Likewise, only fourteen House members
signed on
to a similar congressional letter, again, most of them members of the Squad and other progressives.
It is encouraging that there are Democratic officials with prominent voices and in positions of power who are speaking out on Khalil’s behalf and calling this what it is: a chilling and dangerous attack on free speech that threatens everyone’s rights, not just those of immigrants. But the overall Democratic response to something that is so clearly an overstep is still lacking — and may be
another sign
of the growing misalignment between rank-and-file Democrats and their political leadership.
Branko Marcetic is a
Jacobin
staff writer and the author of
Yesterday’s Man: The Case Against Joe Biden
.
Citizen Marx: Republicanism and the Formation of Karl Marx’s Social and Political Thought
Bruno Leipold
Princeton University Press
ISBN: 9780691205236
We treat words like “Marxism” and “communism” today as almost synonymous, but in the 19th century, the socialist scene was composed of multiple intellectuals vying to determine the course of progressive politics. Socialism was in fact far from being the dominant ideological opponent to conservatism and liberalism on the progressive spectrum. Republicanism championed the cause of popular freedom on mostly political, non-economic grounds. It favoured political reforms to enfranchise popular self-government but was wary of using the state to impose socio-economic equality on the population. Progressive republicans mainly criticised the arbitrary use of power by authoritarian state apparatuses and campaigned for the broadening of political participation. Only in the 19th century did their focus shift slightly from the political sphere to the factory, when some republicans realised that, even if people were liberated from authoritarian kings and oppressive aristocracies, they would still suffer from social dependency under the system of capitalist wage-labour. The republicans strove for a return to a pre-bourgeois economy of small-scale artisans, local farmers, and independent business owners as a remedy against the ongoing proletarianisation of workers under industrial capitalism.
Long before Marx became the figurehead of socialist thought, his intellectual development was deeply rooted in this republican tradition. In
Citizen Marx,
political theorist and historian of political thought Bruno Leipold
meticulously documents Marx’s vacillating journey between the philosophies of republicanism and socialism. While republicanism focused on political reforms to constitutional systems, electoral laws, or civil rights, and promoted a return to a middle-class economy of small business owners, socialism fully embraced the economic and technological development of capitalism toward concentration of power and proletarianisation. The industrialisation of production was, for the socialist movement, a blessing in disguise insofar as it massively increased the collective production of wealth. Economies of scale augmented the overall wealth available to society, but capitalist property laws denied workers the benefits of this economic progress. The socialists subsequently imagined different ways for the working class to claim ownership over the enhanced means of production. However, most of these proposals were surprisingly silent on politics, constitutional reform, or revolution.
Robert Owen
thought capitalists could simply be convinced into founding worker-managed cooperatives while
Henri Saint-Simon
believed politics could be reduced to a technocratic science that would let experts administer the production and distribution of goods without much democratic input.
Leipold’s book documents Marx’s intellectual development between these two philosophies to show how Marx articulated his own hybrid version of republican socialism, which would soon become the dominant intellectual tradition of progressive politics for one-hundred years. Leipold superbly surveys Marx’s expansive oeuvre from his early journalistic writings – and even his high-school essays – to his major works in political economy and his commentaries on the Paris Commune. He thereby distinguishes three phases in Marx’s thought. While Marx started as a republican critic of the Prussian State and Hegelian philosophy, around 1843 he broke publicly and privately with republicanism and shifted towards socialism. Only much later, in his
commentaries
on the failed republican experiments in France, would Marx wholeheartedly return to his republican principles. But by now, these principles would be so infused with socialist political economy that they put Marx in direct confrontation with the main republican thinkers of his day, like
Giuseppe Mazzini
. While the latter dismissed
the Paris Commune
from the start as futile working-class egoism, Marx viewed the Commune as a tragic prefiguration of a post-capitalist future that convincingly combined working-class social emancipation with republican institutions of democratic self-government. The road forward, he believed, lay in a combination of the collectivisation of the economic infrastructure and the establishment of a social republic that would put the proletariat definitively in charge of this infrastructure.
The key lesson from Leipold’s work is how Marx created his hybrid philosophy of republicanism and socialism through intense continued debate with other critical thinkers of his time. Most of these authors are currently forgotten, like the republican Hegelian
Arnold Ruge
, the antipolitical socialist
Karl Grün
, or the rabid anti-communist republican
Karl Heinzen.
Classical history of political thought tends to present Marx’s thought as a stand-alone, independent reflection on the political tensions of industrial capitalism, as if Marx’s socialism emerged out of an intellectual vacuum. But Leipold shows that Marx continuously positioned himself vis-à-vis rival thinkers and activists. Without the intellectual conflict between these competing views, there simply would not have been a Marxian political philosophy. If class struggle is the motor of history, then intellectual struggle was the motor of Marx’s life. By excavating the writings of these understudied rival thinkers – including the witty details of how these individuals clashed with Marx’s challenging personality – Leipold throws Marx’s thought in the flames from which it drew its vigorous spark.
A new look at the chemical reaction of republican and socialist elements that produced the combustion we nowadays call “Karl Marx’s philosophy” is particularly interesting for today’s left-wing political thought. On the one hand, an influential strand of
accelerationist Marxism
has moved away from experiments in local anti-hierarchical social movements and democratic self-government, which they dismissively describe as “
folk politics
”. They champion technological innovation as leverage toward a postcapitalist future where humanity can enjoy renewed free time as the production process becomes increasingly automated. On the other hand, new left-wing republicanisms have either uncoupled progressive demands for popular self-government from economic working-class emancipation (
Ernesto Laclau and Chantal Mouffe
) and/or revived the ideal of property-owning democracy for independent small-scale business owners (
Elizabeth Anderson
). Even within the communist movement itself, calls for degrowth communism (
Kohei Saito
) dismiss Marx’s pleas for collectivising the industrial production apparatus in favour of small-scale, self-reliant economies reminiscent of republican agrarianism.
While these debates between pro- or anti-growth, pro- or anti-technology, pro- or anti-agrarian communes are currently ravaging left-wing intellectual debate, Marx’s hybrid and flexible positioning between republican self-government and socialist large-scale collectivised production provides a helpful new perspective. Socialist political philosophy as a political practice should not consistently reaffirm a coherent, static doctrine of theoretical principles. Rather, it is a dynamic series of interventions in concrete political situations, where the “correctness” of a position is determined through a theory’s effectiveness in political and intellectual struggle. By showing the malleability and relativity of Marx’s own thought throughout his struggles with other progressive thinkers, Leipold teaches progressives today the same lesson of identifying philosophical thought as an exercise in political intervention.
Note:
This review gives the views of the author, not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.
Dr. Tim Christiaens is assistant professor of philosophy and economic ethics at Tilburg University in the Netherlands. He writes about critical theory in relation to work and/or technology with, among others, a book on platform work called
Digital Working Lives
(Rowman & Littlefield, 2022).
Tidbits – Mar.13 – Reader Comments: Mahmoud Khalil Arrest Greatest Threat to Free Speech Since Red Scare; Assault on Social Security; DHS Ends TSA Officers’ Contract; Teachers and Schools; Call Script: Demand the Release of Mahmoud Khalil; More…
Portside
portside.org
2025-03-13 02:56:15
Tidbits – Mar.13 – Reader Comments: Mahmoud Khalil Arrest Greatest Threat to Free Speech Since Red Scare; Assault on Social Security; DHS Ends TSA Officers’ Contract; Teachers and Schools; Call Script: Demand the Release of Mahmoud Khalil; More…
jay
Wed, 03/12/2025 - 21:56
...
In a post on Truth Social, Donald Trump made it clear that Mahmoud Khalil was snatched because of his activism. “This is the first arrest of many to come,” wrote Trump.
Despite his previous rhetoric to the contrary, it’s clear that Donald Trump is trying to finally be the one to successfully carry out the long-standing GOP goal: destroying Social Security.
"After years of hard work and a lifetime of contributions, our seniors shouldn't have to worry about Republicans meddling with their Social Security," said one House Democrat.
Association of Flight Attendants President Sara Nelson made her comments following the Dept. of Homeland Security announcement on Friday that it is stripping Transportation Security Administration (TSA) workers of their collective bargaining rights.
Elon Musk said it. The problem with Western Civilization is empathy. Sooo... in the interest of efficiency (defined as that which puts money in the pockets of the billionaires), no need to worry about civilian casualties, who cares? Similarly, who needs the VA? Medicaid? Medicare? Social Security? Public education? Clean air and water and food? Public health? Medical research?
Elon and his ilk believe the planet is doomed, and the best thing to do is give them all the money so they can escape to Mars. Seriously.
It is the fifth anniversary of the COVID-19 virus. The Lancet Commission estimated that 40% of COVID-19 deaths could have been averted if it hadn't been for Trump's lies and inadequate response to the pandemic.
Former ADL education director lays bare the dirty secret: "the ADL has long abandoned its stated mission of securing justice and fair treatment for all. Instead, it shields Israel from criticism over its decades-long oppression of the Palestinian people and dangerously conflate that critique with antisemitism, while giving cover to right-wing extremists."
Discussion of Mr. Musk, especially in the United States, often misses something: He is a white South African, part of a demographic that for centuries sat atop a racial hierarchy maintained by violent colonial rule. That history matters. For all the attempts to describe Mr. Musk as a self-made genius or a dispassionate technocrat, he is in fact a distinctly ideological figure, one whose worldview is inseparable from his rearing in apartheid South Africa. More than just an eccentric billionaire, Mr. Musk represents an unresolved question: What happens when settler rule fails but settlers remain? That’s what is playing out in America today.
I want the war in Ukraine to stop. I dislike warmongers, whether pro-Russia or pro-NATO - including Stiglitz or Kosenko.
Another subject: I miss articles about the terrible crimes that the new leaders in Syria are committing against civilians - including women and children.
Folk music, for Woody Guthrie and Pete Seeger, was never just music — it was memory, resistance, and a reminder that even in the harshest of times, the simplest songs can still carry the weight of a better world. Writing about Guthrie, Mike Gold posed a question: “Where are we all heading who have bet our lives on democracies? Who can say?” He found the answer in Guthrie’s “harsh and painful” songs — songs that “reek of poverty and genuine dirt and suffering.” “Democracy is like that,” he wrote, “and it is a struggle and a song.”
Perhaps it’s time for a new “Guthrie phase” — to pick up our machines against fascism, like the communist folk singers once did, and dare to imagine a new world.
Dave
=====
It may not be for everyone but if it’s for you, do not miss Taylor Dorrell’s essay (published in
Jacobin magazine
and circulated by
Portside
) on the Almanac Singers and the Weavers. Some of the stories and links are new to me and fascinating.
Daniel Millstone
Post on Portside
=====
Very good article but I think to discuss the communist folk singers and not mention Paul Robeson leaves out an important part of the history
I teach 185 students across 6 sections of different history courses for a Philly high school, but teaching *history* is really such a minor part of my job. More often than not I am doing therapy, life coaching, career counseling, triage nursing, trauma management, addiction management, conflict resolution, and surrogate parenting, little of which I was ever trained for before starting teaching. In the last week alone, beyond teaching history, I have broken up fistfights, helped students cope with religious trauma-induced panic attacks, intervened SO many times to deprogram homophobic language, helped a student grieve a friend who just got shot, and tried to keep a kid conscious as he was ODing and vomiting like a sprinkler (he's fine).
It certainly doesn't have to be this way. In the complete absence of guaranteed social services and safety nets and the overwhelmingly violent presence of racial and class inequalities, gun violence, and segregated futures for the rich and poor, schools regularly function as a sort of makeshift social service last (and only) resort. Much like how the ER functions as "normal" healthcare for many of the poorest people in a country without universal healthcare, schools operate as overstretched, under-resourced sites of threadbare social service interventions at a point where it is almost too late, and far less effectively than if the various harms, injustices, and deprivations had been treated at their root.
For all of its posturing about the importance of family, I have never been more convinced of how much this country hates its children, especially kids of color. It has never been more clear to me at a visceral level how antithetical to *life* -- let alone human flourishing -- the "American way of life" actually is. The compound effects of intentional generational deprivation, societal cruelty, monetary gatekeeping for the enjoyment of so many things, a cult of anti-intellectualism, and a generalized poverty of time make for problems schools could never begin to solve... and if we built a truly humane and just society, they wouldn't be asked to.
Justin Mueller
[
Justin Mueller
, ph.d, is a political theorist, writer, and public school teacher in Philadelphia. He writes about the politics of time, abolitionism, and ways of implementing critical pedagogies in the classroom.]
(Proudly representing the 18 communities of the 1st Franklin District, Massachusetts)
PO Box 450, Sunderland, MA
I grew up on the Canadian border in Derby Line, Vermont — across from Stanstead, Quebec. Our communities were intertwined. Going through customs was just something we did. We knew the border agents by name. They were our neighbors and a part of our community. Canadian Dr. Gilles Bouchard would turn on his porchlight on Rue Dufferin if he was seeing patients. As Americans, we would cross the line with colds, plantar warts and all. A sign in his waiting room read, “No one MUST pay. If you are short of money, just say, ‘Thanks Doc!’ You will not get a bill.” Homes were built over the line. Streets crossed the line. The border was invisible to us all.
Today, the Haskell Free Library and Opera House still straddles the line. As a kid, it was my refuge. My library card was my ticket to other worlds. The Haskell’s librarians helped me to find them.
A scuffed piece of black tape runs across the floors to let visitors know what country they’re standing in. But unless you’re there to take a picture, no one pays the line any attention.
Which is why I was so disturbed to learn that, according to
The Boston Globe
, during a recent visit, Donald Trump’s Secretary of Homeland Security went to the Haskell Free Library, stood on the American side, and said, “USA No. 1” and then crossed the line and said, “The 51st state.” Her remarks came while she was in the area following the death of U.S. Border Patrol Agent David “Chris” Maland, who was shot and killed in service on January 20, 2025.
Since I first read Globe reporter Paul Heintz’s story, I have been trying to wrap my head around how a U.S. Secretary of Homeland Security could turn a visit to honor the life of Agent Maland into one celebrating this Administration’s goal of destroying our relationships with long-standing allies.
The impacts of 9/11 rippled to communities like Derby Line. The border shut down. Facilities were fortified. Barriers went up. Surveillance increased. You could no longer just let the border officer know that you were heading over to see Dr. Bouchard or your family member. You had to have identification. The attacks on our nation that day forever changed the fabric of small towns along the border. Working together, the United States and Canada implemented security measures that would help to prevent future attacks.
Today we’re facing a different kind of attack: one from political leaders who exploit fear and division to advance their own agendas. Last week’s implementation of tariffs against our long-standing neighbor and ally to the north not only serves to divide our two nations but also hurts everyday Americans.
The construction of a library and opera house along an international border was a gift from Martha Stewart Haskell. She chose the site purposefully so that Canadians and Americans could have equal access to literature and art.
I never could have imagined that this haven for me as a child would become a symbol to me as an adult.
Today, this building, commissioned by a woman and built over an international boundary line over 100 years ago, has given me hope.
Haskell found a way to build something bigger than anyone ever could have imagined at the time — the Haskell Free Library and Opera House is a symbol that borders do not exist unless we construct them. Division does not grow unless we sow it. And disregard for our neighbors happens only when we choose to acknowledge the lines that could separate us.
The Haskell Free Library has stood through more than a century of political changes, wars, economic shifts, terrorist attacks, and much more. It is a reminder that no single person can destroy what Haskell built and that we are strongest when we push through boundaries, disregard the lines, and stand by our neighbors.
Hello, my name is [Your Name], and I am calling to demand the immediate release of Mahmoud Khalil, a Columbia University student detained by ICE for his political activism.
This is an unconstitutional attack on free speech. The First Amendment protects all individuals, including non-citizens, from being punished for expressing their views. The Supreme Court has ruled that peaceful activism is not a crime. If we protected the speech of the Westboro Baptist Church, how can we justify jailing students for speaking out?
This detention echoes McCarthyism, where people were persecuted for their beliefs. America must not repeat its past mistakes. Targeting activists for dissent is illegal and un-American.
I demand an immediate answer: On what legal grounds is Mr. Khalil being held? If there are no formal charges, he must be released immediately.
I expect a response, and I will continue to follow up. Thank you.
Call now and demand his freedom! #FreeMahmoudKhalil
He is at the Central Louisiana ICE Processing Center.
If you would like to call and demand his release, you can reach the facility at (318) 992-1600 between 8 a.m. and 5 p.m.
The page will guide you through creating the appropriate credentials to connect to Google Spreadsheet.
Frequently Asked Questions
Q: What?!
A: The following quote best summarises this project:
They were so preoccupied with whether or not they could, they didn't stop to think if they should.
Q: Not but really, what's going on here?!
A: Kubernetes exposes a robust API that is capable of streaming incremental updates. Google Spreadsheet can be scripted to read and write values, so the next logical step is to connect the two (also,
credit to this person
).
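To give a flavor of that streaming API, here is a minimal sketch of my own (not code from this project), assuming kubectl proxy is serving the cluster API on 127.0.0.1:8001: the watch=true query parameter keeps the request open, and the server streams one JSON event per change, which a sync process could translate into spreadsheet cell updates.

/* Minimal sketch (not part of this project): stream incremental pod
 * updates from the Kubernetes API with libcurl. Assumes `kubectl proxy`
 * is running locally on port 8001. */
#include <stdio.h>
#include <curl/curl.h>

/* Print each chunk of the event stream as it arrives. */
static size_t on_chunk(char *data, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;
}

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* watch=true keeps the connection open and streams one event per change. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://127.0.0.1:8001/api/v1/pods?watch=true");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_chunk);

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}

Each streamed event is a JSON object with a type field (ADDED, MODIFIED, DELETED) plus the changed resource, which is exactly the kind of incremental update the answer above refers to.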
Q: Is this production-ready?
A: We're looking for funding to take this to the next level. Replacing YAML with spreadsheets has always been our mission as a company, and we will continue to do so.
Those of you with a life may be unaware that the past 10 years have seen an explosion of alternative Sudoku rules:
German Whisper Lines, Dutch flat mates, Killer cages, Yin-yang (My favorite), Fog-of-war, Sum arrows, Even & Odd squares, Thermometers, Black dots, White dots, Jigsaw Sudoku, Renban lines, Nabner lines, Star Battle, Masyu loop, Inequality signs, Region sum lines
Whew.
Despite all the innovation in
rules
, there has been comparatively little innovation in
layout
. Pretty much all Sudoku use a 9x9 grid. There are variants using hexagons or triangles, but they’re rare and the solving experience isn’t that different from a grid.
To address this, I realized that it’s possible to tweak Sudoku’s rules to allow for cells of any shape, which produces a different layout on every puzzle:
These layouts are generated from
Voronoi Diagrams
, but you could design a Cracked Sudoku other ways too.
They still have 81 cells in 9 groups, just like classic Sudoku. However, instead of “rows” and “columns”, Cracked Sudoku have “runs”. A run is a strip of 2 to 9 cells that can’t contain repeated numbers. Runs aren’t “placed” by a designer per se; they are determined entirely by the layout. Here are all the runs in one puzzle:
The randomly assigned line colors exist only to help distinguish separate runs [1]. The cracked sudoku page currently only shows runs of selected cells, but I’m working on a version that shows all non-redundant runs all the time.
Runs connect all neighboring cells and can continue through the opposite side to form a longer run. “Opposite” for an N-sided polygon means “two sides that are N/2 sides away from each other”. That’s a confusing explanation for something I hope you intuitively understand from the image above.
Runs terminate on cells with an odd number of sides because the concept of “opposite sides” doesn’t work with an odd number of sides. I slightly darken odd-sided cells to help the solver’s intuition for where long runs might be.
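As a rough sketch of that rule (mine, not the author's), the opposite-side lookup reduces to modular arithmetic on the cell's side count:

/* Sketch of the opposite-side rule: for a cell with an even number of
 * sides n, the side opposite side i is n/2 steps around; odd-sided
 * cells have no opposite, so a run entering them terminates. */
int opposite_side(int side, int num_sides)
{
    if (num_sides % 2 != 0)
        return -1;  /* odd-sided cell: the run ends here */
    return (side + num_sides / 2) % num_sides;
}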
Weird Runs
Runs can form interesting structures. Here, two runs cross in more than one cell:
Runs can also be totally useless by staying within a group. I’m in the process of making the app not show these to reduce clutter.
Theoretically a run could form a loop. I didn’t encounter a loop in any of my generated puzzles, but it should be possible to set up by hand. I’m imagining an arrangement like this:
Puzzle Generation
Human-designed puzzles are more interesting to solve than computer-generated ones, so ideally I’d have a human-designed Cracked Sudoku for you (and me!). Unfortunately I’m not an experienced puzzle constructor, so I wrote a puzzle generator.
It works in 4 phases:
Layout generation: Randomly place 81 points and generate the Voronoi diagram for them. If the diagram has any runs longer than 9, restart step 1.
Make 9 groups of 9 cells. This loops through the groups, adding one cell each time, and backtracks if a group is “pinned in” by the other groups.
Fill cells with numbers. This picks random numbers for each cell and backtracks if a conflict is found. There is often no solution so I have to go all the way back to where I left off in step 2 to try a different grouping. After several failed groupings I give up and go back to step 1.
Reduce the given numbers. Loop through each cell, clearing its value and checking whether the puzzle still has exactly one solution; if not, I put the value back (a sketch of this step follows below).
And with that, I could generate Cracked Sudoku. I left it running long enough to generate thousands of puzzles – enough for several years of daily puzzles.
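For concreteness, step 4 is the standard uniqueness-preserving reduction. Here is a minimal sketch in C, assuming a hypothetical count_solutions() helper that solves the layout and stops counting at two:

/* Sketch of step 4 (given reduction), not the author's actual code.
 * count_solutions() is assumed to exist elsewhere and to return the
 * number of solutions found, capped at 2 for speed. */
#define NUM_CELLS 81
#define EMPTY 0

int count_solutions(const int grid[NUM_CELLS]);

void reduce_givens(int grid[NUM_CELLS])
{
    for (int i = 0; i < NUM_CELLS; i++) {
        int saved = grid[i];
        if (saved == EMPTY)
            continue;
        grid[i] = EMPTY;                 /* tentatively clear the cell */
        if (count_solutions(grid) != 1)  /* no longer a unique puzzle? */
            grid[i] = saved;             /* put the given back */
    }
}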
Future work
A few things I’d do if I kept working on this:
Try other layouts: concave polygons, curved dividing lines, etc. My gut tells me the puzzle rules still work for concave polygons, but you’d have to use something other than Voronoi Diagrams to generate them.
Experiment with different group shapes. I think it would be cool to make a puzzle where one group completely encircles another group.
Ensure cells have a minimum area and edges a minimum length.
Make the generator produce puzzles with more long runs.
Prevent groupings from being too “jagged”; this is just a matter of taste.
Manually design a cracked Sudoku. I’d like to experiment with unique cell layouts to purposely create looped runs. I also think there is opportunity for a color-by-numbers puzzle that reveals something when you “color all even numbers brown” or whatever. Which leads me to…
A Call for Puzzle Constructors
Have you made a Sudoku that was featured by
Cracking the Cryptic
? If so, I would love to coauthor a human-made Cracked Sudoku and have Cracking the Cryptic solve it (a nerd dream of mine). I don’t have enough construction experience to make a high quality one by myself. Email “daniel” at this website’s domain if you’re interested. For everyone else, subscribe to my newsletter to be alerted about the first human-made Cracked Sudoku.
Try it
You can play Cracked Sudoku
here
. It has a new puzzle every day, just like Wordle.
Solid-state batteries
remain several years from auto showrooms. The auto industry is pursuing the
batteries
, which replace liquid
electrolytes
with a solid ceramic or glass material, because of their potential to carry decisively more energy, charge faster and improve
vehicle safety
by reducing flammability over other types of
lithium-ion batteries
.
Now,
Mercedes-Benz
and solid-state battery manufacturer Factorial Energy have reached a hopeful halfway point, strapping
semi-
solid-state cells to the German automaker’s flagship electric sedan.
Mercedes says the battery delivers a real-world driving range beyond 1,000 kilometers (620 miles), about 25 percent farther than a conventional
lithium nickel manganese cobalt oxide
battery of the same size and mass. For comparison, the
2025 EQS 450+
is currently rated for 800 kilometers (497 miles) of range on Europe’s optimistic Worldwide Harmonized Light vehicles Test Procedure (WLTP) cycle, but just 627 kilometers (390 miles) on the U.S. Environmental Protection Agency’s more realistic estimates.
Mercedes and Factorial say the semi- (or “quasi-”) solid-state lithium-metal battery is the world’s first to make the jump from laboratory benches to a roadgoing car. Mercedes began conducting lab tests in Stuttgart at the end of 2024 before embarking on street testing in February.
“Being the first to successfully integrate lithium-metal
solid-state batteries
into a production vehicle marks a historic achievement in electric mobility,” says Siyu Huang, CEO and co-founder of Factorial Energy.
Mercedes-Benz Tests Semi-Solid Batteries
Mercedes
Huang says the FEST cells (for “
Factorial Electrolyte System Technology
“) feature a robust
energy density
of 391 watt-hours per kilogram. That compares with 300 Wh/kg or less for today’s leading high-nickel cells. The 106 amp-hour cells feature a
solid electrolyte
infused with gel or liquid. Factorial is also developing an all-solid-state “Solstice” cell with a sulfide electrolyte that targets up to 500 Wh/kg at the cell level, for up to 80 percent longer driving range. That 500 Wh/kg is double that of many high-nickel batteries, and about 2.5 times as good as that of the best lithium iron phosphate batteries.
“That’s an upper limit of where solid-state could go,” Huang says.
So automakers could potentially cut today’s battery packs roughly in half in size and weight while delivering identical driving range, or keep packs roughly the same size while dramatically boosting range or performance. Sharply downsized packs would create a chain of engineering gains, including lighter chassis, suspensions, cooling systems, and other components to support the batteries, as well as cost savings in packaging and more space for occupants and cargo.
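A back-of-the-envelope check of the quoted densities shows where that halving comes from (my arithmetic; the baseline figures for high-nickel and LFP cells are inferred from the article's own comparisons rather than stated outright):

/* Rough ratio check, not from the article. Baselines of ~300 Wh/kg for
 * leading high-nickel cells and ~200 Wh/kg for the best LFP cells are
 * inferred from the comparisons quoted above. */
#include <stdio.h>

int main(void)
{
    const double fest = 391.0;        /* FEST quasi-solid-state, Wh/kg    */
    const double solstice = 500.0;    /* Solstice all-solid-state target  */
    const double high_nickel = 300.0; /* leading high-nickel cells, Wh/kg */
    const double lfp = 200.0;         /* approx. best LFP cells, Wh/kg    */

    /* ~1.3x, in line with the ~25 percent range gain at equal pack mass. */
    printf("FEST vs high-nickel: %.2fx\n", fest / high_nickel);

    /* ~1.7x and ~2.5x: roughly the same energy in about half the mass
     * compared with a mid-200s Wh/kg pack. */
    printf("Solstice vs high-nickel: %.2fx\n", solstice / high_nickel);
    printf("Solstice vs LFP: %.2fx\n", solstice / lfp);
    return 0;
}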
Initially, Huang believes the tech will be best suited to premium and high-performance cars, as a superior alternative to high-nickel cells. The company also sees potential for
consumer devices
,
drones
, or marine applications with serious power-discharge demands.
Just as important as the performance gains, the FEST cells are compatible with existing lithium-ion battery manufacturing, according to Huang; the industry saw 846 gigawatt-hours of cells deployed globally in electrified cars in 2024. Factorial says its processes would allow battery makers to use roughly 80 percent of existing equipment. Huang estimated a battery maker could convert a 1-gigawatt factory to quasi-solid-state production for as little as US $10 million.
“So it’s much easier to ramp up, not only for manufacturing, but in yield improvements and cost reduction,” Huang says.
The Factorial product development team use an insulator in the lab.
Factorial Energy
Factorial’s Manufacturing Footprint
Factorial currently has a manufacturing footprint for 500,000 FEST cells annually, and is seeing battery yields of 85 percent—solid performance for a pilot facility, though Huang says a commercial manufacturer would need to see a roughly 95 percent yield to make cells profitably. The company
, which raised $200 million in a funding round led by Mercedes and Stellantis, is also supplying FEST cells for a
demonstration fleet
of Dodge Charger Daytonas. Those electric muscle cars are expected to hit streets in 2026. The Charger Daytona is among several electrified models from Alfa Romeo,
Chrysler
, Dodge,
Jeep
, and Maserati to be built on Stellantis’
STLA Large platform
.
The quasi-solid-state design also allows for a lithium-metal anode that replaces
graphite
and carries more energy. The Mercedes battery is also notable for its patented “floating cell carrier.” Those pouch cells expand during charging, and contract during discharge. Mercedes’ F1 division developed a flexible carrier to manage those volume changes, with
pneumatic actuators
to help maintain proper pressure inside cells.
“There’s like a ‘breathing behavior’ during the cycling of the cell,” Huang says. “So you need a system to accommodate that breathing and make it more efficient over its lifetime, while still maximizing energy efficiency.”
As for its all-solid-state Solstice batteries, Factorial has developed a 100 percent dry-cathode process that can eliminate all hazardous solvents in cathode coating. Those carcinogenic solvents currently require energy-intensive evaporation to safely recycle them. The design also bypasses the energy-sapping “formation” process, in which freshly assembled cells must be charged and discharged to activate materials and create a stable electrolyte layer on electrodes. Together, the dry coating and cell design promise more sustainable battery production with fewer costs and environmental impacts.
In the two decades I’ve been in this racket, I’ve never been angrier at myself for missing a story than I am about Apple’s
announcement on Friday
that the “more personalized Siri” features of Apple Intelligence, scheduled to appear between now and WWDC, would be delayed until “the coming year”.
I should have my head examined.
This announcement dropped as a surprise, and certainly took me by surprise to some extent, but it was all there from the start. I should have been pointing out red flags starting back at WWDC last year, and I am embarrassed and sorry that I didn’t see what should have been very clear to me from the start.
How I missed this is twofold. First, I’d been lulled into complacency by Apple’s track record of consistently shipping pre-announced products and features. Their record in that regard wasn’t perfect, but the exceptions tended to be around the edges. (Nobody was particularly clamoring for Apple to make a multi-device inductive charging mat, so it never generated too much controversy
when AirPower turned out to be a complete bust
.) Second, I was foolishly distracted by the “Apple Intelligence” brand umbrella. It’s a fine idea for Apple to brand its AI features under an umbrella term like that, similar to how a bunch of disparate features that allow different Apple devices to interoperate are under the “
Continuity
” umbrella. But there’s no such thing, technically speaking, as “Continuity”. It’s not like there’s an Xcode project inside Apple named Continuity.xcodeproj, and all the code that supports everything from AirDrop to Sidecar to iPhone Mirroring to clipboard sharing is all implemented in the same framework of code. It’s a marketing term, but a useful one — it helps Apple explain the features, and helps users understand them.
The same goes for “Apple Intelligence”. It doesn’t exist as a single thing or project. It’s a marketing term for a collection of features, apps, and services. Putting it all under a single obvious, easily remembered — and
easily promoted
— name makes it easier for users to understand that Apple is launching a new initiative. It also makes it easier for Apple to just say “
These are the devices that qualify for all of these features, and other devices — older ones, less expensive ones — get none of them.
”
Let’s say Apple were to quietly abandon the dumb Image Playground app next year. It just disappears from iOS 19 and MacOS 16. That would just be Apple eliminating a silly app that almost no one uses or should use. That wouldn’t be a setback or rollback of “Apple Intelligence”. I would actually argue that axing Image Playground would improve Apple Intelligence; its mere existence greatly lowers the expectations for how good the whole thing is.
1
What I mean by that is that it was clear to me from the WWDC keynote onward that some of the features and aspects of Apple Intelligence were more ambitious than others. Some were downright trivial; others were proposing to redefine how we will do our jobs and interact with our most-used devices. That was clear. But yet somehow I didn’t focus on it. Apple itself strongly hinted that the various features in Apple Intelligence wouldn’t all ship at the same time. What they didn’t spell out, but anyone could intuit, was that the more trivial features would ship first, and the more ambitious features later. That’s where the red flags should have been obvious to me.
In broad strokes, there are four stages of “doneness” or “realness” to features announced by any company:
Features that the company’s own product representatives will demo, themselves, in front of the media. Smaller, more personal demonstrations are more credible than on-stage demos. But the stakes for demo fail are higher in an auditorium full of observers.
Features that the company will allow members of the media (or other invited outside observers and experts) to try themselves, for a limited time, under the company’s supervision and guidance. Vision Pro demos were like this at WWDC 2023. A bunch of us got to use pre-release hardware and in-progress software for 30 minutes. It wasn’t like free range “Do whatever you want” — it was a guided tour. But we were the ones actually using the product. Apple allowed hands-on demos for a handful of media (not me) at Macworld Expo
back in 2007 with prototype original iPhones
— some of the “apps” were just screenshots, but most of the iPhone actually worked.
Features that are released as beta software for developers, enthusiasts, and the media to use on their own devices, without limitation or supervision.
Features that actually ship to regular users, and hardware that regular users can just go out and buy.
As of today — March 2025 — every feature in Apple Intelligence
that has actually shipped
was at level 1 back at WWDC. After the keynote, dozens of us in the press were invited to a series of small-group briefings where we got to watch Apple reps demo features like Writing Tools, Photos Clean Up, Genmoji, and more. We got to see predictive code completion in Xcode. What has shipped, as of today, they were able to show, in some functional state, in June.
For example, there was a demo involving a draft email message on an iPad, and the Apple rep used Writing Tools to make it “more friendly”. I was in a group of just four or five other members of the media, watching this. As usual, we were encouraged to interrupt with questions. Knowing that LLMs are non-deterministic, I asked whether, as the Apple rep was performing this same demo for each successive group of media members, the “more friendly” result was exactly the same each time. He laughed and said no — that while the results are very similar each time, and he hopes they continue to be (hence the laughing), that there were subtle differences sometimes between different runs of the same demo. As I recall, he even used Undo to go back to the original message text, invoked Writing Tools to make it “more friendly” again, and we could see that a few of the word choices were slightly different. That answered both my explicit question and my implicit one: Writing Tools generates non-deterministic results, and, more importantly, what we were watching really was a live demo.
We didn’t get to try any of the Apple Intelligence features ourselves. There was no Apple Intelligence “hands on”. But we did see a bunch of features demoed, live, by Apple folks. In my above hierarchy of realness, they were all at level 1.
But we didn’t see
all
aspects of Apple Intelligence demoed. None of the “more personalized Siri” features, the ones that Apple,
in its own statement announcing their postponement
, described as having “more awareness of your personal context, as well as the ability to take action for you within and across your apps”. Those features encompass three main things:
“Personal context” — Knowing details and information about you from a “semantic index”, built from the contents of your email, messages, files, contacts, and more. In theory, eventually, all the information on your device that you wish to share with Siri will be in this semantic index. If you can look it up on your device, Siri will be able to look it up on your device.
“Onscreen awareness” — Giving Siri awareness of whatever is displayed on your screen.
Apple’s own example usage
: “If a friend texts you their new address, you can say ‘Add this address to their contact card,’ and Siri will take care of it.”
“In-app actions” — Giving Siri the ability, through the App Intents framework, to do things in and across apps that
you
can do, the old fashioned way (yourself) in and across apps. Again, here’s Apple’s own example usage:
You can make a request like “Send the email I drafted to April and
Lilly” and Siri knows which email you’re referencing and which app
it’s in. And Siri can take actions across apps, so after you ask
Siri to enhance a photo for you by saying “Make this photo pop,”
you can ask Siri to drop it in a specific note in the Notes app — without lifting a finger.
There were no demonstrations of
any
of that. Those features were all at level 0 on my hierarchy. That level is called vaporware. They were features Apple
said
existed, which they
claimed
would be shipping in the next year, and which they
portrayed
, to great effect, in the signature “Siri, when is my mom’s flight landing?” segment of the WWDC keynote itself,
starting around the 1h:22m mark
. Apple was either unwilling or unable to demonstrate those features in action back in June, even with Apple product marketing reps performing the demos from a prepared script using prepared devices.
This shouldn’t have just raised a concern in my head. It should have set off blinding red flashing lights and deafening klaxon alarms.
Even the very engineers working on a project never know exactly how long something is going to take to complete. An outsider observing a scripted demo of incomplete software knows far less (than the engineers) just how much more work it needs. But you can make a rough judgment. And that’s where my aforementioned hierarchy of realness comes into play. Even outsiders can judge how close a public beta (stage 3) feels to readiness. A feature or product that Apple will allow the press to play with, hands-on (stage 2) is further along than a feature or product that Apple is only willing to demonstrate themselves (stage 1).
But a feature or product that Apple is unwilling to demonstrate, at all, is unknowable. Is it mostly working, and close to, but not quite, demonstratable? Is it only kinda sorta working — partially functional, but far from being complete? Fully functional but prone to crashing — or in the case of AI, prone to hallucinations and falsehoods? Or is it complete fiction, just an idea at this point?
What Apple showed regarding the upcoming “personalized Siri” at WWDC was
not
a demo. It was a concept video. Concept videos are bullshit, and a sign of a company in disarray, if not crisis.
The Apple that commissioned the futuristic “Knowledge Navigator” concept video
in 1987 was the Apple that was on a course to near-bankruptcy a decade later. Modern Apple — the post-NeXT-reunification Apple of the last quarter century — does not publish concept videos. They only demonstrate actual working products and features.
Until WWDC last year, that is.
My deeply misguided mental framework for “Apple Intelligence” last year at WWDC was something like this:
Some of these features are further along than others, and Apple is showing us those features in action first, and they will surely be the features that ship first over the course of the next year. The other features must be coming to demonstratable status soon.
But the mental framework I should have used was more like this:
Some of these features are merely table stakes for generative AI in 2024, but others are ambitious, groundbreaking, and, given their access to personal data, potentially dangerous. Apple is only showing us the table-stakes features, and isn’t demonstrating any of the ambitious, groundbreaking, risky features.
It gets worse. Come September, Apple held its annual big event at Apple Park to unveil the iPhone 16 lineup. Apple Intelligence features were highlighted in the announcement. Members of the media from around the world were gathered. That was a new opportunity, three months after WWDC, for Apple to demonstrate — or even better, offer hands-on access to the press to try themselves — the new personalized Siri features. They did not. No demos, at all. But they did promote them, once again, in the event keynote.
2
But yet while Apple still wouldn’t demonstrate these features in person, they did commission and broadcast a TV commercial showing these purported features in action, presenting them as a reason to purchase a new iPhone —
a commercial they pulled, without comment, from YouTube this week
.
Last week’s announcement — “It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year” — was, if you think about it, another opportunity to demonstrate the current state of these features. Rather than simply issue a statement to the media, they could have invited select members of the press to Apple Park, or Apple’s offices in New York, or even just remotely over a WebEx conference call, and demonstrate the current state of these features live, on an actual device. That didn’t happen. If these features exist in any sort of working state at all, no one outside Apple has vouched for their existence, let alone for their quality.
Duke Nukem Intelligence
Why did Apple show these personalized Siri features at WWDC last year, and promise their arrival during the first year of Apple Intelligence? Why, for that matter, do they now claim to “anticipate rolling them out in the coming year” if they still currently do not exist in demonstratable form? (If they
do
exist today in demonstratable form, they should, you know,
demonstrate them
.)
I’m not trying to be obtuse here. It’s obvious why some executives at Apple might have
hoped
they could promote features like these at WWDC last year. Generative AI is the biggest thing to happen in the computer industry since previous breakthroughs this century like mobile (starting with the iPhone, followed by Android), social media (Meta), and cloud computing (Microsoft, Google, and Amazon). Nobody knows where it’s going but wherever it’s heading, it’s going to be big, important, and perhaps lucrative. Wall Street certainly noticed. And prior to WWDC last year, Apple wasn’t in the game. They needed to pitch their AI story. And a story that involved nothing but table-stakes AI features isn’t nearly as compelling a story as one that involves innovative, breakthrough, ambitious personal features.
But while there’s an obvious appeal to Apple pitching the most compelling, most ambitious AI story possible, the only thing that was essential was telling a story that was true. If the truth was that Apple only had features ready to ship in the coming year that were table-stakes compared to the rest of the industry, that’s the story they needed to tell. Put as good a spin on it as possible, but them’s the breaks when you’re late to the game.
The fiasco here is not that Apple is late on AI. It’s also not that they had to announce an embarrassing delay on promised features last week. Those are problems, not fiascos, and problems happen. They’re inevitable. Leaders prove their mettle and create their legacies not by how they deal with successes but by how they deal with — how they acknowledge, understand, adapt, and solve — problems. The fiasco is that Apple pitched a story that wasn’t true, one that
some
people within the company surely understood wasn’t true, and they set a course based on that.
The Apple of the Jobs exile years — the Sculley / Spindler / Amelio Apple of 1987–1997 — promoted all sorts of amazing concepts that were no more real than the dinosaurs of
Jurassic Park
, and promised all sorts of hardware and (especially) software that never saw the light of day. Promoting what you
hope
to be able to someday ship is way easier and more exciting than promoting what you
know
is actually ready to ship. However close to financial bankruptcy Apple was when Steve Jobs returned as CEO after the NeXT reunification, the company was already completely bankrupt of credibility. Apple today is the most profitable and financially successful company in the history of the world. Everyone notices such success, and the corresponding accumulation of great wealth. Less noticed, but to my mind the more impressive achievement, is that over the last three decades, the company
also
accumulated an abundant reserve of credibility. When Apple showed a feature, you could bank on that feature being real. When they said something was set to ship in the coming year, it would ship in the coming year. In the worst case, maybe that “year” would have to be stretched to 13 or 14 months. You can stretch the truth and maintain credibility, but you can’t maintain credibility with bullshit. And the “more personalized Siri” features, it turns out, were bullshit.
Keynote by keynote, product by product, feature by feature, year after year after year, Apple went from a company that you couldn’t believe would even remain solvent, to, by far, the most credible company in tech. Apple remains at no risk of financial bankruptcy (and in fact remains the most profitable company in the world). But their credibility is now damaged. Careers will end before Apple might ever return to the level of “if they say it, you can believe it” credibility the company had earned at the start of June 2024.
Damaged
is arguably too passive. It was
squandered
. This didn’t happen to Apple. Decision makers within the company did it.
Who
decided these features should go in the WWDC keynote, with a promise they’d arrive in the coming year, when, at the time, they were in such an unfinished state they could not be demoed to the media even in a controlled environment? Three months later, who decided Apple should double down and advertise these features in a TV commercial, and promote them as a selling point of the iPhone 16 lineup — not just any products, but the very crown jewels of the company and the envy of the entire industry — when those features still remained in such an unfinished or perhaps even downright non-functional state that they
still
could not be demoed to the press? Not just couldn’t be shipped as beta software. Not just couldn’t be used by members of the press in a hands-on experience, but could not even be shown to work by Apple employees on Apple-controlled devices in an Apple-controlled environment? But yet they advertised them in a commercial for the iPhone 16, when it turns out they won’t ship, in the best case scenario, until months after the iPhone 17 lineup is unveiled?
When that whole campaign of commercials appeared, I — along with many other observers — was distracted by the fact that
none
of the features in Apple Intelligence had yet shipped. It’s highly unusual, and arguably ill-considered, for Apple to advertise
any
features that haven’t yet shipped. But one of those commercials was not at all like the others. The other commercials featured Apple Intelligence features that were close to shipping. We know today they were close to shipping because they were either in the iOS 18.1 betas already, in September, or would soon appear in developer betas for iOS 18.2 and 18.3. Right now, today, they’ve all actually shipped and are in the hands of iPhone 16 users. But the “Siri, what’s the name of the guy I had a meeting with a couple of months ago at Cafe Grenel?” commercial was entirely based on a feature Apple still has never even demonstrated.
Who said “
Sure, let’s promise this
” and then “
Sure, let’s advertise it
”? And who said “
Are you crazy, this isn’t ready, this doesn’t work, we can’t promote this now?
” And most important, who made the call which side to listen to? Presumably, that person was Tim Cook.
Even with everything Apple overpromised (if not outright lied about) at the WWDC keynote, the initial takeaway from WWDC from the news media was wrongly focused on their partnership with OpenAI. The conventional wisdom coming out of the keynote was that Apple had just announced something called “Apple Intelligence” but it was powered by ChatGPT, when in fact, the story Apple told was that they — Apple — had built an entire system called Apple Intelligence, entirely powered by Apple’s own AI technology, and that it spanned both on-device execution all the way to a new Private Cloud Compute infrastructure they not only owned but are powering with their own custom-designed server hardware based on Apple Silicon chips. And that on top of all that, as a proverbial cherry on top, Apple
also
was adding an optional integration layer with ChatGPT.
So, yes, given that the news media gave credit for Apple’s own actual announced achievements to OpenAI, Apple surely would have been given even
less
credit had they not announced the “more personalized Siri” features. It’s easy to imagine someone in the executive ranks arguing “
We need to show something that only Apple can do.
” But it turns out they announced something Apple
couldn’t
do. And now they look so out of their depth, so in over their heads, that not only are they years behind the state-of-the-art in AI, but they don’t even know
what
they can ship or
when
. Their headline features from nine months ago not only haven’t shipped but still haven’t even been demonstrated, which I, for one, now presume means they
can’t
be demonstrated
because they don’t work
.
Apple doesn’t often fail, and when it does, it isn’t a pretty sight at 1 Infinite Loop. In the summer of 2008, when Apple launched the first version of its iPhone that worked on third-generation mobile networks, it also debuted MobileMe, an e-mail system that was supposed to provide the seamless synchronization features that corporate users love about their BlackBerry smartphones. MobileMe was a dud. Users complained about lost e-mails, and syncing was spotty at best. Though reviewers gushed over the new iPhone, they panned the MobileMe service.
Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question:
“Can anyone tell me what MobileMe is supposed to do?” Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”
For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. Walt Mossberg, the influential Wall Street Journal gadget columnist, had panned MobileMe. “Mossberg, our friend, is no longer writing good things about us,” Jobs said. On the spot, Jobs named a new executive to run the group.
Tim Cook should have already held a meeting like that to address and rectify this Siri and Apple Intelligence debacle. If such a meeting hasn’t yet occurred or doesn’t happen soon, then, I fear, that’s all she wrote. The ride is over. When mediocrity, excuses, and bullshit take root, they take over. A culture of excellence, accountability, and integrity cannot abide the acceptance of any of those things, and will quickly collapse upon itself with the acceptance of all three.
Released my full transputer OS, K&R C compiler and utilities (1996)
In my previous article I talked about how I managed to create a self-contained operating system for the transputer. This included the basic operating system, a text editor, a Small-C compiler, and an assembler.
The year was 1995. I was 16 years old, "Lemon Tree" and "Zombie" were playing on the radio, ARPANET had closed around 1990 and its successor was becoming known as the Internet, and in Mexico only a handful of people used it, through CompuServe Mexico. I wasn't lucky enough to have that service.
I was evolving my operating system, modifying it and improving it, recompiling and rebooting. I was pretty interested in compiling any C language source code I could get my hands on, and I got hold of several CD-ROM discs, but I couldn't compile most things because I had a Small-C compiler, not a full C compiler.
This prompted me to extend my C compiler to support more C features. Because youth is bold, I kept comparing text fragments from the input directly, just like Small-C did, instead of writing a proper lexer. I was working on a platform with only 128 KB of RAM, and I could manage that feat because the transputer instruction set produced very small executables. But of course, a proper lexer would have made compilation faster.
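As a rough illustration of that style of parsing (the details and names here are illustrative, loosely echoing Small-C conventions, and are not taken from my actual compiler), keywords are recognized by comparing text directly at the input cursor, over and over, instead of tokenizing once:

/* Illustrative sketch of Small-C style parsing: every construct re-scans
   the raw source text; a real lexer would tokenize it only once. */
#include <ctype.h>
#include <string.h>

char *src;                       /* current position in the source text */

/* Return 1 and advance past the word if it appears here as a whole word. */
int amatch(const char *word, int len)
{
    if (strncmp(src, word, len) == 0 && !isalnum((unsigned char)src[len])) {
        src += len;
        return 1;
    }
    return 0;                    /* no match: leave the cursor untouched */
}

void statement(void)
{
    if (amatch("while", 5))   { /* ... parse a while loop ... */ }
    else if (amatch("if", 2)) { /* ... parse an if statement ... */ }
    /* ... */
}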
As I progressed in my implementation of the C language features, following appendix A of the K&R book, I discovered that most of them were fairly direct to implement, like struct and union. Typedef was the most complicated to understand. The precedence rules of pointer declarator syntax were very difficult, in particular for getting an array of function pointers right. I'm still proud of how I made byte-code descriptions of C types; the only hard part was getting the recursive calls right to handle precedence. Getting initializers to work was also hard, as they had a very loose syntax in real compilers.
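To give a flavor of the idea (this is a toy reconstruction, not the compiler's actual encoding), a declaration such as "array of pointers to functions returning int" can be described by a short byte string and decoded recursively, which is exactly where declarator precedence has to be handled with care:

/* Toy byte-code description of C types, read left to right:
   'A' = array of..., 'P' = pointer to..., 'F' = function returning..., 'I' = int.
   So int (*x[10])(void), an array of pointers to functions returning int,
   becomes "APFI". */
#include <stdio.h>

const char *describe(const char *t)
{
    switch (*t++) {
    case 'I': printf("int");                 return t;
    case 'P': printf("pointer to ");         return describe(t);
    case 'A': printf("array of ");           return describe(t);
    case 'F': printf("function returning "); return describe(t);
    default:  printf("?");                   return t;
    }
}

int main(void)
{
    describe("APFI");    /* prints: array of pointer to function returning int */
    putchar('\n');
    return 0;
}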
The C compiler took me a lot more time than the Pascal compiler; according to my revision notes, I needed almost a full year to get a nearly complete K&R C compiler (K&R means Kernighan & Ritchie, that is, a compiler as described in their original book). My information was outdated: I had the 1978 edition, but the book had been updated in 1988, and ANSI C was approved in 1989.
I tuned the preprocessor with each new piece of source code I found for testing. I was amazingly happy when my compiler finally managed to run a chess program by Vern Paxson from an obscure contest run on USENET, the IOCCC (International Obfuscated C Code Contest). I found this program in the book Expert C Programming by Peter van der Linden (1994).
Once the C compiler supported floating point, I was able to port back the ray tracer I had made in Pascal, and I developed a 3D polygonal modeling program following the course and exercises in the book 3D Computer Graphics by Alan Watt (1993).
The last additions to the host Z280 computer were a SCSI card and a set of peripherals basically recycled from tech trash: a SCSI hard disk drive (a "powerful" 40 MB when the common ones were 500 MB), a SCSI DAT tape drive, and a SCSI CD-ROM reader (at least this one was new!).
I added to my transputer operating system a way to read High Sierra and ISO-9660 structures to access CD-ROM data, and a program to unzip files from these discs. I was pretty proud of how I managed to compile the source code of a public-domain version of Inflate for ZIP files. Another nice thing is that these CD-ROM drives could play audio CDs on their own: you just inserted a CD (mine was Everybody Else Is Doing It, So Why Can't We?) and listened to music while working.
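For context, reading such a disc mostly comes down to finding the primary volume descriptor and walking the directory records from there; the sketch below shows only that first step, and it is illustrative code, not the routine from the operating system:

/* Illustrative sketch: locate the ISO-9660 primary volume descriptor.
   Volume descriptors start at logical sector 16 (2048-byte sectors),
   carry the identifier "CD001", and the list ends with type 255. */
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 2048

int find_primary_volume_descriptor(FILE *cdrom, unsigned char pvd[SECTOR_SIZE])
{
    for (long sector = 16; ; sector++) {
        if (fseek(cdrom, sector * SECTOR_SIZE, SEEK_SET) != 0 ||
            fread(pvd, 1, SECTOR_SIZE, cdrom) != SECTOR_SIZE)
            return 0;                 /* read error */
        if (memcmp(pvd + 1, "CD001", 5) != 0)
            return 0;                 /* not an ISO-9660 descriptor */
        if (pvd[0] == 1)
            return 1;                 /* primary volume descriptor found */
        if (pvd[0] == 255)
            return 0;                 /* terminator: no PVD on this disc */
    }
}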
The tape drive used the DAT format, and I made some backups using my TAR program. I have completely forgotten where those tapes are stored; now that's the most secure backup storage! The kind no one can find.
I reached the high point of my transputer development around the summer of 1996, but the processor was starting to show its age. 128 KB of RAM was barely enough to work with. As I bloated my programs with more features, the transputer looked slower and slower. Unfortunately, another board was never built; I would have been very interested in handling multiprocessing.
Restoring the operating system
The work of restoring my operating system wasn't as easy as I thought. I had to expand the buildboot.c program to create floppy disk images in the latest file system format, and also hard disk images. Later, as I put the image file together, I needed support for growing disk directories.
The main difference is the expansion of directory entries to 64 bytes. This was a by-product of discovering that I was free to specify my operating system as I liked, so the first thing for me was "the great escape" from the 8.3 filename format common in MS-DOS at the time (Windows 95 had been released barely a few months earlier). It wasn't done in a single step: my first expansion of the file system allowed fifteen-letter names using 32-byte directory entries (Jan/01/1996), and the next expansion allowed thirty-one-letter names using 64-byte entries (Feb/06/1996).
The 64-byte entries of my file system. Notice flags for the C subdirectory.
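The exact on-disk layout is best read from the screenshot above; the struct below is only a hypothetical reconstruction of what a 64-byte entry with room for a 31-letter name could look like, with field names and ordering invented for illustration:

#include <stdint.h>

/* Hypothetical 64-byte directory entry; the real field order and sizes in
   this file system may differ. */
struct dir_entry {
    char     name[32];        /* up to 31 characters plus a terminator          */
    uint8_t  flags;           /* e.g. the directory flag visible on the C entry */
    uint8_t  reserved[3];
    uint32_t first_sector;    /* where the file's data starts                   */
    uint32_t size;            /* file size in bytes                             */
    uint32_t date;            /* timestamp                                      */
    uint8_t  unused[16];      /* pads the entry to exactly 64 bytes             */
};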
Once the disk image builder was ready, I put the SOM.32.bin file along with Interfaz.p inside the floppy image, and tried to boot it after adding the required services to the transputer emulator (including three drive units: floppy disk, RAM disk, and hard drive). It got stuck after reading the boot sector; the cause was that the sector number was being passed in big-endian byte order. After changing that in the emulator, it was able to read the whole file into memory, and then it got stuck again.
Apparently the timers didn't work, as the screen wasn't being updated, so I added timer handling for low-priority threads, and the screen became visible. Still, it got stuck. I made a brief modification to the interface code with the host (a semaphore), but it still didn't work. The only remaining explanation was that the MAESTRO.CMG file I was using spoke a different protocol to interface with the operating system. I searched through my files until I found the correct file and source code; using that file to boot the operating system, it worked on the first try, and it advanced far enough to complain about a missing C:/Interfaz.p. I had forgotten how my operating system booted up!
Very worn 3.5" floppy disk from 1993 with MAESTRO assembler code for transputer.
Notice it is double density (720 KB).
The minimum required files
It turns out that the floppy disk was used to boot the operating system (the file SOM.32.bin), and in turn it would access a hard disk through drive C: to load the command-line processor.
Once this file was loaded, I needed a few more files to be able to compile programs inside the operating system: editor.p, cc.p, ens.p, and ejecutable.p.
I started by testing my text editor, and I was pretty pleased watching how it colorized the preprocessor directives and the C language elements. All the source code is written with Spanish comments, and there is just too much code to translate it to English.
The visual text editor displaying C source code with syntax coloring.
I never implemented a current directory feature, so you need to type the full path for each file. A feature of my operating system is that the A: drive is assumed by default, so you need to write c:cc to execute the C compiler from the hard disk drive. This is also why all the paths have short names like C (for my source code) and lib (for the libraries). And of course, you need to type the path for each DIR command.
However, I was mystified when I tried to run the C compiler. After typing cc, nothing appeared; it kept saying "Archivo inexistente" (file does not exist). I was pretty sure it had worked the last time (almost 30 years ago).
I tried the MEM command, and it said 99240 bytes free. Running the Info.p utility on the C compiler executable gave me the answer: program size = 38112 bytes, stack size = 8192 bytes, global data = 56816 bytes; adding it all up, it required 103120 bytes. I was baffled! I was sure I had compiled programs before without any extra memory.
It took me a full hour to remember that the transputer processor has 4 kilobytes of internal memory. If you map the 128K of RAM in the classical mirrored way, the processor never uses the external bus for the lowest memory area ($80000000-$80000fff), but it does use the bus for any address outside it, like $80020000-$80020fff, which mirrors the lower 4K of the external RAM. So a simple enhancement to gain an extra 4K was to make the workspace start at $80021000 (it grows toward lower addresses), letting it use that otherwise shadowed 4K of external RAM.
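To make the trick concrete, here is one reading of the memory map, using the addresses from the paragraph above; the symbolic names are invented for illustration:

/* Illustrative memory map; the addresses come from the text above, the
   layout interpretation and the names are mine. */
#define INTERNAL_RAM_BASE 0x80000000u   /* 4 KB on-chip RAM, no bus access */
#define INTERNAL_RAM_END  0x80000fffu
#define EXTERNAL_RAM_SIZE 0x00020000u   /* 128 KB of external RAM          */
#define MIRROR_BASE       0x80020000u   /* the external RAM repeats here   */
/* The first 4 KB of external RAM is shadowed by the on-chip RAM at
   0x80000000-0x80000fff, but it is still reachable through the mirror at
   0x80020000-0x80020fff.  Starting the workspace at 0x80021000 and letting
   it grow downward lets the program claim that otherwise hidden 4 KB. */
#define WORKSPACE_START   0x80021000u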
I modified the boot up code, and the C compiler was up and running!
How to assemble your own files
Given the variable-length instructions of the transputer, it was too difficult to write a linker. So instead I resorted to a mixed approach: my assembler could process concatenated transputer assembler files and generate a preliminary executable. This file is then processed by ejecutable.p, where the user enters the stack size, and the program adds header information to indicate how much stack space the executable will use.
Executable header

Offset   Field
+0       OTBE (0x4542544f) Oscar Toledo Binario Ejecutable
+4       0x08031996 (Mar/08/1996)
+8       Code size
+12      Extra data size
+16      Stack size
+20      Data size
+24      Unused
...      ...
+60      Unused
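Expressed as a C struct (the field names are invented here; the offsets and meanings follow the table):

#include <stdint.h>

struct executable_header {
    uint32_t magic;       /* +0  'OTBE' = 0x4542544f, Oscar Toledo Binario Ejecutable */
    uint32_t version;     /* +4  0x08031996 (Mar/08/1996)                             */
    uint32_t code_size;   /* +8                                                       */
    uint32_t extra_data;  /* +12 extra data size                                      */
    uint32_t stack_size;  /* +16 entered by the user in ejecutable.p                  */
    uint32_t data_size;   /* +20                                                      */
    uint32_t unused[10];  /* +24..+60 reserved; the header is 64 bytes in total       */
};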
I never implemented proper command-line processing in my operating system, so every program had to start with an entry sequence to provide the options and filenames. This in turn made it harder to implement batch command files, because all the programs required user input.
I started to compile each program in the source tree I had, and when I executed the raytracing program and the 3D modeler program, I couldn't get any image in the output.
I had also forgotten how to link my own programs! It turns out I had made a document describing how to rebuild each program, sitting nicely at Documentos/Programas.doc, and the Mat.len library was meant to be placed at the start of the assembler program so it could load its constants properly. Without that startup code, every mathematical function returned zero.
By the way, I rediscovered why I made a small 512 KB RAM disk on the B: drive. First, that was the memory available in the host Z280 system; second, it helped me assemble programs faster, as the assembler didn't have to access the SCSI hard drive.
So I only had to compile the C language program (read from the hard disk drive) to the RAM disk, and assemble on the RAM disk. Finally, the completed executable could be written back to the hard disk. This was way faster than any assembling I had done before over floppy disks, and it was pretty much required as my programs had grown.
Floating-point errors
Now I got an image from the 3D modeler, but it had obvious mathematical miscalculations. I could either review it routine by routine and try to get an idea of what was wrong, or go over the small set of floating-point instructions and check that they were emulated correctly. I was lucky to find almost immediately that fpldnldbi removed items from the processor's floating-point stack in a strange way, and that fpldnlsni used a doubled index (probably I cut and pasted the code from fpldnldbi). And the Utah teapot was displayed in all its 3D glory for the first time since 1996. Good memories!
Utah teapot with rendering bugs in transputer emulation.
Utah teapot rendered in 3D after the corrections.
I got the Utah teapot vertex data from Appendix D of the book "3D Computer Graphics", which included a course on 3D computer graphics; the project for the reader (student) was to write a 3D modeler based on the calculations presented in the book. I learned a lot from this book, and I was able to code the modeler in a few weeks.
Interestingly, I thought the ray tracer worked, but it had a very particular failure while handling cylinders, creating random black pixels in the image. Along the way I also corrected the cloudy-sky function: it passed col[2] instead of col2, causing a crash because it wrote to random memory addresses.
I remembered this was a known bug: I never could make the cylinder code work at the time, and for some reason I couldn't find the cause. Now, with full emulation, I was able to pinpoint the bug in the quadratic intersection function.
It did a floating-point load and multiply in single precision, and then immediately saved the value as a double. So the bug wasn't in the ported code, but in my C compiler.
It happens that my C compiler tries to optimize code using 32-bit floating point if all values are 32-bit, but if the expression is too complex, it saves the current value in the workspace as a temporary. There is just one small problem: it always saved the temporary as a 64-bit floating-point value.
The change was "relatively" easy. The original code is in my compiler's ccexpr.c file.
The function haz_compatible() makes the types of the left and right nodes of the expression tree compatible, and returns a non-zero value for floating-point operations. The transputer assumes we are passing the right values to the instruction, so we can use the fpmul instruction for both float and double.
However, we need to add information for the exception of both being float type, for the purposes of saving the temporary value if the expression tree is too complex:
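The original post shows this change as a screenshot of ccexpr.c, which is not reproduced here; the fragment below is only a hypothetical sketch of the idea, with invented helper signatures, showing how the float/double distinction could be recorded through crea_nodo into the esp[] array:

/* Hypothetical sketch, not the code from ccexpr.c: when haz_compatible()
   reports a floating-point operation, also remember in the node whether
   both operands are 32-bit floats, so the code generator can later spill a
   single-precision temporary. */
#define MAX_NODOS 512

int esp[MAX_NODOS];          /* 1 = float operands, 0 = double (or integer) */
int nodo_izq[MAX_NODOS];
int nodo_der[MAX_NODOS];
int nodo_op[MAX_NODOS];
int nodos;

int crea_nodo(int op, int izq, int der, int es_float)
{
    int n = nodos++;
    nodo_op[n]  = op;
    nodo_izq[n] = izq;
    nodo_der[n] = der;
    esp[n]      = es_float;  /* consulted by gen_nodo() when saving a temporary */
    return n;
}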
This way the esp[] array (written by crea_nodo) will contain one if the operands are floats instead of doubles. As the code evolved from a C compiler that didn't support struct, the pointers are saved as integers in arrays (yes, very unportable!).
Now we should modify the code for saving temporary values in ccgencod.c. This is the original code in the function gen_nodo():
} else {
    gen_nodo(nodo_der[nodo]);    /* generate the right subtree first...      */
    salva(1);                    /* ...and spill its result to the workspace */
    gen_nodo(nodo_izq[nodo]);    /* then generate the left subtree           */
    if(op == N_SUMAPF) {
        recupera(2);             /* reload the temporary and add (double)    */
        return;
    } else if(op == N_MULPF) {
        recupera(3);             /* reload the temporary and multiply        */
        return;
    }
    recupera(1);                 /* plain reload of the temporary            */
}

/*
** Salva el registro A en la pila.
** (Saves register A on the stack.)
*/
salva(flotante)
int flotante;
{
    if(flotante) {
        emite_texto("ajw -2\r\nldlp 0\r\nfpstnldb\r\n");
        pila -= 2;
    } else {
        emite_texto("ajw -1\r\nstl 0\r\n");
        --pila;
    }
}

/*
** Recupera el registro A desde la pila.
** (Recovers register A from the stack.)
*/
recupera(flotante)
int flotante;
{
    if(flotante) {
        emite_linea("ldlp 0");
        if(flotante == 1)
            emite_linea("fpldnldb");
        else if(flotante == 2)
            emite_linea("fpldnladddb");
        else if(flotante == 3)
            emite_linea("fpldnlmuldb");
        emite_linea("ajw 2");
        pila += 2;
    } else {
        emite_texto("ldl 0\r\najw 1\r\n");
        ++pila;
    }
}
It saves the temporary value using the function salva(), and restores it using recupera(). We add the information for float values:
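The actual modification appears as a screenshot in the original post; as a hedged sketch of the same idea (the single-precision codes 4-6 and the choice of the fpstnlsn, fpldnlsn, fpldnladdsn and fpldnlmulsn instructions are assumptions of this sketch, not necessarily the fix used in the real compiler), salva() and recupera() could distinguish single-precision temporaries like this:

#include <stdio.h>

static int pila;                                  /* workspace slots in use */

static void emite_texto(const char *s) { fputs(s, stdout); }
static void emite_linea(const char *s) { printf("%s\r\n", s); }

/* flotante: 0 = integer, 1 = double, 4 = single precision */
static void salva(int flotante)
{
    if (flotante == 1) {                          /* 64-bit double: two words */
        emite_texto("ajw -2\r\nldlp 0\r\nfpstnldb\r\n");
        pila -= 2;
    } else if (flotante == 4) {                   /* 32-bit float: one word   */
        emite_texto("ajw -1\r\nldlp 0\r\nfpstnlsn\r\n");
        pila -= 1;
    } else {                                      /* integer                  */
        emite_texto("ajw -1\r\nstl 0\r\n");
        --pila;
    }
}

/* flotante: 1-3 reload/add/multiply as double, 4-6 the same as single */
static void recupera(int flotante)
{
    if (flotante >= 1 && flotante <= 3) {
        emite_linea("ldlp 0");
        emite_linea(flotante == 1 ? "fpldnldb" :
                    flotante == 2 ? "fpldnladddb" : "fpldnlmuldb");
        emite_linea("ajw 2");
        pila += 2;
    } else if (flotante >= 4 && flotante <= 6) {
        emite_linea("ldlp 0");
        emite_linea(flotante == 4 ? "fpldnlsn" :
                    flotante == 5 ? "fpldnladdsn" : "fpldnlmulsn");
        emite_linea("ajw 1");
        pila += 1;
    } else {
        emite_texto("ldl 0\r\najw 1\r\n");
        ++pila;
    }
}

int main(void) { salva(4); recupera(6); return 0; }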
All that remained was to recompile the C compiler and the ray tracer program. After the correction it rendered the test cylinder correctly, and I could finally see the ray-traced robot I made in 1996 as intended. It only took me 29 years to fix the bug! Now it looks so simple, but at the time I couldn't find it.
Ray-traced robot with rendering bugs caused by a bug in the C compiler for the transputer.
Ray-traced robot rendered on the transputer after the C compiler corrections.
Installing it (just like new!)
Make sure your Terminal is set up for 80x25 characters and uses ISO Latin 1 (ISO-8859-1) as its character set. Did I tell you I discovered I could innovate? I used the $80-$9f characters (not used by ISO-8859-1) for box-drawing graphics like on the PC, since I could define whatever I wanted in the VGA graphics card, but currently these will appear as blanks in the terminal. Maybe I could make a converter to UTF-8.
The os_final directory was added to the git repository, and inside it the tree directory contains the backup of my operating system. There are already prebuilt images of the floppy disk and the hard disk, so you only need to run the emulator with the proper arguments. However, for the sake of completeness, here is the sequence of steps to rebuild the drive image files.
The following command lines are provided in the build_f1.sh, build_f2.sh, and build_hd.sh shell scripts.
With the command line in the os_final directory, the 40 MB hard disk drive image file is created using the buildboot utility; it includes only the file Interfaz.p (the command-line processor).
Now the files can be copied to the hard disk drive using this command line (CD creates a directory, and COPIA copies files):
CD C:/Desarrollo
COPIA Ajedrez.c C:/Desarrollo
COPIA Reloj.c C:/Desarrollo
COPIA Tetris.c C:/Desarrollo
COPIA Modelado.c C:/C
COPIA M3D.c C:/C
CD C:/Documentos
COPIA *.doc c:/Documentos
HALT
You can now reset the floppy disk image (just make sure it contains the SOM.32.bin file), or manually delete the files using the BORRA command (delete). The whole operating system (including source code) fits in under 3 megabytes.
Using the operating system
You can see a list of the available commands by typing AYUDA (if you read my previous article, this is considerably more extensive than the few commands of my early operating system).
AYUDA, shows a list of the available commands.
DIR [path] -A -P, displays the directory for the path; the -A option shows names in two columns, and the -P option shows the directory in pages.
COPIA [wildcard] [path], copies files from the source to the target path.
REN path new_name, renames a file to the new name.
ILUSTRA path, displays the content of a file.
BORRA wildcard, deletes a file. It can use wildcards.
VER, shows the command-line processor version.
MEM, displays the available memory bytes.
LINEA text, displays the text on the screen (useful for batch command files).
BLP, clears the screen.
FIN, exits the command-line processor.
BD path, deletes a directory.
CD path, creates a directory.
To run several commands in sequence you can use the semicolon, for example:
C:EDITOR ; C:CC ; C:ENS
Or you can see a small example of multitasking by typing this command to get a clock in the upper-right corner of the screen:
@C:/DESARROLLO/RELOJ
To exit the operating system use C:HALT, as this will correctly close the drive image files.
Compiling programs
To compile programs just type C:CC and press Enter. Type N for the first two questions, then type the path to the source code file, for example, c:/C/Hora.c, and then the target assembler file, for example, b:hora.len.
Next you should assemble the program: type C:ENS and press Enter. Type b:hora.len and press Enter, type c:/lib/stdio.len and press Enter, then press Enter alone, and type b:hora.e as the output file.
Now we need to convert it into an executable file: type C:EJECUTABLE and press Enter, type b:hora.e and press Enter, type 8192 and press Enter (the stack size), type 0 (zero) and press Enter (the extra data size), and type C:hora.p as the output file.
To run the program type C:Hora and press Enter.
I was starting to get a grasp of how documentation was going to become fully electronic, so I wrote some incomplete descriptions of the operating system, the file system, the programs, and some further notes. All these files are available in the Documentos directory. It was a very different time.
If you read my previous article, you can access the help window of the visual editor using Fn+F1 in macOS. You can read text files using Fn+F4 and navigate files with the arrow keys.
There are three animations of the ray-traced robot walking (I used these with the Pascal version of the ray tracer); however, I haven't tested them, and the processing utility to create a FLI animation was written in Z280 machine code.
I like how this operating system shows original thinking for solving problems: working with only 128 KB of RAM, having 31-letter filenames, using full pathnames to refer to files, repurposing the free space of the character set for graphical characters, managing a RAM disk to accelerate development, and of course getting to develop a K&R C compiler on it.
I could have done more things; however, the host computer was showing its slowness, in particular with graphics, and I had reached the limit of the RAM. This prevented me from expanding the C compiler to ANSI C, I could compile very few programs from external sources, and the linker was never finished, so I couldn't compile programs as separate modules (and in turn the C compiler never implemented extern or static). I was stuck.
My experiment with the transputer was technically finished, although I wrote a few more programs with the future in mind (like a program to create CD-ROM file systems, expecting a CD recorder I never got).
My first visit to an Internet café was around the summer of 1996. There I sadly discovered Inmos's demise, and it was a total disappointment, as their promised T9000 with a tenfold speed increase never materialized.
Also, a newer 32-bit processor was coming, and my father was building a new computer for which I wrote new software; later I would make my windowed operating system, and all my transputer tools would be ported to this bigger machine.
A by-product of my C compiler was porting it back to the Z280 computer to compile an educational integrated Z80 development tool (editor/assembler/uploader) I had already made using Mark DeSmet's PCC for MS-DOS. I cheered when I compiled the same program for my Z280 computer without any changes. That was the last gasp of the Z280 computer.
'Uber for nurses' exposes 86K+ medical records, PII via open S3 bucket
Cybersecurity researcher Jeremiah Fowler discovered and reported to Website Planet a non-password-protected database that contained over 86,000 records belonging to ESHYFT, a New Jersey-based HealthTech company that operates in 29 states. It offers a mobile app platform that connects healthcare facilities with healthcare workers, including Certified Nursing Assistants (CNAs), Licensed Practical Nurses (LPNs), and Registered Nurses (RNs).
The publicly exposed database was not password-protected or encrypted. It contained 86,341 records totaling 108.8 GB in size. The majority of the documents were contained inside a folder labeled “App”. In a limited sampling of the exposed documents, I saw records that included profile or facial images of users, .csv files with monthly work schedule logs, professional certificates, work assignment agreements, and CVs and resumes that contained additional PII. One single spreadsheet document contained 800,000+ entries that detailed nurses’ internal IDs, facility names, times and dates of shifts, hours worked, and more.
I also saw what appeared to be medical documents uploaded to the app. These files were potentially uploaded as proof for why individual nurses missed shifts or took sick leave. These medical documents included medical reports containing information of diagnosis, prescriptions, or treatments that could potentially fall under the ambit of HIPAA regulations.
The name of the database as well as the documents inside it indicated that the records belonged to ESHYFT. I immediately sent a responsible disclosure notice to the company, and the database was restricted from public access over a month later. I received a response thanking me for the notification stating: “Thank you! we’re actively looking into this and working on a solution”. It is not known if the database was owned and managed by ESHYFT directly or via a third-party contractor. It is also not known how long the database was exposed before I discovered it or if anyone else gained access to it. Only an internal forensic audit could identify additional access or potentially suspicious activity.
ESHYFT provides a mobile platform that connects healthcare facilities with qualified nursing professionals. It is available on both Apple’s App Store and the Google Play Store. Apple no longer provides user statistics, but the ESHYFT app has been downloaded more than 50,000 times from the Google Play Store.
The application claims to offer nurses the flexibility to select shifts that fit their schedules while providing healthcare facilities with access to vetted W-2 nursing staff to meet their needs. The platform is available in following U.S. states: AL, AZ, AR, CA, CT, DE, FL, GA, IL, IN, IA, KS, KY, MD, MI, MN, MO, NE, NJ, OH, PA, RI, SC, TN, VT, VA, WA, WI, and WV.
This collage of screenshots shows how identity or tax documents appeared in the database.
This collage shows how app users’ individual profile pictures appeared in the database. Some included lanyards showing medical IDs or other credentials.
This screenshot shows a disability claim document that included PII, doctor’s data, and additional information about the claim.
This screenshot shows a medical visit summary that discloses PII and a diagnosis.
This screenshot shows how .csv files appeared in the database. The documents were variously labeled as timecards, user addresses, disabled users, and various user related information.
A report by the Health Resources & Services Administration (NCHWA) projects a 10% nationwide shortage of registered nurses by 2027. With the growing need for healthcare workers, platforms like ESHYFT serve an important role in filling those staffing deficiencies. This means that workers who perform offline jobs (like nursing) are now integrated with online technology, and healthtech companies need to implement robust privacy safeguards to bridge the gap securely.
As more hospitals and healthcare workers depend on technology for data storage, care management, and employment, the need for increased cybersecurity measures across the entire industry is inescapable. Hospitals are considered critical infrastructure and have been routinely targeted by cybercriminals. In recent years, numerous networks have been hit with devastating ransomware attacks. In my opinion, the data and PII of the doctors, nurses, and staff are equally as important as the healthcare facilities themselves.
Any exposure of Personally Identifiable Information (PII), salary details, and work history of nursing professionals could result in significant potential risks and vulnerabilities for both the affected individuals and the healthcare facilities that employ them. Scans of identification documents (such as driver’s licenses or Social Security cards), when combined with addresses and other contact details, could hypothetically provide a gateway for cybercriminals to commit identity theft or financial fraud in the victims’ names.
Another potential risk of exposed personal and professional details would be highly targeted phishing campaigns that could use real information to deceive victims with employment scams or drive them into revealing additional personal or financial data. I am not suggesting or implying that ESHYFT’s data or that of their users is at risk of any type of scam or fraudulent activity, or compromised in any manner. I am only providing a general observation of potential cybersecurity risks as it relates to the exposure of personally identifiable information. This commentary is intended solely to highlight the broader landscape of privacy and cybersecurity considerations.
My recommendations for healthtech companies and medical software providers would be to take proactive cybersecurity measures to protect their data and prevent unauthorized access. This would include mandatory encryption protocols of sensitive data and regular security audits of internal infrastructure to identify potential vulnerabilities.
It is always a good idea to limit the storage of sensitive data and anonymize records where possible.
This includes assigning data an expiration date once it is no longer in use.
In this particular case, it appears that user files were uploaded to a single folder and not segregated based on their sensitivity. As an example, a user’s profile image could have a low sensitivity rating while proof of a medical exam could have a high sensitivity rating. In theory, these two documents should not be stored in the same folder.
When sensitive data is segregated and encrypted, it adds an additional layer of protection in the event of an accidental exposure or malicious attack.
Moreover, multi-factor authentication (MFA) should be a requirement for any application where the user has access to potentially sensitive information or documents. By requiring MFA, even if user credentials (like username and password) are exposed, it is more difficult for someone to simply login and gain full access to the application or user dashboard.
Lastly, for healthtech companies of any size, it is a good idea to have a data breach response plan in place and a dedicated communication channel for reporting potential security incidents.
Far too often I observe that companies have only customer support or sales contacts listed, forgoing the contact information of key stakeholders who need to act in case of a data breach. When sensitive data is publicly exposed, every second counts, and any delay in mitigation and recovery could be potentially catastrophic.
It is also important to give timely responsible disclosure notices to anybody who could be directly affected by the breach. After any data incident, users should be notified and educated on how to recognize phishing attempts in relation to the specific application or service that was potentially compromised. Protecting users from the real risks of social engineering and phishing attempts makes common sense and benefits both the service provider and the user or customer.
I imply no wrongdoing by Shiftster LLC dba ESHYFT, or by any contractors or affiliates. I do not claim that internal data or user data was ever at imminent risk. The hypothetical data-risk scenarios I have presented in this report are exclusively for educational purposes and do not reflect any actual compromise of data integrity. This report should not be construed as a reflection of any organization’s specific practices, systems, or security measures.
As an ethical security researcher, I do not download the data I discover. I only take a limited number of screenshots solely for verification purposes. I do not conduct any activities beyond identifying the security vulnerability and notifying the relevant parties. I disclaim any responsibility for any and all actions that may be taken as a result of this disclosure. I publish my findings to raise awareness on issues of data security and privacy. My aim is to encourage organizations to proactively safeguard sensitive information against unauthorized access.
Campaign to bar under-14s from having smartphones signed by 100,000 parents
Guardian
www.theguardian.com
2025-03-13 01:05:46
Surrey was region of UK with most sign-ups for Smartphone Free Childhood’s parent pact, launched last year An online campaign committing parents to bar their children from owning a smartphone until they are at least 14 has garnered 100,000 signatures in the six months since its launch. The Smartphon...
An online campaign committing parents to bar their children from owning a smartphone until they are at least 14 has garnered 100,000 signatures in the six months since its launch.
The Smartphone Free Childhood campaign launched a “parent pact” in September in which signatories committed to withhold handsets from their children until at least the end of year 9, and to keep them off social media until they are 16.
Daisy Greenwell, a cofounder of Smartphone Free Childhood, said parents had been put in an “impossible position” by the weak regulation of big tech companies, leaving them with a choice of getting their children a smartphone “which they know to be harmful” or leaving them isolated among their peers.
“The overwhelming response to the parent pact shows just how many families are coming together to say ‘no’ to the idea that children’s lives must be mediated by big tech’s addictive algorithms,” she said.
The biggest regional backing of the pact is in Surrey, where there have been 6,370 signatories, followed by Hertfordshire, where the city of St Albans is attempting to become Britain’s first to go smartphone-free for all under-14s.
More than 11,500 schools have signed – representing more than a third of the total of 32,000 in the UK.
Celebrity signatories include the singer Paloma Faith, the actor Benedict Cumberbatch and the broadcaster Emma Barnett.
According to research by the media regulator, Ofcom, 89% of 12-year-olds own a smartphone, a quarter of three- and four-year-olds do, and half of children under 13 are on social media.
Supporters of a handset ban argue that smartphones distract children from schoolwork, expose them to harmful online content and facilitate addictive behaviour.
Last week, after opposition from ministers, the Labour MP Josh MacAlister amended his private member’s bill that had proposed raising the age of digital consent from 13 to 16, meaning that social media companies would have required a parent’s permission to handle the data of a child under that age.
The bill now commits the government to researching the issue further rather than implementing immediate change.
Some experts have cautioned that a full ban is impractical or excessive. Sonia Livingstone, a professor of social psychology at the London School of Economics, said it was “too simplistic” as it reduced the pressure on social media companies to reform their services so that children can get the benefits without the harms.
She said any restrictions should be accompanied with alternative activities for children, especially opportunities to meet or play with friends, and it was important to recognise the practical uses of smartphones such as using maps, doing homework and contacting parents.
“I completely understand why there is a desire for an age limit on owning smartphones, but I don’t think a blanket ban is the way to go,” Livingstone said.
In the previous post, I characterized Ron Jeffries’ meandering approach to software design as “a shrug in the face of entropy.” Some readers seem to have taken this as a strange, flowery metaphor. It wasn’t.
In this newsletter, our analytic framework is borrowed from information theory: simplicity is a fitness between content and expectation. When we say software is complex, we mean it's difficult to explain; we need to bridge the distance between our audience's expectations and the realities of our software. This distance — the subjective complexity of our software — can be measured in bits.
In some literature, this metric is called surprisal. This is a good name; it reminds us that the complexity of our code varies with its audience. But there is a more common term for the number of bits in a message, borrowed from thermodynamics: entropy.
To understand the relationship between information and disorder in a physical system, consider why a sequence of random numbers cannot be compressed. When we compress data, we exploit its internal relationships; we use one part to describe another. You can see this in run-length encoding, or in how gzip encodes duplicate strings as back-references. But in a random dataset, there are no internal relationships; with each element, our explanation must begin anew.
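A toy run-length encoder makes the point concrete: it can only shrink data that contains runs, and random bytes, having none to exploit, come out larger than they went in. (The sketch is illustrative; it is not drawn from gzip or any other tool mentioned here.)

#include <stddef.h>

/* Toy run-length encoder: each run of a repeated byte becomes a
   (count, byte) pair, so "aaaabbb" shrinks to {4,'a',3,'b'}.  Random input,
   whose runs all have length one, expands to twice its size instead. */
size_t rle_encode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        size_t run = 1;
        while (i + run < len && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = (unsigned char)run;   /* how many times...    */
        out[o++] = in[i];                /* ...this byte repeats */
        i += run;
    }
    return o;
}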
And this is why, in this newsletter, we spend so much time on the structures that bind the disparate parts of our software together. These structures compress our code, amplify our explanations. They make our software simpler.
entropy as decay
While entropy is an instantaneous measure, it's often used as a metonym for the second law of thermodynamics: in physical systems, entropy increases over time. When we talk about entropy, it connotes more than temporary disorder; it has the weight of inevitability, an inexorable decline.
This also applies to entropy in our software. Consider how, in the previous post, Jeffries' code was full of tiny, needless complexities: muddled naming conventions, a single mutable method on an otherwise immutable class. These small inconsistencies make the existing relationships harder to see, easier to ignore. If Jeffries continued to tinker with his solver, we'd expect this complexity to compound.
We cannot prevent our software from being misunderstood, even by ourselves. Its structures will, inevitably, be weakened as our software changes. Good design requires continuous maintenance.
According to Ron Jeffries, his pinhole approach to software design provides this maintenance. This is demonstrably false; despite spending months iterating on a few hundred lines of code, the inconsistencies remain. But Jeffries is not alone in his belief. Many writers espouse a similar approach to software maintenance, and all of them should be approached with skepticism.
broken windows, clean code
Shortly before co-authoring the Agile Manifesto, Andrew Hunt and David Thomas wrote The Pragmatic Programmer. In the first few pages, they introduce the metaphor of broken windows:
One broken window, left unrepaired for any substantial length of time, instills in the inhabitants of the building a sense of abandonment. [1]
And the same was true for software. "We've seen," they said, "clean, functional systems deteriorate pretty quickly once windows start breaking."
This metaphor was borrowed from an article in The Atlantic, published in 1982. It described how a social psychologist named Philip Zimbardo had parked a car in Palo Alto. He left the hood up and removed the license plate, as an invitation for passersby to do something antisocial. He observed it for an entire week, but nothing happened.
“Then,” the article tells us, “Zimbardo smashed part of it with a sledgehammer. Soon, passersby were joining in. Within a few hours, the car had been turned upside down and utterly destroyed.” [2] It is implied, but never stated, that the car was destroyed because of a simple broken window.
This is not what actually happened [3], but few people noticed or cared. The article justified a belief that most readers of The Atlantic already held: systems decay from the bottom-up. And this, in turn, suggests the contrapositive: systems are restored from the bottom-up.
This preexisting belief was a reflection of both the publication and the time. The Atlantic is a center-right magazine, and individualism was the ascendant political philosophy of the 1980s. As Margaret Thatcher put it:
There is no such thing as society. There is a living tapestry of men and women and people and the beauty of that tapestry and the quality of our lives will depend upon how much each of us is prepared to take responsibility for ourselves. [4]
The moral of the broken window fable was one of personal responsibility. If we have problems — in our neighborhood or in our software — we should look to ourselves. Systems are just the sum of our choices.
We can find these same themes in Robert Martin’s paean to hygiene, Clean Code:
Have you ever waded through a mess so grave that it took weeks to do what should have taken hours? Have you seen what should have been a one-line change, made instead in hundreds of modules? [5]
"Why does this happen to code?" he asks. Is it external constraints? Is it poorly articulated goals? No, says Martin, this only happens to us when "[w]e are unprofessional." If our software is complex, it's only because we haven't kept it clean.
Martin does not offer a concise definition of clean code. His book, he says, will explain it “in hideous detail.” [6]
He does, however, ask some of his friends to define the term for him. Ward Cunningham provides a familiar answer:
You know you are working on clean code when each routine you read turns out to be pretty much what you expected. [7]
Martin calls this “profound.” But where are these expectations set? His rules are syntactic, focusing entirely on the text of our software. [8] It is implied, then, that our expectations arise from that text. Higher-level structures are just the sum of our syntactic choices.
This implies that code ought to be self-documenting. And indeed, Martin has repeatedly warned that comments are best avoided. If our software’s paratext has anything to teach us, he says, that only means its text has fallen short:
It's very true that there is important information that is not, or cannot be, expressed in code. That's a failure. A failure of our languages, or of our ability to use them to express ourselves. In every case a comment is a failure of our ability to use our languages to express our intent.
And we fail at that very frequently, and so comments are a necessary evil — or, if you prefer, an unfortunate necessity. If we had the perfect programming language (TM) we would never write another comment. [9]
As you read through Clean Code, you begin to understand the ideal Martin is chasing. Our classes should be small, with an eye towards reuse in other applications. Our functions should be “just two, or three, or four lines long.” [10]
Martin wants his software to resemble Thatcher's atomized society: a collection of components which are small, clean, decoupled.
But this atomization doesn't make our software simpler. It leads, instead, towards entropy; without structures, our software becomes little more than random numbers.
When we maintain our software, we are trying to preserve its simplicity. The cleanliness of our code is, at best, a small part of that simplicity. To focus entirely on broken windows — to dismiss the rest as incidental, regrettable — is less than a shrug. It's closing our eyes, and hoping the rest will take care of itself.
I've worked at two startups where hiring a product designer was more of an aspiration. Even after deciding we needed one, delays from interviews, notice periods, and onboarding meant at least three months of having to get things over the line, designer or not.
me talking to my designer
A common shortcut is using pre-built component libraries like Google's Material UI. They give you the building blocks, but they don’t think about the whole user flow for you. You still have to figure out how everything fits together.
But a lot of the time, we weren’t doing novel things. If you look closely at most software products, you'll notice a lot of overlap in user flows. There's usually little reason to reinvent simple things like account creation or password resets.
If your time should go toward what makes your product unique, how do you define a good user experience as quickly as possible?
Do not stare at a blank canvas wondering, “Hm, OK, what should the email field look like?”
teams spending time deciding what width the button should be before knowing what the feature looks like end-to-end
Multi-million dollar companies with bigger teams have thought long about this, and you can piggyback off that and get to a great experience faster.
Avoid looking at:
Design award websites: They showcase originality, not proven usability.
Dribbble: Prioritizes aesthetics over function.
Instead, look at:
Competitor sites: Make accounts, take screenshots. Common patterns, like password strength indicators, usually exist for good reasons.
Take notes on:
Common UI elements like email, password fields, confirmation flows
Visual and layout conventions (centered forms, responsive design, clear buttons, logos at the top)
Think of a Venn diagram. If every product in your space does something the same way, there’s probably a good reason. If one company does something different, ask yourself:
Is this intentional, or just a mistake?
Sometimes friction is deliberate. Some companies require credit card details upfront, not because they have to, but because they only want serious users. It’s not a fast experience, but that’s the point.
If what you’re building isn’t straightforward, look outside your industry. Say you’re designing a feature that collects medical data for prescription renewals.
If you can’t find direct comparisons, zoom out:
Who else collects sensitive data?
Mortgage lenders, tax services—they all deal with high-stakes information. Look at how they build trust, explain risks, and guide users through complex flows.
If you're designing a sign-up page, the goal isn’t just "two text fields and a sign-up button." It’s something real, like:
"Make signing up as effortless as possible."
Now, turn it into a question:
"How can we make signing up as easy and obvious as possible?"
Some answers:
Show password strength before users hit submit.
Give them a reason to sign up, not just a form to fill out.
This also raises new questions:
Should users log in right away, or confirm their email first?
Should they land on a confirmation page, or just get a subtle success message?
You won’t have every answer upfront, but asking the right questions keeps you focused on what matters.
Real users don’t behave how you hope. They rush, skip instructions, and get distracted.
Always ask:
What could go wrong?
Take it field by field: what happens if a user rushes through and makes a mistake?
Then zoom out: what happens across the entire flow?
If you can make the experience smooth for someone impatient and distracted, your other users won’t have any problems.
Bad UX isn’t always about ugly design—it’s often just a confused user who doesn’t know what went wrong or how to fix it.
For example:
What if they don’t pay attention when creating a password?
They’ll set a bad one, then get locked out later.
Fix:
Add a "confirm password" field to force them to retype it.
What if passwords don’t match?
They’ll hit “Sign Up” and get an error. That’s frustrating.
Fix:
Show a mismatch warning as soon as they type the second password.
Tools like ChatGPT can highlight UX issues you might miss. It’s not perfect, but it’s better than guessing. Some prompts to try:
Red Team vs. Blue Team:
“Critique this sign-up flow—where could users get stuck?” vs. “Defend this design—what makes it intuitive?”
Industry Standards:
“How do top SaaS companies design their sign-up flows?”
Edge Cases:
“What happens if a user mistypes their email but doesn’t notice?”
Decide what you're going to measure. "Good UX" might mean conversion rate, user retention, or user satisfaction. Design will always be subjective to a degree. The less subjective you can make it, the better your sanity will be.
Keep your colours simple. One primary, one secondary and one accent colour.
Keep the language familiar to the users, not to the people building it. Don't say "database error" say "We couldn't save your changes."
Startups need to move fast, and perfectionism, especially around aesthetics, is often unnecessary.
If you're not a designer, but you need to get things over the line, focus on usability, not novelty. A clean, obvious user flow beats an original, but confusing one every time.
Sometimes you will need novelty, but ask yourself clearly:
"Where do we actually need to differentiate?"
Doing what others are doing isn’t just copying—it’s using other companies to train your users for you. When you follow established patterns, users already know what to do, and you don’t have to teach them from scratch.
Innovate on your core value; for everything else, stick to what works.
AI should replace some work of civil servants, Starmer to announce
Guardian
www.theguardian.com
2025-03-12 23:30:44
The new digital ‘mantra’ prompts unions to warn PM to stop blaming problems on Whitehall officials AI should replace the work of government officials where it can be done to the same standard, under new rules that have prompted unions to warn Keir Starmer to stop blaming problems on civil servants. ...
AI should replace the work of government officials where it can be done to the same standard, under new rules that have prompted unions to warn Keir Starmer to stop blaming problems on civil servants.
As part of his plans for reshaping the state, the prime minister will on Thursday outline how a digital revolution will bring billions of pounds in savings to the government.
Officials will be told to abide by a mantra that says: “No person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard.”
In his speech, Starmer will claim that more than £45bn can be saved by greater use of digital methods in Whitehall, even before AI is deployed, with 2,000 new tech apprentices to be recruited to the civil service.
However, with bruising cuts on the way at this spring’s spending review, Dave Penman, the general secretary of the FDA union for senior civil servants, said: “Mantras that look like they’ve been written by ChatGPT are fine for setting out a mission, but spending rounds are about reality.”
He said civil servants will welcome the commitment of more support with digital transformation, but the government “needs to set out, in detail, how more can be delivered with less”.
Starmer’s rhetoric about reshaping the state has alarmed some trade unions who fear for civil service jobs and are also concerned about morale among officials, after years of being demonised as unproductive by the Tories.
Penman said it was “right that the prime minister sets out an ambitious agenda for transforming public services with digital and AI tools, but … many civil servants will be looking for the substance and feeling that, once again, the prime minister is using the language of blame rather than transformation”.
Mike Clancy, the general secretary of Prospect union, said: “Civil servants are not hostile to reforms, but these must be undertaken in partnership with staff and unions. I urge everyone in government to avoid the incendiary rhetoric and tactics we are seeing in the United States, and to be clear that reforms are about enhancing not undermining the civil service.”
Clancy said it was right to make better use of tech in the public sector, but added that the government will find it challenging to compete for the skills needed to deliver on this agenda under the current pay regime, which is why Prospect is campaigning for more pay flexibility to recruit and retain specialists in the civil service in areas like science and data.
“Government should also be doing more to utilise the talented specialists it already has at its disposal, many of whom are working in regulators and other agencies that have been starved of funding in recent years.”
In Starmer’s speech, he will also pledge to reduce regulation and cut some quangos, taking on the “cottage industry of checkers and blockers slowing down delivery for working people”. The government will have a new target of reducing the cost of regulation by 25%.
Starmer will give a diagnosis of the problem in the UK that the state has become “bigger, but weaker” and is not delivering on its core purpose.
“The need for greater urgency now could not be any clearer. We must move further and faster on security and renewal,” he will say. “Every pound spent, every regulation, every decision must deliver for working people …
“If we push forward with the digitisation of government services. There are up to £45bn worth of savings and productivity benefits, ready to be realised.”
In the US, Donald Trump has embarked on a radical programme of sacking government workers under the new department of government efficiency (Doge), advised by the billionaire businessman Elon Musk.
Starmer’s government is understood to want also to scale back the size of the state, reducing the number of civil servants by substantially more than 10,000. Pat McFadden, the Cabinet Office minister, on Sunday said the government is prepared to bring in tougher performance management requirements in a bid to shed underperforming officials, and put more emphasis on performance-related pay.
The Guardian revealed on Tuesday that No 10 and the Treasury are taking a close interest in proposals drawn up by Labour Together, a thinktank with close links to the government, to reshape the state under plans dubbed “project chainsaw”.
https://www.usenix.org/system/files/1311_05-08_mickens.pdf
Facebook discloses FreeType 2 flaw exploited in attacks
Bleeping Computer
www.bleepingcomputer.com
2025-03-12 22:04:10
Facebook is warning that a FreeType vulnerability in all versions up to 2.13 can lead to arbitrary code execution, with reports that the flaw has been exploited in attacks. [...]...
Facebook is warning that a FreeType vulnerability in all versions up to 2.13 can lead to arbitrary code execution, with reports that the flaw has been exploited in attacks.
FreeType is a popular open-source font rendering library used to display text and programmatically add text to images. It provides functionality to load, rasterize, and render fonts in various formats, such as TrueType (TTF), OpenType (OTF), and others.
The library is installed in millions of systems and services, including Linux, Android, game engines, GUI frameworks, and online platforms.
The vulnerability, tracked under CVE-2025-27363 and given a CVSS v3 severity score of 8.1 ("high"), was fixed in FreeType version 2.13.0 on February 9th, 2023.
Facebook disclosed the flaw yesterday, warning that the vulnerability is exploitable in all versions of FreeType up to version 2.13 and that there are reports of it actively being exploited in attacks.
"An out of bounds write exists in FreeType versions 2.13.0 and below when attempting to parse font subglyph structures related to TrueType GX and variable font files,"
reads the bulletin.
"The vulnerable code assigns a signed short value to an unsigned long and then adds a static value causing it to wrap around and allocate too small of a heap buffer."
"The code then writes up to 6 signed long integers out of bounds relative to this buffer. This may result in arbitrary code execution."
Facebook may rely on FreeType in some capacity, but it is unclear if the attacks seen by its security team took place on its platform or if they discovered them elsewhere.
Considering the widespread use of FreeType across multiple platforms, software developers and project administrators must upgrade to FreeType 2.13.3 (latest version) as soon as possible.
Although the latest vulnerable version (2.13.0) dates back two years, older library versions can persist in software projects for extended periods, making it important to address the flaw as soon as possible.
BleepingComputer asked Meta about the flaw and how it was exploited, and was sent the following statement.
"We report security bugs in open source software when we find them because it strengthens online security for everyone," Facebook told BleepingComputer.
"We think users expect us to keep working on ways to improve security. We remain vigilant and committed to protecting people's private communications."
Chris Moore, Illustrator for Classic Sci-Fi Books, Dies at 77
OCaml 5 introduced numerous changes. Perhaps the largest and most-anticipated was the introduction of multicore support, which allows for shared-memory parallelism, something that was previously unavailable in OCaml. This required a significant overhaul of the garbage collector and memory allocator, the details of which were explained in the excellent Retrofitting parallelism onto OCaml (Sivaramakrishnan et al.).
The OCaml maintainers focused on achieving performance backwards compatibility for single-threaded programs while also ensuring good performance for multi-threaded programs. In support of this goal, they ran numerous benchmarks measuring both time and memory use with the new runtime for a variety of programs in the OCaml ecosystem. Nearly all showed similar or better performance: a great achievement given the complexity of this change.
Unfortunately, Semgrep's interfile analysis appears to be dissimilar to the workloads that were tested as part of this effort. When we first got Semgrep building with OCaml 5 in late 2023 (shoutout to Brandon Wu for this work!), we found significant memory consumption regressions, in some cases more than a 2x increase! This led to resource exhaustion and crashes in our benchmarks. We couldn't ship something like that to our customers. However, we needed to upgrade. We wanted to use the improved support for parallelism, as well as a variety of other features like algebraic effects made available in OCaml 5. We were also aware of the risk of getting stuck on an old version and potentially missing out on newer package versions as our dependencies began relying on OCaml 5.
Initial investigation
Some brief initial research turned up that garbage collector (GC) compaction had
not yet been implemented
in OCaml 5. Compaction accomplishes two goals: First, it reduces fragmentation in the heap. Fragmentation can cause waste because the unused fragments are not always perfectly matched to future allocations (though the allocators in both OCaml 4 and 5 do quite well at minimizing that). Second, and perhaps more important, it returns unused memory to the operating system.
With other priorities to focus on, we assumed that the lack of compaction was likely the culprit (spoiler alert: it wasn't) and decided to make another attempt at upgrading once compaction was implemented.
Second attempt at upgrading
When OCaml 5.2 was released with an implementation of compaction, I tried again to upgrade, and encountered the same problems with memory consumption. This was unsurprising: compaction still does not run automatically, so we'd have to explicitly insert calls to
Gc.compact ()
. But where? How might I determine when to compact? Was the lack of compaction even the problem, as we had assumed? I needed better data.
In order to figure out what was going wrong, I needed a few pieces of information. I needed to know (1) how the application actually uses memory, (2) to what extent memory remains uncollected even after it is no longer referenced (i.e. how aggressive the garbage collector is), and (3) how much memory the whole process overall is actually using (accounting for wasted space).
I started with
Memtrace
, an excellent tool by Jane Street, based on the statistical memory profiling available from the OCaml runtime. Memtrace records information about where and when memory is allocated and deallocated. With a trace of a single execution of a program, you can see an overview of memory consumption by time or dive deep and answer specific questions about how memory is used.
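As a rough sketch of how a program is typically instrumented (assuming the memtrace opam package; main_scan stands in for the real entry point of the program being profiled):

let () =
  (* Start statistical allocation tracing if the MEMTRACE environment
     variable names an output file, e.g. MEMTRACE=scan.ctf ./my_program *)
  Memtrace.trace_if_requested ();
  main_scan ()

The resulting trace file can then be explored in the Memtrace viewer.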
I familiarized myself with the tool by profiling Semgrep on OCaml 4.14, and was able to find and fix a few minor inefficiencies in the process. For example, I
found a hash table
that was excessively large, and contributed significantly to memory usage.
This gave me a preliminary understanding of how Semgrep used memory in OCaml 4. But I needed to know how it differed with OCaml 5. Unfortunately, the statistical memory profiling upon which Memtrace relies was unimplemented in OCaml 5.0 through 5.2. An implementation was staged for 5.3, which at the time was unreleased. I was targeting 5.2 anyway, and wanted to minimize any possible differences between what I measured and what we would be using in production.
To do so, I
backported
the new implementation of memory profiling to the 5.2 compiler branch (I later learned that this was unnecessary: it had
already been backported
, but not merged into the 5.2 branch where I looked).
With a working Memtrace setup on OCaml 5.2, I had what I needed to start comparing the details of Semgrep's memory consumption between OCaml 4 and 5. I began by scanning
OWASP Juice Shop
with our interfile engine and our default ruleset. We often use this repository to exercise our interfile engine.
OCaml 4 with no GC tuning (Peak memory just over 400M)
OCaml 5 with no GC tuning (Peak memory just over 550M)
It was immediately clear that the actual live memory increased dramatically in OCaml 5! This meant that either the actual memory used by the program increased, or memory was not getting collected as quickly by the garbage collector (Memtrace considers values to be live from the time they are allocated until the time they are collected, regardless of when they are no longer referenced). This also meant that the lack of compaction was at the very least not the whole story. If the only difference was compaction, we would expect to see similar behavior on Memtrace and observe the difference only in Resident Set Size (RSS).
So, if the lack of compaction was not the culprit, what might be? It couldn't be the allocator either (which also changed dramatically in OCaml 5), because poorer allocation strategies would also not be reflected in Memtrace: Memtrace measures only the memory allocated by the program, and does not include wasted space in the heap. This left only two likely possibilities. First, the program could actually have been allocating more memory, and keeping references to it for longer. This could happen if, somehow, the standard library or some of our dependencies changed behavior in OCaml 5 in such a way that dramatically increased memory usage. The second possibility was that the garbage collector was leaving unreferenced memory around for longer in OCaml 5 than in OCaml 4.
The first possibility seemed far-fetched, so I decided to explore the second.
Garbage collector tuning experiments
OCaml exposes some details about the garbage collector in the
Gc
module. In particular, it allows programmers to set the GC parameter
space_overhead
, which essentially controls how aggressively the garbage collector will clean up unused memory. A low value for
space_overhead
indicates a preference to collect more aggressively, at the cost of increased time spent on garbage collection. A high value indicates the opposite. The default, in both OCaml 4 and 5, is 120.
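For reference, a minimal sketch of adjusting this parameter from code (40 is simply the value experimented with below, not a general recommendation):

let () =
  (* Collect more aggressively than the default space_overhead of 120. *)
  Gc.set { (Gc.get ()) with Gc.space_overhead = 40 }

The same knob can also be set without recompiling via the o parameter of the OCAMLRUNPARAM environment variable, e.g. OCAMLRUNPARAM=o=40.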
Because our problem was excessive memory consumption, I decided to start reducing
space_overhead
. This quickly yielded positive results. An interfile scan on Juice Shop with
space_overhead
set to 40 in OCaml 5 exhibited improved memory characteristics (as measured by Memtrace) compared to OCaml 4 with default GC settings! The time spent was also comparable.
OCaml 5 with space_overhead set to 40
However, the memory use reported by Memtrace does not tell the whole story. It reports the total memory used by uncollected values at any given moment. It does not include any additional memory wasted for any reason, such as heap fragmentation or unused memory that has not been returned to the operating system. To get an accurate picture of the total memory used, I had to measure Resident Set Size (RSS). I wrote a short script to run a program, poll the operating system for its RSS every second, and plot the results.
This looked promising! It closely mirrored the Memtrace results. It showed somewhat improved memory consumption and comparable time spent!
These results strongly suggested that the memory issues we encountered were due entirely to different major garbage collector behavior (as opposed to different allocator behavior, a lack of compaction, or some combination). They also suggested that simply tuning the major GC would allow us to return to the performance characteristics of OCaml 4! However, I needed more than a couple of runs on a single repository to reach that conclusion.
I decided to repeat this experiment by scanning
Blaze Persistence
, a larger repository that we often use for benchmarking. I learned that setting
space_overhead
to 40 was inadequate on this larger repository: I still saw significantly increased memory usage. I eventually found that a setting of 20 worked well to mimic the performance of OCaml 4. However, applying this to Juice Shop and other smaller repositories led to a time regression! It was too aggressive a setting, and the garbage collector spent too much time trying to keep memory usage low. In contrast, benchmarks showed the opposite problem on even larger repositories: time somewhat improved, but memory usage degraded. Further experiments showed that on the largest repositories in our benchmarks, a
space_overhead
of 15 was appropriate.
Dynamic garbage collector tuning
After some thought, a possible solution seemed obvious. Why limit ourselves to a single static value for
space_overhead
when there seemed to be no suitable single value? I started by looking more closely at the primitives available in OCaml's
Gc module
to see what was possible. Two things jumped out at me as possible building blocks for dynamic GC tuning: GC alarms and the GC stats functions. The GC alarms provide a facility for us to trigger any arbitrary action at the end of each major GC cycle. The GC stats functions can provide us with a variety of information about the garbage collector and the heap. Specifically, they can tell us the size of the major heap.
I combined these primitives into a utility which, at the end of each major collection, adjusts
space_overhead
based on the size of the major heap by linearly interpolating according to its configuration. Each adjustment is very fast relative to a major collection and performs minimal allocations. We use the
Gc.quick_stat
function, which provides only those values that are immediately available (including the heap size). We perform some simple computations to arrive at a value for
space_overhead
, and changing
space_overhead
requires no ancillary work from the runtime: it simply records the new value.
For example, one could configure the utility to use a minimum
space_overhead
of 20, a maximum of 40, and to adjust
space_overhead
between these two values when the major heap size is between 2 GB and 4 GB. In this case, if the major heap were 1 GB,
space_overhead
would be set to 40, the least aggressive setting. At 2 GB, it would also be 40. At 3 GB, it would be set to 30. At 4 GB, it would be set to 20, where it would remain as the heap size increased.
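A simplified sketch of such a utility, wired to the example configuration above (illustrative only, not the published dynamic_gc API; the names and constants are mine):

let gib = 1024. *. 1024. *. 1024.

(* Interpolate space_overhead between 40 and 20 as the major heap
   grows from 2 GB to 4 GB. *)
let min_overhead = 20
let max_overhead = 40
let low_heap = 2. *. gib
let high_heap = 4. *. gib

let overhead_for heap_bytes =
  if heap_bytes <= low_heap then max_overhead
  else if heap_bytes >= high_heap then min_overhead
  else
    (* Larger heap -> smaller (more aggressive) space_overhead. *)
    let t = (heap_bytes -. low_heap) /. (high_heap -. low_heap) in
    max_overhead - int_of_float (t *. float_of_int (max_overhead - min_overhead) +. 0.5)

let install_dynamic_gc () =
  ignore
    (Gc.create_alarm (fun () ->
         (* Runs at the end of every major GC cycle; quick_stat only reads
            counters that are immediately available, so this is cheap. *)
         let heap_words = (Gc.quick_stat ()).Gc.heap_words in
         let heap_bytes =
           float_of_int heap_words *. float_of_int (Sys.word_size / 8)
         in
         Gc.set { (Gc.get ()) with Gc.space_overhead = overhead_for heap_bytes }))

Calling install_dynamic_gc () once at startup is enough; from then on the alarm keeps the setting in line with the current heap size.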
This allowed us to use less aggressive GC settings for smaller repositories, prioritizing speed over memory consumption in these cases. It also allowed us to ramp up the GC aggressiveness as the heap size grows, shifting the priority to memory usage over time when scanning larger repositories. Of course, the goal here was to mimic the performance of OCaml 4 in order to deliver the upgrade with minimal negative impacts to our customers, rather than to choose tradeoffs between time and memory in isolation.
While this approach is simple, and in retrospect fairly obvious, it does work. We have been using OCaml 5 in production since the middle of February 2025, and the rollout has been entirely uneventful.
We have open sourced this utility in the hopes that it will be useful to others in the OCaml community. The source code is
available on GitHub
and we are awaiting publication on opam as “dynamic_gc”. While I used this only to mimic the performance of our workload on OCaml 4, its potential applications extend beyond that for anyone who wants more control over GC tuning than is directly achievable with the OCaml standard library.
Future work
Use live memory instead of heap size
At the moment, we tune the garbage collector based on the major heap size. This is not the same as the amount of live memory. When values are garbage collected, the amount of live memory decreases but the heap size is not reduced until there is a compaction. This distinction is particularly important in OCaml 5, where (at least as of OCaml 5.3) compaction is never run automatically. As of OCaml 5.2, it is available, but it requires an explicit call to
Gc.compact ()
.
I chose this approach because (a) our workload tends to use more memory over time as the analysis proceeds, so the distinction for us is unimportant at the moment, and (b) it allowed me to use
Gc.quick_stat
instead of
Gc.stat
, which triggers a major collection in order to provide up to date data. I didn't investigate in detail how
Gc.stat
behaves when called from a GC alarm, which is triggered at the end of a major collection. In theory it seems like it could quickly return valid data without triggering another major collection in that case, but because it is unimportant for our current use case, I did not investigate whether it actually does.
In the future, we could consider basing this computation on live memory instead of heap size. This would likely yield better behavior, especially for applications that vary in their memory usage and do not regularly call
Gc.compact
.
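For illustration, a hedged sketch of what reading live memory would look like with the standard library (subject to the Gc.stat cost just described):

(* live_words counts live data in the major heap; heap_words counts the
   whole major heap, including memory that is free but not yet returned. *)
let live_bytes () =
  float_of_int (Gc.stat ()).Gc.live_words *. float_of_int (Sys.word_size / 8)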
Use a more sophisticated controller
Go
uses a PI controller
to adjust the aggressiveness of its garbage collector and implements a
soft memory limit
with similar aims as our dynamic GC utility. While this is not directly analogous to my work, we could use it as a source of inspiration. In comparison, my approach here is simplistic. It's based only on the current heap size and does not take into account the rate of new allocations or the rate at which the garbage collector is actually finding dead values to collect.
It is possible that a more sophisticated controller would improve the tradeoff between time and memory, allowing us to spend less time garbage collecting for any given memory constraint, and vice versa.
Make this tunable by end users
Semgrep has many sophisticated users who care about and tune performance, and they don't all have the same desires. Some prefer to throw big machines at Semgrep and prefer to see fast scans above all else. Some like to limit resource consumption and don't mind if it means longer scan times. Many are in between. It is increasingly clear that there is no one-size-fits-all solution.
In the future, we may expose the constants used to tune the garbage collector to end users through flags. This would allow users to tune the GC based on their preferences.
In Memoriam: Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying
Electronic Frontier Foundation
www.eff.org
2025-03-12 21:01:59
EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.
Mark didn’t set out to change the world. For 22 years, he was a telecommunicat...
EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.
Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy.
When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?”
We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called
Room 641A
. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—
made a copy
of all data going through those circuits and delivered it into the secret room.
Mark not only saw how it works, he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator
Chris Dodd
, took the floor of the Senate to acknowledge Mark as the great American hero he was.
Mark stood up and told the truth at great personal risk to himself and his family. AT&T threatened to sue him, although it wisely decided not to do so. While we were able to use his evidence to make some change, both EFF and Mark were ultimately let down by Congress and the Courts, which have refused to take the steps necessary to end the mass spying even after Edward Snowden provided even more evidence of it in 2013.
But Mark certainly inspired all of us at EFF, and he helped inspire and inform hundreds of thousands of ordinary Americans to demand an end to illegal mass surveillance. While we have not yet seen the success in ending the spying that we have all hoped for, his bravery has helped usher in numerous reforms.
And the fight is not over. The law,
called 702
, that now authorizes the continued surveillance that Mark first revealed, expires in early 2026. EFF and others will continue to push for continued reforms and, ultimately, for the illegal spying to end entirely.
Mark’s legacy lives on in our continuing fights to reform surveillance and honor the Fourth Amendment’s promise of protecting personal privacy. We are forever grateful to him for having the courage to stand up and will do our best to honor that legacy by continuing the fight.
Apple to appeal against UK government data demand at secret high court hearing
Guardian
www.theguardian.com
2025-03-12 20:52:56
Guardian understands tech company’s appeal against Home Office request for encrypted data is to be heard by tribunal on Friday Apple’s appeal against a UK government demand to access its customers’ highly encrypted data will be the subject of a secret high court hearing, the Guardian understands. Th...
Apple’s appeal against a UK government demand to access its customers’ highly encrypted data will be the subject of a secret high court hearing, the Guardian understands.
The appeal on Friday will be considered by the investigatory powers tribunal, an independent court that has the power to investigate claims that the UK intelligence services have acted unlawfully.
It is against an order served by the Home Office in February
under the Investigatory Powers Act
, which compels companies to provide information to law enforcement agencies.
The Home Office asked for the right to see users’ encrypted data in the event of a national security risk. Currently, not even Apple can access data and documents protected by its advanced data protection (ADP) programme.
ADP allows users with iCloud accounts and storage to secure photos, notes, voice memos and other data with end-to-end encryption, meaning only the user can access it. Messaging services such as iMessage and FaceTime would remain end-to-end encrypted by default.
Apple said the removal of the tool would make users more vulnerable to data breaches from bad actors and other threats to customer privacy. Creating a “back door” would also mean all data was accessible by Apple, which it could be forced to share with law enforcement possessing a warrant.
Last week, Computer Weekly reported that Apple was intending to appeal against the secret order.
The tribunal has taken the unusual step of publishing a notification of a closed-door hearing before its president, Lord Rabinder Singh, on the afternoon of 14 March.
The tribunal listing does not mention either Apple or the government, nor has the tribunal confirmed if they are the parties involved.
The hearing is due to be held in private because it relates to the security services, but a media campaign led by Computer Weekly argued that the hearing should be held in open court since the case is a matter of public interest and the appeal has already been leaked.
Representatives from news organisations including the Guardian, as well as some civil society groups, are supporting Computer Weekly in its petition.
In a statement issued in February, Apple said it was “gravely disappointed” it was forced to take the decision
to stop offering advanced data protection
in the UK, “given the continuing rise of data breaches and other threats to customer privacy”.
A spokesperson said: “Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom.
“As we have said
many times before
, we have never built a backdoor or master key to any of our products or services and we never will.”
Apple and the Home Office both declined to comment on Friday’s hearing. The tribunal has been approached by the Guardian.
Massive Layoffs at the Department of Education Erode Its Civil Rights Division
Portside
portside.org
2025-03-12 20:52:15
With a mass email sharing what it called “difficult news,” the U.S. Department of Education has eroded one of its own key duties, abolishing more than half of the offices that investigate civil rights complaints from students and their families.
Civil rights complaints in schools and colleges largely have been investigated through a dozen regional outposts across the country. Now there will be five.
The Office for Civil Rights’ locations in Boston, Chicago, Cleveland, Dallas, New York, Philadelphia and San Francisco are being shuttered, ProPublica has learned. Offices will remain in Atlanta, Denver, Kansas City, Seattle and Washington, D.C.
The OCR is one of the federal government’s largest enforcers of the Civil Rights Act of 1964, investigating thousands of allegations of discrimination each year. That includes discrimination based on disability, race and gender.
“This is devastating for American education and our students. This will strip students of equitable education, place our most vulnerable at great risk and set back educational success that for many will last their lifetimes,” said Katie Dullum, an OCR deputy director who resigned last Friday. “The impact will be felt well beyond this transitional period.”
The Education Department has not responded to ProPublica’s requests for comment.
In all, about 1,300 of the Education Department’s approximately 4,000 employees were told Tuesday through the mass emails that they would be laid off and placed on administrative leave starting March 21, with their final day of employment on June 9.
The civil rights division had about 550 employees and was among the most heavily affected by Tuesday’s layoffs, which with other departures will leave the Education Department at roughly half its size.
At least 243 union-represented employees of the OCR were laid off. The Federal Student Aid division, which administers grants and loans to college students, had 326 union-represented employees laid off, the most of any division.
On average, each OCR attorney who investigates complaints is assigned about 60 cases at a time. Complaints, which have been backlogged for years, piled up even more after President Donald Trump took office in January and
implemented a monthlong freeze on the agency’s civil rights work
.
Catherine Lhamon, who oversaw the OCR under former Presidents Barack Obama and Joe Biden, said: “What you’ve got left is a shell that can’t function.”
Civil rights investigators who remain said it now will be “virtually impossible” to resolve discrimination complaints.
“Part of OCR’s work is to physically go to places. As part of the investigation, we go to schools, we look at the playground, we see if it’s accessible,” said a senior attorney for OCR, who spoke on the condition of anonymity because he was not laid off and fears retaliation. “We show up and look at softball and baseball fields. We measure the bathroom to make sure it’s accessible. We interview student groups. It requires in-person work. That is part of the basis of having regional offices. Now, California has no regional office.”
The OCR was investigating about 12,000 complaints when Trump took office. The largest share of pending complaints — about 6,000 — were related to students with disabilities who feel they’ve been mistreated or unfairly denied help at school, according to a ProPublica analysis of department data.
Since Trump took office,
the focus has shifted
. The office has opened an unusually high number of “directed investigations,” based on Trump’s priorities, that it began without receiving complaints. These relate to curbing antisemitism, ending participation of transgender athletes in women’s sports and combating alleged discrimination against white students.
Traditionally, students and families turn to the OCR after they feel their concerns have not been addressed by their school districts. The process is free, which means families that can’t afford a lawyer to pursue a lawsuit may still be able to seek help.
When the OCR finds evidence of discrimination, it can force a school district or college to change its policies or require that they provide services to a student, such as access to disabilities services or increased safety at school. Sometimes, the office monitors institutions to make sure they comply.
“OCR simply will not be investigating violations any more. It is not going to happen. They will not have the staff for it,” said another attorney for the Department of Education, who also asked not to be named because he is still working there. “It was extremely time and labor intensive.”
The department said in a press release that all divisions at the department were affected. The National Center for Education Statistics, which collects data about the health of the nation’s schools, was all but wiped away.
Education Secretary Linda McMahon called the layoffs “a significant step toward restoring the greatness of the United States education system.” In addition to the 1,300 let go on Tuesday, 600 employees already had accepted voluntary resignations or had retired in the past seven weeks, according to the department.
Trump and his conservative allies have long wanted to shut the department, with Trump calling it a “big con job.” But the president hasn’t previously tried to do so, and officially closing the department would require congressional approval.
Instead, Trump is significantly weakening the agency. The same day Congress confirmed McMahon as education secretary, she sent department staff an email describing a “final mission” — to participate in “our opportunity to perform one final, unforgettable public service” by eliminating what she called “bloat” at the department “quickly and responsibly.”
Education Department employees received an email on Tuesday afternoon saying all agency offices across the country would close at 6 p.m. for “security reasons” and would remain closed Wednesday. That led many workers to speculate that layoffs were coming.
Then, after the workday had ended, employees who were being laid off began receiving emails that acknowledged “the difficult workforce restructuring.”
Emails also went to entire divisions: “This email serves as notice that your organizational unit is being abolished along with all positions within the unit — including yours.”
A
path traversal vulnerability
arises when an attacker can trick a program
into opening a file other than the one it intended.
This post explains this class of vulnerability,
some existing defenses against it, and describes how the new
os.Root
API added in Go 1.24 provides
a simple and robust defense against unintentional path traversal.
Path traversal attacks
“Path traversal” covers a number of related attacks following a common pattern:
A program attempts to open a file in some known location, but an attacker causes
it to open a file in a different location.
If the attacker controls part of the filename, they may be able to use relative
directory components ("..") to escape the intended location:
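A hedged illustration of the pattern (the directory and note name here are made up):

// The caller intends to open a file somewhere under /home/user/notes,
// but the attacker-supplied name climbs back out of that directory.
noteName := "../../../etc/passwd" // attacker-controlled
f, err := os.Open(filepath.Join("/home/user/notes", noteName))
// filepath.Join cleans the joined path, so this opens /etc/passwd.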
If the program defends against symlink traversal by first verifying that the intended file
does not contain any symlinks, it may still be vulnerable to
time-of-check/time-of-use (TOCTOU) races
,
where the attacker creates a symlink after the program’s check:
// Validate the path before use.
cleaned, err := filepath.EvalSymlinks(unsafePath)
if err != nil {
return err
}
if !filepath.IsLocal(cleaned) {
return errors.New("unsafe path")
}
// Attacker replaces part of the path with a symlink.
// The Open call follows the symlink:
f, err := os.Open(cleaned)
Another variety of TOCTOU race involves moving a directory that forms part of a path
mid-traversal. For example, the attacker provides a path such as “a/b/c/../../etc/passwd”,
and renames “a/b/c” to “a/b” while the open operation is in progress.
Path sanitization
Before we tackle path traversal attacks in general, let’s start with path sanitization.
When a program’s threat model does not include attackers with access to the local file system,
it can be sufficient to validate untrusted input paths before use.
Unfortunately, sanitizing paths can be surprisingly tricky,
especially for portable programs that must handle both Unix and Windows paths.
For example, on Windows
filepath.IsAbs(`\foo`)
reports
false
,
because the path “\foo” is relative to the current drive.
In Go 1.20, we added the
path/filepath.IsLocal
function, which reports whether a path is “local”. A “local” path is one which:
does not escape the directory in which it is evaluated ("../etc/passwd" is not allowed);
is not an absolute path ("/etc/passwd" is not allowed);
is not empty ("" is not allowed);
on Windows, is not a reserved name (“COM1” is not allowed).
In Go 1.23, we added the
path/filepath.Localize
function, which converts a /-separated path into a local operating system path.
Programs that accept and operate on potentially attacker-controlled paths should almost
always use
filepath.IsLocal
or
filepath.Localize
to validate or sanitize those paths.
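As a hedged sketch of that advice (openLocal is a hypothetical helper; the usual os, path/filepath, and fmt imports are assumed):

// openLocal accepts a /-separated, possibly attacker-controlled name,
// converts it into a local operating system path with filepath.Localize
// (which errors out on names that are not local), and only then joins it
// onto the trusted base directory.
func openLocal(baseDir, untrusted string) (*os.File, error) {
	name, err := filepath.Localize(untrusted)
	if err != nil {
		return nil, fmt.Errorf("unsafe path %q: %w", untrusted, err)
	}
	return os.Open(filepath.Join(baseDir, name))
}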
Beyond sanitization
Path sanitization is not sufficient when attackers may have access to part of
the local filesystem.
Multi-user systems are uncommon these days, but attacker access to the filesystem
can still occur in a variety of ways.
An unarchiving utility that extracts a tar or zip file may be induced
to extract a symbolic link and then extract a file name that traverses that link.
A container runtime may give untrusted code access to a portion of the local filesystem.
Programs may defend against unintended symlink traversal by using the
path/filepath.EvalSymlinks
function to resolve links in untrusted names before validation, but as described
above this two-step process is vulnerable to TOCTOU races.
Before Go 1.24, the safer option was to use a package such as
github.com/google/safeopen
,
that provides path traversal-resistant functions for opening a potentially-untrusted
filename within a specific directory.
Introducing
os.Root
In Go 1.24, we are introducing new APIs in the
os
package to safely open
a file in a location in a traversal-resistant fashion.
The new
os.Root
type represents a directory somewhere
in the local filesystem. Open a root with the
os.OpenRoot
function:
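A minimal sketch of opening a root (the directory path is a placeholder):

root, err := os.OpenRoot("/some/root/directory")
if err != nil {
	return err
}
defer root.Close()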
Root
provides methods to operate on files within the root.
These methods all accept filenames relative to the root,
and disallow any operations that would escape from the root either
using relative path components ("..") or symlinks.
f, err := root.Open("path/to/file")
Root
permits relative path components and symlinks that do not escape the root.
For example,
root.Open("a/../b")
is permitted. Filenames are resolved using the
semantics of the local platform: On Unix systems, this will follow
any symlink in “a” (so long as that link does not escape the root);
while on Windows systems this will open “b” (even if “a” does not exist).
Root
currently provides a set of operations that mirror the corresponding file functions in the os package.
In addition to the
Root
type, the new
os.OpenInRoot
function
provides a simple way to open a potentially-untrusted filename within a
specific directory:
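A minimal sketch (the directory is a placeholder, and untrustedFilename is assumed to come from an external source):

// OpenInRoot is equivalent to opening the root and then opening the
// file within it; the same escape rules apply.
f, err := os.OpenInRoot("/some/root/directory", untrustedFilename)
if err != nil {
	return err
}
defer f.Close()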
The
Root
type provides a simple, safe, portable API for operating with untrusted filenames.
Caveats and considerations
Unix
On Unix systems,
Root
is implemented using the
openat
family of system calls.
A
Root
contains a file descriptor referencing its root directory and will track that
directory across renames or deletion.
Root
defends against symlink traversal but does not limit traversal
of mount points. For example,
Root
does not prevent traversal of
Linux bind mounts. Our threat model is that
Root
defends against
filesystem constructs that may be created by ordinary users (such
as symlinks), but does not handle ones that require root privileges
to create (such as bind mounts).
Windows
On Windows,
Root
opens a handle referencing its root directory.
The open handle prevents that directory from being renamed or deleted until the
Root
is closed.
Root
prevents access to reserved Windows device names such as
NUL
and
COM1
.
WASI
On WASI, the
os
package uses the WASI preview 1 filesystem API,
which is intended to provide traversal-resistant filesystem access.
Not all WASI implementations fully support filesystem sandboxing,
however, and
Root
’s defense against traversal is limited to that provided
by the WASI implementation.
GOOS=js
When GOOS=js, the
os
package uses the Node.js file system API.
This API does not include the openat family of functions,
and so
os.Root
is vulnerable to TOCTOU (time-of-check-time-of-use) races in symlink
validation on this platform.
When GOOS=js, a
Root
references a directory name rather than a file descriptor,
and does not track directories across renames.
Plan 9
Plan 9 does not have symlinks.
On Plan 9, a
Root
references a directory name and performs lexical sanitization of
filenames.
Performance
Root
operations on filenames containing many directory components can be much more expensive
than the equivalent non-
Root
operation. Resolving “..” components can also be expensive.
Programs that want to limit the cost of filesystem operations can use
filepath.Clean
to
remove “..” components from input filenames, and may want to limit the number of
directory components.
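A hedged illustration (untrustedName and root are placeholders for an input filename and an already-opened os.Root):

// Lexically resolve "." and ".." components up front, so Root has fewer
// components to resolve at open time.
name := filepath.Clean(untrustedName) // e.g. "a/b/../c" cleans to "a/c"
f, err := root.Open(name)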
Who should use os.Root?
You should use
os.Root
or
os.OpenInRoot
if:
you are opening a file in a directory; AND
the operation should not access a file outside that directory.
For example, an archive extractor writing files to an output directory should use
os.Root
, because the filenames are potentially untrusted and it would be incorrect
to write a file outside the output directory.
However, a command-line program that writes output to a user-specified location
should not use
os.Root
, because the filename is not untrusted and may
refer to anywhere on the filesystem.
As a good rule of thumb, code which calls
filepath.Join
to combine a fixed directory
and an externally-provided filename should probably use
os.Root
instead.
// This might open a file not located in baseDirectory.
f, err := os.Open(filepath.Join(baseDirectory, filename))
// This will only open files under baseDirectory.
f, err := os.OpenInRoot(baseDirectory, filename)
Future work
The
os.Root
API is new in Go 1.24.
We expect to make additions and refinements to it in future releases.
The current implementation prioritizes correctness and safety over performance.
Future versions will take advantage of platform-specific APIs, such as
Linux’s
openat2
, to improve performance where possible.
There are a number of filesystem operations which
Root
does not support yet, such as
creating symbolic links and renaming files. Where possible, we will add support for these
operations. A list of additional functions in progress is in
go.dev/issue/67002
.
Hello everyone! Today, before our eyes, a truly significant event for web development is taking place. Just a couple of days ago, Microsoft made a new project for the TypeScript language public. And it really matters!
👀 What's new?
First of all, in the new
version 7
, the TypeScript compiler under the hood is being ported from JavaScript to Go. In effect there will be a TypeScript 6 (JS) and a TypeScript 7 (Go). This was done mainly to solve scaling problems on very large projects, but also, of course, for speed.
Today, the new project already shows a 10x speedup in some tests, which makes it very promising. The main beneficiary, of course, is the code editor. If you don't have some top-end Intel Core i9-13900K, where none of this really matters, but a slower processor or a laptop, you've probably run into your project taking a very long time to start up in VS Code. A single file is fine, but a modern Next.js application with 100 pages and 1,000 files can leave a laptop churning like an old washing machine before everything is ready.
Version 7 minimizes all of that and, together with new versions of VS Code, will let you get your projects up and running faster.
✅ About Go
Of course, it's unusual to talk about Go in the context of web development, since it's usually associated with the backend, but now it's worth doing. Why was Go chosen and not another language, like Rust or something else?
Here is a short list of why Go was chosen:
"By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure...Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable."
"We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code."
These are direct quotes from
here
, if you want to know why Go, you can read the full text.
🔎 Problems with the new version
A new version is always good, but there are also pitfalls that you should know about so as not to make mistakes in your projects:
Modest code base
- The project has literally just been presented to the public, so in terms of parity with the established 5.8 and earlier 5.x releases, it's clearly not comparable yet. If you want to learn something new about version 7, or you hit some kind of bug, ChatGPT won't have an answer for you and you won't find anything on StackOverflow. You'll be a discoverer, and it's your duty to write up on the internet what you find. That's cool, of course, but for projects with large budgets it's a real risk, because developers can sit for hours just trying to get a type to infer. Then again, it's clearly an investment in the future, like Next.js was in 2021.
Go
- Yes, this can be considered both an advantage and a disadvantage. Just imagine how much code, how many modules and other tooling were built specifically around JavaScript, and now the language under the hood of the compiler is changing. The problems Go has get carried over into web development with it. Many web developers aren't even aware (I hope that's not true) of what kind of language it is, and its introduction inevitably brings inconsistencies and workarounds.
🖊️ Conclusion
From my point of view, this is genuinely new and cool, because the speed of my tools has always been of primary importance to me. I like that web development isn't focused entirely on JavaScript, which is already everywhere, but also looks to other languages that can bring something new. Sure, everything still revolves around JS and HTML because they are the standard, but until now, whenever we talked about variety, everyone pointed at the backend; now there's finally something to discuss on the client. So even with all the shortcomings of the new version of TypeScript, I think it's worth downloading, trying out, and contributing to. Imagine: history is being made before our eyes, and millions of developers will build on this, with a pile of new Go-related bugs and features and the usual inconsistencies along the way. But that's what makes it cool!
💬
What do you think about this? It will be interesting to read!
Zinc is my attempt at a low-level systems programming language prototype.
I found this parser generator, called
Owl
mirrored
here
, that generates parsers for visibly pushdown languages.
Visibly pushdown languages are those where recursion to other grammar productions must be guarded by tokens which can only be used for that purpose.
While somewhat limiting, this ends up meaning that all grammar productions that can "contain" other productions are wrapped in certain tokens and so the language is visually easier to parse.
And it runs in linear time.
mut i = 0
def n = 10
while 0 <= i < n {
def x = call rndi()
if i = 0 {
; verify that the fixed seed rng is working as expected
assert "rndi() = 13109595": x = 13109595
}
call puti(x)
call putc(10)
set i = i + 1
}
return i
if
and
while
statements don't need parentheses around their conditional.
Mutable variables and immutable bindings.
Semicolons used to comment out the rest of the line.
The language splits side-effects from non-side-effecting code. Or rather would once functions and subroutines are implemented. Subroutines can have side-effects, functions cannot. Expressions can only apply functions, not call subroutines. It's like Haskell but better.
No package manager to distract you.
Simple type system. Currently, a unityped language - everything is an int - except for the optional
assert
message.
No need to write functions. Top-level code in a
.zn
file is executed when it is included.
No package/file/module imports yet. I had some idea about compiling each file to a struct, but whether there was one of them or multiple of them was undecided. And there's always the issue of what to do about side-effects on module load.
No nil. No pointers or references either. I wondered how far I could get with this model, maybe using arrays or structs or arenas or static allocations or something.
While I dreamed of integrating it with some 2D graphics or tile engine and playing around with old-school game dev, I ran into a few limitations with its design.
Like how cool would it be to write a clone of
Raptor: Call of the Shadows
in a language I wrote?
I intended to write it as a single-pass compiler directly from the syntax tree and targeting C via TCC to enable it to run quickly and on any platform.
Unfortunately I also designed a language with top-level execution and nested functions - neither of which I could come up with a good compilation model for if I wanted to preserve the single-pass, no AST, no IR design of the compiler.
It's certainly possible there is a model, but my musings couldn't figure it out.
The reality, too, is that I've been a programmer in GC'd languages my entire career (if you ignore the 5 years I did C++ and Verilog), and that's where my mind resides, so writing a low-level, limited language was, well, limiting.
In March 2025 I received an email asking about PuTTY’s “logo” –
the icon used by the Windows executable. The sender wanted to
know where it had come from, and how it had evolved over
time.
Sometimes I receive an email asking a question, and by the time
I’ve finished answering it, I realise it’s worth turning into a
blog post. This time I knew it was going to be one of those
before I even started typing, because there’s a graphic design
angle to the question
and
a technical angle. So I had a
lot to say.
PuTTY’s icon designs date from the late 1990s and early 2000s.
They’ve never had a major
stylistic
redesign, but over
the years, the icons have had to be re-rendered under various
constraints, which made for a technical challenge as well.
Hand-drawn era
When I first started drawing PuTTY icons, I didn’t know I was
going to need lots of them at different sizes. So I didn’t start
off with any automation – I just drew each one by hand in the
MSVC icon editor.
PuTTY itself
Expanded to a visible size, here’s the earliest version of the
original PuTTY icon that I still have in my source
control. That means it was current in early 1999; my source
control doesn’t go all the way back to the very project start in
the summer of 1996, but if it did, I expect this
was
probably
the same icon I originally drew. I’d have
drawn one quite early, because Windows GUI programs look pretty
unhelpful without an icon.
The original PuTTY icon, in colour and in
monochrome
It’s 32 × 32 pixels, and uses a limited palette of primary
colours and shades of grey. That’s because I started the project
in the mid-1990s, when non-truecolour displays were very common.
There was a fixed 16-colour palette recommended for use in
icons, consisting of the obvious primary and secondary colours
and dim versions of them, plus a couple of spare shades of grey.
That way all the icons in the entire GUI could reuse the same 16
palette entries and not run the display out of colours. So this
icon is drawn using that palette.
Providing a plain black-and-white version was another standard
recommendation at the time. But I can’t remember why – I
certainly never actually
saw
a computer running Win95
or later with a B&W display! My best guess is that the
recommendation was intended for use with the “Win32s” extension
that would let 32-bit programs at least
try
to run on
Windows 3.1 (on suitable underlying hardware). Probably Windows
3 still supported black and white displays.
I don’t think I thought about the design very much. I just
dashed it off because I needed
some
kind of an icon.
The two computers are drawn in a style that was common in the
’90s: a system box sitting under a CRT monitor, with a floppy
disk drive visible at the right-hand end. They’re connected with
a lightning bolt to suggest electrical communication between
them. I didn’t
believe
I was being very original: I had
the vague idea that several icons of this kind already existed
(although I’ve never been able to find any specific example of
what I was thinking of), so in my head, it fit with the general
iconography of the day, and seemed as if it would be immediately
understandable.
I can’t remember why the lightning bolt was
yellow
.
With hindsight that seems the strangest thing about it; cyan
would have been a more obvious choice for electricity. Possibly
it was just to contrast more with the blue screens of the
computers.
Speaking of which … I can’t remember why the screens were blue,
either. I
think
I just intended it as a neutral-ish
sort of colour which wasn’t black, so that it was clear that the
screens were turned on. I’m pretty sure the blue
wasn’t
an allusion to the Windows Blue Screen of Death – that wouldn’t
make sense, because in a typical use of PuTTY, only
one
end of the connection is running Windows, and neither end has
crashed!
Looking back, I suppose I might also have considered making the
two computers look different from each other, on the theory that
the client machine might be sitting on someone’s desk, and the
server might be in a rack in a datacentre? But I was a student
when I started writing PuTTY, and
my
Linux server you
could remotely log into was on my desk and not in a rack. So I
think that just never even occurred to me.
PSCP (and PSFTP): file transfer
In 1999, a contributor sent an SCP implementation, and PSCP was
born. I decided to draw it an icon of its own.
It wasn’t especially important for PSCP to have an icon at all.
You don’t run it
from
a GUI (it’s useless without
command-line arguments), so it doesn’t need an icon for you to
find it in Explorer or identify it in the Start Menu. And it
doesn’t put up GUI windows, so it wouldn’t need an icon to
identify those in the taskbar either. But I drew an icon anyway.
I can’t remember why. Maybe it was just for completeness, or I
felt like doing a bit of drawing, or something.
I made the PSCP icon by modifying the existing PuTTY icon,
replacing one of the two computers with a piece of paper with
variable-length lines on it, suggesting a written document:
The original PSCP icon, in colour and in
monochrome
I kept the lightning bolt connecting the remaining computer to
that document, the theory being that you use PSCP to remotely
access documents. (In some fairly vague sense.)
Later on, we gained a second file transfer tool, PSFTP, and I
reused the same icon for that, on the basis that they’re both
file transfer tools and the icon doesn’t really say anything
specific to one of them. PSFTP is still a command-line
application, but unlike PSCP, it’s possible to run
it
from
the GUI (by clicking on it in Explorer or the
Start Menu), because it has an interactive prompt, so if you
don’t run it with any command-line arguments, you can tell it at
the prompt where to connect to. So PSFTP put the same icon to a
more practical use than PSCP did.
(On the other hand, giving two tools the same icon means you
have to check carefully which one you’re dragging around in
Explorer! Perhaps it would have been better to make the PSCP and
PSFTP icons different in some way. But I couldn’t easily think
of a visual depiction of the ways the two tools differ.)
Pageant: SSH agent
Pageant, our SSH agent, was the next tool in the suite that
needed an icon.
My original idea for an icon depicting an SSH agent was to draw
the face of a
secret
agent – in the stereotypical
outfit from the sillier end of spy fiction, with a wide-brimmed
hat and a turned-up collar.
But I’m terrible at drawing faces. After a few failed attempts,
I realised that Pageant would never get released
at all
if I waited until I’d drawn the icon I wanted. So instead I took
the secret agent’s hat – the only part which I
had
drawn to my satisfaction – and simply sat it on top of a larger
version of the computer from the original PuTTY icon, at a
rakish angle.
The original Pageant icon
I’d intended that as a placeholder, planning to come back and
have another try later at drawing my original idea of a human
secret agent. But then I realised the icon I’d already drawn was
simply
better!
It fits visually with the rest of the
suite by reusing the same computer design; it’s more appropriate
because the ‘agent’ in question
is
computerised rather
than human. And most importantly of all, a computer wearing a
hat turned out to be cute. So I kept it.
On the technical side: Pageant’s original icon
didn’t
come with a black and white version. I don’t remember why not;
perhaps, with Windows 3 a few more years in the past, it just
didn’t seem important enough any more. But on the other hand,
Pageant introduced a new requirement: I had to draw
a
small
version of the icon. Up to this point I’d just
drawn everything at the (then) standard icon size of 32 × 32;
but Pageant puts an icon in the System Tray on the taskbar, so
it needed a smaller 16 × 16 icon to fit there. If you just let
Windows scale down the 32 × 32 version, it didn’t do such a good
job of it.
Half-size version of the original Pageant icon
Drawing a small version of the same computer logo was a
challenge, especially while still using that 16-colour fixed
palette. You can see that I’ve done some tricks with the darker
palette entries, ‘hand-antialiasing’ the hatband, and replacing
the internal black outline at the bottom of the computer monitor
with a dark grey one to try to make it less prominent – as if
the original 1-pixel black outline had been scaled down to half
a pixel.
(But the
external
outline is still solid black, so
that it separates the entire icon clearly from whatever
background colour it’s shown on. The taskbar is grey, and so is
most of the icon, so that’s still important!)
PuTTYgen: key generator
After that came PuTTYgen. I drew another “computer + lightning
bolt + object” icon, this time with the other object being a key
– a weak visual pun, since PuTTYgen generates and works with
cryptographic keys.
The original PuTTYgen icon
The lightning bolt indicating network communication was
particularly inappropriate in this tool. It was already fairly
tenuous in PSCP, but PuTTYgen doesn’t do any network
communication
at all
, so the lightning bolt was
completely inaccurate. I really put it in there for visual
consistency with the rest of the icons: it’s there to signal
“this is a tool from the same suite as PuTTY”.
(I’m still not
very
happy with the drawing of the
key.)
This icon is the other way up from the PSCP one: here, I’ve
kept the computer in the lower left corner from the main PuTTY
icon, and replaced the top right one with a different weird
object, whereas in PSCP, I kept the top right computer and
replaced the
other
one with an object. I don’t remember
having any particular reason for that – perhaps I just didn’t
even bother to check which way round I’d drawn the previous
icon! But I’m not too unhappy with it, because it makes the two
icons more different from each other, so easier to tell
apart.
Configuration dialog box
The next icon we needed wasn’t for a separate tool at all: it
was for PuTTY’s configuration dialog box.
The taskbar entry for the config box had the boring “default”
icon, just representing a generic window frame, that Windows
uses for any window that doesn’t supply a better one. I’d
decided that looked ugly.
Rather than just reuse the main PuTTY icon unmodified, I drew
an icon to specifically represent configuration or modification,
by overlaying a large spanner on top:
The PuTTY configuration dialog icon, in colour and in
monochrome
For this icon I went back to including a monochrome version. I
can’t remember why. Perhaps it was just that I was modifying an
existing icon which
did
have a monochrome version!
Scripted era
Nothing else happened in this area until 2007.
That was about when people started to complain that the 32 × 32
pixel icons had started to look nasty. Displays had become
bigger, and Windows was defaulting to displaying 48 × 48 icons
instead of 32 × 32. So if your application only had a 32 × 32 icon,
Windows would stretch it.
It wouldn’t have looked great if a 32 × 32 icon had been
magnified by an exact factor of 2 to 64 × 64, copying each input
pixel the same number of times. But 48 isn’t even a multiple of
32, so instead Windows had to stretch my
icons
unevenly
, which looked even worse.
Rather than hand-draw another full set of icons, I decided to
get more ambitious. So I wrote a piece of code that drew all the
components of each icon image in a programmatic way, so that I
could run the script to auto-generate a full set of icons for
every tool. That eliminated the inconsistency about which icons
had tiny versions, and which ones had black and white versions:
now
all
the icons had
all
versions.
The hard part of that was the tiny 16 × 16 icons. I’d only
drawn one of those so far, for Pageant. Now I had to try to
capture the artistic judgment I’d used for the
“hand-antialiasing” in that icon, and try to make a piece of
software do the same thing for
all
the graphical
components of the icon suite.
I remember that being pretty hard work, but in the end I
managed to make something I was happy with. After
that,
every
icon had 48 × 48, 32 × 32 and 16 × 16
versions.
A full set of icons generated by the script
I stopped short of trying to make the program
recreate
every
decision I’d made in the original
drawing. I made quite a few of those decisions in a hurry
without much thought, after all. So I just got the script to a
point I was happy
enough
with. And I took the
opportunity to improve on a few things: for
example,
all
versions of the Pageant hatband are now
antialiased, not just the tiny version.
But there are other ways that the scripted icons
don’t
exactly
match the original hand-drawn versions.
The radio buttons below should let you flip back and forth
between two example icons in the original and script-generated
versions:
Use the radio buttons to switch between the original
PuTTY and Pageant icons and the 32 × 32 output from the
script
The actual PuTTY icon is a pretty close match – only a small
number of pixels changed. But the Pageant icon is quite
different: apart from the deliberate change of the antialiased
hatband, the computer in the icon isn’t even the
same
size!
I can’t remember if that was a pure accident
(maybe I didn’t flip back and forth between the two versions of
that icon while I was writing the script), or a deliberate style
change.
(Of course, if I’d
wanted
to change the size of the
computer, it would have been easier to do it using the script
than redrawing by hand. So it would have made sense for it to be
a deliberate change, now that the opportunity was available. But
I don’t remember that it
was
deliberate.)
As I said earlier, I made sure the script could generate a full
set of black-and-white icons too, just in case. In 2007 I think
it was even less likely that anyone would still be running
Windows (or anything else) on a B&W display, but one of the
nice things about having this stuff done by a script is that you
can get it to do a lot of hard work for you just in case. So now
I have a full set of black and white icons too, even at the new
size of 48 pixels:
All five icons, in black and white, at 48 pixels
pterm
In 2002 I had ported a lot of the PuTTY code to Linux. I wrote
a Linux version of the full GUI PuTTY application, and also
reused part of the code as an X11 terminal
emulator
pterm
which you could use in place of
existing ones like
xterm
.
Linux GUI applications can also have icons if they want to: an
X11 program can set a property on its window, containing a
bitmap to be used as its icon. Window managers vary
in
what
they use this for – they might minimise the
whole window to an icon in Windows 3.1 style, or display the
icon in a taskbar entry in the style of more modern Windows, or
display the icon somewhere in the application’s window frame, or
anything else they can think of.
I hadn’t done anything about this in 2002, partly because as
far as I knew the toolkit library I was using (GTK 1) didn’t
support setting an icon on the window, and also because it would
have been a pain to provide the same icons in multiple bitmap
file formats.
But now that I was generating the icons from a script, it was
suddenly much easier to make the script output multiple file
formats. Also I found out that GTK 1
did
let you set an
icon property after all. (In any case, I already knew GTK 2
could do that, and was planning an upgrade to that.) So, for
both reasons, it was now practical to provide icons for the X11
versions of the PuTTY tools.
So I had to draw an icon for
pterm
, which didn’t
already have one!
This time, I really did run out of imagination.
Since
pterm
doesn’t do any network communication, I
simply drew an icon consisting of
just one
copy of the
same computer logo seen in all the other icons, expanding it to
the full size of the icon image. (A little bigger than it was for
Pageant, since this time it didn’t have to fit under a hat.)
pterm
also has a configuration dialog box, so I
made a separate icon for that, reusing the same spanner overlay
from PuTTY proper.
The icon for
pterm
, and its
configuration dialog
Much
later, in 2021, it became possible to write a
version of
pterm
that would run on Windows, and act
as an alternative GUI container for an ordinary
Windows
cmd.exe
or Powershell session. This icon
became the one for Windows pterm as well, once that
happened.
Installer
The flexible icon-drawing script also let me support true
colour icons as an extra output mode, so I wasn’t limited to the
old 16-colour palette any more.
That came in handy when I needed an icon for the
PuTTY
installer
. I thought it would be nice to draw a
cardboard box as part of that icon, with the flaps open at the
top and a lightning bolt coming out of it pointing at a
computer, suggesting “PuTTY is zapping out of this box on to
your machine”. Cardboard boxes are brown, and brown wasn’t in
the 16-colour icon palette – but now we weren’t limited to that
palette any more, so I could draw a true-colour icon
using
actual
brown!
But I still wanted to support the 16-colour icons, because I
never like to throw away backwards compatibility if I can avoid
it. And I noticed that you could make
several
acceptable shades of brown by taking two different colours in
the old Windows 16-colour icon palette, and mixing them 50:50.
All my mixes involved the dark red (
#800000
); I got three
different browns, or brown-enough colours, by mixing it with
black (
#000000
), dark yellow
(
#808000
), and full yellow
(
#ffff00
).
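(A 50:50 mix here is just a per-channel average: dark red #800000 and full yellow #ffff00, for instance, average out to roughly #bf7f00, a muddy ochre-brown.)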
So I made the cardboard box out of those three 50-50 mixes,
using them all to show that different parts of the box were at
different angles to the light. That way, in the true-colour icon
I could use a single colour that was the mean of the two old
colours, and in the 16-colour version I halftoned it by
alternating pixels of the
actual
two old colours.
The PuTTY installer icon, in true-colour and
halftoned versions
SVG revamp
My icon-drawing script works well at the ordinary 32 × 32 and
48 × 48 sizes, and scales
down
to 16 × 16 reasonably
well, but it doesn’t work so well when you start
scaling
up
.
So, if anyone needs
really big
versions of the icons,
generating them by just running the same script with a bigger
image size won’t produce very nice output. In one of my failed
attempts to port PuTTY to the Mac, I found that MacOS wanted a
128 × 128 icon to use in the dock, and the output of my script
at that size was definitely not great.
I’ve more or less given up on that Mac port now, but I thought
that a fully scalable version of all the icons might still be a
useful thing to have around. For one thing, it allows using them
in contexts other than GUI icons. (Gratuitous merch, anyone?
T-shirts? Laptop decals? 🙂)
So, partly “just in case” but also for fun, I’ve been working
on a second icon-drawing script. It mimics the same “thought
processes” of the original script, but it outputs SVG files
instead of bitmaps – keeping the artwork still as close as
possible to the original style.
As of now, these are my best efforts at the SVG versions:
All eight icons, in fully scalable SVG
For the tiny icons, I still think my original script is doing a
better rendering job, because SVG has no idea at all how best to
do that antialiasing. But for
large
icons, especially
in true colour, I think my recommendation is going to be to base
them on these SVGs – just render to a big bitmap.
That’s all, folks!
That’s the history and the current state of the PuTTY icon set.
People sometimes object to the entire 1990s styling, and
volunteer to design us a complete set of replacements in a
different style. We’ve never liked any of them enough to adopt
them. I think that’s probably
because
the 1990s styling
is part of what makes PuTTY what it is – “reassuringly
old-fashioned”. I don’t know if there’s
any
major
redesign that we’d really be on board with.
So I expect by now we’re likely to keep these icons, or
something basically the same, for the whole lifetime of the
project!
CISA: Medusa ransomware hit over 300 critical infrastructure orgs
Bleeping Computer
www.bleepingcomputer.com
2025-03-12 20:26:29
CISA says the Medusa ransomware operation has impacted over 300 organizations in critical infrastructure sectors in the United States until last month. [...]...
CISA says the Medusa ransomware operation has impacted over 300 organizations in critical infrastructure sectors in the United States as of last month.
This was revealed in a
joint advisory
issued today in coordination with the Federal Bureau of Investigation (FBI) and the Multi-State Information Sharing and Analysis Center (MS-ISAC).
"As of February 2025, Medusa developers and affiliates have impacted over 300 victims from a variety of critical infrastructure sectors with affected industries including medical, education, legal, insurance, technology, and manufacturing," CISA, the FBI, and MS-ISAC warned on Wednesday.
"FBI, CISA, and MS-ISAC encourage organizations to implement the recommendations in the Mitigations section of this advisory to reduce the likelihood and impact of Medusa ransomware incidents."
Medusa ransomware surfaced almost four years ago, in January 2021, but the gang's activity only picked up
two years later
, in 2023, when it launched the Medusa Blog leak site to pressure victims into paying ransoms using stolen data as leverage.
Since it emerged, the gang has claimed over 400 victims worldwide and gained media attention in March 2023 after claiming responsibility for an
attack on the Minneapolis Public Schools (MPS) district
and sharing a video of the stolen data.
The group also leaked files allegedly stolen from Toyota Financial Services, a subsidiary of Toyota Motor Corporation, on its dark extortion portal in November 2023 after the company refused to pay an $8 million ransom demand and
notified customers of a data breach
.
Medusa was first introduced as a closed ransomware variant, where a single group of threat actors handled all development and operations. Although Medusa has since evolved into a Ransomware-as-a-service (RaaS) operation and adopted an affiliate model, its developers continue to oversee essential operations, including ransom negotiations.
As the advisory explains, to defend against Medusa ransomware attacks, defenders are advised to take the following measures:
Mitigate known security vulnerabilities to ensure operating systems, software, and firmware are patched within a reasonable timeframe.
Segment networks to limit lateral movement between infected devices and other devices within the organization.
Filter network traffic by blocking access from unknown or untrusted origins to remote services on internal systems.
It's also important to note that multiple malware families and cybercrime operations call themselves Medusa, including a
Mirai-based botnet
with ransomware capabilities and an
Android malware-as-a-service
(MaaS) operation discovered in 2020 (also known as TangleBot).
Due to this commonly used name, there's also been some confusing reporting about Medusa ransomware, with many thinking it's the same as the widely known
MedusaLocker ransomware operation
, although they're entirely different operations.
Last month, CISA and the FBI
issued another joint alert
warning that victims from multiple industry sectors across over 70 countries, including critical infrastructure, have been breached in Ghost ransomware attacks.
Until recently the Scheme programming language in general, and Guile in particular, had no facilities for coding front-end web applications. Guile Hoot (with Fibers) addresses this gap.
Guile Hoot
is an implementation of Guile running on the browser via WebAssembly compilation;
Fibers
is a library that provides concurrency for Guile Scheme in the tradition of Concurrent ML. Luckily for us,
Guile Hoot
comes with
Fibers
out of the box, so coding clean, event-driven browser applications using Guile is now possible.
Strictly speaking, we implement an event-driven architecture. See
HtdP, Section 2.5
for an elementary discussion of event-driven interactive programs.
Using tic-tac-toe as an example (click
here
for a live demo), we show how to implement a functional web programming pattern, the FLUX architecture, with
Guile Hoot
and
Fibers
. Among other things, we demonstrate atomic state changes, time-travel, concurrent programming as a way to separate concerns, and a simple but convenient approach for rendering HTML.
I couldn’t find much introductory material for the concurrency model implemented in
Fibers
, so I include towards the end of this post a section that reviews basic
Fibers
concepts, followed by a detailed discussion of the asynchronous queue that we use for event processing.
For now, all we need to know about this asynchronous queue is that event messages can be enqueued onto this queue by calling
(put-message enq-ch evt)
and dequeued with
(get-message deq-ch)
. Note that dequeueing an event will block if the queue is empty; enqueueing events never blocks.
Our Tic-Tac-Toe
Below is a picture of a game in progress:
For our simple game, we display a header, a status line that tells us what’s happening in the game (more on this below), a board with the moves by the players, and a time-travel stack of buttons. In the particular game being displayed, the players have executed 6 moves; we are looking at the state of the board after the 3rd move, having clicked on the time-travel button corresponding to MOVE: 3.
At this point, a player can click on another time-travel button to go backwards or forwards in time, or click a square on the board, which will result in a new move if the square is empty, with the corresponding adjustment to the time-travel buttons.
The FLUX Architecture
This
article
gives a good overview of the FLUX architecture in the context of Facebook and React. We use the idea of unidirectional flow and stay away from the React framework and its ecosystem.
The FLUX architecture was championed by Facebook as the basic software pattern for coding client web applications, which led to ReactJS and its ecosystem. The basic idea is quite simple: implement a unidirectional data flow as opposed to the traditional MVC approach:
Browser events–wherever they are captured–are converted into a data structure that encodes the information about the event into an
event message
. This message is then put on a queue. An event loop listens for messages on this queue, dispatches the message to a handler and then makes changes to state as needed. The last step is to trigger the re-rendering of the application, reflecting the changes made to state.
State changes take place as response to events in the context of the event loop exclusively
. A new event will start the cycle all over again.
Here are these ideas in code; assume that event messages are encoded and put into an event queue:
(lambda (resolved rejected)
(call-with-async-result
resolved rejected
(lambda ()
;; set up asynchronous channels
(let-values (((enq-ch deq-ch stop?) (inbox)))
;; instantiate the application
(let ((state (make-atomic-box (make-initial-state)))
(app (tic-tac-toe enq-ch deq-ch stop?)))
;; send start message
(put-message enq-ch (make-event 'start #f))
;; event loop
(let forever ((evt (get-message deq-ch)))
(begin
;; modify global state per event
(handle-events state evt)
;; render *after* new state
(render (app (atomic-box-ref state)))
(forever (get-message deq-ch)))))))))
See the
Guile Hoot Fibers documentation
for a discussion of the required boilerplate code.
Ignoring the scaffolding and zooming in on the
thunk
, we see that an application of the function
inbox
results in the creation of two
Fiber
channels,
enq-ch
and
deq-ch
, as well as
stop?
, a condition variable that can be used by
Fibers
. These channels are used to provide the queue used for handling events.
Technically, we only need to pass
enq-ch
to
tic-tac-toe
.
Next, we invoke the function
tic-tac-toe
with these
Fibers
variables as parameters to instantiate
app
, a dynamic templating function that will return pseudo-HTML as a function of the state value passed to it.
More precisely,
app
takes state as input and generates SXML, Guile’s native representation of XML that is very handy for HTML.
The pseudo-HTML will be converted into actual HTML and injected into the DOM during rendering. Thus,
app
is the dynamic component or application that is updated and rerendered in response to events.
Now we are ready to get the ball rolling: first, the
start
event is put on the event
queue
, and then we start an event
loop
that: a) receives the next event from the queue, b) invokes an event dispatcher to modify state based on payload, c) renders the dynamic template using the new state value, and d) returns to listening to the queue for new events. Note that putting the
start event
on the queue makes possible the first render of the application: otherwise, the event loop would keep on blocking waiting for an event. At the start of the game, the loop is waiting for events, which in our case are mouse clicks on the squares of the tic-tac-toe board.
We see immediately that this event loop corresponds
exactly
to the diagram of the FLUX architecture sketched above.
Dynamic Templating
After the
start event
, all the other events processed correspond to mouse clicks, either on squares on the board or on the time-travel buttons. The handlers for these events are coded into the dynamic template we use for specifying the look, feel, and actions of our web application. We discuss in the next section what exactly these handlers do. But first, let’s review how dynamic templating works in Guile.
As mentioned already, we rely on Dave Thompson’s work for HTML rendering, explained in his excellent blog post
Building Interactive Web Pages with Guile Hoot
, which I recommend the reader study in detail. For our purposes, his post can be conceptually split into two parts: templating and rendering.
For templating, the main idea is to use quasi-quoting, a standard Guile feature, to describe the HTML of components
and their actions
(e.g.,
click
) in Guile code. The main advantage of this approach is that we have the full power of Guile for specifying dynamic components, as opposed to the limited DSLs used by templating engines.
Static components are simple enough: for example, below the code for the title at the top of the page:
(define header
`(h1 "Tic-Tac-Toe"))
This looks very much like the corresponding HTML code–sans some of the angle brackets.
Dynamic components are a little more complicated but still straightforward for a Guile programmer. Consider the component that shows the current status of the game, which tells players whether the game has been won, tied, or which player gets to play next:
(define (status state)
(let* ((board (get-current-board state))
(winner (win-game board))
;; tie if every square has a token and winner is #f
(tie (every identity (vector->list board)))
(msg (cond
(winner (string-append "Winner: " winner))
(tie "Game tied!")
(else (string-append "Next Player: "
(token state))))))
`(div (@ (class "status")) ,msg)))
This function takes the current state as input, retrieves the value of the current board, and computes whether there is a winner or a tied game. It then uses one of these values to construct
msg
, a string with the corresponding value, unless neither hold, in which case
msg
indicates which player goes next. Simple stuff, normal Guile.
Browser Events
We saw above that the event loop takes coded events from a queue for processing. In this section we show how those events are placed in the queue.
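In outline, the board component looks something like the sketch below. It follows the convention from Dave's vdom code, where a procedure stored under a click attribute becomes that element's event listener; the exact names and classes here are illustrative, and the working code is in the repo.
(define (square enq-ch board i)
  ;; one square: clicking it encodes a 'move event carrying the square's index
  `(div (@ (class "square")
           (click ,(lambda (e)
                     (put-message enq-ch (make-event 'move i)))))
        ;; player token or blank space (assumed signature for display-square)
        ,(display-square board i)))

(define (board state enq-ch)
  ;; nine squares laid out inside one div
  (let ((b (get-current-board state)))
    `(div (@ (class "board"))
          ,@(map (lambda (i) (square enq-ch b i)) (iota 9)))))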
Reading from bottom to top, the board is a
div
that holds nine squares, each of which is a
div
with a
click
handler to register a player’s move.
Any
browser event could be thus translated into our event record type using an appropriate event type encoding. Our setup doesn’t require us to get data from the mouse click event (e.g., x- and y- positions) but that’s easy enough to do–look at Dave Thompson’s piece for guidance.
This is where the magic happens: the handler converts a
browser click
into an
event
, a record type that holds the type (
move
in this case) and a payload–the data associated with the event–the square’s index in our case.
This transmutation happens
inside
the call to
put-message
as it puts the event on the event queue, which makes the conversion thread-safe. Put another way, the handler converts a physical action into a
datum
.
Finally, the
square
function uses
display-square
to display a representation of the square–a player token or a blank space.
The sharp reader will have noticed that the event loop takes events from the
deq-ch
channel with
get-message
whereas the click handler above puts them into the
enq-ch
channel with
put-message
.
An
event
is a record-type consisting of two slots: a
type
, and a
payload
. Our game recognises three types of event:
move
events with a payload corresponding to the index of the square on the board being clicked,
travel
events for which the payload is similarly the index of the time-travel button clicked, and a
start
event with payload value of
#f
, which is ignored. The start event is generated right before the event loop is instantiated, and it is processed exactly once.
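Concretely, the record type can be defined with SRFI 9 along these lines (the constructor make-event and the <event> match patterns appear in the code in this post; the accessor names are illustrative):
(define-record-type <event>
  (make-event type payload)   ; e.g. (make-event 'move 4)
  event?
  (type event-type)
  (payload event-payload))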
Just like before, we code time-travel events via a handler:
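Sketched out, it can look like this (travel-button and travel-buttons are illustrative names; the time-travel state-change function defined later is a different thing):
(define (travel-button enq-ch m)
  ;; clicking the button encodes a 'travel event carrying the move index
  `(button (@ (class "travel")
              (click ,(lambda (e)
                        (put-message enq-ch (make-event 'travel m)))))
           ,(string-append "MOVE: " (number->string m))))

(define (travel-buttons state enq-ch)
  ;; one button per played board, including the initial empty board
  `(div (@ (class "time-travel"))
        ,@(map (lambda (m) (travel-button enq-ch m))
               (iota (get-total-boards state)))))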
As was the case earlier for
move
events, the
click
event handler will put a time-travel event on a queue using
put-message
for handling by the event loop. Displaying the time-travel buttons works much as before: we build a label string for each time-travel button.
Handling Events
Going back to the heart of our event loop, events are sent to a handler when they are pulled off the queue.
These functions change state as a side-effect.
This handler is pretty simple–all it does is invoke state-change functions corresponding to the specific event:
(define (handle-events state evt)
(match evt
(($ <event> 'move payload)
(atomic-box-update! state make-move payload))
(($ <event> 'travel payload)
(atomic-box-update! state time-travel payload))
(($ <event> 'start payload) #f)
(_ #f)))
Clojure has a function, swap!, for updating atoms with the same signature.
Here
is a link to a version of
atomic-box-update!
proposed by Andrew Tropin for inclusion in the
atomic-box
Guile module in the
Guile
discussion list.
The key function here is
atomic-box-update!
, our extension to the standard
ice-9 atomic
module, that takes as inputs the atomic box holding the current state, a state-change function, and rest arguments:
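A sketch of such a function, written as a compare-and-swap retry loop on top of the procedures exported by (ice-9 atomic) (in a single-threaded Hoot build a plain ref/set! pair would do just as well):
(define (atomic-box-update! box proc . args)
  ;; apply PROC (and ARGS) to the current value of BOX and store the result,
  ;; retrying if another thread changed BOX in the meantime
  (let loop ((old (atomic-box-ref box)))
    (let ((new (apply proc old args)))
      (if (eq? old (atomic-box-compare-and-swap! box old new))
          new
          (loop (atomic-box-ref box))))))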
atomic-box-update!
takes the value of state held in
box
, an atomic box, applies
proc
and
args
to state, and then puts it back in the atomic box, changing state
in place
. The use of an atomic-box is not necessary for our example because WebAssembly (and therefore Hoot) does not support multithreading presently. Nevertheless, we show how to use atomic boxes because the same pattern could be used server-side where Guile (i.e.,
Fibers
) supports true multithreading. It also makes our code future-proof should WebAssembly support multicore execution some time later.
The two functions used above for state change are
make-move
and
time-travel
. Let’s examine them:
Changing State
State for our application consists of a record-type with three slots:
history
, a vector with 10 entries for boards (the starting empty board plus one board for each move);
move
, a slot that holds the index into
history
to the current board; and
total-boards
, the number of
played
boards in our history, including the initial empty board. Note that with time-travel
move
can hold any value from 0 to
total-boards
- 1.
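In code, the state record and the initial state can be sketched like this (the accessor names match the ones used below; the exact definitions in the repo may differ):
(define-record-type <state>
  (make-state history move total-boards)
  state?
  (history get-history)
  (move get-move)
  (total-boards get-total-boards))

(define (make-initial-state)
  ;; slot 0 of the history holds the empty 3 x 3 board
  (let ((history (make-vector 10 #f)))
    (vector-set! history 0 (make-vector 9 #f))
    (make-state history 0 1)))

(define (get-current-board state)
  (vector-ref (get-history state) (get-move state)))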
State is changed by the application of either of two functions to current state and event payload, with the first one being
make-move
:
(define (make-move state i)
(let* ((history (get-history state))
(move (get-move state))
(board (vector-copy (vector-ref history move))))
(if (or (win-game board) ;; the game is over
(vector-ref board i)) ;; there's a token in square
state
(let* ((new-history (make-vector 10 #f))
(next-move (+ move 1))
;; account for initial board in total-boards
(total-boards (+ next-move 1)))
;; clear entries in history past current move
(vector-copy! new-history 0 history 0 next-move)
;; make the move in copy of current board
(vector-set! board i (token state))
;; add new board to history
(vector-set! new-history next-move board)
(make-state new-history next-move total-boards)))))
The lack of an efficient persistent vector data structure makes this code a bit clunky. We either rely on mutation, or we have to copy the entire vector.
make-move
first checks to see if the proposed move is
not
possible, because either there is already a winner, or the square is already occupied. If so, then it returns the current value of state unchanged.
If it
is
possible to make a move onto square
i
, it computes the components for updating state: it copies the current
history
and adds a new board to it corresponding to the new move, increments the values of
move
and
total-boards
both, and then returns the updated value of state.
We change state to reflect time-travel clicks with
time-travel
:
(define (time-travel state move)
(let ((history (vector-copy (get-history state)))
(total-boards (get-total-boards state)))
(make-state history move total-boards)))
time-travel
is quite simple: it takes the current state and returns a new value corresponding to the old history and
total-boards
, as well as the (new) value passed in
move
.
Back to Templating
We mentioned above how Guile’s quasi-quoting can be used to create a kind of dynamic HTML. Below is the entire template for the web page–it should look familiar to anyone who has used React or Vue, or even plain HTML:
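In outline it is a single function built from the components we have seen (a sketch; the repo's version defines the components above it, with tic-tac-toe at the bottom):
(define (tic-tac-toe enq-ch deq-ch stop?)
  (lambda (state)                        ; this is app: state -> SXML
    `(div (@ (class "app"))
          ,header
          ,(status state)
          (div (@ (class "container"))
               ,(board state enq-ch)
               ,(travel-buttons state enq-ch)))))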
One more time, reading from the bottom,
tic-tac-toe
returns
app
, a function that takes
state
as its input and builds an HTML-like version of the web application. As we can see, it has a
header
and
status
components, followed by a
container
, which holds the
board
and the
time-travel
component.
Dave’s code is found in the
vdom.scm
file in the repository.
How do we insert this into the appropriate DOM element to render the page? This is the job of the
rendering
code written by Dave Thompson (with a minor modification to take the
application
of
app
with the global state as its parameter). We refer the reader to his
blog article
.
Testing
We can run our tests at the terminal with
guile -L . test.scm
One of the advantages of encoding a physical event like a mouse click into an event message is that we can test the state changing components of the code base using synthetic events. The Guile file
test.scm
shows how this can be accomplished.
We first define a couple of helper functions to convert data into events and then process them:
Our events are simple lists of the form
(type payload)
.
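The two helpers can be as simple as the sketch below (event-seq and process-events are the names used in the tests; their exact bodies are illustrative):
(define (event-seq evts)
  ;; ((type payload) ...) -> list of <event> records
  (map (lambda (e) (make-event (car e) (cadr e))) evts))

(define (process-events state evts)
  ;; run each synthetic event through the same dispatcher the event loop uses
  (for-each (lambda (evt) (handle-events state evt)) evts))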
The first function takes a list of synthetic events and translates them into event record-types; the second one processes the newly created events, potentially modifying
state
with each. As mentioned above, the functions
make-move
and
time-travel
have side-effects. In fact, their sole purpose is to change the value of state.
We are now ready to write some tests:
(let ((state (make-atomic-box (make-initial-state)))
(move-events '((move 0) (move 4) (move 3) (move 7) (move 6))))
;; handle a sequence of move events
(process-events state (event-seq move-events))
;; the current board should reflect the events
(test-equal (vector X #f #f X O #f X O #f)
(get-current-board (atomic-box-ref state)))
;; the move slot should be 5
(test-equal 5 (get-move (atomic-box-ref state)))
;; the total number of boards should be 6
(test-equal 6 (get-total-boards (atomic-box-ref state)))
;; process a travel event
(process-events state (event-seq '((travel 4))))
;; the current board should reflect the travel event
(test-equal (vector X #f #f X O #f #f O #f)
(get-current-board (atomic-box-ref state)))
;; the move slot should be 4
(test-equal 4 (get-move (atomic-box-ref state)))
;; the total number of boards should be 5
(test-equal 5 (get-total-boards (atomic-box-ref state)))
)
The
let
bindings at the top allow us to define a sequence of synthetic events conveniently in the form of a list of lists, each with an event type and a payload. Starting from an initial state where the board is blank, we process the events to get to an intermediate state of play.
Our first test checks to make sure that the current board reflects the sequence of events. We also check the move slot to make sure it corresponds to the 5 moves made and that the total number of boards is 6. Next, we process a time-travel event and check that the value of the current board matches our expectations and that the move slot has a value of 4, with the total number of boards being 5. No need to rely on the browser for checking the state change logic.
Using Fibers
The first section of the documentation provides a concise and exceedingly clear history of concurrent programming that motivates the approach. It warrants multiple readings.
I wrote this section because I was struggling with how to use Guile
Fibers
. While the documentation is truly excellent, I didn’t have the background needed to understand some of the key ideas (channels, operations) and I couldn’t find any tutorials online.
This asynchronous queue is described and implemented using concurrent ML in John Reppy’s
Concurrent ML: Design, Application, and Semantics
. I recommend that the reader tackle Reppy’s paper first, as it is an excellent introduction to the topic of concurrent programming.
This is my attempt to clarify my grasp of the concurrent programming model in
Fibers
through a detailed examination of a concrete example. The first sections present the building blocks for
Fibers
, which are then used in the latter sections to implement, piece by piece, an asynchronous queue. My hope is that readers will feel comfortable with the
Fibers
documentation after reading this primer.
Fibers
Concurrency in Guile is provided via
fibers
, which are lightweight threads. Conceptually, we can think of fibers as closures that are managed by a scheduler. These fibers can execute asynchronously, independently of one another.
In the case of regular Guile, fibers can use as many cores as specified. Guile
Hoot
does not support multithreading in the browser, so the concurrency abstraction is implemented with a task processing loop. There is no parallelism gain by using
Fibers
in
Hoot
. Nevertheless, we can still use this library for implementing event-driven programs running on the browser effectively.
A new fiber is created with
(spawn-fiber thunk)
, which will execute the thunk passed as its sole argument running in its own computational context. But we need to be executing in a context for which there is a scheduler (in the case of regular
Guile
), or inside a promise, for
Hoot
.
For
Guile
, the reader should look up
run-fibers
in the API documentation.
Hoot
requires that the code be run inside a
promise
, i.e., a function with two parameters,
resolved
and
rejected
. The
Hoot Documentation
provides the example below:
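It is essentially the same promise-shaped wrapper we used at the top of this post; in outline (a sketch, not the documentation's verbatim snippet):
(lambda (resolved rejected)
  (call-with-async-result
   resolved rejected
   (lambda ()
     ;; everything that uses Fibers runs inside this thunk
     (spawn-fiber
      (lambda ()
        (display "hello from a fiber\n"))))))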
Recall that we use this very same structure in our application, and the queue we use executes asynchronously
precisely because
it runs inside such a context.
Channels
Concurrent programming demands communication and synchronization between threads, which in the case of
Fibers
is achieved through
channels
. This abstraction was introduced by Tony Hoare in his
Communicating Sequential Processes
paper. Channels are
blocking
affordances for communication between threads (for us, fibers). A channel will block on a sender until there is a receiver. The converse also holds: a channel will block on a receiver until a message is sent.
John Reppy puts it well: “When a thread wants to communicate on a channel, it must ‘rendezvous’ with another thread that wants to do complementary communication on the same channel”. This is a really nice way to say that
a channel is not a queue holding messages
: channels never own or hold messages. Instead, they provide a mechanism by which a sending thread can coordinate to give a message to another thread that has signaled its willingness to receive the message.
In terms of the API, we can create a channel with
make-channel
. As mentioned above, we can send and receive data using channels, as we will see in the next section.
Operations
We need one more concept: a way to reason about asynchronous events, for which Fibers introduces the concept of an
operation
. An
operation
can be performed once, many times, or not at all. Put another way, a
Fiber
operation is a way to
express a computing possibility
. Operations are first-class values, so starting with the basic operations provided by
Fibers
, we can combine and extend them to create new operations that can be used to model complex behaviour.
To make all of this more concrete, let’s consider the basic operations on channels provided natively by
Fibers
:
(get-operation ch)
expresses a
willingness
to receive a message from channel
ch
. Conversely,
(put-operation ch msg)
captures the desire to put the message
msg
on channel
ch
.
By themselves, these operations will do nothing and will coordinate nothing. We need to invoke the
Fibers
function
(perform-operation op)
for something to happen. If something does happen, we say that the operation has
succeeded
. Note that
(perform-operation op)
returns the values due to performing
op
and will block until it can complete.
Thus, if we actually want to wait on channel
ch
to receive data, we need to use the function
perform-operation
on
(get-operation ch)
. Because this is a common activity, Fibers provides
(get-message ch)
, which is equivalent to
(perform-operation (get-operation ch))
. Similarly,
(put-message ch msg)
is shorthand for
(perform-operation (put-operation ch msg))
. Two fibers can interact and exchange information on a channel by having one of them invoking
put-message
while the other executes a corresponding
get-message
. As mentioned before, one of the threads will block until the other makes it possible for synchronization to occur.
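A toy example of such a rendezvous, assuming we are already running inside a Fibers scheduler (or a Hoot promise as above):
(let ((ch (make-channel)))
  (spawn-fiber
   (lambda ()
     (put-message ch 'ping)))     ; blocks until someone is ready to receive
  (display (get-message ch)))     ; the rendezvous happens here; prints ping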
Asynchronous Queue
Back to our queue: we want to express our
willingness
to either put data on the queue if it is empty or, if the queue holds some event messages already, to carry out one of two actions: put one more event message on the queue, or to take the message off the head of the queue for processing the event (in the event loop). We are
modelling all possibilities
for operations on a queue, regardless of whether it is empty or not.
Let’s say that we can encode this complex desire as a brand-new operation that we will call
queue-op
. We will then invoke
perform-operation
on
queue-op
so that, nondeterministically, one of the valid actions on the queue can take place through the
success
of one of the constitutive operations. We make no claims about which action takes place nor do we care about the order–this is the power of the abstraction that lets us reason about asynchronous events without any reference to timing or sequence.
In the web application
example
mentioned above, we rely on a
put-message
for the
start-event
. Were this
put-message
not executed, the event loop would block and nothing would happen.
We might be tempted to think we could use
get-message
and
put-message
operating on a channel to encode the logic described above. However, these functions are
blocking
and we can foresee a scenario where
get-message
gets ahead of
put-message
and we end up deadlocked. More importantly, recall that
channels are not queues
.
So how do we build an asynchronous queue for our events? The answer is that we need to rely on
two
channels and a simple data queue. These pieces come together by using
Fibers
operations
to express the complex operation
queue-op
described above.
Modelling the Queue
A first stab:
(if (q-empty? q)
;; then clause -- the queue is empty
(put a message from a receiving channel
on the queue)
;; else clause -- the queue has stuff in it
(or (put a message from a receiving channel
on the queue)
(take message from head of q and put it
on broadcasting channel))
)
We know how to express the desire to get a message from channel
ch
:
(get-operation ch)
. But we want to take a message
msg
from a channel
ch
and
put it on the queue.
(wrap-operation op fn)
is a primitive provided by
Fibers
that allows us to create a complex operation that takes the value returned by the succeeding operation
op
and then
applies
fn
to it. We can express the idea of taking a message
msg
from channel
enq-ch
and then enqueueing it on
back-queue
with
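a wrap-operation along these lines (a sketch: enq!, deq! and q-front are (ice-9 q)-style queue procedures here, and deq-op, the mirror image used in the next snippet, should only be built when the queue is non-empty):
(define enq-op
  (wrap-operation (get-operation enq-ch)
                  (lambda (msg) (enq! back-queue msg))))

(define deq-op
  ;; offer the head of the data queue on deq-ch; once taken, remove it
  (wrap-operation (put-operation deq-ch (q-front back-queue))
                  (lambda _ (deq! back-queue))))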
Back to our first stab: expressing the intent of the
then
clause is straightforward–we just use
enq-op
. But how do we express the idea of the
else
clause? For this we use the primitive function
(choice-operation op1 ...)
, which combines the
op
s provided in such a way that only one of them succeeds, nondeterministically. Accordingly, expressing the idea that a queue that is not empty can either dequeue or enqueue values is given by:
(choice-operation enq-op deq-op)
Now we can take this complex operation and use
perform-operation
to act on our queue:
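Putting the two cases together gives something like this sketch (queue-op is the name used for it in the rest of this post):
(define queue-op
  (if (q-empty? back-queue)
      enq-op
      (choice-operation enq-op deq-op)))

(perform-operation queue-op)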
This queue-op models the operations on a queue, i.e., it abstracts the notion of a queue in terms of the possible operations that can take place on it without making any assumptions about their possible sequence. It just says: “An empty queue can receive a value; a non-empty queue can receive a value or yield one”. This operation expresses our willingness to execute, nondeterministically,
some
operation on a queue.
Modelling the Queue III
The last piece: our queue needs to be ready to operate on values either as they become
available
(enqueue) or
needed
(dequeue) over time:
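In sketch form, that is an infinite loop that rebuilds the operation on every pass (the dequeue side in particular has to be rebuilt, because put-operation captures the queue's head at the moment the operation is created):
(let loop ()
  (perform-operation
   (if (q-empty? back-queue)
       enq-op
       (choice-operation
        enq-op
        (wrap-operation (put-operation deq-ch (q-front back-queue))
                        (lambda _ (deq! back-queue))))))
  (loop))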
Our complex operation will block if there is no corresponding operation to any of its basic constituent operations. Recall that channel communication takes place only when we have reciprocal operations.
Our infinite loop waits to enqueue a value if the queue is empty or, if the queue has at least one value, either to enqueue or dequeue a value. But we should provide a clean exit from the loop so that resources can be reclaimed when we are done with our application.
We can express the intention to exit the loop via a
stop-op
operation. We use the built-in primitive operation,
(wait-operation cvar)
, which waits until the
Fibers
condition variable
cvar
is
signalled
with the function
(signal-condition! cvar)
. We use
wrap-operation
again to set a boolean variable that makes it possible to exit the loop whenever
wait-operation
succeeds:
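A sketch of that operation, where done? is a local boolean the loop checks before iterating again:
(define stop-op
  (wrap-operation (wait-operation stop?)
                  (lambda _ (set! done? #t))))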
But now we have to modify our complex operation
queue-op
, since we need to entertain the possibility that
stop?
has been signalled under either scenario, so we include
stop-op
in both branches. This gives rise to
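a final version of queue-op along these lines (again a sketch):
(define queue-op
  (if (q-empty? back-queue)
      (choice-operation enq-op stop-op)
      (choice-operation enq-op deq-op stop-op)))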
We now have a way to model activity on a queue with a termination condition that can be triggered by any function that has access to the
Fibers
condition variable
stop?
.
This is a good place to revisit the
two
infinite loops we have seen so far: the
event loop
listens on
deq-ch
, for which
deq-op
provides the complementary sending operation. In other words, when a value is dequeued, it is then processed by our event loop.
The other loop, the queue message loop, is also always listening for messages to be enqueued via the
enq-ch
channel. But this is the same channel that browser events will use to put messages on the queue with something like
(put-message enq-ch evt-data)
where
evt-data
is the encoding of the browser event, which might include event type as well as any of the event data that will be needed for processing.
Whenever we use channels, it is always important to understand the interaction between corresponding sending and receiving.
The Queue: The Whole Story
Now we are ready to look at all the code for the implementation of the asynchronous queue:
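The working version is in the repo; the reconstruction below is a sketch that matches the description that follows, using (fibers), (fibers channels), (fibers operations) and (fibers conditions), plus an (ice-9 q)-style data queue:
(define (inbox)
  (let ((enq-ch (make-channel))
        (deq-ch (make-channel))
        (stop?  (make-condition)))
    (define (start-inbox-loop)
      (let ((back-queue (make-q))
            (done? #f))
        (define enq-op
          (wrap-operation (get-operation enq-ch)
                          (lambda (msg) (enq! back-queue msg))))
        (define stop-op
          (wrap-operation (wait-operation stop?)
                          (lambda _ (set! done? #t))))
        (define (deq-op)
          ;; rebuilt on every pass: put-operation captures the current head
          (wrap-operation (put-operation deq-ch (q-front back-queue))
                          (lambda _ (deq! back-queue))))
        ;; the queue message loop, returned as a thunk
        (lambda ()
          (let loop ()
            (perform-operation
             (if (q-empty? back-queue)
                 (choice-operation enq-op stop-op)
                 (choice-operation enq-op (deq-op) stop-op)))
            (unless done? (loop))))))
    (spawn-fiber (start-inbox-loop))
    (values enq-ch deq-ch stop?)))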
First, the internal function
start-inbox-loop
defines the operations that abstract a queue and returns a thunk with the loop that uses them–the queue message loop.
Next, in the body of
inbox
, this thunk is executed
as a fiber
when called by
spawn-fiber
. This means that it is executing in its own context on a task loop–this is one reason why it is desirable to have
stop-op
.
Finally,
inbox
returns the channels
enq-ch
and
deq-ch
as well as the condition variable
stop?
These values can be used by the event loop and the browser event handlers to dequeue and enqueue event data, as we have seen.
The
code repo
has working code using all the bits discussed in this piece.
Andy Wingo, the implementer of
Fibers
, has written several blog posts on the topic:
Concurrent ML vs Go
, where he ponders what approach he should follow for implementing lightweight threads in Guile;
Is Go an acceptable cml
, which concludes that he must implement CML from scratch;
An incomplete history of language facilities for concurrency
, where Wingo reviews the work to date with great clarity and succinctness;
Growing Fibers
, a technical blog on the implementation of fibers;
A new concurrent ml
, where Wingo discusses the high points of the implementation in Guile of the main features of Concurrent ML.
Racket includes a model of concurrency that is similar to
Fibers
but it offers a
more extensive API
that matches Concurrent ML more closely. Like Guile Hoot, Racket threads all run on one processor. Once again, I haven’t yet found a good tutorial for its use, although the documentation is first-rate. This would be a good platform to explore some of the facilities missing in
Fibers
.