In this Meetup, we explore CiviBooking - an extension for organisations that hire out rooms or resources that they want to track through CiviCRM. We're delighted that Mathieu Lu, Co-founder at Coop SymbioTIC (Montreal) and a maintainer of the CiviBooking extension, will join us to discuss what CiviBooking can do.
For existing CiviCRM users, there will be opportunities to meet and discuss CiviCRM with other organisations using the software in their day-to-day work, and to ask questions of experts.
You are invited to join us in person or online. The event is free, conveniently situated at The Melting Pot, next to Edinburgh Waverley train station - and there will be tea and biscuits!
Congress Quietly Kills Military “Right to Repair,” Allowing Corporations to Cash In on Fixing Broken Products
Intercept
theintercept.com
2025-12-09 18:22:54
Both chambers included Pentagon budget provisions for a right to repair, but they died after defense industry meetings on Capitol Hill.
The idea of a “right to repair” — a requirement that companies facilitate consumers’ repairs, maintenance, and modification of products — is extremely popular, even winning broad, bipartisan support in Congress. That could not, however, save it from the military–industrial complex.
Lobbyists succeeded in killing part of the National Defense Authorization Act that would have given service members the right to fix their equipment in the field without having to worry about military suppliers’ intellectual property.
“Defense contractors have a lot of influence on Capitol Hill.”
The decision to kill the popular proposal was made public Sunday after a closed-door conference of top congressional officials, including defense committee chairs, along with Speaker Mike Johnson, R-La., and Senate Majority Leader John Thune, R-S.D.
Those meetings were secret, but consumer advocates say they have a pretty good idea of what happened.
“It’s pretty clear that defense contractors opposed the right-to-repair provisions, and they pressed hard to have them stripped out of the final bill,” said Isaac Bowers, the federal legislative director at U.S. PIRG. “All we can say is that defense contractors have a lot of influence on Capitol Hill.”
The idea had drawn bipartisan support in both the House and Senate, which each passed their own versions of the proposal.
Under one version, co-sponsored by Sen. Elizabeth Warren, D-Mass., and Sen. Tim Sheehy, R-Mt., defense companies would have been required to supply the information needed for repairs — such as technical data, maintenance manuals, engineering drawings, and lists of replacement parts — as a condition of Pentagon contracts.
The idea was that no service member would ever be left waiting on a contractor to fly in from Norway to repair a simple part — which once happened — or, in another real-life scenario, told by the manufacturer to buy a new CT scanner in a combat zone because one malfunctioned.
Instead of worrying about voiding a warranty, military personnel in the field could use a 3D printer or elbow grease to fix a part.
“The military is a can-do operation,” Bowers said. “Service members can and should be able to repair their own equipment, and this will save costs if they can do it upfront and on time and on their schedule.”
“Contractor Profiteering”
Operations and maintenance costs are typically the biggest chunk of the Pentagon’s budget, at 40 percent. That is in large part because the military often designs new weapons at the same time it builds them, according to Julia Gledhill, a research analyst for the national security reform program at the Stimson Center.
“We do see concurrent development, wherein the military is designing and building a system at the same time,” Gledhill said on a webinar hosted by the nonprofit Taxpayers for Common Sense on Tuesday. “That, turns out, doesn’t work very well. It means that you do discover design flaws, what the DOD would characterize as defects, and then you spend a whole lot of money trying to fix them.”
For the defense industry, however, the proposal threatened a key profit stream. Once companies sell hardware and software to the Pentagon, they can keep making money by forcing the government to hire them for repairs.
Defense lobbyists pushed back hard against the proposal when it arose in the military budgeting process. The CEO of the Aerospace Industries Association claimed that the legislation could “cripple the very innovation on which our warfighters rely.”
The contractors’ argument was that inventors would not sell their products to the Pentagon if they knew they had to hand over their trade secrets as well.
In response, Warren wrote an unusual letter last month calling out one trade group, the National Defense Industrial Association.
“NDIA’s opposition to these commonsense reforms is a dangerous and misguided attempt,” Warren said, “to protect an unacceptable status quo of giant contractor profiteering that is expensive for taxpayers and presents a risk to military readiness and national security.”
As a piece of legislation, the right to repair has likely died until next year’s defense budget bill process. The notion could be imposed in the form of internal Pentagon policies, but it would be less of a mandate: Such policies can be more easily waived.
The secretaries of the Army, Navy, and Air Force have all expressed some degree of support for the idea, and Defense Secretary Pete Hegseth has urged the branches to include “right to repair” provisions in new contracts going forward — though, for now, it’s just a suggestion rather than a legal requirement.
We have a housing crisis, as you probably, painfully, know. Wouldn’t you like to have someone to blame for it?
The United States is short 4 million housing units, with a particular dearth of starter homes, moderately priced apartments in low-rises, and family-friendly dwellings. Interest rates are high, which has stifled construction and pushed up the cost of mortgages. As a result, more Americans are renting, and roughly half of those households are spending more than a third of their income on shelter.
This crisis has many causes: restrictive zoning codes, arcane permitting processes, excessive community input, declining construction productivity, expensive labor, and expensive lumber. And, some say, the aggressive entry of private equity into the housing market. Institutional investors have bought up hundreds of thousands of American homes since the start of the coronavirus pandemic, outbidding families and pushing up rents—a trend lamented by everyone from Alexandria Ocasio-Cortez to J. D. Vance.
Casting private equity as a central villain in the country’s real-estate tragedy makes intuitive sense. Who’s going to win in a bidding war for a three-bedroom in a suburb of Cincinnati: a single-income family with a scrabbled-together 10 percent down payment or a Wall Street LLC offering cash? Still, housing economists and policy analysts have argued that institutional investors have played at most a bit part. Supply constraints began cropping up on the coasts a generation ago, if not earlier, whereas Wall Street started buying up significant numbers of homes only after the Great Recession and especially after the pandemic. Moreover, even if big investors are purchasing thousands of homes, they don’t own significant numbers of homes compared with small-scale landlords and individuals.
Yet in some markets, the balance has shifted. Last month, the Lincoln Institute of Land Policy and the Center for Geospatial Solutions published a report showing that corporations now own a remarkable one in 11 residential real-estate parcels in the 500 urban counties with data robust enough to analyze. In some communities, they control more than 20 percent of properties.
I figured that big investors might be picking up vacation rentals in Colorado and expensive apartment buildings in the Bay Area and the Acela Corridor. They are, the report’s authors told me. But these investors are pouring the most money into “buy low, rent high” neighborhoods: communities, many of them in the South and the Rust Belt, where large shares of families can’t afford a mortgage.
“They’re pulling all the starter homes off of the market in low-income, high-minority-density neighborhoods,” George McCarthy, the president of the Lincoln Institute, told me—a trend that is intensifying the country’s yawning racial wealth and homeownership gaps. In Cleveland, corporations own 17.5 percent of residential real-estate parcels. In the city’s East Side, which contains many predominantly Black neighborhoods, just one in five homebuyers in 2021 took out a mortgage. The rest—many investors, presumably—paid in cash or took out a loan from a non-traditional financier.
In Baltimore’s majority-Black McElderry Park and Ellwood Park/Monument neighborhoods, owner-occupants made just 13 percent of purchases in 2022. In a majority-white neighborhood not far away, owner-occupants bought more than 80 percent of homes that same year, and out-of-state corporations owned less than 1 percent of residential parcels.
The report made me see the country’s real-estate crisis in a different light. Private-equity firms and other deep-pocketed investors aren’t why Seattle and Boston are unaffordable. Those cities have had shortage-driven housing crises that have intensified over decades. The firms aren’t why many towns in the Mountain West have seen jumps in home values and a corresponding increase in homelessness, displacement, and eviction. In those communities, white-collar emigrants from big cities have arrived and outbid locals. But investor money is distorting the housing market in communities with low wages and decent-enough housing supply, pushing thousands of Black and Latino families off the property ladder. Tens of thousands of workers who would like to invest in a home are instead stuck paying rent, and putting up with the associated uncertainty.
While not all corporate landlords are bad landlords, some are bad landlords. Corporations are more likely to threaten to evict and to actually evict their tenants. They are also prone to skimping on maintenance and upkeep. “At the neighborhood level, when more than half of the properties are owned by outside investors—when you’ve now flipped that neighborhood from being primarily homeowner driven to investor driven—that matters, because homeowners behave very differently, politically and otherwise,” McCarthy said. An out-of-state investment firm might be less likely than a longtime resident or a local property manager to plant shade trees and demand safe sidewalks, for instance.
In response to the rising corporate ownership of homes, a variety of politicians have pushed for policy fixes. In New York, Governor Kathy Hochul has proposed legislation barring firms from bidding on single-family or two-family homes for the first 75 days they are on the market. Washington State is contemplating capping the number of units that corporations can own. Other legislators have suggested revoking tax benefits from large-scale owners.
McCarthy said that caps probably would not work well: Corporations might simply set up multiple entities to get around the rules and keep purchasing properties, for instance. “It’s just not going to fly,” he said. But he supports treating firms that own more than 10 properties in a given jurisdiction as commercial owners rather than residential owners, subjecting them to higher property-tax rates and higher taxes on their capital gains.
If nothing is done, what’s happening to majority-Black communities in Ohio and Virginia and Georgia and Michigan might start happening in communities around the country. Private equity might not be causing the housing crisis, but corporate owners could end up making it a lot worse for everyone.
Let’s get a few things out of the way before I go any further with this seemingly impertinent thought, because it’s nowhere near as snarky as it sounds.
First, I don’t particularly like vibe coding. I love programming, and I have loved it since I made my first tentative steps with it sometime back in the mid-to-late 90s. I love programming so much, it always feels like I’m having too much fun for it to count as real work. I’ve done it professionally, but I also do it as a hobby. Someone apparently once said, “Do what you love and you’ll never work a day in your life.” That’s how I feel about writing code. I’ve also been teaching the subject for twenty-five years, and I can honestly say I am as excited about the first day of the semester now as I was when I first started. I realize it’s a bit precious to say so, but I’ll say it anyway: Turning non-programmers into programmers is my life’s work. It is the thing of which I am most proud as a college professor.
Vibe coding makes me feel dirty in ways that I struggle to articulate precisely. It’s not just that it feels like “cheating” (though it does). I also think it takes a lot of the fun out of the whole thing. I sometimes tell people (like the aforementioned students) that programming is like doing the best crossword puzzle in the world, except that when you solve it, it actually dances and sings. Vibe coding robs me of that moment, because I don’t feel like I really did it at all. And even though to be a programmer is to live with a more-or-less permanent set of aporias (you don’t really understand what the compiler is doing, really—and even if you do, you probably don’t really understand how the virtual memory subsystem works, really), it’s satisfying to understand every inch of my code and frustrating—all the way to the borderlands of active anxiety—not quite understanding what Claude just wrote.
But this leads me to my second point, which I must make as clearly and forcefully as I can. Vibe coding actually works. It creates robust, complex systems that work. You can tell yourself (as I did) that it can’t possibly do that, but you are wrong. You can then tell yourself (as I did) that it’s good as a kind of alternative search engine for coding problems, but not much else. You are also wrong about that. Because when you start giving it little programming problems that you can’t be arsed to work out yourself (as I did), you discover (as I did) that it’s awfully good at those. And then one day you muse out loud (as I did) to an AI model something like, “I have an idea for a program…” And you are astounded. If you aren’t astounded, you either haven’t actually done it or you are at some stage of grief prior to acceptance. Perfect? Hardly. But then neither are human coders. The future? I think the question answers itself.
But to get to my impertinent question…
Early on in my love affair with programming, I read Structure and Interpretation of Computer Programs, which I now consider one of the great pedagogical masterpieces of the twentieth century. I learned a great deal about programming from that book, but among the most memorable lessons was one that appears in the second paragraph of the original preface. There, Hal Abelson and Gerald Sussman make a point that hits with the force of the obvious, and yet is very often forgotten:
[W]e want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
I’ve been repeating some version of this to my students ever since. Computers, I remind them, do not need the code to be “readable” or “ergonomic” for humans; they only need it to be readable and ergonomic for a computer, which is a considerably lower bar.
Every programming language—including assembly language—was and is intended for the convenience of humans who need to read it and write it. If a language is innovative, it is usually not because it has allowed for automatic memory management, or concurrency, or safety, or robust error checking, but because it has made it easier for humans to express and reason about these matters. When we extol the virtues of this or that language—Rust’s safety guarantees, C++’s “no-cost abstractions,” or Go’s approach to concurrency—we are not talking about an affordance that the computer has gained, but about an affordance that we have gained as programmers of said computer. From our standpoint as programmers, object-oriented languages offer certain ways to organize our code—and, I think Abelson and Sussman would say, our thinking—that are potentially conducive to the noble treasures of maintainability, extensibility, error checking, and any number of other condign matters. From the standpoint of the computer, this little OO kink of ours seems mostly to indicate a strange affinity for heap memory. “Whatevs!” (says the computer). And pick your poison here, folks: functional programming, algebraic data types, dependent types, homoiconicity, immutable data structures, brace styles… We can debate the utility of these things, but we must understand that we are primarily talking about human problems. The set of “machine problems” to which these matters correspond is considerably smaller.
So my question is this: Why vibe code with a language that has human convenience and ergonomics in view? Or to put that another way: Wouldn’t a language designed for vibe coding naturally dispense with much of what is convenient and ergonomic for humans in favor of what is convenient and ergonomic for machines? Why not have it just write C? Or hell, why not x86 assembly?
Now, at this point, you will want to say that the need for human understanding isn’t erased entirely thereby. Some version of this argument has merit, but I would remind you that if you are really vibe coding for real you already don’t understand a great deal of what it is producing. But if you look carefully, you will notice that it doesn’t struggle with undefined behavior in C. Or with making sure that all memory is properly freed. Or with off-by-one errors. It sometimes struggles to understand what it is that you actually want, but it rarely struggles with the actual execution of the code. It’s better than you are at keeping track of those things in the same way that a compiler is better at optimizing code than you are. Perfect? No. But as I said before…
Is C the ideal language for vibe coding? I think I could mount an argument for why it is not, but surely Rust is even less ideal. To say nothing of Haskell, or OCaml, or even Python. All of these languages, after all, are for people to read, and only incidentally for machines to execute. They are practically adorable in their concern for problems that AI models do not have.
I suppose what I’m getting at, here, is that if vibe coding is the future of software development (and it is), then why bother with languages that were designed for people who are not vibe coding? Shouldn’t there be such a thing as a “vibe-oriented programming language?” VOP. You read it here first.
One possibility is that such a language truly would be executable pseudocode beyond even the most extravagant fever dreams of the most earnest Pythonistas; it shows you what it’s doing in truly pseudo code, but all the while it’s writing assembly. Or perhaps it’s something like the apotheosis of literate programming. You write a literary document “expressing ideas about methodology,” and the AI produces machine code (and a kind of literary critical practice evolves around this activity, eventually ordering itself into structuralist and post-structuralist camps. But I’m getting ahead of myself). Perhaps your job as a programmer is mostly running tests that verify this machine code (tests which have also been produced by AI). Or maybe a VOPL is really a certain kind of language that comes closer to natural language than any existing programming language, but which has a certain (easily learned) set of idioms and expressions that guide the AI more reliably and more quickly toward particular solutions. It doesn’t have goroutines. It has a “concurrency slang.”
Now obviously, the reason a large language model focused on coding is good at Javascript and C++ is precisely because it has been trained on billions of lines of code in those languages along with countless forum posts, StackOverflow debates, and so on. Bootstrapping a VOPL presents a certain kind of difficulty, but then one also suspects that LLMs are already being trained in some future version of this language, because so many programmers are already groping their way toward a system like this by virtue of the fact that so many of them are already vibe coding production-level systems.
I don’t know how I feel about all of this (see my first and second points above). It saddens me to think of “coding by hand” becoming a kind of quaint Montessori-school stage in the education of a vibe coder—something like the contour drawings we demand from future photoshopers or the balanced equations we insist serve as a rite of passage for people who will never be without a calculator to the end of their days.
At the same time, there is something exciting about the birth of a computational paradigm. It wasn’t that long ago, in the grand scheme of things, that someone realized that rewiring the entire machine every time you wanted to do a calculation (think ENIAC, circa 1945) was a rather suboptimal way to do things. And it is worth recalling that people complained when the stored-program computer rolled around (think EDVAC, circa 1951). Why? Well, the answer should be obvious. It was less reliable. It was slower. It removed the operator from the loop. It threatened specialized labor. It was conceptually impure. I’m not kidding about any of this. No less an authority than Grace Hopper had to argue against the quite popular idea that there was no way anyone could ever trust a machine to write instructions for another machine.
Ivanti warns of critical Endpoint Manager code execution flaw
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 17:10:25
American IT software company Ivanti warned customers today to patch a newly disclosed vulnerability in its Endpoint Manager (EPM) solution that could allow attackers to execute code remotely. [...]...
American IT software company Ivanti warned customers today to patch a newly disclosed vulnerability in its Endpoint Manager (EPM) solution that could allow attackers to execute code remotely.
Ivanti delivers system and IT asset management solutions to over 40,000 companies via a network of more than 7,000 organizations worldwide. The company's EPM software is an all-in-one endpoint management tool for managing client devices across popular platforms, including Windows, macOS, Linux, Chrome OS, and IoT.
Tracked as CVE-2025-10573, this critical security flaw can be exploited by unauthenticated threat actors in low-complexity cross-site scripting attacks that require user interaction.
"Stored XSS in Ivanti Endpoint Manager prior to version 2024 SU4 SR1 allows a remote unauthenticated attacker to execute arbitrary JavaScript in the context of an administrator session,"
Ivanti said
.
Ivanti noted that the risk of this vulnerability should be significantly reduced because the Ivanti EPM solution is not intended to be exposed online.
Today, Ivanti also released security updates to address three high-severity vulnerabilities, two of which (CVE-2025-13659 and CVE-2025-13662) could allow unauthenticated attackers to execute arbitrary code on unpatched systems.
Luckily, successful exploitation also requires user interaction, and targets must either connect to an untrusted core server or import untrusted configuration files.
"We are not aware of any customers being exploited by these vulnerabilities prior to public disclosure. These vulnerabilities were disclosed through our responsible disclosure program," Ivanti added.
While Ivanti has yet to discover evidence of exploitation in attacks, Ivanti EPM security flaws are often targeted by threat actors.
Maintaining enterprise IT hygiene using Wazuh SIEM/XDR
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 17:09:33
Poor IT hygiene, such as unused accounts, outdated software, and risky extensions, creates hidden exposure in your infrastructure. Wazuh, the open-source XDR and SIEM, shows how continuous inventory monitoring across endpoints helps teams spot drift and tighten security. [...]...
Organizations face the challenge of maintaining visibility and control over their IT infrastructure. A forgotten user account, an outdated software package, an unauthorized service, or a malicious browser extension can expose vulnerabilities that threat actors are eager to exploit.
Addressing these risks requires a systematic approach to maintaining the security, integrity, and overall health of every system within the organization. This is where IT hygiene becomes essential.
IT hygiene is the systematic practice of maintaining consistent, secure configurations across all endpoints in an organization's infrastructure. It encompasses continuous monitoring of hardware, software, user accounts, running processes, and network configurations to ensure alignment with security policies and compliance requirements.
Poor IT hygiene creates security gaps that can lead to data breaches, system compromises, and significant financial and reputational damage.
Wazuh is a free, open source security platform that provides multiple capabilities, including a dedicated IT hygiene capability, file integrity monitoring, configuration assessment, vulnerability detection, and active response.
This post explores how organizations can leverage Wazuh to maintain enterprise IT hygiene, examines practical use cases, and demonstrates its effectiveness in improving their security posture.
IT hygiene overview
IT hygiene encompasses the preventive measures organizations implement to maintain the health and security of their IT infrastructure. It reduces the risk of security incidents by ensuring systems remain properly configured, up to date, and monitored.
Key aspects include:
Asset visibility: Maintaining a comprehensive, up-to-date inventory of all hardware and software assets across your infrastructure.
Configuration management: Ensuring systems are configured in accordance with security best practices and organizational policies. These include minimizing services, ports, and software, as well as authentication and account hardening configurations.
Patch management: Regularly updating software to address known vulnerabilities.
Access control: Managing user accounts and permissions to prevent unauthorized access.
Monitoring and auditing: Continuously tracking system activities and configurations to detect anomalies.
Without proper IT hygiene practices, organizations become vulnerable to threats such as unauthorized access, malware infections, data exfiltration, and compliance violations.
The Wazuh IT hygiene capability
Wazuh introduced its IT hygiene capability in version 4.13.0, providing security teams with a centralized dashboard for monitoring system inventory across an entire infrastructure.
The capability leverages the Wazuh Syscollector module to gather and aggregate data from all monitored endpoints, storing it in dedicated indices within the Wazuh indexer for querying and analysis.
The Wazuh IT hygiene capability collects system inventory data, including:
Hardware specifications such as CPU, memory, and storage data
Operating system details and versions
Installed software packages and their versions
Running processes and services
Network configurations and open ports
User accounts and group memberships
Browser extensions and their permissions
This data is presented through an intuitive dashboard interface that enables security administrators to query and analyze inventory information across multiple endpoints simultaneously, eliminating the need for time-consuming manual checks.
Accessing the IT hygiene dashboard
Users can access inventory data through the Wazuh dashboard by navigating to Security operations > IT hygiene. The interface provides multiple tabs for different inventory categories:
Each tab allows administrators to add custom filters to refine queries and select additional fields to display. This flexibility enables security teams to quickly identify configuration changes, policy violations, and security anomalies across their infrastructure.
Practical use cases for enterprise IT hygiene
Software patch management
Maintaining consistent software versions across all endpoints is critical for security, stability, and compliance. Inconsistent package versions introduce exploitable vulnerabilities and can violate organizational patching policies. Manually verifying software versions across thousands of endpoints is impractical and error-prone.
The Wazuh IT hygiene capability provides comprehensive visibility into installed packages across the entire infrastructure. Security administrators can:
Identify endpoints running outdated or vulnerable software versions
Detect unauthorized software installations
Verify compliance with approved software catalogs
For example, administrators can use the filters on the Packages tab to identify all endpoints running a specific version of a critical application or library. By applying filters on fields such as package.name and package.version, security teams can quickly generate a list of endpoints requiring package updates, significantly streamlining the patch management process.
Browser extension management
Browser extensions are an increasingly exploited attack surface, particularly in enterprise environments. Extensions with broad permissions can access sensitive data, inject malicious scripts, intercept credentials, and serve as malware vectors. Recent security incidents have involved fake ad blockers and password managers used in credential theft campaigns.
The Wazuh IT hygiene capability provides complete visibility into browser extensions across all monitored endpoints, including:
Extension names and versions
Requested permissions (tabs, storage, webRequest, and so on)
Installation dates and sources
User associations
Security teams can use this information to identify unauthorized or high-risk extensions, detect extensions with excessive permissions, and enforce browser extension policies. This enables them to respond quickly to reports of malicious extensions.
Identity management
The Identity section of the Wazuh IT hygiene capability enables account auditing to ensure that user identities and permissions remain aligned with organizational policies across the entire infrastructure. Administrators can audit user information by applying the filters within the Users and Groups dashboards.
The following use case demonstrates dormant account detection to identify inactive or unnecessary accounts, and privilege account verification to ensure only authorized users hold elevated permissions.
Dormant account detection
Dormant or abandoned user accounts pose significant security risks. These accounts, often belonging to former employees or contractors, can be exploited by attackers for unauthorized access. They represent forgotten attack vectors that may lack current security controls, such as multi-factor authentication, and thus present an entry point for attackers.
The Wazuh IT hygiene capability enables organizations to identify dormant accounts systematically. Administrators can:
a. Navigate to Security operations > IT Hygiene > Identity > Users.
b. Filter accounts based on criteria such as:
Accounts with valid login shells (indicating interactive access)
Last login dates beyond organizational policies
Accounts without recent activity
c. Generate lists of accounts requiring review or deactivation
For example, the above image shows users filtered for user.shell values such as /bin/bash or /bin/sh to identify accounts capable of interactive system access. Cross-referencing this data with the user.last.login field reveals dormant accounts that should be investigated or removed.
Privileged account auditing
Unauthorized users with administrative privileges pose a critical security risk. Accounts in the local Administrators group (Windows) or sudo group (Linux) can install software, modify system configurations, disable security controls, and access sensitive data.
Even if rarely used, these accounts are valuable targets for attackers seeking to maintain persistence and escalate privileges.
The Wazuh IT hygiene capability allows security teams to:
Identify all users with elevated privileges across the infrastructure
Verify that only authorized personnel have administrative access
Detect privilege escalation attempts or policy violations
Maintain compliance with access control policies
Administrators can use filters in the Groups tab within the Identity section of the Wazuh IT hygiene dashboard to identify members of privileged groups. Administrators can then cross-reference these results against authorized user lists to identify accounts with unauthorized privilege assignments.
Hardware resource optimization
In large enterprise environments with numerous Linux and Windows endpoints, mismatched hardware specifications can lead to significant operational challenges.
Servers with insufficient CPU cores or memory create performance bottlenecks that impact critical workloads, while oversized instances waste resources and drive unnecessary cloud computing costs.
The Wazuh IT hygiene capability enables resource analysis across all devices, allowing administrators to:
Identify endpoints that fall outside policy-defined specifications
Detect underpowered systems affecting critical services
Find oversized instances wasting budget
Optimize cloud resource allocation
Plan capacity upgrades based on actual usage patterns
For example, administrators can use the filters within the Hardware tab to identify all servers with memory below a defined threshold (for example, 8GB for web servers) or systems with excessive resources that could be downsized.
This data-driven approach supports both cost optimization and reliability improvements without requiring manual inspection of individual endpoints.
Port and service monitoring
Unnecessary open ports and unauthorized services expand the attack surface. Each open port is a potential entry point for attackers, and unauthorized services may contain vulnerabilities or misconfigurations that compromise security.
The Wazuh IT hygiene capability provides comprehensive visibility into:
All open network ports across endpoints
Services listening on each port
Process associations for running services
Port states and configurations
Security teams can use the filter within the Ports tab to identify endpoints with unexpected open ports or unauthorized services. For instance, database ports (3306, 5432) should not be open on workstations or web servers. They should be restricted to internal networks or specific application servers only.
Best practices for implementing IT hygiene with Wazuh
To maximize the benefits of Wazuh IT hygiene capabilities, organizations should follow these best practices:
1. Establish baseline inventories: Document expected configurations, approved software, authorized accounts, and standard hardware specifications for different endpoint types. Create explicit policies for software versions, user account lifecycles, browser extensions, privileged access, and hardware standards.
2. Automate alerting: Configure Wazuh to generate alerts for critical deviations such as new privileged accounts, unauthorized software installations, or suspicious browser extensions.
3. Integrate with workflows: Connect IT hygiene findings with existing ticketing systems, patch management tools, and incident response processes.
4. Maintain documentation: Keep detailed records of authorized exceptions, approved changes, and remediation actions taken in response to hygiene issues.
5. Leverage other Wazuh modules: Leverage SCA, vulnerability detection, and malware detection alongside IT hygiene for comprehensive security coverage.
6. Schedule regular reviews: Conduct periodic audits of inventory data to identify drift from baseline configurations and policy violations.
7. Train security teams: Ensure personnel understand how to effectively query and interpret IT hygiene data to identify security risks.
Conclusion
Maintaining IT hygiene reduces the risk of security incidents by keeping systems correctly configured, patched, and monitored. The Wazuh IT hygiene capability meets this need by providing a centralized, real-time inventory across all endpoints.
Security teams can quickly spot policy violations, configuration drift, and security anomalies using holistic data on hardware, software, accounts, processes, ports, and browser extensions, enabling informed, data-driven decisions.
PeerTube is a tool for hosting, managing, and sharing videos or live streams.
Core Components Assessed/Included Repositories
The following repositories were submitted by the solution and included in our evaluation. Any repositories, add-ons, or features not included here were not reviewed by us.
Esperanto, English, Slovenčina, Gàidhlig, العربية, Norsk, Magyar, Deutsch, Toki Pona, Euskara, Polski, Português (Portugal), Suomi, Tiếng Việt, Italiano, فارسی, Español, Taqbaylit, 简体中文(中国), Hrvatski, ελληνικά, Occitan, украї́нська мо́ва, Français, ไทย, Türkçe, 繁體中文(台灣), 日本語, Galego, Íslenska, Svenska, Nederlands, Pусский, bokmål, Čeština, Shqip, Català, Português (Brasil), Norsk nynorsk
Organisations using it
French Ministry of National Education (~100K videos), Italy’s National Research Council, a few French alternative media, the Weißensee Kunsthochschule in Berlin, as well as the Universität der Künste in the same city, a few universities worldwide, the Blender and Debian projects, and various activist groups
* This information is self-reported and updated annually
Learn how this product has met the requirements of the DPG Standard by exploring the indicators below.
Application Details
DPG ID: GID0092472
Status: DPG
Date Created: 2025-08-11
Date Submitted: 2025-08-25
Date Reviewed: 2025-10-07
Date of Expiry: 2026-10-07
Application Log Details
2025-10-07 08:40:13 — Ricardo Torres (L2 Reviewer) submitted their review of PeerTube (152) and found it to be a DPG
Ricardo Torres (L2 Reviewer) moved PeerTube (12958) to under review
2025-10-07 08:38:21 — Ricardo Torres (L2 Reviewer) finished consultation on 4. Platform Independence for PeerTube (12958)
Spain arrests teen who stole 64 million personal data records
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 16:57:06
The National Police in Spain have arrested a suspected 19-year-old hacker in Barcelona, for allegedly stealing and attempting to sell 64 million records obtained from breaches at nine companies. [...]...
The National Police in Spain have arrested a suspected 19-year-old hacker in Barcelona, for allegedly stealing and attempting to sell 64 million records obtained from breaches at nine companies.
The teen now faces charges related to involvement in cybercrime, unauthorized access and disclosure of private data, and privacy violations.
"The cybercriminal accessed nine different companies where he obtained millions of private personal records that he later sold online,"
reads the police's announcement.
The police launched an investigation into the cybercriminal in June, after the authorities became aware of breaches at the unnamed firms.
Eventually, the suspect was located in Igualada, Barcelona, and it was confirmed that he held 64,000,000 private records. These records include full names, home addresses, email addresses, phone numbers, DNI numbers, and IBAN codes.
It is unclear how many total individuals were impacted by the breach.
The police mention that the detainee attempted to sell the information on various hacker forums, using six different accounts and five pseudonyms.
The 19-year-old was arrested last week, and during the action, police agents also confiscated computers and cryptocurrency wallets containing funds believed to be from data sales.
Data broker also arrested in Ukraine
In parallel but unrelated news, the cyberpolice in Ukraine have announced the arrest of a 22-year-old cybercriminal who used custom malware he developed to automatically hack user accounts on social networks and other platforms.
Most of the hacker's victims were based in the United States and various European countries.
The offender then proceeded to sell access to the compromised accounts, which he boosted using a bot farm of 5,000 accounts, on various hacking forums.
The arrested man now faces up to 15 years in prison for violations of Ukraine's Criminal Code (Article 361), as well as deprivation of the right to hold certain positions or engage in certain activities for up to three years.
Mean Girl Tish James Sues Lindsay Lohan's Brother for Alleged Scheme to Destabilize More Than 150 NYC Apartments
hellgate
hellgatenyc.com
2025-12-09 16:46:21
Michael Lohan is one of the Peak Capital Advisor principals named in a lawsuit filed by the New York Attorney General....
Actor Lindsay Lohan's younger brother Michael Lohan is one of seven real estate speculators accused last week by New York Attorney General Letitia James of conspiring to illegally deregulate more than 150 rent-stabilized apartments in Brooklyn and Queens.
On December 1, Attorney General James and New York's affordable housing agency, Homes and Community Renewal (HCR), filed a lawsuit against Peak Capital Advisors and its founders and principals, one of whom is Lohan. The lawsuit alleges that, since 2019, Peak has bought 31 buildings, including in Greenpoint, Astoria, Sunnyside, and Long Island City, and converted more than 150 rent-stabilized apartments in those buildings to market-rate units by falsely claiming they qualified for deregulation under the "substantial rehabilitation" exemption in state housing law.
i decided to take a look outside of my git comfort zone to see what’s out there. and wow, i’m glad i did, because i came across jj-vcs and, it’s hard to emphasize this enough: it’s a delight. it lets me operate on the commit graph like i’m playing with lego.
jj is a version control system, like git.[1]
[1] did you write javascript in the 2010s and remember asking important questions like what is this anyway? why am i binding this? i’ve been told prototypal inheritance is not that complicated but my real takeaway is learning what it means to be gaslit. and then the tide rose: ES6 and typescript are the tools folks use today. both the experience of writing js and the artifacts you see from people of all skill levels are better. i like to imagine a similar shift is happening in version control, where the experience of using it is going to improve a lot, and that’ll downstream (upstream?) into improvements in vcs’ across the board.
it allows using git as a backend, so you have access to that world: collaborating with your coworkers on github, or running programs on your computer that use a git repo. it blends in. it also gives you access to a new world of jj, where the commands are so consistent you can intuit the flags easily, and tailor it to many different workflows.
here’s some reasons why i think you should try it out:
in addition to SHAs, a commit has a change id: it lets you have multiple versions of a commit, making every commit no matter where it lives, easily amendable. it’s like a primary key for your change, and lets you really refine all parts of a change over time, including both the diff and the commit message.
at any point if you want to work on something else, there is no need to commit or stash your changes, you jj new to where you want to be and don’t risk losing anything.
you don’t have to name branches, you simply push them up and they get generated names.
it’s easy to work on top of a merge of a bunch of branches all at once[2]
[2] colloquially known as a megamerge – it’s basically a merge of a few different parents. so you’re essentially working on a bunch of branches at once. super nice if you’ve got a few different pull requests out that you want to enjoy before they land.
you’re always working on committed code and you don’t have to add anything for it to be tracked – because of this, the commands you use to change the commit that you’re working on vs changing any other commit are the same, it feels very natural to mess with any commit as you wish.[3]
[3] there’s also a great concept of immutable vs mutable commits, by default the commits in your main branch or trunk are immutable, so you are prevented from messing with them
with git, rebasing is error prone enough that i just started merging trunk into my feature branches[4] and using github’s squash merges – this is ugly in a few ways: it destroys your history and clutters up a pull request with merge commits, a lot of noise. jj automatically rebases things all the time, allowing me to easily make pull requests i’m not ashamed of, and even allows me to rebase a bunch of pull requests at once in one fell swoop.
[4] hey, if fossil doesn’t rebase, why should i?
that’s not even touching on the more novel things, like absorb, revsets, templates – there are many gifts behind this executable.
it took me a few tries, but was one of the more rewarding things i’ve picked up in a long time. it reminded me of a long-ago time when i was using vimtutor on some ancient terminal-only computer[5] i had access to: but instead of learning the motions to operate on text i learned to operate on the commit graph. it’s a reasonably small set of consistent commands and flags to pick up.
[5] in a basement, no less. i was there for a job that involved cleaning sticker residue off of debit pinpads with isopropyl. learning vim was a really sweet perk in retrospect.
if you’re interested in getting started, my suggestion is popping open steve’s tutorial and becoming familiar with the basics. then run jj git init in an existing repo and try to use it. you can flip back to git in the same repo.
i often find it helpful to have a live view of the commit graph open in a terminal, so you can have some more visibility into what the operations are doing.
# a live updating view of the commit graph
watch -t -n 1 --color jj log --ignore-working-copy --color=always
# include `-s` if you want to see a list of files, too
watch -t -n 1 --color jj log --ignore-working-copy --color=always -s
and if anything goes wrong, jj undo[6] lets you back up and take another try. sometimes jj undo fails or you otherwise need to go back further, in that case jj op log and jj op restore will take you anywhere back in time. it reminded me of my first time playing braid and hitting the rewind button.
[6] shout out the patient folks in the jj discord generously explaining to me how to recover a repo that i thought surely was a ‘re-clone the repo’ situation
atuin history showing the vcs commands i run frequently shifting from git to jj
my original motivation was trying to recreate githubler’s claude code support in something that’s in the CLI, and i was able to do that with a project i called ‘jjagent’. i still use jjagent all the time[7], but learning jj itself turned out to be a lot more profound.
[7] jjagent is very specific to my workflows, and i don’t think really has very wide appeal. that being said there are some parts of it i find work very well – the main one being that it stores a Claude-session-id: ... in a git trailer so i can get back to the claude conversation that resulted in a code change. the other one being the idea of an agent working on a single changing commit that you refine (i prefer this strongly over 50x garbage commits everytime you don’t one-shot something.)
North Korean hackers exploit React2Shell flaw in EtherRAT malware attacks
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 15:43:05
A new malware implant called EtherRAT, deployed in a recent React2Shell attack, runs five separate Linux persistence mechanisms and leverages Ethereum smart contracts for communication with the attacker. [...]...
A new malware implant called EtherRAT, deployed in a recent React2Shell attack, runs five separate Linux persistence mechanisms and leverages Ethereum smart contracts for communication with the attacker.
Researchers at cloud security company Sysdig believe that the malware aligns with North Korea's tools used in Contagious Interview campaigns.
They recovered EtherRAT from a compromised Next.js application just two days after the disclosure of the critical React2Shell vulnerability tracked as CVE-2025-55182.
Sysdig highlights EtherRAT's mix of sophisticated features, including blockchain-based command-and-control (C2) communication, multi-layered Linux persistence, on-the-fly payload rewriting, and evasion using a full Node.js runtime.
Although there are substantial overlaps with "Contagious Interview" operations conducted by Lazarus, EtherRAT is different in several key aspects.
React2Shell is a max-severity deserialization flaw in the React Server Components (RSC) "Flight" protocol that allows unauthenticated remote code execution via a crafted HTTP request.
The flaw impacts a large number of cloud environments running React/Next.js, and its exploitation in the wild started hours after the public disclosure late last week. Some of the first threat actors leveraging it in attacks are China-linked groups Earth Lamia and Jackpot Panda.
Automated exploitation followed, and at least 30 organizations across multiple sectors were breached to steal credentials, mine cryptocurrency, and deploy commodity backdoors.
EtherRAT attack chain
EtherRAT uses a multi-stage attack chain, starting with the exploitation of React2Shell to execute a base64-encoded shell command on the target, Sysdig says.
The command attempts to download a malicious shell script (s.sh) with curl, wget, or python3 as fallbacks, and loops every 300 seconds until successful. When the script is fetched, it is checked, turned into an executable, and launched.
Script logic (Source: Sysdig)
The script creates a hidden directory in the user's $HOME/.local/share/ location where it downloads and extracts a legitimate Node.js v20.10.0 runtime directly from nodejs.org.
It then writes an encrypted payload blob and an obfuscated JavaScript dropper that is executed using the downloaded Node binary, and then deletes itself.
The obfuscated JavaScript dropper (.kxnzl4mtez.js) reads the encrypted blob, decrypts it using a hardcoded AES-256-CBC key, and writes the result as another hidden JavaScript file.
The decrypted payload is the EtherRAT implant. It is deployed using the Node.js binary that had been installed in the previous stage.
Marks of an advanced implant
EtherRAT uses Ethereum smart contracts for C2 operations, which provide operational versatility and resistance to takedowns.
It queries nine public Ethereum RPC providers in parallel and picks the majority-response result, which prevents single-node poisoning or sinkholing.
The malware sends randomized CDN-like URLs to the C2 every 500 ms and executes JavaScript returned from the operators using an AsyncFunction constructor in a mechanism that works as a fully interactive Node.js shell.
Constructing randomized URLs (Source: Sysdig)
North Korean hackers have used smart contracts before to deliver and distribute malware. The technique is called EtherHiding and has been described before in reports from Google and GuardioLabs.
Additionally, Sysdig researchers note that "the encrypted loader pattern used in EtherRAT closely matches the DPRK-affiliated BeaverTail malware used in the Contagious Interview campaigns."
EtherRAT persistence on Linux
Sysdig comments that the EtherRAT malware has extremely aggressive persistence on Linux systems, as it installs five layers for redundancy:
Cron jobs
bashrc injection
XDG autostart
Systemd user service
Profile injection
By using multiple persistence methods, the operator of the malware makes sure that they continue to have access to the compromised hosts even after system reboots and maintenance.
Another unique feature in EtherRAT is its ability to self-update by sending its source code to an API endpoint. The malware receives replacement code that has the same capabilities but uses different obfuscation, overwrites itself with it, and then spawns a new process with the updated payload.
Sysdig hypothesizes that this mechanism helps the malware evade static detection and may also help prevent analysis or introduce mission-specific functionality.
With React2Shell exploitation underway by numerous actors, system administrators are recommended to upgrade to a safe React/Next.js version as soon as possible.
Sysdig provides in its report a short list of indicators of compromise (IoCs) associated with EtherRAT's staging infrastructure and Ethereum contracts.
The researchers recommend that users check for the listed persistence mechanisms, monitor Ethereum RPC traffic, review application logs, and rotate credentials.
The commandfor attribute takes an ID—similar to the for attribute—while command accepts built-in values, enabling a more portable and intuitive approach.
Demo to show and hide a <dialog> and a [popover] with Invoker Commands. For browsers without support a polyfill is loaded.
Currently it is possible to send commands to [popover]s and <dialog> elements, with more types of elements possibly coming in the future. The commands to send to an element are mirrored after their JavaScript counterparts:
show-popover: el.showPopover()
hide-popover: el.hidePopover()
toggle-popover: el.togglePopover()
show-modal: dialogEl.showModal()
close: dialogEl.close()
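For illustration, a minimal sketch (the element IDs and button labels here are invented for the example) wiring one button to a popover and another to a modal dialog might look like this:
<button commandfor="tips" command="toggle-popover">Toggle tips</button>
<div id="tips" popover>Keyboard shortcuts and other tips.</div>

<button commandfor="confirm" command="show-modal">Delete…</button>
<dialog id="confirm">
  Are you sure?
  <button commandfor="confirm" command="close">Cancel</button>
</dialog>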
It is also possible to set up custom commands to send to elements. These custom commands are prefixed by two dashes and are handled by the command event.
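As a rough sketch (the image ID and the --rotate command name are made up for illustration), a custom command could be wired up like this, with the target element listening for the command event and checking which custom command fired:
<button commandfor="product-photo" command="--rotate">Rotate photo</button>
<img id="product-photo" src="photo.jpg" alt="Product photo">

<script>
  const photo = document.getElementById("product-photo");
  photo.addEventListener("command", (event) => {
    // Custom (double-dash) commands are not handled by the browser;
    // the page decides what they mean.
    if (event.command === "--rotate") {
      photo.style.transform = "rotate(90deg)";
    }
  });
</script>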
One of the nice features introduced by the Popover API is the light dismiss behavior of popovers. This lets users close a popover by clicking outside of the popover–on the ::backdrop–or by pressing the ESC key.
From Chrome 134, this light dismiss behavior is also available on <dialog>, through the new closedby attribute which controls the behavior:
<dialog closedby="none">
: No user-triggered closing of dialogs at all. This is the default behavior.
<dialog closedby="closerequest">
: Pressing ESC (or other close trigger) closes the dialog
<dialog closedby="any">
: Clicking outside the dialog, or pressing ESC, closes the dialog. Similar to
popover="auto"
behavior.
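As a quick illustration (the dialog contents are placeholders), the three values can be compared side by side:
<!-- Only closable from script or a button inside the dialog -->
<dialog closedby="none">…</dialog>

<!-- A close request, such as pressing ESC, dismisses it -->
<dialog closedby="closerequest">…</dialog>

<!-- ESC or a click outside the dialog dismisses it, like popover="auto" -->
<dialog closedby="any">…</dialog>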
Demo that compares behavior of the various values for closedby.
Hint popovers with popover="hint" are a new type of HTML popover designed for ephemeral layered UI patterns, such as tooltips or link previews. Opening a hint popover does not close other open auto or manual popovers, allowing layered UI elements to coexist. Hint popovers can also exist on links (<a> tags), unlike auto and manual popovers which require activation from button elements.
Set this up like any other popover:
<button interestfor="callout-1"></button>
<div id="callout-1" popover="hint">
  Product callout information here.
</div>
Using popover="hint" combined with interest invokers (the [interestfor] attribute) makes it much easier to build layered UI elements like tooltips, hover cards, and previews declaratively in HTML and CSS, without complex JavaScript workarounds. This pairing allows for dual-purpose interaction patterns (for example, hover to preview, click to navigate) and better management of multiple on-screen layers.
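Since hint popovers can sit on links, a hover-to-preview pattern could be sketched like this (the URL, IDs, and preview content are invented for the example):
<a href="/people/ada" interestfor="preview-ada">Ada</a>
<div id="preview-ada" popover="hint">
  <img src="/people/ada/avatar.png" alt="">
  <p>Ada — Engineering, ada@example.com</p>
</div>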
The time has finally arrived: you can now fully customize the HTML <select> element using CSS!
To get started, apply appearance: base-select to your <select> element. This will switch it to a new, minimal state that's optimized for customization.
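A minimal sketch of the opt-in; per current Chrome guidance the dropdown's ::picker(select) pseudo-element opts in alongside the select itself:

select,
::picker(select) {
  appearance: base-select;
}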
Using base-select unlocks several powerful features including complete CSS customization. Every part of the select element, including the button, the dropdown list, and the options, can be styled with CSS. You can change colors, fonts, spacing, and even add animations to create a unique look and feel that matches your site's design.
The dropdown list of options (::picker(select)) is rendered in the top-layer of the page. This means that it appears above all other content without being clipped by parent containers. The browser also automatically handles positioning and flipping the dropdown based on available space in the viewport.
The new select also enables you to include and properly render HTML elements like <img> and <span> directly inside of the <option> elements. This means you can do something as simple as adding flag icons next to a country picker, or something as complex as creating a profile selection where you can see an icon, name, email, and ID. As long as you are not including interactive elements such as links, which are not allowed inside of customizable selects, you get full control over creating visually rich dropdown menus.
Another neat thing you can do with customizable select is use the new <selectedcontent> element. This element reflects the HTML content of the selected option. For complex selects, setting display: none on specific elements within <selectedcontent> lets you show part of the option content in the select button, or even just an icon to represent the selection. In the monster picker, you can hide the monster skills description by setting:
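A minimal sketch of that idea (the .skills class name is illustrative, standing in for the demo's actual markup):

/* Hide the skills description inside the closed select button only */
selectedcontent .skills {
  display: none;
}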
Carousel scroll affordances with native CSS pseudo-elements.
This year, creating carousels and other scrolling experiences in CSS became much easier with the introduction of two new pseudo-elements: ::scroll-button() and ::scroll-marker(). These features let you create native, accessible, and performant carousels with just a few lines of CSS, no JavaScript required.
A carousel is essentially a scrollable area with added UI affordances for navigation: buttons to scroll back and forth, and markers to indicate the current position and allow direct navigation to a specific item.
The ::scroll-button() pseudo-element creates browser-provided, stateful, and interactive scroll buttons. These buttons are generated on a scroll container and can be styled with CSS. They behave like regular <button> elements, are focusable, and are automatically disabled when scrolling is no longer possible in a given direction.
You can create buttons for any scroll direction: left, right, up, or down, as well as logical directions like block-start and inline-end. When a scroll button is activated, it scrolls the container by approximately 85% of its visible area.
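A minimal sketch, assuming a .carousel scroll container (the class name and arrow glyphs are illustrative):

.carousel {
  overflow-x: auto;

  /* The string before the slash is the button's content, after it the accessible label */
  &::scroll-button(inline-start) {
    content: "⬅" / "Scroll back";
  }

  &::scroll-button(inline-end) {
    content: "➡" / "Scroll forward";
  }
}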
The ::scroll-marker pseudo-element represents a marker for an element within a scroll container. These markers are grouped in a ::scroll-marker-group and behave like anchor links, letting users jump directly to a specific item in the scroller. This is useful for creating dot navigation for a carousel or a table of contents for a long document.
Like ::scroll-button(), ::scroll-markers are fully stylable with CSS. You can use images, text, or even counters to create a variety of marker styles. Additionally, the :target-current pseudo-class styles the active ("current") marker that aligns with the currently-scrolled-to item.
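A minimal sketch of dot markers, assuming the carousel items are <li> children of a .carousel scroll container:

.carousel {
  overflow-x: auto;
  scroll-marker-group: after; /* place the ::scroll-marker-group after the scroller */
}

.carousel > li::scroll-marker {
  content: "";
  width: 0.75rem;
  height: 0.75rem;
  border-radius: 50%;
  background: lightgray;
}

.carousel > li::scroll-marker:target-current {
  background: black; /* highlight the marker for the item currently in view */
}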
In addition to the ::scroll-button() and ::scroll-marker pseudo-elements, CSS carousels includes another neat feature: scroll-target-group. This designates an element as a container for a group of navigation items, like a table of contents. Use this to transform a manually-created list of anchor links into scroll-markers which can be used to navigate the page.
Pair scroll-target-group with the :target-current pseudo-class to style the anchor element whose target is currently visible. This gives you the power of ::scroll-marker from the CSS Carousel API, but with the flexibility of using your own HTML elements for the markers, giving you much more control over their styling and content.
To create a scroll-spy navigation, you need two things:
A list of anchor links that point to different sections of your page.
The scroll-target-group: auto property applied to the container of those links.
The following example creates a "scroll-spy" highlighting where you are on a page in an overview, or table of contents.
The following CSS creates the scroll-target-group, then styles the table of contents. The link corresponding to the section currently in view will be red and bold.
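A minimal sketch of that CSS, assuming the anchor links live inside a .toc container (the class name is illustrative):

/* Turn the list of anchor links into a group of scroll markers */
.toc {
  scroll-target-group: auto;
}

/* Highlight the link whose target section is currently in view */
.toc a:target-current {
  color: red;
  font-weight: bold;
}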
Last year's CSS Wrapped covered CSS anchor positioning: an exciting update that changes the way you can position elements relative to each other. And since that coverage, it became a part of Interop 2025, and browser support expanded.
However, while CSS could move an element to a fallback position, it had no way of knowing which fallback was chosen. This meant that if your tooltip flipped from the bottom to the top of the screen, the arrow would still be pointing the wrong way. This is now resolved with anchored container queries.
Anchor queries can be created with two steps:
First, apply container-type: anchored to the positioned element, like your tooltip. This enables the element to be "aware" of its anchor position fallback.
Next, use the anchored(fallback: ...) function within an @container block to style any child of your positioned element based on the active fallback value.
When you specify a fallback value, it can either be a custom fallback that you name and specify, or it can be one of the browser defaults like flip-block, or flip-inline.
Here's a quick demo of how you can use anchored container queries to automatically flip a tooltip's arrow when its position changes:
/* The element our tooltip is anchored to */
.anchor {
  anchor-name: --my-anchor;
}

/* The positioned element (tooltip) */
.tooltip {
  position: fixed;
  position-anchor: --my-anchor;
  position-area: bottom;

  /* Reposition in the block direction */
  position-try-fallbacks: flip-block;

  /* Make it an anchored query container */
  container-type: anchored;

  /* Add a default "up" arrow */
  &::before {
    content: '▲';
    position: absolute;
    /* Sits on top of the tooltip, pointing up */
    bottom: 100%;
  }
}

/* Use the anchored query to check the fallback */
@container anchored(fallback: flip-block) {
  .tooltip::before {
    /* The 'top' fallback was used, so flip the arrow */
    content: '▼';
    bottom: auto;
    /* Move the arrow below the tooltip */
    top: 100%;
  }
}
This is a huge win for anchor positioning and component libraries, enabling more robust and self-contained UI elements with less code.
Hover and focus-triggered UI is everywhere on the web, from tooltips to rich hovercards and page previews. While this pattern often works well for mouse users, it can be inaccessible to other modalities like touchscreen. Additionally developers have to manually implement the logic for each input type, leading to inconsistent experiences.
The new interestfor attribute solves this by providing a native, declarative way to style an element when users "show interest" in it without fully activating it. It's invoked similarly to the commandfor attribute, but, instead of a click, interestfor is activated when a user "shows interest" in an element, such as by hovering over it with a mouse or focusing it with a keyboard. When paired with popover="hint", it becomes incredibly easy to create layered UI elements like tooltips and hovercards without any custom JavaScript.
<button interestfor="callout-1"></button>
<div id="callout-1" popover="hint">
  Product callout information here.
</div>
Note: Unlike command invokers, which only work on button elements, interest invokers can be set on links (<a> tags) as well as buttons.
Here’s a demo that uses interestfor to create product callouts on an image. Hovering over the buttons on the image will reveal more information about each product.
Interest Delays
One additional new feature that landed with interest invokers is the ability to set interest delays. This prevents an interest-invoked element from being triggered prematurely. You can set a delay to both open and close the interest invoker using the interest-delay property, which accepts a time-based value. 0.5 seconds is the default, but you can speed it up, for example, by doing:

/* applies an updated delay timing value on the interest-invoking button */
[interestfor] {
  interest-delay: 0.2s;
}
To determine if an element is stuck, snapped, or scrollable you could use a bunch of JavaScript … which isn’t always easy to do because you have to attach timeouts to scroll events and so on.
Thanks to scroll-state queries–available from Chrome 133–you can use CSS to declaratively, and more performantly, style elements in these states.
Recording of the demo. When an item is snapped, it gets styled differently.
To use a scroll-state query declare container-type: scroll-state on an element.

.parent {
  container-type: scroll-state;
}
Once you have that in place, children of that element can then query whether that element is in a certain scroll-state:
Stuck state: when the element is stuck.
Snapped state: when the element is snapped.
Scrollable state: when the element is overflowing.
For example, to style the snapped element differently, use the snapped scroll-state query:
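A minimal sketch, assuming carousel items with an inner .item-content wrapper (both class names are illustrative):

/* Each item becomes a scroll-state container... */
.carousel > .item {
  container-type: scroll-state;
}

/* ...so its children can be styled while the item is snapped on the x axis */
@container scroll-state(snapped: x) {
  .item-content {
    background: hotpink;
  }
}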
The usual method to create staggered animations for list items, where each item appears sequentially, requires you to count DOM elements and hard-code these values into custom properties (for example, --index: 1;, --index: 2;) using :nth-child selectors. This method is cumbersome, fragile, and not scalable, especially when the number of items changes dynamically.
The new sibling-index() and sibling-count() functions make your life easier here, as these functions provide native awareness of an element's position among its siblings. The sibling-index() function returns a 1-based integer representing the element's position, while sibling-count() returns the total number of siblings.
These let you write concise, mathematical formulas for layouts and animations that automatically adapt to the number of elements in the DOM.

li {
  /* Create a staggered delay. */
  /* We subtract 1 because sibling-index() starts at 1, */
  /* ensuring the first item starts immediately (0s). */
  transition: opacity 0.25s ease, translate 0.25s ease;
  transition-delay: calc(0.1s * (sibling-index() - 1));

  @starting-style {
    opacity: 0;
    translate: 1em 0;
  }
}
Demo showing a staggered entry animation on the 4 images. Hit the shuffle button to randomize the order.
Recording of the demo.
The container option for Element.scrollIntoView lets you perform a scrollIntoView that only scrolls the nearest ancestor scroll container. This is extremely useful if you have nested scroll containers. With the option set to "nearest", calling scrollIntoView won’t scroll every ancestor scroll container up to the viewport, only the nearest one.

slideList.addEventListener('click', (evt) => {
  // scrollIntoView will automatically determine the position.
  evt.target.targetSlide.scrollIntoView({container: 'nearest', behavior: 'smooth'});
});
Recording showing a scrollIntoView action without and with container set to "nearest".
Demo featuring a JavaScript-based carousel that uses scrollIntoView to scroll to the specific slide in the carousel. Use the toggle at the top left to control whether container: "nearest" should be used or not.
Nested view transition groups is an extension to view transitions that lets you nest ::view-transition-group pseudo-elements within each other.
When view transition groups are nested, instead of putting them all as siblings under a single ::view-transition pseudo-element, it's possible to retain 3D and clipping effects during the transition.
To nest ::view-transition-group elements in another group, use the view-transition-group property on either the parent or children.
The nested groups get placed inside a new ::view-transition-group-children(…) pseudo-element in the tree. To reinstate the clipping used in the original DOM, apply overflow: clip on that pseudo-element.
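A rough sketch of that setup, assuming a .card element containing an .avatar, each with its own view-transition-name (the selectors and names are illustrative):

.card {
  view-transition-name: card;
  /* Descendants' named groups nest inside this element's group */
  view-transition-group: contain;
}

.avatar {
  view-transition-name: avatar; /* rendered inside the card's group */
}

/* Reinstate the clipping the card applies in the real DOM */
::view-transition-group-children(card) {
  overflow: clip;
}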
Demo for Nested View Transition Groups. Without nested view transition groups, the avatar and name don't rotate along with the card. But when the option is checked, the 3D effect can be restored.
For browsers with no support, check out this recording:
Recording of the demo showing the demo. It shows the behavior without and with nested view transition groups.
Using insertBefore to move an element in the DOM is destructive. If you move a playing video or an iframe using insertBefore, it reloads and loses its state completely.
However, from Chrome 133, you can use moveBefore. It works exactly like insertBefore, but it keeps the element alive during the move.
This means videos keep playing, iframes don't reload, CSS animations don’t restart, and input fields keep their focus—even while you are actively reparenting them across your layout.
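A minimal sketch (the #sidebar selector is illustrative); moveBefore takes the same arguments as insertBefore:

// Reparent a playing <video> into the sidebar without resetting its state
const video = document.querySelector('video');
const sidebar = document.querySelector('#sidebar');

// Same signature as insertBefore: parent.moveBefore(node, referenceNode)
sidebar.moveBefore(video, sidebar.firstChild);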
Demo to compare behavior of insertBefore and moveBefore.
For browsers with no support, check out this recording:
Recording of the demo showing a YouTube embed that is playing. When the iframe gets moved with moveBefore, the video keeps playing. When it gets moved with insertBefore, the iframe reloads.
The CSS attr() function, which lets you use the value of an HTML attribute within your CSS, has been powered-up.
Previously, attr() could only be used within the content property of pseudo-elements and could only return values as a CSS string. The updated attr() function expands its capabilities, allowing attr() to be used with any CSS property, including custom properties. It can now also parse attribute values into various data types beyond just strings, like colors, lengths, and custom identifiers.
With the updated function, you can set an element's color property based on a data-color attribute, parsing it as a <color> type with a fallback.

div {
  color: attr(data-color type(<color>), red);
}
To solve a common UI challenge, you can dynamically set the view-transition-name for multiple elements using their id attribute, parsed as a <custom-ident>. This avoids repetitive CSS rules for each element.
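A minimal sketch of that pattern (the .card selector is illustrative):

/* Each card gets a unique view-transition-name derived from its id attribute */
.card {
  view-transition-name: attr(id type(<custom-ident>), none);
}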
Finally, this demo shows how to use the attr() function in multiple ways. First use the data-rating attribute to determine a percent-fill to visually fill the star mask and represent the rating. Then use the same data attribute in the content property to insert the value in a pseudo-element.
When a popover, <dialog>, or <details> element gets toggled, it can be interesting to know which element was responsible for toggling it. For example, knowing if the user pressed the “Accept Cookies” or “Reject Cookies” button to dismiss a cookie banner is a very important detail.
The source attribute of the ToggleEvent lets you know exactly that, as it contains the element which triggered the event to be fired, if applicable. Based on that source you can take different actions.
<div id="cookiebanner" popover="auto"> <p>Would you like a cookie?</p> <button id="yes" commandfor="cookiebanner" command="hide-popover">Yes</button> <button id="no" commandfor="cookiebanner" command="hide-popover">No</button></div><script> const $btnYes = document.getElementById('yes'); const $btnNo = document.getElementById('no'); const $cookiebanner = document.getElementById('cookiebanner'); $cookiebanner.addEventListener('toggle', event => { if (event.source == $btnYes) { // Give the user a cookie } else if (event.source == $btnNo) { // Don't give the user a cookie } });</script>
Cookie banner demo that uses ToggleEvent.source. The demo also uses Invoker Commands.
A font’s content box is defined by internal metrics—specifically the ascent and descent that reserve space for accents and hanging characters.
Illustration showing the ascender and descender line of a typeface. (Source: Material Design)
Because the visual boundaries of Latin text are the cap height and the alphabetic baseline, rather than the ascent and descent, text will appear optically off-center even when it is mathematically centered within a container.
Illustrations showing the cap height and baseline of a typeface. (Source: Material Design)
The text-box properties make finer control of vertical alignment of text possible, letting you flawlessly center text vertically. The text-box-trim property specifies the sides to trim, above or below (or both), and the text-box-edge property specifies the metrics to use for text-box-trim effects.
When trimming both edges and setting the over edge metric to cap and the under edge metric to alphabetic, text will be visually centered.
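A minimal sketch using the text-box shorthand, which combines text-box-trim and text-box-edge:

h1 {
  /* Trim above to the cap height and below to the alphabetic baseline */
  text-box: trim-both cap alphabetic;
}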
The new shape() function lets you clip an element to a complex, non-polygonal, responsive shape in CSS. It's a great alternative to clipping masks built with clip-path: path(), and works seamlessly with CSS custom properties to define coordinates and control points, making it more maintainable than SVG shapes. This also means you can animate your custom properties within shape() to create dynamic and interactive clipping.
Here's how to create a flag shape with curved top and bottom edges using shape():
.flag {
  clip-path: shape(
    from 0% 20px,
    curve to 100% 20px with 25% 0% / 75% 40px,
    vline to calc(100% - 20px),
    curve to 0% calc(100% - 20px) with 75% 100% / 25% calc(100% - 40px),
    close
  );
}
In this example, the horizontal coordinates use percentages to scale with the element's width, while the vertical coordinates for the curve's height use fixed pixel values, creating a responsive effect where the flag's wave remains constant regardless of the element's size.
Another example here uses a blob generator for shape() to create a fun frame effect:
The if() function in CSS lets you set different values for a property based on a conditional test. Think of it like a ternary operator in JavaScript, but for your stylesheets. It provides a cleaner and more concise way to handle dynamic styling compared to writing multiple, verbose @media or @supports blocks for single property changes.
The syntax is straightforward. The if() function takes a series of condition-value pairs, separated by semicolons. The first condition that evaluates to true will have its corresponding value applied. You can also provide an else fallback value.
Currently, if() can be used with three types of queries:
media(): For media queries.
supports(): For feature queries.
style(): For style queries.
One example of using if() is creating inline media queries. This allows you to adjust styling for different viewport sizes or device capabilities without writing separate @media blocks.
For example, you can create a responsive layout that changes from a column to a row based on viewport orientation:
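A minimal sketch of such an inline media query (the .layout selector is illustrative):

.layout {
  display: flex;
  /* Row in landscape, column otherwise, with no separate @media block */
  flex-direction: if(
    media(orientation: landscape): row;
    else: column
  );
}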
This approach is more concise than a traditional media query, which requires you to define the styles in two separate places. With if(), you can keep the logic for a single property in one place, making your CSS easier to read and maintain. Change the orientation of the layout in this CodePen by opening the CSS or HTML side pane:
CSS custom functions are a fantastic new addition to the CSS language, and make it much easier to write composable, reusable, and clear functional styling logic. A custom function is made up of the @function statement, a function name prefixed with a double dash (--), a series of arguments, and a result block. The arguments can also have default, or fallback, values.
An example of a simple CSS function is the "negate" function which returns the inverse value of a number:
/* Negate function returns the negative of a value */
@function --negate(--value) {
  result: calc(-1 * var(--value));
}

/* Usage */
html {
  --gap: 1em;
  padding: --negate(var(--gap));
}
There are many ways you can use functions in CSS. Ultimately, we'll likely see new patterns emerge. For example, you might store CSS utilities in a utils.css file that contains multiple functions. One of my favorite CSS functions is the conditionally rounded border radius. The following function removes an element's border-radius when it gets within a specified distance of the viewport edge (defaulting to 4px), otherwise applying the desired radius. You can provide one argument for the radius, or a second to override the edge distance:

/* Conditionally apply a radius until you are
   (default: 4px, or specify second argument)
   from the edge of your screen */
@function --conditional-radius(--radius, --edge-dist: 4px) {
  result: clamp(0px, ((100vw - var(--edge-dist)) - 100%) * 1e5, var(--radius));
}

/* usage */
.box {
  /* 1rem border radius, default (4px) distance */
  border-radius: --conditional-radius(1rem);
}

.box-2 {
  /* 1rem border radius, right at the edge (0px distance) */
  border-radius: --conditional-radius(1rem, 0px);
}
One nice update that landed this year is the ability to use range syntax in style queries and if() statements. Media queries and container queries already supported this capability, but before Chrome 142, style queries required an exact value match, like @container style(--myVal: true).
Now, you can type your values and use them with comparison operators like <, >, <=, and >=. This enables many new architectural capabilities directly in your CSS.
The following demo uses stylized cards to visualize the daily weather. The HTML markup includes data, such as the chance of rain, which is indicated by the value of data-rain-percent.
Now, if the chance of rain is greater than 45%, the card will get a blue background.
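A minimal sketch of that demo, assuming cards like <div class="card" data-rain-percent="60"> with a .card-body child (the markup and class names are illustrative):

/* Register the property so its value can be compared numerically */
@property --rain {
  syntax: "<number>";
  inherits: true;
  initial-value: 0;
}

.card {
  /* Read the attribute as a number */
  --rain: attr(data-rain-percent type(<number>), 0);
}

/* Style query with range syntax: children of the card turn blue above 45 */
@container style(--rain > 45) {
  .card-body {
    background: royalblue;
  }
}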
Range queries can also be used in if() statements now as well, meaning more concise phrasing for styles. For example, you can write the above code even more concisely using inline if():
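Continuing the sketch above, the same condition can be written inline:

.card-body {
  background: if(style(--rain > 45): royalblue; else: white);
}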
The stretch keyword is for use with CSS sizing properties (such as width and height) and lets elements grow to exactly fill their containing block's available space.
It’s similar to 100%, except the resulting size is applied to the margin box of the element instead of the box determined by box-sizing.

.element {
  height: stretch;
}
Using this keyword lets the element keep its margins while still being as large as possible.
Demo to compare behavior of height being set to auto, 100vh, 100%, or stretch.
This year, CSS gives us more control over the shape of our elements with the new corner-shape property. This experimental feature lets you customize the shape of corners beyond the standard rounded corners available with border-radius.
You can now create a variety of corner styles, including:
round
bevel
notch
scoop
squircle
This property opens up a world of creative possibilities. From flower-like shapes to hexagonal grids, and even enabling a simple squircle; this CSS feature is small but mighty. You can even animate between different corner shapes for dynamic and engaging user interfaces, making this a great option for hover effects and interest states.
For even more control, you can use the superellipse() function to create any continuous curve, allowing for fine-tuned and unique corner designs.
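A minimal sketch (selectors are illustrative); corner-shape works together with border-radius, which sets how far each corner extends:

.card {
  border-radius: 2rem;
  corner-shape: squircle;
}

.card-alt {
  border-radius: 2rem;
  corner-shape: superellipse(1.8); /* tune how square or round the curve feels */
}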
[$] Bazzite: a gem for Linux gamers
Linux Weekly News
lwn.net
2025-12-09 15:18:16
One of the things that has historically stood between Linux and the
fabled "year of the Linux desktop" is its lack of support for video
games. Many users who would have happily abandoned Windows have,
reluctantly, stayed for the video games or had to deal with dual
booting. In the past few years, th...
Man Charged for Wiping Phone Before CBP Could Search It
404 Media
www.404media.co
2025-12-09 15:04:59
The exact circumstances around the search are not known. But activist Samuel Tunick is charged with deleting data from a Google Pixel before CBP’s Tactical Terrorism Response Team could search it....
A man in Atlanta has been arrested and charged for allegedly deleting data from a Google Pixel phone before a member of a secretive Customs and Border Protection (CBP) unit was able to search it, according to court records and social media posts reviewed by 404 Media. The man, Samuel Tunick, is described as a local Atlanta activist in Instagram and other posts discussing the case.
The exact circumstances around the search—such as why CBP wanted to search the phone in the first place—are not known. But it is uncommon to see someone charged specifically for wiping a phone, a feature that is easily accessible in some privacy and security-focused devices.
The indictment
says on January 24, Tunick “did knowingly destroy, damage, waste, dispose of, and otherwise take any action to delete the digital contents of a Google Pixel cellular phone, for the purpose of preventing and impairing the Government’s lawful authority to take said property into its custody and control.” The indictment itself was filed in mid-November.
Tunick was arrested earlier this month, according to a post on a crowd-funding site and court records. “Samuel Tunick, an Atlanta-based activist, Oberlin graduate, and beloved musician, was arrested by the DHS and FBI yesterday around 6pm EST. Tunick's friends describe him as an approachable, empathetic person who is always finding ways to improve the lives of the people around him,”
the site says
. Various activists have
since shared news
of Tunick’s arrest on social media.
The indictment says the phone search was supposed to be performed by a supervisory officer from a CBP Tactical Terrorism Response Team. The American Civil Liberties Union (ACLU)
wrote in 2023
these are “highly secretive units deployed at U.S. ports of entry, which target, detain, search, and interrogate innocent travelers.”
“These units, which may target travelers on the basis of officer ‘instincts,’ raise the risk that CBP is engaging in unlawful profiling or interfering with the First Amendment-protected activity of travelers,” the ACLU added.
The Intercept previously covered the case of a sculptor and installation artist, Gach, who was detained at San Francisco International Airport and had his phone searched. The report said Gach did not know why, even years later.
Court records show authorities have since released Tunick, and that he is restricted from leaving the Northern District of Georgia as the case continues.
The prosecutor listed on the docket did not respond to a request for comment. The docket did not list a lawyer representing Tunick.
About the author
Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.
Donating the Model Context Protocol and Establishing the Agentic AI Foundation
Today, we’re donating the Model Context Protocol (MCP) to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation, co-founded by Anthropic, Block and OpenAI, with support from Google, Microsoft, Amazon Web Services (AWS), Cloudflare, and Bloomberg.
One year ago, we introduced MCP as a universal, open standard for connecting AI applications to external systems. Since then, MCP has achieved incredible adoption:
Across the ecosystem: There are now more than 10,000 active public MCP servers, covering everything from developer tools to Fortune 500 deployments;
Across platforms: MCP has been adopted by ChatGPT, Cursor, Gemini, Microsoft Copilot, Visual Studio Code, and other popular AI products;
Across infrastructure: Enterprise-grade infrastructure now exists with deployment support for MCP from providers including AWS, Cloudflare, Google Cloud, and Microsoft Azure.
We’re continuing to invest in MCP’s growth. Claude now has a directory with over 75 connectors (powered by MCP), and we recently launched Tool Search and Programmatic Tool Calling capabilities in our API to help optimize production-scale MCP deployments, handling thousands of tools efficiently and reducing latency in complex agent workflows.
MCP now has an official, community-driven Registry for discovering available MCP servers, and the November 25th spec release introduced many new features, including asynchronous operations, statelessness, server identity, and official extensions. There are also official SDKs (Software Development Kits) for MCP in all major programming languages with 97M+ monthly SDK downloads across Python and TypeScript.
Since its inception, we’ve been committed to ensuring MCP remains open-source, community-driven and vendor-neutral. Today, we further that commitment by donating MCP to the Linux Foundation.
The Linux Foundation and the Agentic AI Foundation
The Linux Foundation is a non-profit organization dedicated to fostering the growth of sustainable, open-source ecosystems through neutral stewardship, community building, and shared infrastructure. It has decades of experience stewarding the most critical and globally-significant open-source projects, including The Linux Kernel, Kubernetes, Node.js, and PyTorch. Importantly, the Linux Foundation has a proven track record in facilitating open collaboration and maintaining vendor neutrality.
The Agentic AI Foundation (AAIF) is a directed fund under the Linux Foundation co-founded by Anthropic, Block and OpenAI, with support from Google, Microsoft, AWS, Cloudflare and Bloomberg. The AAIF aims to ensure agentic AI evolves transparently, collaboratively, and in the public interest through strategic investment, community building, and shared development of open standards.
Donating the Model Context Protocol
Anthropic is donating the Model Context Protocol to the Linux Foundation's new Agentic AI Foundation, where it will join goose by Block and AGENTS.md by OpenAI as founding projects. Bringing these and future projects under the AAIF will foster innovation across the agentic AI ecosystem and ensure these foundational technologies remain neutral, open, and community-driven.
The Model Context Protocol’s governance model will remain unchanged: the project’s maintainers will continue to prioritize community input and transparent decision-making.
The future of MCP
Open-source software is essential for building a secure and innovative ecosystem for agentic AI. Today’s donation to the Linux Foundation demonstrates our commitment to ensuring MCP remains a neutral, open standard. We’re excited to continue contributing to MCP and other agentic AI projects through the AAIF.
Clearspace is building the intentionality layer of the internet. Our mission is to build technology as effective at protecting human attention as social media is at exploiting it (infinite scrolling, short-form feeds, manipulative notifications, etc). Our category defining mobile app has been featured on Huberman Lab, New York Times Wirecutter, NPR Marketplace, Forbes, TBPN.
People that want a better relationship with their devices have nowhere to turn except for willpower. We are building an agent that achieves this on all devices by processing and filtering network traffic based on natural language rules.
About The Role
We are looking for a lead designer with strong aesthetic intuition and an obsession with designing through every inch of the user journey. You will be asked to bring pixel-perfect designs to life across several different platforms; if you don’t love the process of designing, this is not the role for you. You will be talking to users often and asked to speak to the overall brand direction at Clearspace.
Responsibilities
Design agent-first UI/UX patterns for the Clearspace platform
Create a design system that spans across mobile, web, desktop
Work directly with the founders and shape product direction
Move fast and autonomously
Qualifications
1+ years of professional product design in a consumer context
Experience creating a design system from scratch
Willing to work onsite in San Francisco
Nice to Have
Have had or considered Creative Director roles
Have examples of creating beautiful things outside of designs (physical art, video, music, etc)
About Clearspace
At Clearspace we help people reduce compulsive phone usage.
We exist to protect people's attention from the exploits of modern technology platforms and make space for the things that matter to them most.
We believe the technology to protect someone's attention should be just as sophisticated and effective as the tech that is exploiting it, and we are building a world-class engineering team to arm the world with a comprehensive attention protection stack.
My name is Bruno Simon, and I'm a creative developer (mostly for the web).
This is my portfolio. Please drive around to learn more about me and discover the many secrets of this world.
And don't break anything!
Behind the scenes
Thank you for visiting my portfolio!
If you are curious about the stack and how I built it, here’s everything you need to know.
Three.js
Three.js is the library I’m using to render this 3D world.
It was created by mr.doob (X, GitHub), followed by hundreds of awesome developers, one of whom is Sunag (X, GitHub), who added TSL, enabling the use of both WebGL and WebGPU, making this portfolio possible.
Three.js Journey
If you want to learn Three.js, I got you covered with this huge course.
It contains everything you need to start building awesome stuff with Three.js (and much more).
Devlogs
I’ve been making devlogs since the very start of this portfolio and you can find them on my YouTube channel.
Even though the portfolio is out, I’m still working on the last videos so that the series is complete.
Source code
The code is available on GitHub under MIT license. Even the Blender files are there, so have fun!
For security reasons, I’m not sharing the server code, but the portfolio works without it.
Music
The music you hear was made especially for this portfolio by the awesome Kounine (Linktree).
The tracks are now under CC0 license, meaning you can do whatever you want with them!
Download them here.
While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI that they’re doing others a favor. If I wanted to know what an AI thinks I’ll ask it. I’m here because I want to know what other people think.
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.
We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.
Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.
I don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to identify them.
What do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?
It depends on if you're saying "Infowars has the answer, check out this article" vs "I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."
Yes. Unless something useful is actually added by the commenter or the post is about, "I asked llm x and it said y (that was unexpected)".
I have a coworker who does this somewhat often and... I always just feel like saying well that is great but what do you think? What is your opinion?
At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.
To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.
I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.
I endorse this. Please do take whatever measures are possible to discourage it,
even if it won't stop people
. It at least sends a message: this is not wanted, this is not helpful, this is not constructive.
I think they should be banned, if there isn't a contribution besides what the LLM answered. It's akin to 'I googled this', which is uninteresting.
I do find it useful in discussions of LLMs themselves. (Gemini did this; Claude did it too but it used to get tripped up like that).
I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.
Agreed - in fact these folks are going out of their way to be transparent about it. It's much easier to just take credit for a "smart" answer
Banning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.
This is what DeepSeek said:
> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.
I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.
But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.
There is friction to asking AI yourself. And a comment typically means that "I found the AI answer insightful enough to share".
Unfortunately it's easier to train an AI to be convincing than to be correct, so it can look insightful before it's true.
Like horoscopes, only they're not actually that bad so roll a D20 and on a set of numbers known only to the DM (and varying with domain and task length) you get a textbook answer and on the rest you get convincing nonsense.
The problem is that the AI answer could just be wrong, and there’s another step required to validate what it spit out. Sharing the conversation without fact checking it just adds noise.
Maybe I remember the Grok ones more clearly but it felt like “I asked Grok” was more prevalent than the others.
I feel like the HN guidelines could take inspiration from how Oxide uses LLMs (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.
Of course, if it’s banned maybe people just stop admitting it.
Depends on the context.
I find myself downvoting them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.
Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here if an AI-generated TLDR of 74 (75?) page PDF is correct, is a perfectly valid and sensible use:
https://news.ycombinator.com/item?id=46164360
You can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLMesque structure of most of those bot answers, how could you tell if one has crafted the answer so much that it looks like a genuine human answer?
It started, as many things do these days, by scrolling on X.
I was reading post after post about the power crisis hitting AI data centers—GPU racks sitting idle, waiting not on chips, but on electricity. I texted with Sam Altman—who confirmed power was indeed a major constraint. I pinged our engineering team—and found that they already had the outline of a plan to build a power turbine based on our Symphony supersonic engine.
After a few conversations, it became clear: AI didn’t just need more turbines—it needed a new and fundamentally better turbine. Symphony was the perfect new engine to accelerate AI in America. About three months later, we had a signed deal for 1.21 gigawatts and had started manufacturing the first turbine.
Today, we’re announcing Superpower, our new 42‑megawatt natural gas turbine, along with a $300M funding round and Crusoe as our launch customer. And most importantly: this marks a turning point. Boom is now on a self-funded path to both Superpower and the Overture supersonic airliner.
I want to share the real story of how this happened—and why supersonic technology is exactly what America’s energy crisis demands.
America Doesn’t Have 10–15 Years to Solve Its Power Problem the Old Way
If you’ve been paying attention, you know the U.S. is in a genuine energy crunch. GPU racks are idling because they can’t get power. Data centers are fighting over substations and interconnection queues. Meanwhile China is adding power capacity at a wartime pace—coal, gas, nuclear, everything—while America struggles to get a single transmission line permitted.
AI won’t wait for us to fix the grid. And the United States simply doesn’t have 10–15 years to build out power infrastructure the old way.
Hyperscalers have already moved to their own Plan B: behind‑the‑meter power plants. You’ve seen XAI’s Colossus I and II in Memphis. OpenAI’s Stargate I in Abilene. These projects are powered by arrays of aeroderivative natural-gas turbines—which are, fundamentally, modified jet engines from the 1970s. There’s something brilliant in this approach: the transition from gigantic “frame” turbines to arrays of mid-size “aeroderivative” turbines mirrors the computing industry’s shift from mainframes to blade servers.
The problem? The “blade servers” of the energy world are old tech and they’re sold out. Because the most popular “aeroderivative” turbines are based on subsonic jet engines, they’re happiest when the outside air temperature is -50°F—like it is when going Mach 0.8 at 30,000 feet. As outside temperatures rise, there is no option but to throttle back the engines—or else the turbine blades literally melt down. These turbines begin losing power at about 50°F and by the time it’s 110°—as often happens in popular data center locations like Texas—30% of generation capacity is lost. Nonetheless, major manufacturers all have backlogs through the rest of the decade and none is building a new-generation advanced-technology turbine.
A Supersonic Engine Core Makes the Perfect Power Turbine
When we designed the Symphony engine for Overture, we built something no one else has built this century: a brand-new large engine core optimized for continuous, high‑temperature operation.
A subsonic engine is built for short bursts of power at takeoff. A supersonic engine is built to run hard, continuously, at extreme thermal loads. Symphony was designed for Mach 1.7 at 60,000 feet, where effective temperatures reach 160°F—not the frigid -50°F conditions where legacy subsonic engines operate.
This gives Superpower several critical advantages:
Full power even with high ambient heat – Where legacy turbines lose 20–30% at 110°F, Superpower maintains its full 42MW output without derate.
Waterless operation – Legacy turbines need huge quantities of water for cooling to avoid thermal derate in hot environments. Superpower doesn’t. It stays at full output, water‑free.
Cloud‑native control and monitoring – Superpower inherits the telemetry and operations stack we built for XB‑1. Every turbine streams real‑time performance data, supports remote control, and flags anomalies before customers ever notice.
Superpower and Symphony are based on virtually identical turbine engines. Both share the identical core (HPC and HPT) and a slightly tuned low spool. In the place of Symphony’s hollow-core titanium fan, Superpower adds two additional compressor stages plus a three-stage free power turbine connected to a high-efficiency generator on its own shaft. Additionally, the engines use slightly different fuel nozzles, Symphony’s optimized for Jet A vs. Superpower’s for natural gas.
Scaling Production the Supersonic Way: Vertical Integration
The legacy aerospace supply chain is congested. When the mission is urgent and the supply chain congested, you build the supply chain. The new Superpower Superfactory starts with a simple vision: raw materials in one side of the building, gigawatts of completed power turbine packages out the other side. We’ve already started making the first parts—and much of the production equipment to support 2GW/yr is on order. With this new financing we’re ready to accelerate further.
If America wants to build at the speed AI requires, vertical integration isn’t optional. We’re standing up our own foundry and our own large scale CNC machining capability. We’ll have more to share on the Superpower Superfactory in early 2026.
Superpower is sort of like our Starlink moment, the strongest accelerant we’ve ever had toward our core mission of making Earth dramatically more accessible.
The fastest way to a certified, passenger-carrying Symphony engine is to run its core for hundreds of thousands of hours in the real world, powering Earth’s most demanding AI data centers. Every hour a Superpower turbine spins is an hour of validation for Symphony. Every gigawatt we deliver strengthens our vertical integration and manufacturing capability. And with Superpower profitability funding the remainder of the aircraft program, we’ve done something rare in aerospace: created a self-sustaining path to a new airliner.
Superpower also reminds me of what Boom is at our core: a team willing to take on what others say is impossible, to do with a small team what big companies might not even attempt.
Subscribe to the newsletter for Boom news and insights straight to your inbox.
Ransomware IAB abuses EDR for stealthy malware execution
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 15:24:00
An initial access broker tracked as Storm-0249 is abusing endpoint detection and response solutions and trusted Microsoft Windows utilities to load malware, establish communication, and persistence in preparation for ransomware attacks. [...]...
An initial access broker tracked as Storm-0249 is abusing endpoint detection and response solutions and trusted Microsoft Windows utilities to load malware, establish communication, and persistence in preparation for ransomware attacks.
The threat actor has moved beyond mass phishing and adopted stealthier, more advanced methods that prove effective and difficult for defenders to counter, even if well documented.
In one attack analyzed by researchers at cybersecurity company ReliaQuest, Storm-0249 leveraged the SentinelOne EDR components to hide malicious activity. However, researchers say that the same method works with other EDR products, as well.
SentinelOne EDR abuse
ReliaQuest says that the Storm-0249 attack started with ClickFix social engineering that tricked users into pasting and executing curl commands in the Windows Run dialog to download a malicious MSI package with SYSTEM privileges.
A malicious PowerShell script is also fetched from a spoofed Microsoft domain and piped straight into memory, never touching the disk and thus evading antivirus detection.
The MSI file drops a malicious DLL (SentinelAgentCore.dll). According to the researchers, "this DLL is placed strategically alongside the pre-existing, legitimate SentinelAgentWorker.exe, which is already installed as part of the victim's SentinelOne EDR."
Next, the attacker loads the DLL using the signed SentinelAgentWorker (DLL sideloading), executing the file within the trusted, privileged EDR process and obtaining stealthy persistence that survives operating system updates.
"The legitimate process does all the work, running the attacker's code, appearing as routine SentinelOne activity to security tools and bypassing detection,"
explains ReliaQuest
.
Signed executable side-loading the malicious DLL
Source: ReliaQuest
Once the attacker gains access, they use the SentinelOne component to collect system identifiers through legitimate Windows utilities like reg.exe and findstr.exe, and to funnel encrypted HTTPS command-and-control (C2) traffic.
Registry queries and string searches would normally raise alarms, but when conducted from within a trusted EDR process, they are treated as routine and ignored by security mechanisms.
ReliaQuest explains that the compromised systems are profiled using 'MachineGuid,' a unique hardware-based identifier that ransomware groups like LockBit and ALPHV use for binding encryption keys to specific victims.
This suggests that Storm-0249 conducts initial access compromises tailored to the needs of its typical customers, ransomware affiliates.
The abuse of trusted, signed EDR processes bypasses nearly all traditional monitoring. The researchers recommend that system administrators rely on behavior-based detection that identifies trusted processes loading unsigned DLLs from non-standard paths.
Furthermore, it is helpful to set stricter controls for curl, PowerShell, and LoLBin execution.
Apple's Slow AI Pace Becomes a Strength as Market Grows Weary of Spending
(Bloomberg) --
Shares of Apple Inc. were battered earlier this year as the iPhone maker faced repeated complaints about its lack of an artificial intelligence strategy. But as the AI trade faces increasing scrutiny, that hesitance has gone from a weakness to a strength — and it’s showing up in the stock market.
Through the first six months of 2025, Apple was the second-worst performer among the Magnificent Seven tech giants, as its shares tumbled 18% through the end of June. That has reversed since then, with the stock soaring 35%, while AI darlings like Meta Platforms Inc. and Microsoft Corp. slid into the red and even Nvidia Corp. underperformed. The S&P 500 Index rose 10% in that time, and the tech-heavy Nasdaq 100 Index gained 13%.
“It is remarkable how they have kept their heads and are in control of spending, when all of their peers have gone the other direction,” said John Barr, portfolio manager of the Needham Aggressive Growth Fund, which owns Apple shares.
As a result, Apple now has a $4.1 trillion market capitalization and the second biggest weight in the S&P 500, leaping over Microsoft and closing in on Nvidia. The shift reflects the market’s questioning of the hundreds of billions of dollars Big Tech firms are throwing at AI development, as well as Apple’s positioning to eventually benefit when the technology is ready for mass use.
“While they most certainly will incorporate more AI into the phones over time, Apple has avoided the AI arms race and the massive capex that accompanies it,” said Bill Stone, chief investment officer at Glenview Trust Company, who owns the stock and views it as “a bit of an anti-AI holding.”
Of course, the rally has made Apple’s stock pricier than it has been in a long time. The shares are trading for around 33 times expected earnings over the next 12 months, a level they’ve only hit a few times in the past 15 years, with a high of 35 in September 2020. The stock’s average multiple over that time is less than 19 times. Apple is now the second most expensive stock in the Bloomberg Magnificent Seven Index, trailing only Tesla Inc.’s whopping valuation of 203 times forward earnings. Apple’s shares climbed about 0.5% in early Tuesday trading.
“It’s really hard to see how the stock can continue to compound value at a level that makes this a compelling entry point,” said Craig Moffett, co-founder of research firm MoffettNathanson. “The obvious question is, are investors overpaying for Apple’s defensiveness? We think so.”
Catch your best ideas before they slip through your fingers
Do you ever have flashes of insight or an idea worth remembering? This happens to me 5-10 times every day. If I don’t write down the thought immediately, it slips out of my mind. Worst of all, I remember that I’ve forgotten something and spend the next 10 minutes trying to remember what it is. So I invented external memory for my brain.
Introducing Pebble Index 01 - a small ring with a button and microphone. Hold the button, whisper your thought, and it’s sent to your phone. It’s added to your notes, set as a reminder, or saved for later review.
Index 01 is designed to become muscle memory, since it’s always with you. It’s private by design (no recording until you press the button) and requires no internet connection or paid subscription. It’s as small as a wedding band and comes in 3 colours. It’s made from durable stainless steel and is water-resistant. Like all Pebble products, it’s extremely customizable and built with open source software.
Here’s the best part: the battery lasts for years. You never need to charge it.
Pre-order today for $75. After worldwide shipping begins in March 2026, the price will go up to $99.
Now that I’ve worn my Index 01 for several months, I can safely say that it has changed my life - just like with Pebble, I couldn’t go back to a world without this. There are so many situations each day where my hands are full (while biking or driving, washing dishes, wrangling my kids, etc) and I need to remember something. A random sampling of my recent recordings:
Set a timer for 3pm to go pick up the kids
Remind me to phone the pharmacy at 11am
Peter is coming by tomorrow at 11:30am, add that to my calendar
Jerry recommends reading Breakneck
Mark wants a Black/Red PT2
Before, I would take my phone out of my pocket to jot these down, but I couldn’t always do that (eg, while bicycling). I also wanted to start using my phone less, especially in front of my kids.
Initially, we experimented by building this as an app on Pebble, since it has a mic and I’m always wearing one. But, I realized quickly that this was suboptimal - it required me to use my other hand to press the button to start recording (lift-to-wake gestures and wake-words are too unreliable). This was tough to use while bicycling or carrying stuff.
Then a genius electrical engineer friend of mine came up with an idea to fit everything into a tiny ring. It is the perfect form factor! Honestly, I’m still amazed that it all fits.
The design needed to satisfy several critical conditions:
Must work reliably 100% of the time. If it didn’t work or failed to record a thought, I knew I would take it off and revert back to my old habit of just forgetting things.
It had to have a physical press-button, with a satisfying click-feel. I want to know for sure if the button is pressed and my thought is captured.
Long battery life - every time you take something off to charge, there’s a chance you’ll forget to put it back on.
Must be privacy-preserving. These are your inner thoughts. All recordings must be processed and stored on your phone. Only record when the button is pressed.
It had to be as small as a wedding band. Since it’s worn on the index finger, if it were too large or bulky, it would hit your phone while you held it in your hand.
Water resistance - must be able to wash hands, shower, and get wet.
We’ve been working on this for a while, testing new versions and making tweaks. We’re really excited to get this out into the world.
Here are a few of my favourite things about Index 01:
It does one thing really well - it helps me remember things.
It’s discreet. It's not distracting. It doesn't take you out of the moment.
There’s no AI friend persona and it’s not always recording.
It’s inexpensive. We hope you try it and see if you like it as well!
Colours: polished silver, polished gold, and matte black
US ring sizes: 6, 7, 8, 9, 10, 11, 12, 13
You can pre-order now and pick your size/colour later before your ring ships.
Cost and availability:
Pre-order price is $75, rises to $99 later. Ships worldwide, beginning in March.
Works with iPhone and Android:
We overcame Apple’s best efforts to make life terrible for 3rd party accessory makers and have Index 01 working well on iOS and Android.
Extremely private and secure:
Your thoughts are processed by open source speech-to-text (STT) and AI models locally on your phone. You can read the code and see exactly how it works - our Pebble mobile app is open source. Higher-quality STT is available through an optional cloud service.
No charging:
The battery lasts for up to years of average use. After the end of its life, send your ring back to us for recycling.
On-ring storage:
Recording works even if your phone is out of range. Up to 5 minutes of audio can be stored on-ring, then synced later.
No speaker or vibrating motor:
This is an input device only. There is an RGB LED, but it’s rarely used (to save battery life and to reduce distraction).
Works great with Pebble or other smartwatches:
After recording, the thought will appear on your watch, and you can check that it’s correct. You can ask questions like ‘What’s the weather today?’ and see the answer on your watch.
Raw audio playback:
Very helpful if STT doesn’t work perfectly due to wind or loud background noises.
Actions:
While the primary task is remembering things for you, you can also ask it to do things like ’Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion).
Customizable and hackable:
Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server!
99+ languages:
Speech to text and local LLM support over 99 languages! Naturally, the quality of each may vary.
Let me be very clear - Index 01 is designed at its core to be a device that helps you remember things. We want it to be 100% reliable at its primary task. But we’re leaving the side door open for folks to customize, build new interactions and actions.
Here’s how I’m thinking about it - a single click-hold + voice input will be routed to the primary memory processing path. Double-click-hold + voice input would be routed to a more general purpose voice agent (think ChatGPT with web search). Responses from the agent would be presented on Pebble (eg ‘What’s the weather tomorrow?’, ‘When’s the next northbound Caltrain?’) or other smartwatches (as a notification). Maybe this could even be an input for something like ChatGPT Voice Mode, enabling you to hear the AI response from your earbuds.
The built-in actions (set reminder, create note, alarms, etc.) are actually MCPs - basically mini apps that AI agents know how to operate. They run locally in WASM within the Pebble mobile app (no cloud MCP server required). Basically any MCP server can be used with the system, so intrepid folks may have fun adding various actions like Beeper, Google Calendar, weather, etc. that already offer MCPs.
Not everything will be available at launch, but this is the direction we are working towards. There will be 3 ways to customize your Index 01:
Trigger actions via button clicks - configure a single or double click to do things like take a photo, control your Home Assistant smart home, Tasker function, unlock your car. This will work better on Android since iOS Shortcuts doesn’t have an open API.
Trigger actions via voice input - write an MCP to do….basically anything? This is pretty open ended.
Route your voice recordings and/or transcriptions to your own webhook - or skip our AI processing entirely and send every recording to your own app or webapp.
### FAQ
How does it work?
People usually wear it on the index finger. Inside the ring is a button, a microphone, a Bluetooth chip, memory, and a battery that lasts for years. Click the button with your thumb, talk into the mic, and it records to internal memory. When your phone is in range, the recording is streamed to the Pebble app. It’s converted to text on-device, then processed by an on-device large language model (LLM) which selects an action to take (create note, add to reminders, etc).
When do I pick my size?
You’ll be able to pick your ring size and color after placing a pre-order. If you have a 3D printer, you can print our CAD designs to try on. We’re also planning a sizing kit. You can view the measurements of the inner diameter of each ring size.
How long does the battery last?
Roughly 12 to 15 hours of recording. On average, I use it 10-20 times per day to record 3-6 second thoughts. That’s up to 2 years of usage.
Is it secure and private?
Yes, extremely. The connection between ring and phone is encrypted. Recordings are processed locally on your phone in the open-source Pebble app. The app works offline (no internet connection) and does not require a cloud service. An optional cloud storage system for backing up recordings is available. Our plan is for this to be optionally encrypted, but we haven’t built it yet.
Is a paid subscription required?
No.
What kind of battery is inside?
Index 01 uses silver-oxide batteries.
Why can’t it be recharged?
We considered this but decided not to for several reasons:
You’d probably lose the charger before the battery runs out!
Adding charge circuitry and including a charger would make the product larger and more expensive.
You send it back to us to recycle.
Wait, it’s single use?
Yes. We know this sounds a bit odd, but in this particular circumstance we believe it’s the best solution to the given set of constraints. Other smart rings like Oura cost $250+ and need to be charged every few days. We didn’t want to build a device like that. Before the battery runs out, the Pebble app notifies and asks if you’d like to order another ring.
Is it always listening?
No. It only records while the button is pressed. It’s not designed to record your whole life, or meetings.
What if the speech-to-text processing misses a word or something?
You can always listen to each recording in the app.
Why no touchpad?
We experimented with a touchpad, but found it too easy to accidentally swipe and press. Also, nothing beats the feedback of a real gosh darn pressable button.
Is there a speaker or vibrating motor?
No. The button has a great click-feel to indicate when you are pressing.
Does it do health tracking like Oura?
Nope
How durable and water-resistant is it?
It’s primarily made from stainless steel 316, with a liquid silicone rubber (LSR) button. It’s water-resistant to 1 meter. You can wash your hands, do dishes, and shower with it on, but we don’t recommend swimming with it.
Does it work with iPhone and Android?
Yes
I love customizing and hacking on my devices. What could I do with Index 01?
Lots of stuff! Control things with the buttons. Route raw audio or transcribed text directly to your own app via webhook. Use MCPs (also run locally on-device! No cloud server required) to add more actions.
Is this an AI friend thingy or always-recording device?
No.
How far along is development?
We’ve been working on this in the background alongside watch development. It helps that our Pebble Time 2 partner factory is also building Index 01! We’re currently in the DVT stage, testing pre-production samples. We’ll start a wider alpha test in January with a lot more people. Here are some shots from the pre-production assembly line:
Show HN: Gemini Pro 3 Hallucinates the HN Front Page 10 Years from Today
tl;dr: In Rust, “trait composition” is a neat way to keep code clean and avoid spaghettification where a lot of components come together and need to be wired up.
Introduction
A major part of my almost two-decade-long career in programming has been spent working on “SDKs” in Rust, by which I mean building and maintaining complex systems as libraries that other developers use to implement applications on top of. I did this back at Immmer (now defunct), for Parity with Substrate Core/Client as well as its inner on-chain application SDK, on the matrix-rust-sdk, and last but not least at Acter for the Acter App and then the Zoe (relay) system.
For a while, but especially during the latest iteration, I have been wondering about that highest-layer architecture: how to design the client where all these subcomponents are piped together; how to design it in a way that stays flexible for yourself as well as others, yet robust and ideally testable; how to avoid spaghettification of the client, even if the underlying components are complex trait-based systems themselves.
As we have a lot of surface area to cover, I will not be discussing traits themselves too much – check the corresponding chapter in the excellent Rust book if you are looking for that – but assume you have an understanding of traits and trait bounds and have implemented them in Rust. I will throw around some almost-real code and examples without asking and expect the reader to be able to parse and understand them without much help, as I want to focus on the higher-level “how do we use this” architecture perspective.
Traits in SDKs
As with any big task, the best way to tackle it is by splitting it into smaller, manageable tasks and implementing these one by one. The same is true for building up large SDKs. Oftentimes they contain various components, like a storage layer; network or communication components; some internal state machine for the actual domain-specific logic; and maybe some developer-facing API or even UI components. To make implementation more manageable, it is commonplace to split them up into separate independent components, sometimes even as separate crates, and provide an outer interface.
In the SDK world you often find that these components internally need to be pluggable themselves. A storage component might be implemented with an embedded SQLite for mobile apps, with some SQL backend service or NoSQL database on the server, and with IndexedDB in the browser (with Wasm). Generally, the outer composed system doesn’t really have to care which of these is being used, so it can be up to that component to define it. A common way to provide this abstraction is by defining a trait for that lowest layer and having the various specific parts implement it. Then the higher layer, and the layers on top of it, can focus on their specific side of things.
This also nicely allows implementations to be pulled in, or compiled, only for the targets that actually use them, as well as to introduce new implementations gradually into production via feature flags. It’s a pretty neat way of organizing the code. In the Matrix SDK we have that layer for implementing storage, for example, and though not strictly because of the trait, the SDK even provides a macro to generate an entire test suite against your custom implementation.
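To make that concrete, here is a minimal sketch of the pattern (the trait and type names are illustrative, not taken from any of the SDKs mentioned): one trait for the lowest storage layer, with target- or feature-specific implementations behind it.

pub trait StateStore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

#[cfg(feature = "sqlite")]
pub struct SqliteStore { /* connection handle, paths, ... */ }

#[cfg(feature = "sqlite")]
impl StateStore for SqliteStore {
    fn get(&self, key: &str) -> Option<Vec<u8>> { todo!("query SQLite") }
    fn put(&mut self, key: &str, value: Vec<u8>) { todo!("write to SQLite") }
}

#[cfg(target_arch = "wasm32")]
pub struct IndexedDbStore { /* ... */ }

#[cfg(target_arch = "wasm32")]
impl StateStore for IndexedDbStore {
    fn get(&self, key: &str) -> Option<Vec<u8>> { todo!("query IndexedDB") }
    fn put(&mut self, key: &str, value: Vec<u8>) { todo!("write to IndexedDB") }
}

The composed system only ever talks to the trait; which concrete type backs it is decided per target or feature flag.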
To the mock
Having these traits brings in another nice benefit: mocking. As the higher-level components might have their own logic (like caching or ordering or something), testing them often requires setting up the lower-level component(s) as well. If instead you defined that interface in a trait, you can implement various mock types to test a range of scenarios for your functions and focus on this specific logic. What sounds tedious at first becomes a breeze with the help of crates like mockall. It’s a lot easier and often faster than setting up that lower-level layer just to test that the component pulls the objects from the store and returns them sorted regardless of the underlying order.
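As a rough sketch of how that looks with mockall (illustrative names again, assuming a StateStore-style trait like the one above):

use mockall::automock;

// The attribute goes on the trait definition itself; it generates a
// MockStateStore type for use in tests.
#[automock]
pub trait StateStore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn serves_canned_values_without_a_real_database() {
        let mut store = MockStateStore::new();
        // No database anywhere: the mock just hands back a canned value.
        store.expect_get().returning(|_key| Some(b"cached".to_vec()));
        assert_eq!(store.get("any-key"), Some(b"cached".to_vec()));
    }
}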
Middleware-ing
Similarly, by having the traits define the interfaces, you can add functionality nicely in a middleware fashion, similar to what is done in many web servers. Think of a caching layer on top of the database as an example. That caching layer can wrap anything implementing the trait while also implementing the trait itself. That way you can implement an LRU cache or something, regardless of the underlying storage type. As the interface is just the same trait again, you can mock the lower layer, ensuring good test coverage of exactly what this layer does. Further, you can just plug this “middleware” into the higher-level layer without any further changes. This is how we implemented a storage layer for the Rust SDK that splits off media storage (before that was added to the SDK itself) and keeps it at a different path (in the mobile “cache” directory), for example, while passing everything else along to whatever inner database system was being used otherwise (e.g., SQLite).
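A minimal sketch of that wrapping pattern, assuming the illustrative StateStore trait from above (a real cache would also handle eviction and invalidation):

use std::collections::HashMap;

pub struct CachedStore<S: StateStore> {
    inner: S,
    cache: HashMap<String, Vec<u8>>,
}

impl<S: StateStore> CachedStore<S> {
    pub fn new(inner: S) -> Self {
        Self { inner, cache: HashMap::new() }
    }
}

impl<S: StateStore> StateStore for CachedStore<S> {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // Serve from the cache if possible, otherwise fall through to the
        // wrapped store (a fuller version would also populate the cache here).
        self.cache.get(key).cloned().or_else(|| self.inner.get(key))
    }

    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.cache.insert(key.to_owned(), value.clone());
        self.inner.put(key, value);
    }
}

Because the wrapper implements the same trait, the higher-level layer can take a CachedStore<SqliteStore> without any further changes.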
But specific, sometimes
Now, for the traits you only want to expose the common interface, of course. But specific implementations sometimes still have APIs to fine-tune or configure certain things - like the path for the SQLite database. You don’t want to put these on the traits, as they are implementation-specific and pointless for other implementations. But as traits are implemented on concrete types, your concrete types can still add these helper functions, and as the higher-level API / SDK you often just use feature flags to expose them or not.
Composing over many traits
Now that you understand the complexity and usage of these subcomponents, think about how you tie them all together in the Client. This needs to connect the components and move messages from one to another - e.g. to get the messages that just came in from the network to the internal state machine, and the results from the state machine that trigger the storage layer to persist some of these changes. Of course you want the client to be as flexible over the specific implementations as possible – most of that higher-level code doesn’t really differ whether the message comes from LoRa, over QUIC or libp2p. It doesn’t matter to the client whether it will be stored in an SQLite database or IndexedDB either.
But at times you have interdependencies, so the Rust compiler needs to make sure that the message type the network layer returns is the one the state machine accepts. This is where things often spaghettify.
At the beginning that feels reasonable, but over time it grows, and the more things are pluggable, the more generics you need to add. The client needs one generic, then another, then another… moving from single letters to entire words, running out of words. Sooner than you think it becomes incomprehensible to follow. Not even mentioning the ever-increasing tree of trait bounds you have to keep around everywhere you expose that client - which is your main external API surface area, so you expose it a lot. Brave are those who then need to add another bound (like Send) to any of the lower traits…
“There must be a better way”, you think to yourself …
The three paths of the enlightenment
As always, you have a few options with various benefits and trade-offs to manage this more nicely. You can Box<dyn Trait> it, use type aliases, or compose a trait with associated types. Let’s look at them one by one, in order of increasing complexity.
Type alias
The first thing that probably comes to mind is to alias some of the type definitions to make it a bit cleaner. So you’d still have some components that are generic over some sub-traits, like struct GenericStateMachine<S: StateT, M: MessageT>, that implement most of the concrete logic, but then for the production environment you have an alias type NativeClientStateMachine = GenericStateMachine<NativeState, TcpMessage>; that you could use.
Depending on how you organize your code, the final client could really end up being a type NativeTcpClient = GenericClient<NativeClientStateMachine, NativeClientStorage, TcpProtocol>; itself. And you could even have a builder that, depending on the target, returns one or the other type, but both have the same API implemented via the traits.
This gives you all the benefits of having the concrete types, including access to the actual types, so the consumer’s code could even make implementation-specific calls, and its compilation would fail if they tried to do that against a type that doesn’t implement them (e.g. because they picked a different target arch). Of course this only works as long as the compiler doesn’t force you to specify which exact type you are expecting but can still infer it itself.
Navigating this tree isn’t easy. Especially when debugging you can easily end up at the wrong layer and wonder why your changes aren’t showing up.
dyn Traits
A common idea that might come to mind is to wrap the specific implementation in a new type that holds it internally as a dyn Trait, if the trait can be made dyn compatible (formerly known as “object safety”). In practice the type most likely must be wrapped in a Box, Arc or similar - if that is happening already anyway, then this might not be a problem. If dynamic dispatching is not too much of an overhead, this could be a viable solution.
But dyns come with another drawback: the compiler forgets all notion of the concrete type. While this can be cheaper in terms of code size (as generic functions aren’t repeated for each type), it also means that our specific type “is gone”. Any other methods that this type implements outside of the trait become inaccessible. In the Matrix SDK for storage, that seems to be acceptable, as the only implementation-specific tuning happens in the builder setup before it is passed to the StateStore.
But something as simple as getting implementation-specific configuration parameters returned from that type at runtime is now impossible, even if the type in question implements it and it can be asserted that the type is the right one.
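A rough sketch of such a wrapper, reusing the illustrative StateStore trait from the earlier sketches:

use std::sync::Arc;

pub struct DynStorage {
    inner: Arc<dyn StateStore + Send + Sync>,
}

impl DynStorage {
    pub fn new(store: impl StateStore + Send + Sync + 'static) -> Self {
        Self { inner: Arc::new(store) }
    }

    pub fn get(&self, key: &str) -> Option<Vec<u8>> {
        // Only what the trait exposes is reachable from here; any extra
        // methods on the concrete type are no longer accessible.
        self.inner.get(key)
    }
}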
Trait Composition
If dynamic dispatching isn’t feasible or the specific types still need to be available, and that alias list grows too long and becomes too tedious to update, you might come up with this: a trait combining all the types – I call it a composing trait. Rather than having a generic client with an ever-growing list of generics, you define a trait that specifies the concrete types via associated types. This is what we have been doing in the Parity SDK and its on-chain Wasm state machine.
The idea is to create a new trait Configuration that defines all the requirements as associated types, and have the client only reference that trait. It can still return aliased or sub-types that are generic, but they are then tied to that specific configuration. Like this:
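A minimal sketch of what that composing trait can look like (the trait names here are simplified and illustrative):

pub trait StorageC {}
pub trait NetworkC {}
pub trait StateMachineC {}

// One trait gathers every pluggable piece as an associated type ...
pub trait Configuration {
    type Storage: StorageC;
    type Network: NetworkC;
    type StateMachine: StateMachineC;
}

// ... and the client only needs a single generic parameter.
pub struct Client<C: Configuration> {
    storage: C::Storage,
    network: C::Network,
    state_machine: C::StateMachine,
}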
Unfortunately, in reality this is rarely as clean. Often you find yourself needing to define the interdependencies as well. For example: the network needs to give you a specific MessageT that the state machine also actually understands. Even if you use a trait here, the compiler will enforce that you use the same type. As a result, you end up with even very low-level trait definitions popping up in your highest-level configuration so that you can cross-reference them via the associated types:
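Sketched out, that cross-referencing might look roughly like this:

pub trait MessageT {}
pub trait StorageC {}

pub trait NetworkC {
    type Message: MessageT;
}

pub trait StateMachineC {
    type Message: MessageT;
}

// Even the low-level Message trait now surfaces in the top-level config,
// forcing the network and the state machine to agree on one type.
pub trait Configuration {
    type Message: MessageT;
    type Network: NetworkC<Message = Self::Message>;
    type StateMachine: StateMachineC<Message = Self::Message>;
    type Storage: StorageC;
}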
Nice and clean, but you can already see how it will become more complex when these traits grow. In particular, when you have to make changes to some of them, it ripples through the entire system quickly, with rather hairy and complex bounds failing with very verbose error messages. Let’s just add an ErrorT type that our client might yield when any of the inner components yields an error - so the client is meant to wrap all the inner error types. We add:
trait ErrorT {}

trait StorageC {
    type Message: MessageT;
    type Error: ErrorT;
    // .. to all types
}

// and on the config:
//
trait Configuration {
    // ...
    // gee, this is verbose...
    type Error: ErrorT
        + From<<Self::Storage as StorageC>::Error>
        + From<<Self::StateMachine as StateMachineC>::Error>
        + From<<Self::Network as NetworkC>::Error>;
}
It’s a bit verbose, but reasonable overall. It becomes trickier when you actually try to implement these types, as you need to make sure all the types also match up correctly. That way we are able to reduce the generics on the client from many to just one. Nice. But dragging around this massive Configuration is a pain, especially for the mock-testability we described before, as we have to mock all the associated types, creating a lot of glue code.
So instead, what I end up doing is have anything with actual logic still refer to the generics directly, so you can mock and test those specific pieces, and have the final Client<C: Configuration> just be a holder that passes along to the specific internal types, with the associated types passed in as generics.
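Roughly like this - SyncEngine is a made-up stand-in for any component that holds actual logic, and the traits are the ones from the previous sketches:

pub struct SyncEngine<N: NetworkC, S: StorageC> {
    network: N,
    storage: S,
}

impl<N: NetworkC, S: StorageC> SyncEngine<N, S> {
    pub fn tick(&mut self) {
        // actual logic lives here: pull messages from the network, run the
        // state machine, persist results to storage, ...
    }
}

pub struct Client<C: Configuration> {
    engine: SyncEngine<C::Network, C::Storage>,
}

impl<C: Configuration> Client<C> {
    pub fn tick(&mut self) {
        // the client stays a thin shell over the generic components
        self.engine.tick();
    }
}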
In practice it can become even trickier if you have some of these configurations on several layers. Like in the Parity Substrate codebase: to allow all clients to build on reusable CLI tooling, there is a Service that can construct your client. That service requires a Configuration for the network and the like, but only a subset of what a full node needs, and as a result the second needs to be a superset of the first. But that is a really advanced scenario, and if you have any good ideas to improve that situation, I am all ears.
Conclusion: Combined Composition
As so often, enlightenment isn’t picking one solution but combining them wisely.
What you probably end up doing is a combination of these composition types. Like in the Rust Matrix SDK, where at a lower level the pluggable storage is held via a dyn Trait, while at a higher level you might compose a client with a “trait composition” that allows any other (Rust) developer to plug in and replace any of the components as they please - including yourself, for platform- or target-specific implementations.
By keeping any actual logic in the separate components with specific traits for easy mocked testing, and using the “client” merely as the place where all these pipes come together, you can rely on the compiler’s type checks to ensure the correctness of the types being piped, while the mock tests cover all the actual logic. And integration tests should cover the end-to-end functionality of the client regardless.
To wrap things up nicely, you can hide that Client<C> inside a type alias that is itself held by a struct FfiClient(NativeClient); on which you expose a completely typed, no-generics, Rust-external API. Put a bow on it and ship it :).
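In sketch form, with NativeConfiguration standing in for whatever concrete configuration your production target uses:

// `NativeConfiguration` is some concrete impl of `Configuration` for the
// native target; the alias and newtype hide every generic from consumers.
pub type NativeClient = Client<NativeConfiguration>;

pub struct FfiClient(NativeClient);

impl FfiClient {
    pub fn tick(&mut self) {
        // the FFI surface exposes plain, fully-typed methods with no generics
        self.0.tick();
    }
}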
Version 146.0 of the Firefox web browser has been released. One feature of particular interest to Linux users is that Firefox now natively supports fractional scaled displays on Wayland. Firefox Labs has also been made available to all users even if they opt out of telemetry or participating in studies. "This means more experimental features are now available to more people."
This release also adds support for Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) for WebRTC. ML-KEM is "believed to be secure against attackers with large quantum computers". See the release notes for all changes.
The response to that post was significant. I received quite a few comments proclaiming how rare it was to find an engineer that fit the bill.
That’s fair!
But it’s not because only a tiny sliver of engineers are capable of working this way. They’re rare because very few engineers are ever taught to optimize for these skills, and even fewer companies reward them during the initial hiring phase. So the market ends up under-producing exactly the kind of talent it desires.
This post is for the engineers who want to be in that “rare” bucket.
Think about your career along two simple axes:
How you work: One-speed executor (same energy everywhere) vs. Master of context (being willing to change gears).
What you work on (the skill frontier): Established terrain (mature, saturated technologies) vs. Western fronts (domains where the rules are still being written).
While these axes describe the mechanics of your work, there is also an operating system underneath: product thinking and customer centricity. This operating system determines whether those mechanics actually translate into meaningful outcomes.
The engineers who advance fastest today live in the top-right corner of that map: they deliberately choose frontier domains; they work on the right stuff. They’re masters of context in how they work, and are guided by a clear understanding of customer outcomes. That combination is what I call the Western Front Innovator.
Today’s post is about how engineers struggling to progress professionally can intentionally steer their careers toward that top-right corner.
If, as part of your journey, you find yourself asking questions such as “How can I progress with learning React?” or “How can I become an expert with Kubernetes?” - stop right there!
Swarms of others have been developing expertise with technologies that emerged last decade for… at least a decade. It’s already their superpower. It’s unlikely to become yours, too.
When you chase mature stacks as your primary differentiator, you’re signing up to compete with people who have a massive head start. You’re playing their game on their field, by their rules.
This is no way to become “rare.”
Ask yourself: “What is emerging right now? Where are the rules still being written such that nobody has an insurmountable head start?”
Today, that’s a few areas, including but not limited to AI engineering – specifically the intersection of data pipelines, backend systems, and LLM-driven product development. I’ll focus on this example.
Now, let’s be clear. Despite what many job requirements and LinkedIn titles would have you believe, there’s no such thing as an “AI Engineer” in any deeply meaningful sense. There simply can’t be. A junior engineer who spends six months learning this stuff today is approximately as “experienced” as almost everyone else on the market (assuming they understand the CompSci fundamentals that underpin it).
In other words, being an AI Engineer doesn’t mean having a wealth of specialized experience. How could it, possibly?
It means being hungry to experiment. Quickly. And consistently. It means you’re willing to live on a moving frontier where the docs are incomplete, patterns are still solidifying, and nobody can pretend to have a decade of authority.
This is the first half of the Western Front Innovator: you choose to live on the frontier.
Being a master of context boils down to a simple principle: you adjust your engineering approach based on the stakes and the outcome you’re trying to achieve, not your habits.
This isn’t a separate “step” or a bonus skill sitting off to the side of the model. It’s the operating system that makes both axes work.
Without product thinking and customer centricity:
Context switching turns into over‑engineering or reckless hacking.
Frontier work turns into hype‑chasing.
With them:
Context switching becomes deliberate: you know when speed matters and when reliability matters because you understand the customer outcome you’re aiming for.
Frontier work becomes meaningful: you’re not just playing with new tools — you’re using them to solve real customer problems in ways that weren’t possible before.
This is why Western Front Innovators behave differently once they reach a frontier domain. They:
Start backwards from customer outcomes, not just stories and tasks.
Ask, “What is the actual job-to-be-done here?”
Push on why a feature matters and what success should look like.
Are willing to reshape the solution when the outcome demands it.
Now mix that mindset with frontier tech and the whole picture changes:
Instead of saying, “Give me tickets,” they say, “If our customers struggle with X, there’s probably a way to combine this new AI capability, this data we already have, and this workflow to solve it in a way that didn’t exist a year ago.”
These engineers don’t just ship features. They ship novel outcomes. And those outcomes get noticed fast.
Unfortunately, you may find yourself saying: “I can’t find opportunities that give me the space to do what you’re suggesting.”
Make your own opportunities. Use some downtime to wow your colleagues with a new direction. Work on side projects and/or start your own freelance practice to build up a portfolio. Do absolutely anything but blame your job or the market. Ultimately only you are responsible for ensuring you grow the way you want. Remember that.
Also, good news…
Historically, companies haven’t hired nearly enough Western Front Innovators. They optimized for narrow speed (ship tickets) or narrow craftsmanship (polish small, stable areas) rather than people who could steer and adapt.
AI-assisted development is already changing the landscape. As the raw mechanics of coding get easier, the premium is quickly moving toward:
Deciding what to build.
Deciding how fast to move.
Deciding where new tools can reshape the problem altogether.
In this world, Western Front Innovators aren’t only nice to have on a team. They’re absolutely critical. And this means companies will soon have no choice but to begin more purposefully seeking them and fostering their growth.
If you’re a software engineer looking for an edge, don’t just collect tech buzzwords and hope that translates into some vague idea of “senior.”
Design for the top-right:
Avoid building your whole identity around stacks that are already saturated.
Move closer to frontiers where experience is in short supply.
Lean hard into customer centricity and product thinking.
Practice context switching on purpose: prototype here, craftsmanship there, and be explicit about why.
There always has been inherent demand for engineers who can do this (even if job postings don’t overtly advertise it). And moving forward, I believe this inherent demand will quickly turn explicit.
So in a world filled with engineers sprinting toward other people’s superpowers, opt out. Create your own. Be a Western Front Innovator.
Just a quick side quest: Doesn’t the assert here risk put and get calls being optimized away? If not, I think I might have misunderstood std.debug.assert’s doc comment and could use some education.
Since assert is a regular function, the argument is evaluated (and indeed is in debug/safe builds), but since the expression is assumed to be true (otherwise unreachable) it seems like the whole expression is allowed to be removed.
Is there a difference whether the expression is fallible or not, or is deemed to have side effects?
Fortunately, core Zig team members frequently chime in with their expertise on Ziggit, and that was the case here as well.
The short answer to my question is: no, the put and get calls will not get optimized away. We’ll see why in a bit.
std.debug.assert and unreachable
If you’ve ever hit an assertion in Zig, then you have also looked at the implementation of std.debug.assert since it appears in the panic trace:
thread 33583038 panic: reached unreachable code
lib/std/debug.zig:550:14: 0x10495bc93 in assert (sideeffects)
if (!ok) unreachable; // assertion failure
That’s all assert does:
if (!ok) unreachable;
If unreachable is hit in safe modes, then… well, you’ve seen the panic trace. Very helpful.
In optimizing modes, unreachable becomes a promise that control flow will not reach this point at all. Also very helpful: faster code!
Here’s the doc comment on assert that helped me and some others get confused, despite the comment being 100% verifiably correct:
In ReleaseFast and ReleaseSmall modes, calls to this function are optimized
away, and in fact the optimizer is able to use the assertion in its
heuristics.
On closer inspection, this is just what the language reference entry on unreachable promises us. No more, no less.
This is very different from C’s assert, which can nuke the whole thing through macros and preprocessor directives, depending on whether NDEBUG is set by the build system. It’s a similar story in many other languages - they have special constructs for assertions.
In Zig, std.debug.assert is a plain old function for which no special treatment is given. The idea that if (!ok) unreachable; somehow wires up the optimizer to always delete “the whole thing” in release builds is wrong.
Does this mean asserts can be expensive even in ReleaseFast mode?
Yes, because while the call to assert is gone, the LLVM optimizer that’s supposed to remove dead code isn’t always able to do so. Simple expressions like data.len > 0 will almost certainly be optimized out, but it’s less clear for anything non-trivial.
I shared an example in the Ziggit thread where dead code removal does not occur. Here’s an improved version by TibboddiT:
const std = @import("std");

fn check(val: []u8) bool {
    var sum: usize = 0;
    for (0..val.len * 500_000_000) |v| {
        sum += val[v % val.len];
    }
    return sum == 6874500000000;
}

pub fn main() void {
    var prng: std.Random.DefaultPrng = .init(12);
    const rand = prng.random();
    var buf: [100]u8 = undefined;
    rand.bytes(&buf);
    std.debug.assert(check(&buf));
}
Compile and run this under ReleaseFast on Zig 0.14.x and you’ll see that the program is busy for a good while.
The core team believes this to be a missed optimization in LLVM.
If profiling shows that an assertion is expensive, or you’re just not confident it will be fully elided, you can do something like this:
if (std.debug.runtime_safety) std.debug.assert(check(&buf));
…or check against build modes when that makes more sense.
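For instance, a small sketch of the build-mode variant, assuming the check function and buffer from the example above:

const builtin = @import("builtin");

// Only evaluate the expensive check outside of the ReleaseFast/ReleaseSmall modes.
if (builtin.mode == .Debug or builtin.mode == .ReleaseSafe) {
    std.debug.assert(check(&buf));
}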
When the optimizer will definitely not remove code
Now back to the original question, which is about the opposite of trying to get rid of dead code. We want to keep code.
There are many reasons why code will never be removed by a correctly implemented optimizer. One of them is the presence of side effects [1]. Another reason is that writes to memory must be observable when that memory is later read.
Basically, the optimizer’s rule is that code removal must not lead to correctness bugs.
The put call in assert(try q.put(io, &.{item}, 1) == 1); has side effects and depends on memory coherence, as there’s a get call elsewhere. We’re all good.
Conclusion:
The assert(expr) call is nothing more than if (!expr) unreachable, where unreachable:
yields a helpful trace in safe builds, and
provides the optimizer with useful information in unsafe builds.
The optimizer will never optimize away expr if doing so would lead to correctness issues.
The optimizer is not always able to optimize away expr even when it’s effectively dead code.
I’ll round this off with some wise words from ifreund on the issue of whether Zig should match C’s assert behavior:
I think trying to match C’s assert is exactly what we should not do. I’ve seen many bugs caused by putting expressions with side effects inside the assert macro. Macros suck.
[1] In the Ziggit thread, Andrew Kelley shared the concrete list of side effects:
loading through a volatile pointer
storing through a volatile pointer
inline assembly with volatile keyword
atomics with volatile pointers
calling an extern function
@panic, @trap, @breakpoint
unreachable in safe optimization modes (equivalent to @panic)
Kaiju – General purpose 3D/2D game engine in Go and Vulkan with built in editor
Kaiju is a 2D/3D game engine written in Go (Golang) backed by Vulkan. The goal of the engine is to use a modern, easy, systems level programming language, with a focus on simplicity, to create a new kind of game engine.
📄 2D / 🧊 3D Game Engine
🪟 Windows
🐧 Linux
🤖 Android (NEW, support now functional)
🍎 Mac (support is currently WIP)
🤖👉⌨️ Local AI (LLM) interop
⚠️
🚧🏗️👷♂️ Work in progress, under heavy development
🚚 Faster builds than other game engines
🔥 Better performance than other game engines (9x faster than Unity out of the box)
The current version of the base engine renders extremely fast, faster than most would think a garbage collected language could go. In my testing a release mode build of a game in Unity with nothing but a black background and a cube runs at about 1,600 FPS. In Kaiju, the same thing runs at around 5,400 FPS on the same machine. In fact, a complete game, with audio, custom cursors, real time PBR rendering with real time shadows, UI, and more runs at 2,712 FPS (in "debug" mode)
screenshots or it didn't happen
.
Why Go (golang)?
I love C, and because I love C and found out that Ken Thompson played a part in designing Go, I gave Go a chance. It has been such a joy to use and work with I decided to port my C game engine to Go. Go is a modern system-level language that allows me to write code the way I want to write code and even have the opportunity to do some crazy things if I want to (no strings attached). Also the simplicity and "just works" of writing Assembly code was a great boost to my happiness.
What's more, it's a language that other developers can easily learn and jump right into extending the engine/editor. No need for developers to re-figure out some bespoke macros or crazy templating nonsense. It's flat, easy, straightforward, and the foot-gun is hidden behind some walls, but there if you want it. Furthermore, developers can write their games in Go directly, no need for some alternative language that is different from the engine code (but we'll include Lua for modding).
What about the Garbage Collector?!
I am creating this section because I get asked about it when I mention "Go", possibly not realizing that most public game engines use a garbage collector (GC).
The GC is actually a feature I'm happy with (shocker coming from a C guy). The reason is simple: if you're going to make a game engine that the public will use and that needs to be stable, you need a garbage collector. Unity has C# (and possibly an internal GC as well), Unreal has a GC (and it could use a tune-up if you ask me), and Godot has a GC, albeit via its scripting language or when you use C#. It is actually very important for public engines to have a GC because people are only human and make a lot of mistakes, mistakes they'll blame on you (the engine developer) before they blame themselves.
Coincidentally, the overall design I have for the engine plays very well with the GC, and last I measured I had net-zero heap allocations while running (may need a new review). If you don't abuse the GC, you generally shouldn't feel it; it runs concurrently as well.
I'll be the first to admit, I think the developers of Go can create a better GC than I can, and probably better than Unreal and Unity too.
⚠️
WORK IN PROGRESS
⚠️
Though the engine is production-ready, the editor is not - feel free to join and contribute to its development.
Despite having a stable release model and cadence since December 2003, Linux
kernel version numbers seem to baffle and confuse those that run across them,
causing numerous groups to mistakenly make versioning statements that are flat
out false. So let’s go into how this all works in detail.
This is a post in the series about the Linux kernel CVE release process:
Linux kernel versions, how the Linux kernel releases are numbered (this post)
“Old” kernel version scheme is no more
I’m going to ignore the “old” versioning scheme of Linux that was in place
before the 2.6.0 release happened on December 17, 2003, as that model is no
longer happening. It only confuses people when attempting to talk about code
that is over 23 years old, and no one should be using those releases anymore,
hopefully. The only thing that matters today about the releases that happened
between 1991 and 2003 is that the developers have learned from their past
mistakes and now are following a sane and simple release model and cadence.
Luckily even
Wikipedia
glosses over some of the mess that happened in those old development cycles, so
the less said about them, the better. Moving on to…
The only things you need to remember about Linux kernel releases are:
All releases are “stable” and backwards compatible for userspace programs
to all previous kernel releases.
Higher major and minor version numbers mean a newer release, and do not
describe anything else.
All releases are stable
Once the 2.6.0 kernel was released, it was decided that the rule of kernel
releases would be that every release would be “stable”. No kernel release
should ever break any existing user’s code or workflow. Any regressions that
happened would always be prioritized over new features, making it so that no
user would ever have a reason to want to remain at an older kernel version.
This is essential when it comes to security bugs, as if all releases are
stable, and will not break, then there is both no need to maintain older kernel
versions as well as no risk for a user to upgrade to a new release with all
bugfixes.
Higher numbers means newer
Along with every release being stable, the kernel developers decided at the time of 2.6.0 to switch to a “time based release schedule”. This means that the kernel is released based on what is submitted for any specific development cycle during the 2 week merge window, as described in the kernel documentation.
So with time-based releases happening on average every 10 weeks, the only way to distinguish between releases is the version number, which is incremented at each release.
Stable kernel branches
Once the kernel developers started on this “every release is stable” process,
they soon realized that during the 10 week development cycle, there was a need
to get bugfixes that went into that kernel into the “last” release as that is
what users were relying on. To accomplish this, the goal of a “stable kernel
release” happened. The stable releases would take the bugfixes that went into
the current development tree, and apply them to the previous stable release and
do a new release that users could then use.
The rules of what is acceptable into a stable kernel are documented, with the most important rule being the first one: “It or an equivalent fix must already exist in Linux mainline (upstream).”
Major.Minor.Stable
Kernel version numbers are split into 3 fields, major, minor, and stable,
separated by a ‘.’ character. The minor number is incremented by Linus every
release that he makes, while the major number is only incremented every few
years when the minor number gets too large for people. The major.minor pair is
considered the “kernel branch” number for a release, and the stable number is
then incremented for every stable release on that branch.
An example makes this a bit simpler to understand. Here is how the 5.2 kernel
releases happened:
First 5.2.0 was released by Linus, and then he continued on with the 5.3
development cycle, first releasing -rc1, and then -rc2, and so on until -rc7
which was followed by a stable release, 5.3.0.
At the time 5.2.0 was released, it was branched in the
Linux stable git tree
by the stable kernel maintainers, and stable releases started happening, 5.2.1,
5.2.2, 5.2.3, and so on. The changes in these stable releases were all first
in Linus’s tree, before they were allowed to be in a stable release, ensuring
that when a user upgrades from 5.2 to 5.3, they will not have any regressions
of bugfixes that might have only gone into the 5.2.stable releases.
.y terminology
Many times, kernel developers will discuss a kernel branch as being “5.4.y”, with “.y” being a way to refer to the stable kernel branch for the 5.4 release. This is also how the branches are named in the Linux stable git tree, which is where the terminology came from.
Stable release branches “stop”
What is important to remember is that stable release branches are ended after a
period of time. Normally they last until a few weeks after the next minor
release happens, but one kernel branch a year is picked to be a “longterm”
kernel release branch that will live for at least 2 years. This kernel is
usually the “last” kernel release of the year, and the support cycle can be seen on the kernel.org releases page.
This ability for kernel releases to continue for a short while, or many years
before going end-of-life, is important to realize when attempting to track
security bugs and fixes over time, as many companies get confused when trying
to compare version numbers against each other. It is NOT safe to do a simple
“if this version number is bigger than the previous one, then all fixes for it
will be in the next release.” You have to treat each kernel “branch” as a
unique tree, and not compare them against each other in order to be able to
properly track changes over time. But more about that later on…
I'm the kind of person who thinks about the design and implementation of hash tables. One design which I find particularly cute, and I think deserves a bit more publicity, is Robin Hood open-addressing with linear probing and power-of-two table size. If you're not familiar with hash table terminology, that might look like a smorgasbord of random words, but it should become clearer as we look at some actual code.
To keep the code simple to start with, I'm going to assume:
Keys are randomly-distributed 32-bit integers.
Values are also 32-bit integers.
If the key 0 is present, its value is not 0.
The table occupies at most 32 GiB of memory.
Each slot in the table is either empty, or holds a key and a value. The combination of properties (1) and (2) allows a key/value pair to be stored as a 64-bit integer, and property (3) means that the 64-bit value 0 can be used to represent an empty slot (some hash table designs also need a special value for representing tombstones, but this design doesn't need tombstones). Combining a key and a value into 64 bits couldn't be easier: the low 32 bits hold the key, and the high 32 bits hold the value.
The structure for the table itself needs a pointer to the array of slots, the length of said array, and the number of non-empty slots. As the length is always a power of two, it's more useful to store length - 1 instead of length, which leads to mask rather than length, and property (4) means that mask can be stored as 32 bits. As the load factor should be less than 100%, we can assume count < length, and hence count can also be 32 bits. This leads to a mundane-looking:
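Sketching that out - the field names match how the later functions use them, though the original definition may differ in minor details:

#include <stdint.h>

typedef struct {
    uint64_t* slots; // array of length (mask + 1); a zero entry means "empty"
    uint32_t mask;   // length - 1, where length is a power of two
    uint32_t count;  // number of non-empty slots
} hash_table_t;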
Property (1) means that we don't need to hash keys, as they're already randomly distributed. Every possible key K has a "natural position" in the slots array, which is just K & mask. If there are collisions, the slot in which a key actually ends up might be different to its natural position. The "linear probing" part of the design means that if K cannot be in its natural position, the next slot to be considered is (K + 1) & mask, and if not that slot then (K + 2) & mask, then (K + 3) & mask, and so on. This leads to the definition of a "chain": if K is some key present in the table, C_K denotes the sequence of slots starting with K's natural position and ending with K's actual position. We have the usual property of open-addressing: none of the slots in C_K are empty slots. The "Robin Hood" part of the design then imposes an additional rather interesting property: for each slot S in C_K, Score(S.Index, S.Key) ≥ Score(S.Index, K), where:
S.Index is the index of S in the slots array (not the index of it in C_K).
S.Key is the key present in slot S (i.e. the low 32 bits of slots[S.Index]).
Score(Index, Key) is (Index - Key) & mask.
These properties give us the termination conditions for the lookup algorithm: for a possible key K, we look at each slot starting from K's natural position, and either we find K, or we find an empty slot, or we find a slot with Score(S.Index, S.Key) < Score(S.Index, K). In either of the latter two cases, K cannot have been present in the table. In the function below, Score(S.Index, K) is tracked as d. In a language with a modern type system, the result of a lookup would be Optional<Value>, but if sticking to plain C, property (3) can be used to make something similar: the 64-bit result is zero if the key is absent, and otherwise the value is in the low 32 bits of the result (which may themselves be zero, but the full 64-bit result will be non-zero). The logic is thus:
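A sketch of the lookup consistent with that description and with the table_set function shown below (the original may differ in small details):

uint64_t table_lookup(hash_table_t* table, uint32_t key) {
    uint32_t mask = table->mask;
    uint64_t* slots = table->slots;
    for (uint32_t d = 0;; ++d) {
        uint32_t idx = (key + d) & mask;
        uint64_t slot = slots[idx];
        if (slot == 0) {
            return 0; // Empty slot ends the chain; key absent
        } else if (key == (uint32_t)slot) {
            return (slot >> 32) | (slot << 32); // Found; value now in low 32 bits
        } else if (((idx - (uint32_t)slot) & mask) < d) {
            return 0; // This slot's score is lower than ours; key absent
        }
    }
}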
If using a rich 64-bit CPU architecture, many of the expressions in the above function are cheaper than they might initially seem:
slots[idx] involves zero-extending idx from 32 bits to 64, multiplying it by sizeof(uint64_t), adding it to slots, and then loading from that address. All this is a single instruction on x86-64 or arm64.
key == (uint32_t)slot involves a comparison using the low 32 bits of a 64-bit register, which is a completely standard operation on x86-64 or arm64.
(slot >> 32) | (slot << 32) is a rotation by 32 bits, which again is a single instruction on x86-64 or arm64.
On the other hand, if using riscv64, things are less good:
If the Zba extension is present, sh3add.uw is a single instruction for zero-extending idx from 32 bits to 64, multiplying it by sizeof(uint64_t), and adding it to slots. If not, each step is a separate instruction, though the zero-extension can be eliminated with a slight reformulation to encourage the compiler to fold the zero-extension onto the load of table->mask (as riscv64 usually defaults to making sign-extension free, in contrast to x86-64/arm64 which usually make zero-extension free). Regardless, the load is always its own instruction.
key == (uint32_t)slot hits a gap in the riscv64 ISA: it doesn't have any 32-bit comparison instructions, so this either becomes a 32-bit subtraction followed by a 64-bit comparison against zero, or promotion of both operands from 32 bits to 64 bits followed by a 64-bit comparison.
If the Zbb extension is present, rotations are a single instruction. If not, they're three instructions, and so it becomes almost worth reworking the slot layout to put the key in the high 32 bits and the value in the low 32 bits.
Moving on from lookup to insertion, there are various different options for what to do when the key being inserted is already present. I'm choosing to show a variant which returns the old value (in the same form as table_lookup returns) and then overwrites with the new value, though other variants are obviously possible. The logic follows the same overall structure as seen in table_lookup:
uint64_t table_set(hash_table_t* table, uint32_t key, uint32_t val) {
    uint32_t mask = table->mask;
    uint64_t* slots = table->slots;
    uint64_t kv = key + ((uint64_t)val << 32);
    for (uint32_t d = 0;; ++d) {
        uint32_t idx = ((uint32_t)kv + d) & mask;
        uint64_t slot = slots[idx];
        if (slot == 0) {
            // Inserting new value (and slot was previously empty)
            slots[idx] = kv;
            break;
        } else if ((uint32_t)kv == (uint32_t)slot) {
            // Overwriting existing value
            slots[idx] = kv;
            return (slot >> 32) | (slot << 32);
        } else {
            uint32_t d2 = (idx - (uint32_t)slot) & mask;
            if (d2 < d) {
                // Inserting new value, and moving existing slot
                slots[idx] = kv;
                table_reinsert(slots, mask, slot, d2);
                break;
            }
        }
    }
    if (++table->count * 4ull >= mask * 3ull) {
        // Expand table once we hit 75% load factor
        table_rehash(table);
    }
    return 0;
}
To avoid the load factor becoming too high, the above function will sometimes grow the table by calling this helper function:
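A sketch of such a growth helper, assuming the table doubles in length and that new slots are zero-initialised via calloc (the allocation strategy is an assumption here):

#include <stdlib.h>

void table_reinsert(uint64_t* slots, uint32_t mask, uint64_t kv, uint32_t d); // shown below

void table_rehash(hash_table_t* table) {
    uint64_t* old_slots = table->slots;
    uint32_t old_mask = table->mask;
    uint32_t new_mask = old_mask * 2 + 1; // doubles the power-of-two length
    table->slots = (uint64_t*)calloc((uint64_t)new_mask + 1, sizeof(uint64_t));
    table->mask = new_mask;
    // Re-insert every non-empty slot; count is unchanged, so table_reinsert suffices.
    for (uint64_t i = 0; i <= old_mask; ++i) {
        uint64_t slot = old_slots[i];
        if (slot != 0) {
            table_reinsert(table->slots, new_mask, slot, 0);
        }
    }
    free(old_slots);
}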
Both of table_set and table_rehash make use of a helper function which is very similar to table_set, but doesn't need to check for overwriting an existing key and also doesn't need to update count:
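A sketch of that helper, following the same probing logic as table_set (details may differ from the original):

void table_reinsert(uint64_t* slots, uint32_t mask, uint64_t kv, uint32_t d) {
    for (;; ++d) {
        uint32_t idx = ((uint32_t)kv + d) & mask;
        uint64_t slot = slots[idx];
        if (slot == 0) {
            slots[idx] = kv; // Empty slot found; done
            return;
        }
        uint32_t d2 = (idx - (uint32_t)slot) & mask;
        if (d2 < d) {
            // The occupant is "richer" (closer to home) than us: take its slot
            // and keep re-inserting the displaced occupant further along.
            slots[idx] = kv;
            kv = slot;
            d = d2;
        }
    }
}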
That covers lookup and insertion, so next up is key removal. As already hinted at, this hash table design doesn't need tombstones. Instead, removing a key involves finding the slot containing that key and then shifting slots left until finding an empty slot or a slot with Score(S.Index, S.Key) == 0. This removal strategy works due to a neat pair of emergent properties:
If slot S has Score(S.Index, S.Key) != 0, it is viable for S.Key to instead be at (S.Index - 1) & mask (possibly subject to additional re-arranging to fill the gap formed by moving S.Key).
If slot S has Score(S.Index, S.Key) == 0, and S is part of some chain C_K, then S is at the very start of C_K. Hence it is viable to turn (S.Index - 1) & mask into an empty slot without breaking any chains.
This leads to the tombstone-free removal function, which follows the established pattern of returning either the old value or zero:
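A sketch of that removal, combining the lookup loop with the leftward shifting described above (a reconstruction; the original may differ):

uint64_t table_remove(hash_table_t* table, uint32_t key) {
    uint32_t mask = table->mask;
    uint64_t* slots = table->slots;
    for (uint32_t d = 0;; ++d) {
        uint32_t idx = (key + d) & mask;
        uint64_t slot = slots[idx];
        if (slot == 0) {
            return 0; // Key absent
        } else if (key == (uint32_t)slot) {
            // Found it; shift subsequent slots left until a chain boundary.
            for (;;) {
                uint32_t next_idx = (idx + 1) & mask;
                uint64_t next = slots[next_idx];
                if (next == 0 || ((next_idx - (uint32_t)next) & mask) == 0) {
                    slots[idx] = 0;
                    break;
                }
                slots[idx] = next;
                idx = next_idx;
            }
            table->count -= 1;
            return (slot >> 32) | (slot << 32); // Old value in low 32 bits
        } else if (((idx - (uint32_t)slot) & mask) < d) {
            return 0; // Key absent
        }
    }
}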
That wraps up the core concepts of this hash table, so now it is time to revisit some of the initial simplifications.
If keys are 32-bit integers but are not randomly-distributed, then we just need an invertible hash function from 32 bits to 32 bits, the purpose of which is to take keys following ~any real-world pattern and emit a ~random pattern. The table_lookup, table_set, and table_remove functions gain key = hash(key) at the very start but are otherwise unmodified (noting that if the hash function is invertible, hash equality implies key equality, hence no need to explicitly check key equality), and table_iterate is modified to apply the inverse function before calling visit. If hardware CRC32 / CRC32C instructions are present (as is the case on sufficiently modern x86-64 and arm64 chips), these can be used for the task, although their inverses are annoying to compute, so perhaps not ideal if iteration is an important operation. If CRC32 isn't viable, one option out of many is:
uint32_t u32_hash(uint32_t h) {
    h ^= h >> 16;
    h *= 0x21f0aaad;
    h ^= h >> 15;
    h *= 0x735a2d97;
    h ^= h >> 15;
    return h;
}

uint32_t u32_unhash(uint32_t h) {
    h ^= h >> 15; h ^= h >> 30;
    h *= 0x97132227;
    h ^= h >> 15; h ^= h >> 30;
    h *= 0x333c4925;
    h ^= h >> 16;
    return h;
}
If keys and values are larger than 32 bits, then the design can be augmented with a separate array of key/value pairs, with the design as shown containing a 32-bit hash of the key and the array index of the key/value pair. To meet property (3) in this case, either the hash function can be chosen to never be zero, or "array index plus one" can be stored rather than "array index". It is not possible to make the hash function invertible in this case, so table_lookup, table_set, and table_remove do need extending to check for key equality after confirming hash equality. Iteration involves walking the separate array of key/value pairs rather than the hash structure, which has the added benefit of iteration order being related to insertion order rather than hash order. As another twist on this, if keys and values are variably-sized, then the design can instead be augmented with a separate array of bytes, with key/value pairs serialised somewhere in that array, and the hash structure containing a 32-bit hash of the key and the byte offset (within the array) of the key/value pair.
Of course, a design can only stretch so far. If you're after a concurrent lock-free hash table, look elsewhere. If you can rely on 128-bit SIMD instructions being present, you might instead want to group together every 16 key/value pairs, keep an 8-bit hash of each key, and rely on SIMD to perform 16 hash comparisons in parallel. If you're building hardware rather than software, it can be appealing to have multiple hash functions, each one addressing its own SRAM bank. There is no one-size-fits-all hash table, but I've found the one shown here to be good for a lot of what I do.
Have you been listening to the Hell Gate Podcast? You can catch last week's episode here.
On the subterranean practice basketball court at Barclays Center on Monday night, a group of real estate developers stood in front of more than 100 Brooklynites and asked them to participate in the "reimagining phase" of an unfinished project that began more than two decades ago, and is now marred with a history of broken promises.
The meeting was the second in a series of public workshops held by the state's quasi-private economic arm, Empire State Development, with developers Cirrus Real Estate Partners and LCOR after the pair took over the Atlantic Yards project from Greenland USA in October.
For locals, there's a certain deja vu: ESD first came to them in 2003 with a plan to build a new neighborhood around a centerpiece arena. The plan would bypass the City's typical land use review process—with the state seizing some properties through eminent domain. But it would be totally worth it, because the developer would build 2,250 units of much-needed affordable housing atop the railyard. Also: Brooklyn would get a basketball team. If Jay-Z was behind it, could it really be that bad?
Despite fierce opposition to the plans at the time—including warnings that the affordable housing was an empty promise—the land got seized, the stadium got built, and 3,212 apartments went up, with fewer than half of them "affordable." Developer Greenland defaulted on nearly $350 million of loans in 2023, forcing a foreclosure auction before it could build the rest of the promised housing, shorting New Yorkers by about 900 affordable units. And despite a legally binding agreement that required Greenland to build all those units by May of 2025, New York officials opted not to hold the developer accountable for the millions in monthly fines it was meant to pay for not doing the damn thing, citing their fear of a lawsuit.
Today, we're releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.
We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.
Highlights.
Devstral 2: SOTA open model for code agents, achieving 72.2% on SWE-bench Verified with a fraction of the parameters of its competitors.
Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.
Mistral Vibe CLI: Native, open-source agent in your terminal solving software engineering tasks autonomously.
Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.
Compatible with on-prem deployment and custom fine-tuning.
Devstral: the next generation of SOTA coding.
Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.
Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.
Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.
Built for production-grade workflows.
Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.
The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.
We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.
“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.
“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.
Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.
Mistral Vibe CLI.
Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.
Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:
Project-aware context: Automatically scans your file structure and Git status to provide relevant context
Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes
Multi-file orchestration: Understands your entire codebase—not just the file you're editing—enabling architecture-level reasoning that can halve your PR cycle time
Persistent history, autocompletion, and customizable themes.
You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.
Get started.
Devstral 2 is currently offered free via our API. After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.
We’ve partnered with leading, open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.
Mistral Vibe CLI is available as an extension in Zed, so you can use it directly inside your IDE.
Recommended deployment for Devstral.
Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.
Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.
For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI.
Contact us.
We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!
If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team.
Security updates for Tuesday
Linux Weekly News
lwn.net
2025-12-09 14:15:10
Security updates have been issued by AlmaLinux (kernel, kernel-rt, and webkit2gtk3), Fedora (abrt and mingw-libpng), Mageia (apache and libpng), Oracle (abrt, go-toolset:rhel8, kernel, sssd, and webkit2gtk3), Red Hat (kernel and kernel-rt), SUSE (gimp, gnutls, kubevirt, virt-api-container, virt-cont...
Border Patrol Agent Recorded Raid with Meta’s Ray-Ban Smart Glasses
404 Media
www.404media.co
2025-12-09 14:02:09
New videos and photos shared with 404 Media show a Border Patrol agent wearing Meta Ray-Bans glasses with the recording light clearly on. This is despite a DHS ban on officers recording with personal devices....
On a recent immigration raid, a Border Patrol agent wore a pair of Meta’s Ray-Ban smart glasses, with the privacy light clearly on signaling he was recording the encounter, which agents are not permitted to do, according to photos and videos of the incident shared with 404 Media.
Previously, when 404 Media covered Customs and Border Protection (CBP) officials’ use of Meta’s Ray-Bans, it wasn’t clear if the officials were using them to record raids because the recording lights were not on in any of the photos seen by 404 Media. In the new material from Charlotte, North Carolina, during the recent wave of immigration enforcement, the recording light is visibly illuminated.
That is significant because CBP says it does not allow employees to use personal recording devices. CBP told 404 Media it does not have an arrangement with Meta, indicating this official was wearing personally-sourced glasses.
An activist in Charlotte provided the photos and videos to 404 Media. 404 Media granted them anonymity to protect them from retaliation.
They said the encounter happened at a busy intersection surrounded by a forest where a flower seller usually sets up shop. “By the time we showed up, the flower vendor had apparently seen Border Patrol agents approaching, and he ran into the woods,” the activist said. “They then deployed agents that were wearing these bucket hats into the woods.”
Image: 404 Media.
One of those agents was wearing the Meta Ray-Ban glasses, the material shows.
When we initially wrote about CBP agents wearing Meta Ray-Bans in Los Angeles, privacy experts told 404 Media that Department of Homeland Security (DHS) policies ban agents from wearing personal recording devices and also explicitly ban agents from taking their own recordings.
CBP’s policy on recording devices
states that “no personally owned devices may be used in lieu of IDVRS [Incident Driven Video Recording Systems] to record law enforcement encounters.” It adds that “recorded data shall not be downloaded or recorded for personal use or posted onto a personally owned device.”
The broader DHS policy
says that “the use of personally owned [Body Worn Cameras] or other video, audio, or digital recording devices to record official law enforcement activities is prohibited.”
In a statement to 404 Media, a CBP spokesperson reaffirmed that the agency does not have any contract with Meta, and said that agents cannot use personal recording devices, but can bring “personally purchased sunglasses.” The statement did not say anything about what happens if the sunglasses happen to have a camera and microphone inside of them.
“CBP does not have an arrangement with Meta. The use of personal recording devices is not authorized; however, Border Patrol agents may wear personally purchased sunglasses,” the CBP spokesperson told 404 Media. “CBP utilize Go Pros mounted to helmets or body armor at times, as well as traditional DSLR handheld cameras.”
Meta did not respond to a request for comment.
In November, DHS launched an operation it called “Charlotte’s Web,” focused on the North Carolina city. In its announcement, DHS pointed to several criminals it said it detained. Data recently obtained by the Cato Institute showed that 73 percent of people detained by ICE since October had no criminal convictions, and five percent had a violent criminal conviction.
I'm excited to announce the initial alpha release of fate, a modern data client for React & tRPC. fate combines view composition, normalized caching, data masking, Async React features, and tRPC's type safety. fate is designed to make data fetching and state management in React applications more composable, declarative, and predictable. The framework has a minimal API, no DSL, and no magic—it's just JavaScript.
GraphQL and Relay introduced several novel ideas: fragments co‑located with components, a normalized cache keyed by global identifiers, and a compiler that hoists fragments into a single network request. These innovations made it possible to build large applications where data requirements are modular and self‑contained.
Nakazawa Tech builds apps and games primarily with GraphQL and Relay. We advocate for these technologies in talks and provide templates (server, client) to help developers get started quickly.
However, GraphQL comes with its own type system and query language. If you are already using tRPC or another type‑safe RPC framework, it's a significant investment to adopt and implement GraphQL on the backend. This investment often prevents teams from adopting Relay on the frontend.
Many React data frameworks lack Relay's ergonomics, especially fragment composition, co-located data requirements, predictable caching, and deep integration with modern React features. Optimistic updates usually require manually managing keys and imperative data updates, which is error-prone and tedious.
fate takes the great ideas from Relay and puts them on top of tRPC. You get the best of both worlds: type safety between the client and server, and GraphQL-like ergonomics for data fetching. Using fate usually looks like this:
I was part of the original Relay and React teams at Facebook in 2013, but I didn't build Relay. While I worked on deploying the first server-side rendering engine for React and migrating Relay from React mixins to higher-order components through codemods, I honestly didn't fully grasp how far ahead everyone else on the Relay team was back then.
In the following years, Relay became the default data framework at Facebook. It was such an elegant way to handle client-side data that I had assumed it would gain widespread adoption. That didn't happen, and its backend companion GraphQL has become divisive in the web ecosystem.
This boilerplate is repetitive and ok, but not great. The real problems start when data changes. Mutations tend to have complex logic with detailed patches to the local cache or for handling rollbacks. For example:
When your data client is an abstraction over fetch, keeping client state consistent gets hard quickly. Correctly handling mutations often requires knowing every place in your application that might fetch the same data. That often leads to defensive refetching and waterfalls down the component tree. Component trees frequently look like this:
To be clear: These libraries are great at fetching data. I know better patterns are available in most of these libraries, and advanced developers can avoid many of the downsides. Sync engines address these problems, but they're challenging to adopt and also come with trade-offs.
Still, it's too easy to get something wrong. Codebases become brittle and hard to maintain. Looking ahead to a world where AI increasingly writes more of our code and gravitates towards simple, idiomatic APIs, the problem is that request-centric fetch APIs exist at all.
I did not want to compromise on the key insights from Relay: a normalized cache, declarative data dependencies, and view co-location. At around the same time, I watched Ricky Hanlon's two-part React Conf talk about Async React and got excited to start building.
When fetch-based APIs cache data based on requests, people think about when to fetch data, and requests happen at every level of the component tree. This leads to boilerplate, complexity, and inconsistency. Instead, fate caches data by objects, shifts thinking to what data is required, and composes data requirements up to a single request at the root.
A typical component tree in a React application using fate might look like this:
Let me show you a basic fate code example that declares its data requirements as a "view", co-located with a component. fate requires you to explicitly "select" each field that you plan to use in your components as a "view" into your data:
tsx
import type { Post } from '@org/server/views.ts';
import { UserView } from './UserCard.tsx';
import { useView, view, ViewRef } from 'react-fate';

export const PostView = view<Post>()({
  author: UserView,
  content: true,
  id: true,
  title: true,
});

export const PostCard = ({ post: postRef }: { post: ViewRef<'Post'> }) => {
  const post = useView(PostView, postRef);
  return (
    <Card>
      <h2>{post.title}</h2>
      <p>{post.content}</p>
      <UserCard user={post.author} />
    </Card>
  );
};
A ViewRef is a reference to a concrete object of a specific type, for example a Post with id 7. It contains the unique ID of the object, the type name and some fate-specific metadata. fate creates and manages these references for you, and you can pass them around your components as needed to resolve them against their views.
fate does not provide hooks for mutations like traditional data fetching libraries do. Instead, all tRPC mutations are exposed as actions for use with useActionState and React Actions. They support optimistic updates out of the box.
A LikeButton component using fate Actions and an async component library might look like this:
When this action is called, fate automatically updates all views that depend on the likes field of the particular Post object. It doesn't re-render components that didn't select that field. There's no need to manually patch or invalidate cache entries. If the action fails, fate rolls back the optimistic update automatically and re-renders all affected components.
All of the above works because fate has a normalized data cache under the hood, with objects stored by their ID and type name (__typename, e.g. Post or User), and a tRPC backend conforming to fate's requirements, exposing byId and list queries for each data type.
You can adopt fate incrementally in an existing tRPC codebase without changing your existing schema by adding these queries alongside your existing procedures.
With these three code examples we covered almost the entire client API surface of fate. As a result, the mental model of using fate is dramatically simpler compared to the status quo. fate's API is a joy to use and requires less code, boilerplate, and manual state management. It's this clarity, together with the reduced API surface, that helps humans and AI write better code.
fate-template comes with a simple tRPC backend and a React frontend using fate. It features modern tools to deliver an incredibly fast development experience. Follow its README.md to get started.
fate is not complete yet. The library lacks core features such as garbage collection and a compiler to extract view definitions statically ahead of time, and there is too much backend boilerplate. The current implementation of fate is not tied to tRPC or Prisma; those are just the ones we are starting with. We welcome contributions and ideas to improve fate. Here are some features we'd like to add:
Support for Drizzle
Support backends other than tRPC
Persistent storage for offline support
Implement garbage collection for the cache
Better code generation and less type repetition
Support for live views and real-time updates via useLiveView and SSE
NOTE: 80% of fate's code was written by OpenAI's Codex – four versions per task, carefully curated by a human. The remaining 20% was written by @cnakazawa. You get to decide which parts are the good ones! The docs were 100% written by a human.
The TextKit 2 (NSTextLayoutManager) API was announced publicly during WWDC21, which is over 4 years ago. Before that, it was in private development for a few years and gained widespread adoption in the macOS and iOS frameworks. It promised an easier, faster, overall better API and text layout engine that replaces the aged TextKit 1 (NSLayoutManager) engine.
Over the years, I gained some level of expertise in TextKit 2 and macOS/iOS text processing, which resulted in STTextView - a re-implementation of TextView for macOS (AppKit) and iOS (UIKit) using the TextKit 2 framework as a text layout engine - as well as public speaking praising the new, better engine we'd just got to solve all the problems.
Based on my 4 years of experience working with it, I feel like I fell into a trap. It's not a silver bullet. It is arguably an improvement over TextKit 1. I want to discuss certain issues that make TextKit 2 annoying to use (at best) and not the right tool for the job (at worst).
The architecture & implementation
The TextKit 2 architecture is good. The abstraction and the components make a lot of sense and deliver on the premise of progressive complexity. BUT the implementation is not on par with the architecture. On one side, NSTextContentManager provides an abstract interface for the layout engine. In practice, using anything other than NSTextContentStorage is impossible: NSTextContentStorage is the one (and only) provided implementation of the storage that works. That itself is backed by NSTextStorage, which is an abstract interface for the content storage itself - meaning all the problems I may have with NSTextStorage apply to TextKit 2 as well. In short, UITextView/NSTextView won't work with anything other than NSTextContentStorage.
The text content manager operates on a series of NSTextElement blocks, but again, the only working implementation must inherit from NSTextParagraph, or you're in trouble (runtime assertions).
The implementation is inconsistent, and it seems intentional. TextKit2 is implemented to be used by UITextView, and that is quickly obvious. What a waste of a great idea that could have been otherwise.
Bugs in software are expected, and TextKit 2 is no exception. I reported many bugs myself. Some issues are fixed, while others remain unresolved. Many reports received no response. Additionally, bugs occur in specific versions, and regressions are common. It is annoying to maintain compatibility, of course. From my perspective, probably the most annoying bugs are around the "extra line fragment" (the rectangle for the extra line fragment at the end of a document) and its broken layout.
Viewport is a struggle
The real struggle, though, is around the newly introduced idea of the viewport and how it works. The viewport is a tool that optimizes the text layout engine's work and minimizes memory footprint by focusing on the visible area, rather than the entire document, all the time. The viewport is a small portion of the visible area that "moves" as the user interacts with different parts of the document (e.g., scrolling moves the viewport frame).
The viewport's promise is that I don't have to ensure layout of the whole document to get the layout of a random fragment of it, and can instead lazily lay out only the fragments that actually need to be displayed. To make this work, the engine needs various caches, interval management, range invalidation, and other related bookkeeping; the TextKit 2 framework handles all of that.
Here's the stage: imagine you have a window with text in it. The text scrolls up and down; as you scroll, the visible area displays the laid-out text. So, a typical text editor/viewer scenario.
TextEdit with plain text content. One of the first uses of TextKit 2 on macOS.
One of the problems with viewport management is the very same thing that is the feature of the viewport. When ensuring layout only in the viewport (visible area), all other parts of the document are estimated. Specifically, the total height of the document is estimated. The estimation changes frequently as I lay out more/different parts of the document. That happens when I move the viewport while scrolling up/down. TextKit updates the value of
NSTextLayoutManager.usageBoundsForTextContainer whenever the estimates change. The recipe to estimate the total height of the document is ensureLayout(for: documentRange.endLocation), which says: ensure layout of the end of the document, without forcing layout of the whole document. That operation, by definition, results in an estimated size.
Resize the view to match the usageBoundsForTextContainer value. In a scrollview, this results in an update of the scroller to reflect the current document position.
The first problem I notice with this approach is that as I scroll the document and continue to lay out the changing viewport position, the value of usageBoundsForTextContainer is unstable. It frequently changes value significantly. In a scrollview, such frequent and significant changes to the height result in "jiggery" of the scroller position and size.
Scrolling down: as the document content moves up, the viewport moves downward.
The jiggery is super annoying and hard to accept. This is also expected, given that the height is estimated. Works as-designed:
This is correct and as-designed – The viewport-based layout in TextKit2 doesn't require that the document is fully laid out; it just needs that the part of text to be displayed on screen is laid out, and that is the way it achieves a better scrolling performance.
A slightly "better" as a more stable value (from my observation), I receive when asking for the location of the last "layout element", using
enumerateTextLayoutFragments
and asking for the layout frame of the last, and only last fragment.
That estimation is also just an estimate, and usually the value is significantly higher than the final, fully laid out document. How do I jump to the end of the document? The answer is:
receive an estimated (too big or too small) content height
update the view content size with the estimated height
enforce layout at the end of the document
move (relocate) the viewport to the end of that height (either final or estimated)
And yes, the viewport will display the end of the document, but the total height of the content is still estimated, meaning the scroller is most likely at the wrong position (it is wrong). What's the "fix" for that? The best guess is to artificially and continuously "adjust" the viewport position: the view scrolls to the estimated bottom of the document, and even though that position may be way out of bounds of the real document size, we recognize that from the context, ignore it, and "fake" the viewport to display the end of the document at that position. That operation (and, more likely, the further adjustments like it that I need) is fragile and, frankly, not easy to handle in a way that is not noticeable.
For a long time, I thought that I was "holding it wrong" and there must be a way (maybe a private API) that addresses these problems; then I realized I'm not wrong. The TextEdit app on macOS suffers from the very same issues I do in my implementations:
[Videos: TextEdit and TextKit 2 glitches, if you know where to push the button.]
So, so
Today, I believe that's not me. The TextKit 2 API and its implementation are lacking and unexpectedly difficult to use correctly. While the design is solid, it proved challenging to apply in real-world applications. I wish I had a better or more optimistic summary of my findings, but it is what it is. I've started to think that TextKit 2 might not be the best tool for text layout, especially when it comes to text editing UI. I remain open to suggestions, and hopefully, I will find a way to use TextKit 2 without compromising user experience.
Rahm Emanuel says U.S. should follow Australia's youth social media ban
Save Mumia's Eyesight: Supporters March to Prison to Demand Medical Care for Him & Aging Prisoners
Democracy Now!
www.democracynow.org
2025-12-09 13:47:49
Supporters of Mumia Abu-Jamal are on a 103-mile, 12-day march ending Tuesday in Frackville, Pennsylvania, where he is imprisoned at the Mahanoy state prison. The march ends on the same day Abu-Jamal was arrested in 1981 for the murder of Philadelphia police officer Daniel Faulkner, for which he has ...
Supporters of Mumia Abu-Jamal are on a 103-mile, 12-day march ending Tuesday in Frackville, Pennsylvania, where he is imprisoned at the Mahanoy state prison. The march ends on the same day Abu-Jamal was arrested in 1981 for the murder of Philadelphia police officer Daniel Faulkner, for which he has always maintained his innocence. One of the best-known political prisoners in the world, Abu-Jamal was an award-winning journalist and co-founder of the Philadelphia chapter of the Black Panther Party before his incarceration, and has continued to write and speak from prison. Human rights groups say he was denied a fair trial, with evidence unearthed in 2019 showing judicial bias and police and prosecutorial misconduct. Abu-Jamal is now 71 years old, and advocates say he is being denied proper medical care in prison, permanently risking his eyesight.
“We’re marching today to demand freedom for Mumia and all political prisoners,” says activist Larry Hamm.
“We ration healthcare in this country, and in particular for prisoners,” says Noelle Hanrahan, part of Abu-Jamal’s legal team, who is demanding “that Mumia get specialist care … and that he is given the treatment that he deserves.”
Noelle Hanrahan is the founder and producer of Prison Radio, which has been recording and distributing Mumia Abu-Jamal’s commentaries from prison since 1992. She is also an attorney on Abu-Jamal’s legal team.
Oliver Sacks Put Himself into His Case Studies. What Was the Cost?
When Oliver Sacks arrived in New York City, in September, 1965, he wore a butter-colored suit that reminded him of the sun. He had just spent a romantic week in Europe travelling with a man named Jenö Vincze, and he found himself walking too fast, fizzing with happiness. “My blood is champagne,” he wrote. He kept a letter Vincze had written him in his pocket all day, feeling as if its pages were glowing. Sacks had moved to New York to work as a fellow in neuropathology at the Albert Einstein College of Medicine, in the Bronx, and a colleague observed that he was “walking on air.” Every morning, he carefully polished his shoes and shaved. He adored his bosses. “I smile like a lighthouse in all directions,” he wrote Vincze.
Sacks was thirty-two, and he told Vincze that this was his first romantic relationship that was both physical and reciprocal. He felt he was part of a “two man universe,” seeing the world for the first time—“seeing it clear, and seeing it whole.” He wandered along the shipping piers on the Hudson River, where gay men cruised, with a notebook that he treated as a diary and as an endless letter to Vincze. “To watch life with the eyes of a homosexual is the greatest thing in the world,” Vincze had once told Sacks.
Sacks’s mother, a surgeon in London, had suspected that her son was gay when he was a teen-ager. She declared that homosexuality was an “abomination,” using the phrase “filth of the bowel” and telling him that she wished he’d never been born. They didn’t speak of the subject again. Sacks had moved to America—first to California and then, after five years, to New York—because, he wrote in his journal, “I wanted a
sexual and moral freedom
I felt I could never have in England.” That fall, during Yom Kippur, he decided that, rather than going to synagogue to confess “to the total range of human sin,” a ritual he’d grown up with, he’d spend the night at a bar, enjoying a couple of beers. “What I suppose I am saying, Jenö, is that I now feel differently about myself, and therefore about homosexuality as a whole,” he wrote. “I am through with cringing, and apologies, and pious wishes that I might have been ‘normal.’ ” (The Oliver Sacks Foundation shared with me his correspondence and other records, as well as four decades’ worth of journals—many of which had not been read since he wrote them.)
In early October, Sacks sent two letters to Vincze, but a week passed without a reply. Sacks asked his colleagues to search their mailboxes, in case the letter had been put in the wrong slot. Within a few days, however, he had given up on innocent explanations. He began dressing sloppily. He stopped coming to work on time. He had sex with a series of men who disgusted him.
After two weeks, Vincze, who was living in Berlin, sent a letter apologizing for his delayed reply and reiterating his love. He explained that he was so preoccupied by thoughts of Sacks that he felt as if he were living in a “Klaudur,” a German word that Vincze defined as a “spiritual cell.” He seems to have misspelled
Klausur
, which refers to an enclosed area in a monastery, but Sacks kept using the misspelled word, becoming obsessed with it. “It ramifies in horrible associations,” he wrote Vincze. “The closing of a door. Klaudur, claustrophobia, the sense of being shut in.” Sacks had long felt as if he were living in a cell, incapable of human contact, and this word appeared to be all he needed to confirm that the condition was terminal. The meaning of the word began morphing from “spiritual cell” to “psychotic cage.”
The intimacy Sacks had rejoiced in now seemed phony, a “folie à deux”—a two-person delusion. His doubts intensified for a month, then he cut off the relationship. “I must tear you out of my system, because I dare not be involved,” he told Vincze, explaining that he barely remembered how he looked, or the sound of his voice. “I hope I will not be taken in like this again, and that—conversely—I will have the strength and clarity of mind to perceive any future such relationships as morbid at their inception, and to abort the folly of their further growth.”
Two months later, Sacks felt himself “slipping down the greased path of withdrawal, discontent, inability to make friends, inability to have sex, etc. etc. towards suicide in a New York apartment at the age of 32.” He took enormous amounts of amphetamines, to the point of hallucinating. A family friend, a psychiatrist who worked with Anna Freud, urged him to find a psychoanalyst. She wrote him that his homosexuality was “a very ‘secondary phenomenon’ ”: he was attracted to men as “a substitute for veering uncertainties of what/whom you could love other than as ‘idealizations’ of yourself.” A few weeks later, he started therapy with Leonard Shengold, a young psychiatrist who was deeply immersed in Manhattan’s psychoanalytic culture. “I think he is very good, and he has at least a very considerable local reputation,” Sacks wrote his parents, who helped to pay for the sessions, three times a week.
Sacks had elevated yet hazy ambitions at the time: he wanted to be a novelist, but he also wanted to become the “Galileo of the inward,” he told a mentor, and to write the neurological equivalent of Sigmund Freud’s “Interpretation of Dreams.” He worked in wards with chronically ill and elderly patients who had been warehoused and neglected, and his prospects within academic medicine looked dim. “Have you published anything lately?” his father wrote him, in 1968. “Or have you found yourself temperamentally incapacitated from doing so?”
When Sacks began therapy, “my initial and ultimate complaint was of fixity—a feeling of not-going,” he wrote in his journal. He regarded Shengold as “a sort of analytic machine.” But gradually Sacks came to feel that “I love him, and need him; that I need him—and love him.” He had planned to stay in New York City only for a few years, but he kept delaying his return to England so that he could reach “a terminable point in my analysis.” Shengold, who would eventually publish ten books about psychoanalysis, wrote that therapy requires a “long period of working through”—a term he defined as the “need to repeat emotional conflicts over and over in life” until the patient has the “freedom to own what is there to be felt.”
Sacks saw Shengold for half a century. In that time, Sacks became one of the world’s most prominent neurologists and a kind of founding father of medical humanities—a discipline that coalesced in the seventies, linking healing with storytelling. But the freedom that Shengold’s analysis promised was elusive. After Vincze, Sacks did not have another relationship for forty-four years. He seemed to be doing the “working through” at a remove—again and again, his psychic conflicts were displaced onto the lives of his patients. He gave them “some of my own powers, and some of my phantasies too,” he wrote in his journal. “I write out symbolic versions of myself.”
During Sacks’s neurology internship, in San Francisco, his childhood friend Eric Korn warned him that the residents at his hospital could sense he was gay. “For God’s sake, exercise what seems to you immoderate caution,” Korn wrote, in 1961. “Compartmentalize your life. Cover your tracks. Don’t bring in the wrong sort of guests to the hospital, or sign your name and address to the wrong sort of register.” He encouraged Sacks to read “Homosexuality: Disease or Way of Life?,” a best-selling book by Edmund Bergler, who argued that homosexuality was an “illness as painful, as unpleasant and as disabling as any other serious affliction,” but one that psychoanalysis could cure. “The book is full of interest,” Korn wrote. “He claims a potential 100% ‘cures’ (a term he chooses to employ because he knows it teases) which is worth investigating perhaps.”
Freud characterized homosexuality as a relatively normal variant of human behavior, but when psychoanalysis came to the United States, in the postwar years, homophobia took on new life. The historian Dagmar Herzog has described how, in the U.S., “reinventing psychoanalysis and reinventing homophobia went hand in hand.” Faced with men who persisted in their love for other men, American analysts commonly proposed celibacy as a stopgap solution. In the historian Martin Duberman’s memoir “Cures,” he writes that his psychoanalyst instructed him to “take the veil”—live celibately—so that he could be cured of his desire for men. Duberman agreed to these terms. The best he could get, he thought, was sublimation: instead of enjoying an “affective life,” he would make “some contribution to the general culture from which I was effectively barred.” Sacks, who was closeted until he was eighty, also followed this course.
Shengold had portraits of Charles Dickens, William Shakespeare, and Sigmund Freud in his office, on the Upper East Side. Like Sacks, he came from a literary Jewish family. He seemed deeply attuned to Sacks’s creative life, which took the form of ecstatic surges of literary inspiration followed by months of sterility and depression. “Do your best to enjoy and to work—it is the power of your mind that is
crucial
,” Shengold wrote when Sacks was on a visit with his family in England. Sacks wrote in his journal that he’d dreamed he overheard Shengold telling someone, “Oliver is lacking in proper self-respect; he has never really appreciated himself, or appreciated others’ appreciation of him. And yet, in his way, he is not less gifted than Auden was.” Sacks woke up flushed with embarrassment and pleasure.
Sacks in 1987. He became the modern master of the case study. “I write out symbolic versions of myself,” he wrote.
Photograph by Lowell Handler
Unlike many of his contemporaries, Shengold was not a doctrinaire thinker, but he was still susceptible to psychoanalytic fashions. Reflecting on how he might have viewed living openly as a gay man at that time, Shengold’s daughter, Nina, told me, “I don’t know that was a door that Dad necessarily had wide open.” In several books and papers, Shengold, a prolific reader of Western literature, tried to understand the process by which troubled people sublimate their conflicts into art. In his 1988 book, “Halo in the Sky: Observations on Anality and Defense,” Shengold wrote about the importance of transforming “anal-sadistic drives”—he used the anus as a metaphor for primitive, dangerous impulses—into “adaptive and creative ‘making.’ ” When Sacks read the book, he wrote in his journal that it “made me feel I was ‘lost in anality’ (whatever this means).”
Before Vincze, Sacks had been in love with a man named Mel Erpelding, who once told him, Sacks wrote, that he “oozed sexuality, that it poured out through every pore, that I was alive and vibrant with sexuality (a positive-admiring way of putting things), but also that I was reeking and toxic with it.” (Erpelding, who ended up marrying a woman, never allowed his relationship with Sacks to become sexual.) In his early years of therapy, in the late sixties, Sacks resolved that he would give up both drugs and sex. It’s doubtful that Shengold encouraged his celibacy, but he may have accepted that sexual abstinence could be productive, at least for a time. Richard Isay, the first openly gay member of the American Psychoanalytic Association, said that, in the seventies, he’d “rationalized that maturity and mental health demanded the sublimation of sexual excitement in work.” Sacks told a friend, “Shengold is fond of quoting Flaubert’s words ‘the mind has its erections too.’ ”
For Sacks, writing seemed almost physiological, like sweating—an involuntary response to stimuli. He routinely filled a whole journal in two days. “Should I then
put down my pen
, my interminable Journal (for this is but a fragment of the journal I have kept all my life),” he asked, “and ‘start living’ instead?” The answer was almost always no. Sometimes Sacks, who would eventually publish sixteen books, wrote continuously in his journal for six hours. Even when he was driving his car, he was still writing—he set up a tape recorder so that he could keep developing his thoughts, which were regularly interrupted by traffic or a wrong turn. Driving through Manhattan one day in 1975, he reflected on the fact that his closets, stuffed with pages of writing, resembled a “grave bursting open.”
By the late sixties, Sacks had become, he wrote, “almost a monk in my asceticism and devotion to work.” He estimated that he produced a million and a half words a year. When he woke up in the middle of the night with an erection, he would cool his penis by putting it in orange jello. He told Erpelding, “I partly accept myself as a celibate and a cripple, but partly—and this is . . . the wonder of sublimation—am able to
transform
my erotic feelings into other sorts of love—love for my patients, my work, art, thought.” He explained, “I keep my distance from people, am always courteous, never close. For me (as perhaps for you) there is almost no room, no moral room.”
“I have some hard ‘confessing’ to do—if not in public, at least to Shengold—and myself,” Sacks wrote in his journal, in 1985. By then, he had published four books—“Migraine,” “Awakenings,” “A Leg to Stand On,” and “The Man Who Mistook His Wife for a Hat”—establishing his reputation as “our modern master of the case study,” as the
Times
put it. He rejected what he called “pallid, abstract knowing,” and pushed medicine to engage more deeply with patients’ interiority and how it interacted with their diseases. Medical schools began creating programs in medical humanities and “narrative medicine,” and a new belief took hold: that an ill person has lost narrative coherence, and that doctors, if they attend to their patients’ private struggles, could help them reconstruct a new story of their lives. At Harvard Medical School, for a time, students were assigned to write a “book” about a patient. Stories of illness written by physicians (and by patients) began proliferating, to the point that the medical sociologist Arthur Frank noted, “ ‘Oliver Sacks’ now designates not only a specific physician author but also a . . . genre—a distinctively recognizable form of storytelling.”
But, in his journal, Sacks wrote that “a sense of hideous criminality remains (psychologically) attached” to his work: he had given his patients “powers (starting with powers of speech) which they do not have.” Some details, he recognized, were “pure fabrications.” He tried to reassure himself that the exaggerations did not come from a shallow place, such as a desire for fame or attention. “The impulse is both ‘purer’—and deeper,” he wrote. “It is not merely or wholly a projection—nor (as I have sometimes, ingeniously-disingenuously, maintained) a mere ‘sensitization’ of what I know so well in myself. But (if you will) a sort of autobiography.” He called it “symbolic ‘exo-graphy.’ ”
Sacks had “misstepped in this regard, many many times, in ‘Awakenings,’ ” he wrote in another journal entry, describing it as a “source of severe, long-lasting, self-recrimination.” In the book, published in 1973, he startled readers with the depth of his compassion for some eighty patients at Beth Abraham Hospital, in the Bronx, who had survived an epidemic of encephalitis lethargica, a mysterious, often fatal virus that appeared around the time of the First World War. The patients had been institutionalized for decades, in nearly catatonic states. At the time, the book was met with silence or skepticism by other neurologists—Sacks had presented his findings in a form that could not be readily replicated, or extrapolated from—but, to nonspecialists, it was a masterpiece of medical witnessing. The
Guardian
would name it the twelfth-best nonfiction book of all time.
Sacks spent up to fifteen hours a day with his patients, one of the largest groups of post-encephalitic survivors in the world. They were “mummified,” like “living statues,” he observed. A medicine called L-dopa, which elevates the brain’s dopamine levels, was just starting to be used for Parkinson’s disease, on an experimental basis, and Sacks reasoned that his patients, whose symptoms resembled those of Parkinson’s, could benefit from the drug. In 1969, within days of giving his patients the medication, they suddenly “woke up,” their old personalities intact. Other doctors had dismissed these patients as hopeless, but Sacks had sensed that they still had life in them—a recognition that he understood was possible because he, too, felt as if he were “buried alive.”
In “Awakenings,” Sacks writes about his encounters with a man he calls Leonard L. “What’s it like being the way you are?” Sacks asks him the first time they meet. “Caged,” Leonard replies, by pointing to letters of the alphabet on a board. “Deprived. Like Rilke’s ‘Panther’ ”—a reference to a poem by Rainer Maria Rilke about a panther pacing repetitively in cramped circles “around a center / in which a mighty will stands paralyzed.”
When Sacks was struggling to write his first book, “Migraine,” he told a friend that he felt like “Rilke’s image of the caged panther, stupefied, dying, behind bars.” In a letter to Shengold, he repeated this image. When Sacks met Leonard, he jotted down elegant observations in his chart (“Quick and darting eye movements are at odds with his general petrified immobility”), but there is no mention of Leonard invoking the Rilke poem.
In the preface to “Awakenings,” Sacks acknowledges that he changed circumstantial details to protect his patients’ privacy but preserved “what is important and essential—the real and full presence of the patients themselves.” Sacks characterizes Leonard as a solitary figure even before his illness: he was “continually buried in books, and had few or no friends, and indulged in none of the sexual, social, or other activities common to boys of his age.” But, in an autobiography that Leonard wrote after taking L-dopa, he never mentions reading or writing or being alone in those years. In fact, he notes that he spent all his time with his two best friends—“We were inseparable,” he writes. He also recalls raping several people. “We placed our cousin over a chair, pulled down her pants and inserted our penises into the crack,” he writes on the third page, in the tone of an aging man reminiscing on better days. By page 10, he is describing how, when he babysat two girls, he made one of them strip and then “leaped on her. I tossed her on her belly and pulled out my penis and placed it between her buttocks and started to screw her.”
Leonard Shengold, Sacks’s psychoanalyst.
Photograph courtesy Nina Shengold
In “Awakenings,” Sacks has cleansed his patient’s history of sexuality. He depicts him as a man of “most unusual intelligence, cultivation, and sophistication”—the “ ‘ideal’ patient.” L-dopa may have made Leonard remember his childhood in a heightened sexual register—his niece and nephew, who visited him at the hospital until his death, in 1981, told me that the drug had made him very sexual. But they said that he had been a normal child and adolescent, not a recluse who renounced human entanglement for a life of the mind.
Sacks finished writing “Awakenings” rapidly in the weeks after burying his mother, who’d died suddenly, at the age of seventy-seven. He felt “a great open torrent—and
release
,” he wrote in his journal. “It seems to be surely significant that ‘Awakenings’ finally came forth from me like a cry after the death of my own mother.” He referred to the writing of the book as his “Great Awakening,” the moment he “came out.” He doesn’t mention another event of significance: his patients had awakened during the summer of the Stonewall riots, the beginning of the gay-rights movement.
Shengold once told Sacks that he had “never met anyone less affected by gay liberation.” (Shengold supported his own son when he came out as gay, in the eighties.) Sacks agreed with the characterization. “I remain resolutely locked in my cell despite the dancing at the prison gates,” he said, in 1984.
In “Awakenings,” his patients are at first overjoyed by their freedom; then their new vitality becomes unbearable. As they continue taking L-dopa, many of them are consumed by insatiable desires. “L-DOPA is wanton, egotistical power,” Leonard says in the book. He injures his penis twice and tries to suffocate himself with a pillow. Another patient is so aroused and euphoric that she tells Sacks, “My blood is champagne”—the phrase Sacks used to describe himself when he was in love with Vincze. Sacks begins tapering his patients’ L-dopa, and taking some of them off of it completely. The book becomes a kind of drama about dosage: an examination of how much aliveness is tolerable, and at what cost. Some side effects of L-dopa, like involuntary movements and overactivity, have been well documented, but it’s hard not to wonder if “Awakenings” exaggerates the psychological fallout—Leonard becomes so unmanageable that the hospital moves him into a “punishment cell”—as if Sacks is reassuring himself that free rein of the libido cannot be sustained without grim consequence.
After “Awakenings,” Sacks intended his next book to be about his work with young people in a psychiatric ward at Bronx State Hospital who had been institutionalized since they were children. The environment reminded Sacks of a boarding school where he had been sent, between the ages of six and nine, during the Second World War. He was one of four hundred thousand children evacuated from London without their parents, and he felt abandoned. He was beaten by the headmaster and bullied by the other boys. The ward at Bronx State “exerted a sort of spell on me,” Sacks wrote in his journal, in 1974. “I lost my footing of proper sympathy and got sucked, so to speak, into an improper ‘perilous condition’ of identification to the patients.”
Shengold wrote several papers and books about a concept he called “soul murder”—a category of childhood trauma that induces “a hypnotic living-deadness, a state of existing ‘as if’ one were there.” Sacks planned to turn his work at Bronx State into a book about “ ‘SOUL MURDER’ and ‘SOUL SURVIVAL,’ ” he wrote. He was especially invested in two young men on the ward whom he thought he was curing. “The miracle-of-recovery started to occur in and through their relation to me (our relation and feelings
to each other
, of course),” he wrote in his journal. “We had to meet in a passionate subjectivity, a sort of collaboration or communication which transcended the Socratic relation of teacher-and-pupil.”
In a spontaneous creative burst lasting three weeks, Sacks wrote twenty-four essays about his work at Bronx State which he believed had the “beauty, the intensity, of Revelation . . . as if I was coming to know, once again, what I knew as a child, that sense of Dearness and Trust I had lost for so long.” But in the ward he sensed a “dreadful silent tension.” His colleagues didn’t understand the attention he was lavishing on his patients—he got a piano and a Ping-Pong table for them and took one patient to the botanical garden. Their suspicion, he wrote in his journal, “centred on the unbearability of my uncategorizability.” As a middle-aged man living alone—he had a huge beard and dressed eccentrically, sometimes wearing a black leather shirt—Sacks was particularly vulnerable to baseless innuendo. In April, 1974, he was fired. There had been rumors that he was molesting some of the boys.
That night, Sacks tore up his essays and then burned them. “Spite! Hate! Hateful spite!” he wrote in his journal shortly after. “And now I am empty—empty handed, empty hearted, desolate.”
The series of events was so distressing that even writing about it in his journal made Sacks feel that he was about to die. He knew that he should shrug off the false accusations as “vile idle gossip thrown by tiddlers and piddlers,” he wrote. But he couldn’t, because of “the
parental
accusation which I have borne—a Kafka-esque cross, guilt without crime, since my earliest days.”
The historian of medicine Henri Ellenberger observed that psychiatry owes its development to two intertwined dynamics: the neuroses of its founders—in trying to master their own conflicts, they came to new insights and forms of therapy—and the prolonged, ambiguous relationships they had with their patients. The case studies of these relationships, Ellenberger wrote, tended to have a distinct arc: psychiatrists had to unravel their patients’ “pathogenic secret,” a hidden source of hopelessness, in order to heal them.
Sacks’s early case studies also tended to revolve around secrets, but wonderful ones. Through his care, his patients realized that they had hidden gifts—for music, painting, writing—that could restore to them a sense of wholeness. The critic Anatole Broyard, recounting his cancer treatment in the
Times Magazine
in 1990, wrote that he longed for a charismatic, passionate physician, skilled in “empathetic witnessing.” In short, he wrote, a doctor who “would resemble Oliver Sacks.” He added, “He would see the genius of my illness.”
It speaks to the power of the fantasy of the magical healer that readers and publishers accepted Sacks’s stories as literal truth. In a letter to one of his three brothers, Marcus, Sacks enclosed a copy of “The Man Who Mistook His Wife for a Hat,” which was published in 1985, calling it a book of “fairy tales.” He explained that “these odd Narratives—half-report, half-imagined, half-science, half-fable, but with a fidelity of their own—are what I do, basically, to keep MY demons of boredom and loneliness and despair away.” He added that Marcus would likely call them “confabulations”—a phenomenon Sacks explores in a chapter about a patient who could retain memories for only a few seconds and must “make meaning, in a desperate way, continually inventing, throwing bridges of meaning over abysses,” but the “bridges, the patches, for all their brilliance . . . cannot do service for reality.”
Sacks was startled by the success of the book, which he had dedicated to Shengold, “my own mentor and physician.” It became an international best-seller, routinely assigned in medical schools. Sacks wrote in his journal,
Guilt has been much greater since ‘Hat’ because of (among other things)
My lies, falsification
He pondered the phrase “art is the lie that tells the truth,” often attributed to Picasso, but he seemed unconvinced. “I think I have to thrash this out with Shengold—it is killing me, soul-killing me,” he wrote. “My ‘cast of characters’ (for this is what they become) take on an almost Dickensian quality.”
Sacks once told a reporter that he hoped to be remembered as someone who “bore witness”—a term often used within medicine to describe the act of accompanying patients in their most vulnerable moments, rather than turning away. To bear witness is to recognize and respond to suffering that would otherwise go unseen. But perhaps bearing witness is incompatible with writing a story about it. In his journal, after a session with a patient with Tourette’s syndrome, Sacks describes the miracle of being “enabled to ‘feel’—that is, to imagine, with all the powers of my head and heart—how it felt to be another human being.” Empathy tends to be held up as a moral end point, as if it exists as its own little island of good work. And yet it is part of a longer transaction, and it is, fundamentally, a projection. A writer who imagines what it’s like to exist as another person must then translate that into his own idiom—a process that Sacks makes particularly literal.
“I’ll tell you what you are saying,” Sacks told a woman with an I.Q. of around 60 whose grandmother had just died. “You want to go down below and join your dead grandparents down in the Kingdom of Death.” In the conversation, which Sacks recorded, the patient becomes more expressive under the rare glow of her doctor’s sustained attention, and it’s clear that she is fond of him. But he is so excited about her words (“One feels that she is voicing universal symbols,” he says in a recording, “symbols which are infinite in meaning”) that he usurps her experience.
“I know, in a way, you don’t feel like living,” Sacks tells her, in another recorded session. “Part of one feels dead inside, I know, I know that. . . . One feels that one wants to die, one wants to end it, and what’s the use of going on?”
“I don’t mean it in that way,” she responds.
“I know, but you do, partly,” Sacks tells her. “I know you have been lonely all your life.”
The woman’s story is told, with details altered, in a chapter in “Hat” titled “Rebecca.” In the essay, Rebecca is transformed by grief for her grandmother. She reminds Sacks of Chekhov’s Nina, in “The Seagull,” who longs to be an actress. Though Nina’s life is painful and disappointing, at the end of the play her suffering gives her depth and strength. Rebecca, too, ends the story in full flower. “Rather suddenly, after her grandmother’s death,” Sacks writes, she becomes decisive, joining a theatre group and appearing to him as “a complete person, poised, fluent,” a “natural poet.” The case study is presented as an ode to the power of understanding a patient’s life as a narrative, not as a collection of symptoms. But in the transcripts of their conversations—at least the ones saved from the year that followed, as well as Sacks’s journals from that period—Rebecca never joins a theatre group or emerges from her despair. She complains that it’s “better that I shouldn’t have been born,” that she is “useless,” “good for nothing,” and Sacks vehemently tries to convince her that she’s not. Instead of bearing witness to her reality, he reshapes it so that she, too, awakens.
Some of the most prominent nonfiction writers of Sacks’s era (Joseph Mitchell, A. J. Liebling, Ryszard Kapuściński) also took liberties with the truth, believing that they had a higher purpose: to illuminate the human condition. Sacks was writing in that spirit, too, but in a discipline that depends on reproducible findings. The “most flagrant example” of his distortions, Sacks wrote in his journal, was in one of the last chapters of “Hat,” titled “The Twins,” about twenty-six-year-old twins with autism who had been institutionalized since they were seven. They spend their days reciting numbers, which they “savored, shared” while “closeted in their numerical communion.” Sacks lingers near them, jotting down the numbers, and eventually realizes that they are all prime. As a child, Sacks used to spend hours alone, trying to come up with a formula for prime numbers, but, he wrote, “I never found any Law or Pattern for them—and this gave me an intense feeling of Terror, Pleasure, and—Mystery.” Delighted by the twins’ pastime, Sacks comes to the ward with a book of prime numbers which he’d loved as a child. After offering his own prime number, “they drew apart slightly, making room for me, a new number playmate, a third in their world.” Having apparently uncovered the impossible algorithm that Sacks had once wished for, the twins continue sharing primes until they’re exchanging ones with twenty digits. The scene reads like a kind of dream: he has discovered that human intimacy has a decipherable structure, and identified a hidden pattern that will allow him to finally join in.
Before Sacks met them, the twins had been extensively studied because of their capacity to determine the day of the week on which any date in the calendar fell. In the sixties, two papers in the American Journal of Psychiatry provided detailed accounts of the extent of their abilities. Neither paper mentioned a gift for prime numbers or math. When Sacks wrote Alexander Luria, a Russian neuropsychologist, about his work with the twins, in 1973, he also did not mention any special mathematical skills. In 2007, a psychologist with a background in learning theory published a short article in the Journal of Autism and Developmental Disorders, challenging Sacks’s assertion that these twins could spontaneously generate large prime numbers. Because this is not something that humans can reliably do, Sacks’s finding had been widely cited, and was theoretically “important for not only psychologists but also for all scientists and mathematicians,” the psychologist wrote. (The psychologist had contacted Sacks to ask for the title of his childhood book of prime numbers, because he couldn’t find a book of that description, but Sacks said that it had been lost.) Without pointing to new evidence, another scientist wrote in Sacks’s defense, describing his case study as “the most compelling account of savant numerosity skills” and arguing, “This is an example of science at the frontier, requiring daring to advance new interpretations of partial data.”
After the publication of “Hat,” when Sacks was fifty-two years old, he wrote his friend Robert Rodman, a psychoanalyst, that “Shengold suggested, with some hesitancy, some months ago, that I should consider going deeper with him.” He added, “He also observes that I don’t complain, say, of sexual deprivation—though this is absolute.” At first, Sacks was worried that Shengold was preparing to dismiss him from treatment: “I’ve done all I can for you—now manage on your own!” Then he felt hopeful that he didn’t need to assume that “boredom-depression-loneliness-cutoffness” would define the rest of his life. He was also moved that, after twenty years, Shengold still considered him “worth extra work.”
But Sacks was shaken by the idea that they’d only been skimming the surface. He looked back through his notebooks and noticed “a perceptible decline in concern and passion,” which he felt had also dulled the quality of his thought. “Is the superficiality of my work, then, due to superficiality of relationships—to running away from whatever has deeper feeling and meaning?” he asked Rodman. “Is this perhaps spoken of, in a camouflaged way, when I describe the ‘superficialization’ of various patients?” As an example, he referenced an essay in “Hat” about a woman with a cerebral tumor. She was intelligent and amusing but seemed not to care about anyone. “Was this the ‘cover’ of some unbearable emotion?” he writes in the essay.
Sacks felt that Shengold was the reason he was still alive, and that he should go further with him. “What have I to lose?” he asked Rodman. But, he wrote, “what one has to lose, of course, may be just that quasi-stable if fragile ‘functioning’ . . . so there is reason to hesitate.” Going deeper would also mean more fully submitting to someone else’s interpretation, experiencing what he asked of his own patients; Rodman proposed that Sacks was “afraid of the enclosure of analysis, of being reduced and fixed with a formulated phrase.”
Sacks and his partner, Bill Hayes. Photograph courtesy Oliver Sacks Foundation
In the early eighties, Lawrence Weschler, then a writer for The New Yorker, began working on a biography of Sacks. Weschler came to feel that Sacks’s homosexuality was integral to his work, but Sacks didn’t want his sexuality mentioned at all, and eventually asked him to stop the project. “I have lived a life wrapped in concealment and wracked by inhibition, and I can’t see that changing now,” he told Weschler. In his journal, Sacks jotted down thoughts to share with Weschler on the subject: “My ‘sex life’ (or lack of it) is, in a sense irrelevant to the . . . sweep of my mind.” In another entry, he wrote that the Freudian term “sublimation” diminished the process he’d undergone. When he was still having sex, as a young man in California, he used to sheath his body in leather gear, so he was “totally encased, enclosed,” his real self sealed in a kind of “black box.” He wrote, “I have, in a sense, ‘outgrown’ these extraordinary, almost convulsive compulsions—but this detachment has been made possible by incorporating them into a vast and comprehending view of the world.” (Weschler became close friends with Sacks, and, after Sacks died, published a “biographical memoir” titled “And How Are You, Dr. Sacks?”)
It’s unclear whether Sacks did “go deeper” with Shengold. In the late eighties, Sacks wrote in his journal that he was “scared, horrified (but, in an awful way, accepting or complaisant) about my non-life.” He likened himself to a “pithed and gutted creature.” Rather than living, he was managing a kind of “homeostasis.”
In 1987, Sacks had an intense friendship with a psychiatrist named Jonathan Mueller, with whom he briefly fell in love. Mueller, who was married to a woman, told me that he did not realize Sacks had romantic feelings for him. Sacks eventually moved on. But he felt that the experience had altered him. “I can read ‘love stories’ with empathy and understanding—I can ‘enter into them’ in a way which was impossible before,” he wrote in his journal. He perceived, in a new light, what it meant for his patients in “Awakenings” to glimpse the possibility of “liberation”: like him, he wrote, they were seeking “not merely a cure but an indemnification for the loss of their lives.”
By the nineties, Sacks seemed to ask less of himself, emotionally, in relation to his patients. He had started working with Kate Edgar, who’d begun as his assistant but eventually edited his writing, organized his daily life, and became a close friend. (Shengold had encouraged Sacks to find someone to assist with his work. “The secretary is certainly an important ‘ego-auxiliary,’ ” he wrote him in a letter.) Edgar was wary about the way Sacks quoted his patients—they were suspiciously literary, she thought—and she checked to make sure he wasn’t getting carried away. She spent hours with some of his patients, and, she told me, “I never caught him in anything like that, which actually surprises me.”
Weschler told me that Sacks used to express anxiety about whether he’d distorted the truth. Weschler would assure him that good writing is not a strict account of reality; there has to be space for the writer’s imagination. He said he told Sacks, “Come on, you’re extravagantly romanticizing how bad you are—just as much as you were extravagantly romanticizing what the patient said. Your mother’s accusing voice has taken over.” Weschler had gone to Beth Abraham Hospital to meet some of the patients from “Awakenings” and had been shaken by their condition. “There’s a lot of people shitting in their pants, drooling—the sedimentation of thirty years living in a warehouse,” he said. “His genius was to see past that, to the dignity of the person. He would talk to them for an hour, and maybe their eyes would brighten only once—the rest of the time their eyes were cloudy—but he would glom onto that and keep talking.”
After “Hat,” Sacks’s relationship with his subjects became more mediated. Most of them were not his patients; many wrote to him after reading his work, recognizing themselves in his books. There was a different power dynamic, because these people already believed that they had stories to tell. Perhaps the guilt over liberties he had taken in “Hat” caused him to curb the impulse to exaggerate. His expressions of remorse over “making up, ‘enhancing,’ etc,” which had appeared in his journals throughout the seventies and eighties, stopped. In his case studies, he used fewer and shorter quotes. His patients were far more likely to say ordinary, banal things, and they rarely quoted literature. They still had secret gifts, but they weren’t redeemed by them; they were just trying to cope.
In “An Anthropologist on Mars,” from 1992, a book of case studies about people compensating for, and adapting to, neurological conditions, some of the richest passages are the ones in which Sacks allows his incomprehension to become part of the portrait. In a chapter called “Prodigies,” he wants badly to connect with a thirteen-year-old boy named Stephen, who is autistic and has an extraordinary ability to draw, but Stephen resists Sacks’s attempts at intimacy. He will not allow himself to be romanticized, a refusal that Sacks ultimately accepts: “Is Stephen, or his autism, changed by his art? Here, I think, the answer is no.” In this new mode, Sacks is less inclined to replace Stephen’s unknowable experience with his own fantasy of it. He is open about the discomfort, and even embarrassment, of his multiple failures to reach him: “I had hoped, perhaps sentimentally, for some depth of feeling from him; my heart had leapt at the first ‘Hullo, Oliver!’ but there had been no follow-up.”
Mort Doran, a surgeon with Tourette’s syndrome whom Sacks profiled in “Anthropologist,” told me that he was happy with the way Sacks had rendered his life. He said that only one detail was inaccurate—Sacks had written that the brick wall of Doran’s kitchen was marked from Doran hitting it during Tourette’s episodes. “I thought, Why would he embellish that? And then I thought, Maybe that’s just what writers do.” Doran never mentioned the error to Sacks. He was grateful that Sacks “had the gravitas to put it out there to the rest of the world and say, ‘These people aren’t all nuts or deluded. They’re real people.’ ”
The wife in the title story of “Hat” had privately disagreed with Sacks about the portrayal of her husband, but for the most part Sacks appeared to have had remarkable relationships with his patients, corresponding with them for years. A patient called Ray, the subject of a 1981 piece about Tourette’s syndrome, told me that Sacks came to his son’s wedding years after his formal treatment had ended. Recalling Sacks’s death, he found himself suddenly crying. “Part of me left,” he said. “Part of my self was gone.”
A year after “Awakenings” was published, Sacks broke his leg in Norway, and Leonard L. and his mother wrote him a get-well letter. Thirty-two patients added their names, their signatures wavering. “Everybody had been counting the days for your return, so you can imagine the turmoil when they heard the news,” Leonard’s mother wrote. She explained that “most of the patients are not doing so well without your help and interest.” She added that Leonard “isn’t doing too well either.” When Leonard learned that Sacks wouldn’t be back, she said, “he shed enough tears to fill a bucket.”
Sacks spoke of “animating” his patients, as if lending them some of his narrative energy. After living in the forgotten wards of hospitals, in a kind of narrative void, perhaps his patients felt that some inaccuracies were part of the exchange. Or maybe they thought, That’s just what writers do. Sacks established empathy as a quality every good doctor should possess, enshrining the ideal through his stories. But his case studies, and the genre they helped inspire, were never clear about what they exposed: the ease with which empathy can slide into something too creative, or invasive, or possessive. Therapists—and writers—inevitably see their subjects through the lens of their own lives, in ways that can be both generative and misleading.
In his journal, reflecting on his work with Tourette’s patients, Sacks described his desire to help their illness “reach fruition,” so that they would become floridly symptomatic. “With my help and almost my collusion, they can extract the maximum possible from their sickness—maximum of knowledge, insight, courage,” he wrote. “Thus I will FIRST help them to get ill, to experience their illness with maximum intensity; and then, only then, will I help them get well!” On the next line, he wrote, “IS THIS MONSTROUS?” The practice came from a sense of awe, not opportunism, but he recognized that it made him complicit, as if their illness had become a collaboration. “An impulse both neurotic and intellectual (artistic) makes me get the most out of suffering,” he wrote. His approach set the template for a branch of writing and thinking that made it seem as if the natural arc of illness involved insight and revelation, and even some poetry, too.
In his journals, Sacks repeatedly complained that his life story was over. He had the “feeling that I have stopped doing, that doing has stopped, that life itself has stopped, that it is petering out in a sort of twilight of half-being,” he wrote, in 1987. His journals convey a sense of tangible boredom. He transcribed long passages from philosophers and theologians (Simone Weil, Søren Kierkegaard, Gottfried Wilhelm Leibniz, Dietrich Bonhoeffer) and embarked on disquisitions on the best definition of reality, the “metabolism of grace,” the “deep mystery of incubation.” His thoughts cast outward in many directions—notes for a thousand lectures—then tunnelled inward to the point of non-meaning. “Where Life is Free, Immaterial, full of Art,” he wrote, “the laws of life, of Grace, are those of Fitness.”
Sacks proposed various theories for why he had undergone what he called “psychic death.” He wondered if he had become too popular, merely a fuzzy symbol of compassionate care. “Good old Sacks—the House Humanist,” he wrote, mocking himself. He also considered the idea that his four decades of analysis were to blame. Was it possible, he wrote, that a “vivisection of inner life, however conceived, however subtle and delicate, may in fact destroy the very thing it examines?” His treatment with Shengold seemed to align with a life of “homeostasis”—intimacy managed through more and more language, in a contained, sterile setting, on Monday and Wednesday mornings, from 6:00 to 6:45 A.M. They still referred to each other as “Dr. Sacks” and “Dr. Shengold.” Once, they ran into each other at a chamber concert. They were a few rows apart, but they didn’t interact. Occasionally, Shengold told his children that he “heard from the couch” about a good movie or play, but he never shared what happened in his sessions. They inferred that Sacks was their father’s patient after reading the dedication to him in “Hat.”
As Sacks aged, he felt as if he were gazing at people from the outside. But he also noticed a new kind of affection for humans—“homo sap.” “They’re quite complex (little) creatures (I say to myself),” he wrote in his journal. “They suffer, authentically, a good deal. Gifted, too. Brave, resourceful, challenging.”
Perhaps because love no longer appeared to be a realistic risk—he had now entered a “geriatric situation”—Sacks could finally confess that he craved it. “I keep being stabbed by love,” he wrote in his journal. “A look. A glance. An expression. A posture.” He guessed that he had at least five, possibly ten, more years to live. “I want to, I want to ••• I dare not say. At least not in writing.”
In 2008, Sacks had lunch with Bill Hayes, a forty-seven-year-old writer from San Francisco who was visiting New York. Hayes had never considered Sacks’s sexuality, but, as soon as they began talking, he thought, “Oh, my God, he’s gay,” he told me. They lingered at the table for much of the afternoon, connecting over their insomnia, among other subjects. After the meal, Sacks wrote Hayes a letter (which he never sent) explaining that relationships had been “a ‘forbidden’ area for me—although I am entirely sympathetic to (indeed wistful and perhaps envious about) other people’s relationships.”
A year later, Hayes, whose partner of seventeen years had died of a heart attack, moved to New York. He and Sacks began spending time together. At Sacks’s recommendation, Hayes started keeping a journal, too. He often wrote down his exchanges with Sacks, some of which he later published in a memoir, “Insomniac City.”
“It’s really a question of mutuality, isn’t it?” Sacks asked him, two weeks after they had declared their feelings for each other.
“Love?” Hayes responded. “Are you talking about love?”
“Yes,” Sacks replied.
Sacks began taking Hayes to dinner parties, although he introduced him as “my friend Billy.” He did not allow physical affection in public. “Sometimes this issue of not being out became very difficult,” Hayes told me. “We’d have arguments, and I’d say things like ‘Do you and Shengold ever talk about why you can’t come out? Or is all you ever talk about your dreams?’ ” Sacks wrote down stray phrases from his dreams on a whiteboard in his kitchen so that he could report on them at his sessions, but he didn’t share what happened in therapy.
Kate Edgar, who worked for Sacks for three decades, had two brothers who were gay, and for years she had advocated for gay civil rights, organizing Pride marches for her son’s school. She intentionally found an office for Sacks in the West Village so that he would be surrounded by gay men living openly and could see how normal it had become. She tended to hire gay assistants for him, for the same reason. “So I was sort of plotting on that level for some years,” she told me.
In 2013, after being in a relationship with Hayes for four years—they lived in separate apartments in the same building—Sacks began writing a memoir, “On the Move,” in which he divulged his sexuality for the first time. He recounts his mother’s curses upon learning that he was gay, and his decades of celibacy—a fact he mentions casually, without explanation. Edgar wondered why, after so many years of analysis, coming out took him so long, but, she said, “Oliver did not regard his relationship with Shengold as a failure of therapy.” She said that she’d guessed Shengold had thought, “This is something Oliver has to do in his own way, on his own time.” Shengold’s daughter, Nina, said that, “for my dad to have a patient he loved and respected finally find comfort in identifying who he’d been all his life—that’s growth for both of them.”
A few weeks after finishing the manuscript, Sacks, who’d had melanoma of the eye in 2005, learned that the cancer had come back, spreading to his liver, and that he had only months to live. He had tended toward hypochondria all his life, and Edgar thought that the diagnosis might induce a state of chronic panic. Since he was a child, Sacks had had a horror of losing things, even irrelevant objects. He would be overcome by the “feeling that there was a hole in the world,” he wrote in his journal, and the fear that “I might somehow fall through that hole-in-the-world, and be absolutely, inconceivably lost.” Edgar had dealt for decades with his distress over lost objects, but she noticed that now, when he misplaced things, he didn’t get upset. He had an uncharacteristic ease of being.
In the summer of 2015, before Shengold went on his annual summer break, Sacks said to Edgar, “If I’m alive in September when Shengold returns, I’m not sure I need to go back to my sessions.” They had been seeing each other for forty-nine years. Sacks was eighty-two; Shengold was eighty-nine.
When Sacks was struggling with his third book, “A Leg to Stand On,” which was about breaking his leg and his frustration that his doctors wouldn’t listen to him, he wrote in his journal that Shengold had suggested (while apologizing for the corniness of the phrase) that the book should be “a message of love”—a form of protest against the indifference that so many patients find in their doctors. Shengold may have been giving Sacks permission to see their own relationship—the one place in which Sacks felt an enduring sense of recognition and care—as a hidden subject of the book. Extending Shengold’s idea, Sacks wrote, of his book, “The ‘moral’ center has to do with . . . the irreducible ultimate in doctor-patient relations.”
In August, two weeks before Sacks died, he and Shengold spoke on the phone. Shengold was with his family at a cottage in the Finger Lakes region of central New York, where he spent every summer. Nina told me, “We all gathered in the living room of that little cottage and put my father on speakerphone. Oliver Sacks was clearly on his deathbed—he was not able to articulate very well. Sometimes his diction was just gone. Dad kept shaking his head. He said, ‘I can’t understand you. I’m so sorry, I can’t understand you.’ ” At the end of the call, Shengold told Sacks, “It’s been the honor of my life to work with you,” and said, “Goodbye, Oliver.” Sacks responded, “Goodbye, Leonard.” It was the first time they had ever used each other’s first names. When they hung up, Shengold was crying.
After Sacks died, Shengold started closing down his practice. “It was the beginning of the end for him,” his son David told me. “He had lost most of his colleagues. He was really the last of his generation.” Nina said, “I do think part of why my father lived so long and was able to work so long was because of that relationship. That feeling of affection and kindred spirit was lifesaving.”
In “Awakenings,” when describing how Leonard L.—his “ ‘ideal’ patient”—initially responded to L-dopa, Sacks characterizes him as “a man released from entombment” whose “predominant feelings at this time were feelings of freedom, openness, and exchange with the world.” He quotes Leonard saying, “I have been hungry and yearning all my life . . . and now I am full.” He also says, “I feel saved. . . . I feel like a man in love. I have broken through the barriers which cut me off from love.’ ”
For years, Sacks had tested the possibility of awakenings in others, as if rehearsing, or outsourcing, the cure he had longed to achieve with Shengold. But at the end of his life, like an inside-out case study, he inhabited the story he’d imagined for his patients. “All of us entertain the idea of another sort of medicine . . . which will restore us to our lost health and wholeness,” he wrote, in “Awakenings.” “We spend our lives searching for what we have lost; and one day, perhaps, we will suddenly find it.” ♦
"Honor Our History": Trump Slammed for Ending Free National Park Entry on Juneteenth & MLK Day
Democracy Now!
www.democracynow.org
2025-12-09 13:30:31
The Trump administration is facing backlash after ending free admission at national parks on the only two federal holidays honoring Black history — Juneteenth and Martin Luther King Jr. Day — while adding free entry on President Trump’s birthday, June 14. The Interior Department also announced...
The Trump administration is facing backlash after ending free admission at national parks on the only two federal holidays honoring Black history — Juneteenth and Martin Luther King Jr. Day — while adding free entry on President Trump’s birthday, June 14. The Interior Department also announced higher entry fees for non-U.S. residents under what it calls “America-first entry fee policies.”
Denigrating Black history “can’t erase the truth,” says Carolyn Finney, who served on the National Parks Advisory Board during the Obama administration. “It’s not going to change how we feel, not just as Black Americans, but Americans in general, about honoring our history.”
We also speak with Audrey Peterman, author of Our True Nature: Finding a Zest for Life in the National Park System, who says “the entire history of America, the entire history of every racial and ethnic group in America, is in the national park system.”
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
"Merger Madness": Trump at Center of Rival Netflix-Paramount Bids for Warner Bros.
Democracy Now!
www.democracynow.org
2025-12-09 13:16:10
President Donald Trump says he will be personally involved in the potential sale of Warner Bros. Discovery, with two enormous buyout offers on the table that risk further exacerbating U.S. media concentration. Netflix announced an $83 billion deal last week to buy Warner Bros. Discovery, which would...
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: Paramount has launched a hostile bid to take over Warner Bros. Discovery, just days after Netflix announced an $83 billion deal to acquire Warner’s studio and streaming assets, including HBO Max. Paramount’s largest shareholder is Larry Ellison, one of the world’s richest men and a close ally of President Trump. Paramount is attempting to fund the takeover of the Warner Bros. Discovery company with funding from sovereign wealth funds from Saudi Arabia, Abu Dhabi and Qatar, as well as Affinity Partners, the private equity fund led by Jared Kushner, Trump’s son-in-law.
Critics of media consolidation have warned against both Netflix’s offer and Paramount’s hostile bid. Actor and activist Jane Fonda, who recently relaunched her father’s free speech organization from the 1940s called the Committee for the First Amendment, published an op-ed in The Ankler last week headlined “The WBD Deal Puts Hollywood, and Democracy, at Risk.” She writes further consolidation will mean, quote, “fewer jobs, fewer opportunities to sell work, fewer creative risks, fewer news sources and far less diversity in the stories Americans get to hear. … And when only a handful of mega-companies control the entire pipeline, they gain the power to steamroll every guild — SAG-AFTRA, the WGA, the PGA, the DGA, IATSE, everyone — making it harder for workers to bargain, harder to stand up for themselves and harder to make a living at all.” The words of Jane Fonda.
The future of Warner Bros. may rest in part in the hands of federal regulators who must approve any merger. On Sunday night, prior to Paramount’s hostile bid, President Trump said a Netflix-Warner merger, quote, “could be a problem.”
PRESIDENT DONALD TRUMP: They have a very big market share. And when they have Warner Bros., you know, that share goes up a lot. So, I don’t know. That’s going to be for some economists to tell, and also — and I’ll be involved in that decision, too. … But it is a big market share. There’s no question about that. It could be a problem.
AMY GOODMAN: On Monday, after Paramount announced its hostile bid, President Trump was asked about the competing bids for Warner Bros., including the involvement of his son-in-law, Jared Kushner.
PRESIDENT DONALD TRUMP: I know — I know the companies very well. I know what they’re doing. But I have to see. I have to see what percentage of market they have. We have to see the Netflix percentage of market, Paramount, the percentage of market. I mean, none of them are particularly great friends of mine. You know, I just — I want to — I want to do what’s right. It’s so — it’s so very important to do what’s right.
REPORTER: The Paramount deal is supported by Jared Kushner, Mr. President. Would that impact your decision?
PRESIDENT DONALD TRUMP: If Paramount is? No, I don’t know. I haven’t — I’ve never spoken with him.
AMY GOODMAN: We’re joined now by Craig Aaron, the co-CEO of Free Press and Free Press Action, two media reform organizations — not to be confused with Bari Weiss’s The Free Press, which is now owned by Paramount. Craig’s most recent article is headlined “Stop the Merger Madness.” Free Press has also just published a new report headlined “Chokehold: Donald Trump’s War on Free Speech & the Need for Systemic Resistance.”
Craig Aaron, welcome back to Democracy Now! If you can respond to all that has happened in just a matter of days? There was enormous criticism of Netflix’s bid, and now, of course, you have this hostile bid by none other than President Trump’s son-in-law. Can you talk about all of this?
CRAIG AARON: Absolutely. Good to be with you, Amy and Juan.
This is really a damned-if-you-do, damned-if-you-don’t situation, where we have these giant companies taking — trying to take control of even more of what we watch, see, hear and read every day. So, Netflix, of course, if they could get this deal done, would dominate online streaming. Paramount itself has a huge streaming business and, of course, is a major movie studio. So, this is another situation where we’re talking about giant companies merging, spending billions and billions of dollars, all the lawyers and bankers getting rich.
But you don’t have to look very far in the past to understand that media consolidation after media consolidation deal are disastrous for the workers in these industries. They’re disastrous for the audience, who see prices go up and choices go down.
And pretty much every time, they’re disastrous for the businesses, as well. This is the third giant merger in recent time just to involve Warner Bros. You’ll go back to AOL Time Warner. We’ve had AT&T and Time Warner — all of these deals collapsing, falling apart, costing thousands and thousands of jobs, Warner Bros. Discovery itself the product of failed mergers.
And now we’re being told we need more concentration, more consolidation. And we have to be asking ourselves: Who does this serve? It seems to serve these executives. Maybe it serves Donald Trump. He seems very interested in competition for his favor, not very interested in actual competition when it comes to media, when it comes to entertainment.
JUAN GONZÁLEZ: And, Craig, you mentioned Trump. This spectacle of the president saying he will be involved in the decision —
CRAIG AARON: Unbelievable.
JUAN GONZÁLEZ: — on this merger one way or the other. Forget about the fact that his son-in-law is a participant, obviously, in one of the bids. How frequently have presidents in the past directly involved themselves in these kinds of merger decisions where the FCC is involved?
CRAIG AARON: Well, you know, certainly these are political fights, and so the White House might have a stake. They might have an interest in the outcome. But the idea that the president would be announcing that he’s personally going to be involved, the idea that someone as close to the president as a member of his own family could be poised to benefit from a decision of the administration, this would be completely unthinkable. And even the way this whole decision is being made really does look like, you know, a Mafia-type situation, not anything that is recognizable in terms of policymaking.
Paramount and the Ellisons’ entire argument for why they should be the company that wins this bidding war essentially boils down to “Donald Trump likes us better.” And they’ve been very clear in trying to win over Trump. That’s why Jared Kushner is involved. They’re trying to win over Trump. They’re saying, “We want to control CNN. We’re going to make CNN better for you. Look what we’ve done at CBS, where we’ve muted 60 Minutes, where we’ve put Bari Weiss in charge of the news operation.” This is their entire package and selling point to the administration, is that if they go with Paramount Skydance, then that’s going to be good for Trump, because these media executives understand, unfortunately, that’s how you get things done in the Trump era, is you appeal to the ego of Trump, you flatter Trump, and you try to line Trump’s pockets or the pockets of those closest to him. That’s how business gets done in the Trump era.
JUAN GONZÁLEZ: And what are the next steps here? What agencies do have to have oversight? And you mentioned Paramount, but isn’t the head — the chief legal officer of Paramount, wasn’t he the head of the Antitrust Division of the Justice Department during Trump’s first term?
CRAIG AARON: That’s right, Juan. The revolving door is spinning. So, Makan Delrahim, who was the top antitrust enforcer under the first Trump administration, now, of course, he’s Paramount’s top lawyer trying to work the system to get approval for their hostile takeover, if it’s announced.
So, there are a lot of things that are going to happen in the weeks ahead. I would remind folks that every time a merger is announced, all the companies involved want to treat it like it’s a done deal, like it’s about to be finished. That is not the case. That’s just PR and spin. We’re looking at at least a year of evaluating this deal, and that’s not even including the fact that there are multiple suitors here trying to appeal to the Warner Bros. board, and now directly to their shareholders, in a hostile takeover.
But whatever deal Warner Bros. pursues, probably by the end of this month, by December 22nd, that would have to go before the Justice Department. That is going to be, I believe, who will review this deal at the federal level. Of course, the Justice Department is also not what it used to be even when Delrahim was there. You know, it has become an arm of the — a direct arm of the Trump administration, pursuing the Trump administration’s political goals. But if they actually follow the law, of course, they would have to scrutinize this deal. And really, looking at the basics there, there’s no way a deal like this should even be considered.
Now, it’s not just the Justice Department that will look here. State attorneys general could have a role that’s very important. If you’re the attorney general of California or New York, you should have a lot of interest in what is going to happen to this huge industry that is such a big part of your state. And this is a big enough deal that European regulators and others around the world are also going to be scrutinizing it, because it is such a ginormous, multibillion-dollar deal between these huge companies, that, you know, really will reshape the entertainment industry as we know it.
JUAN GONZÁLEZ: And I wanted to ask you about another merger under consideration, this one by the FCC. That’s of Nexstar, the country’s largest owner of TV stations, with a competitor, TEGNA. Tell us — Nexstar is pretty well known as a very conservative company, isn’t it?
CRAIG AARON: That’s right, and especially during the Trump administration. Nexstar has been collecting hundreds of television stations across the country in this wave of media consolidation. They are now going to the Federal Communications Commission and asking them to waive laws, to actually overturn explicit instructions from Congress that limit the amount of television stations — the size of the audience one television chain can reach. They’re trying to get that thrown out so they can take over TEGNA.
And they’ve done that, again, by appealing to the Trump administration, by promising to crack down on critical journalism and by doing things like taking Jimmy Kimmel off the air. This was one of the companies, along with Sinclair Broadcasting, another very partisan broadcaster, that when the FCC chairman went on a podcast and started complaining about jokes Jimmy Kimmel was making about Charlie Kirk, it was Nexstar rushing to immediately yank him, pull him off the air, while having this multibillion-dollar deal before the FCC. So, again, we have the media executives seeing that the way to get ahead in the Trump administration is appeal to Trump.
We have the Trump administration abusing its power, really shaking down these companies, demanding loyalty, demanding they erase and eliminate their diversity and equity programs, and demanding that, in some cases, they pay off the administration through specious lawsuits, or maybe it’s they offer big movie contracts to the president’s wife or do favors for his son-in-law’s equity firm. This is the way the Trump administration has been working. This is what they’re pursuing to try to do a takeover of the media and make sure the mainstream, dominant media is really only there to serve Trump.
And you can see that in all of these deals. You know, over the weekend, Trump expressed skepticism of Netflix, but then, when 60 Minutes interviewed Marjorie Taylor Greene, well, maybe he doesn’t like CBS and Paramount so much anymore. This is the game they’re playing. It’s all about control and dominance and one narrative. And unfortunately, media executives in broadcast, in cable, in Hollywood, instead of fighting back against this infringement on free speech and freedom, have simply capitulated, hoping it will get their deals done.
AMY GOODMAN: Craig Aaron, I want to thank you for being with us, co-CEO of Free Press and Free Press Action. We’ll link to your new report titled “Chokehold: Donald Trump’s War on Free Speech & the Need for Systemic Resistance.” His new article is headlined “Stop the Merger Madness.” We’ll link to it at democracynow.org.
Next up, outcry is growing after the Trump administration drops free admission to national parks on the only two federal holidays honoring Black history: Juneteenth and Martin Luther King Day. Instead, the parks will be free on Donald Trump’s birthday. Stay with us.
The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.
Elon Musk’s X platform has blocked the European Commission from making advertisements, presumably in response to the €120 million fine for its misleading verification system and overall lack of transparency. We’re grateful to Elon Musk for proving once again why the world needs to log off corporate-owned, centrally-controlled social media platforms and log on to a better way of being online. The world needs an open social web through the fediverse and Mastodon.
Calls for public institutions to invest in digital sovereignty are increasing across civil society. The term digital sovereignty means that an institution has autonomy and control over the critical digital infrastructure, data, and services that make up their online presence. Up until this point, social media has not been a part of this conversation. We think it is time to change that.
In any free society, it is the right of every citizen to access and comment on the news, decisions, and reasonings of their government. We believe it is a government’s responsibility to ensure this right for its constituents. Public institutions should communicate with their citizens on open platforms, not ones that require creating an account and sending personal data to a self-serving tech company. Today, institutions often communicate through the censorious filter of corporations that do not have the best interests of people or society at heart. They let their message be governed by the whims of out-of-touch and overpaid people who believe they should have unchecked power. We cannot let this stand. Mastodon offers a path forward for any institution that wants to take control of their communications, and we can help you get started today.
One of the tools these corporate social media platforms use to control an institution’s communications is the algorithm. Platforms strategically tune their algorithms to make it difficult, if not impossible, for institutions to reach their people without paying the platform ad money. Musk’s move to turn off the European Commission’s advertising capabilities feels like a perverse power play over a legitimate fine, one that effectively silences a crucial avenue for public discourse. We should be horrified that any single individual can wield such influence over the relationship between governments and the people they represent. We should be especially concerned when that individual doesn’t think our governments should exist in the first place.
Mastodon’s chronological timeline means that no institution needs to game an algorithm to keep their people informed. By using hashtags, it’s easy for people who care about the topics you discuss to find you. What’s more, your constituents don’t need to be on Mastodon to follow your posts. They can subscribe via open protocols like RSS and soon via email. When it comes to the source of the fine in the first place—X’s infamous blue checks, a.k.a. verification—Mastodon also offers a better way. We empower people to verify themselves by linking their social profile to their official (or personal) website. This allows for greater transparency and trust than relying on the often less-than-reputable verification practices of a single corporate entity, especially one that is willing to sell reputation for a low monthly fee. (Meanwhile, another corporate social media platform made $16 billion, 10% of their 2024 revenue, from advertisements for scams and banned goods.)
In an era where information is power, it’s disheartening to see our institutions yield so much to the whims of industry and individuals. In contrast, the European Commission is leading the way in taking ownership of social sovereignty on behalf of their people. They own a Mastodon instance, ec.social-network.europa.eu, to reach Europeans directly and keep them well informed. Mastodon is proud to help them manage the technical side of things. If you are someone on the fediverse who would like to see their government own their social sovereignty, we encourage you to get in touch with your local representative and tell them why you think they should start using open social media networks like the fediverse. We’re starting a thread on Mastodon of resources to help you get in touch with your local representative here.
By making the news and truth contingent on advertising budgets, we’ve created an environment where any narrative can win, as long as the storyteller is willing to pay. If we allow these conditions to continue, we will leave behind the voices that truly matter: the people and their public institutions. It is critical that those voices not be silenced forever. The promise of the fediverse is the promise of a better way forward: free from ads and manipulative algorithms, a place built by and for people like you, where our sovereignty is a right and not a privilege.
It will take all of us working together to build a better way of being online. If you want to start an instance or have ideas about how we can encourage more institutions to take control of their social sovereignty, get in touch with us at hello@joinmastodon.org.
Designing Rust FDB Workloads That Actually Find Bugs
After one trillion CPU-hours of simulation testing, FoundationDB has been stress-tested under conditions far worse than any production environment. Network partitions, disk failures, Byzantine faults. FDB handles them all.
But what about your code?
Your layer sits on top of FDB. Your indexes, your transaction logic, your retry handling. How do you know it survives chaos?
At Clever Cloud, we are building Materia, our serverless database product. The question haunted us: how do you ship layer code with the same confidence FDB has in its own? Our answer was to hack our way into FDB's simulator using foundationdb-simulation, a crate that compiles Rust to run inside FDB's deterministic simulator. We're the only language besides Flow that can pull this off.
The first seed triggered commit_unknown_result, one of the most feared edge cases for FDB layer developers. When a connection drops, the client can't know if the transaction committed. Our atomic counters were incrementing twice. In production, this surfaces once every few months under heavy load and during failures. In simulation? Almost immediately.
This post won't walk you through the code mechanics. The foundationdb-simulation crate and its README cover that. Instead, this teaches you how to design workloads that catch real bugs. Whether you're a junior engineer or an LLM helping write tests, these principles will guide you.
Traditional testing has you write specific tests for scenarios you imagined. But as Will Wilson put it at Bug Bash 2025: "The most dangerous bugs occur in states you never imagined possible."
The key insight of autonomous testing (what FDB's simulation embodies) is that instead of writing tests, you write a test generator. If you ran it for infinite time, it would eventually produce all possible tests you could have written. You don't have infinite time, so instead you get a probability distribution over all possible tests. And probability distributions are leaky: they cover cases you never would have thought to test.
This is why simulation finds bugs so fast. You're not testing what you thought to test. You're testing what the probability distribution happens to generate, which includes edge cases you'd never have written explicitly. Add fault injection (a probability distribution over all possible ways the world can conspire to screw you) and now you're finding bugs that would take months or years to surface in production.
This is what got me interested in simulation in the first place: how do you test the things you see during on-call shifts? Those weird transient bugs at 3 AM, the race conditions that happen once a month, the edge cases you only discover when production is on fire. Simulation shifts that complexity from SRE time to SWE time. What was a 3 AM page becomes a daytime debugging session. What was a high-pressure incident becomes a reproducible test case you can bisect, rewind, and experiment with freely.
Here's why rare bugs are so hard to find: imagine a bug that requires three unlikely events in sequence. Each event has a 1/1000 probability, so the combined probability is 1/1,000,000,000, and finding that bug takes roughly a billion tries with random testing. Research confirms this: a study of network partition failures found that 83% require 3+ events to manifest, 80% have catastrophic impact, and 21% cause permanent damage that persists after the partition heals.
But here's the good news for Rust workloads: you don't solve this problem yourself. FDB's simulation handles fault injection. BUGGIFY injects failures at arbitrary code points. Network partitions appear and disappear. Disks fail. Machines crash and restart. The simulator explores failure combinations that would take years to encounter in production.
Your job is different. You need to design operations that exercise interesting code paths. Not just reads and writes, but the edge cases your users will inevitably trigger. And you need to write invariants that CATCH bugs when simulation surfaces them. After a million injected faults, how do you prove your data is still correct? This division of labor is the key insight: FDB injects chaos, you verify correctness.
The operation alphabet is the complete set of operations your workload can perform. This is where most workloads fail: they test happy paths with uniform distribution and miss the edge cases that break production. Think about three categories:
Normal operations with realistic weights. In production, maybe 80% of your traffic is reads, 15% is simple writes, 5% is complex updates. Your workload should reflect this, because bugs often hide in the interactions between operation types. A workload that runs 50% reads and 50% writes tests different code paths than one that runs 95% reads and 5% writes. Both might be valid, but they'll find different bugs.
Adversarial inputs that customers will inevitably send. Empty strings. Maximum-length values. Null bytes in the middle of strings. Unicode edge cases. Boundary integers (0, -1, MAX_INT). Customers never respect your API specs, so model the chaos they create.
Nemesis operations that break things on purpose. Delete random data mid-test. Clear ranges that "shouldn't" be cleared. Crash batch jobs mid-execution to test recovery. Run compaction every operation instead of daily. Create conflict storms where multiple clients hammer the same key. Approach the 10MB transaction limit. These operations stress your error handling and recovery paths. The rare operations are where bugs hide. That batch job running once a day in production? In simulation, you'll hit its partial-failure edge case in minutes, but only if your operation alphabet includes it.
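To make the weighting concrete, here is a minimal sketch of how an operation alphabet could be encoded. The enum, the weights, and the `rnd_u32` value standing in for the simulator's seeded RNG are all hypothetical, not part of the foundationdb-simulation API; the point is only that normal, adversarial, and nemesis operations all get explicit, non-zero probability.

```rust
// Hypothetical operation alphabet with production-like weights.
// `rnd_u32` stands in for a value drawn from the simulation's seeded PRNG
// (e.g. something like context.rnd()); it is an assumption, not a real API.
#[derive(Debug, Clone)]
enum Op {
    Read { key: String },
    Write { key: String, value: Vec<u8> },
    ComplexUpdate { key: String },
    // Adversarial / nemesis entries get small but non-zero weight.
    WriteEmptyKey { value: Vec<u8> },
    ClearRange { begin: String, end: String },
}

fn pick_op(rnd_u32: u32) -> Op {
    // Weighted pick out of 1000: 80% reads, 15% writes, 3% complex updates,
    // 1% adversarial empty keys, 1% nemesis range clears.
    match rnd_u32 % 1000 {
        0..=799 => Op::Read { key: format!("k{}", rnd_u32 % 100) },
        800..=949 => Op::Write {
            key: format!("k{}", rnd_u32 % 100),
            value: vec![0u8; (rnd_u32 % 64) as usize],
        },
        950..=979 => Op::ComplexUpdate { key: format!("k{}", rnd_u32 % 100) },
        980..=989 => Op::WriteEmptyKey { value: vec![0u8] },
        _ => Op::ClearRange { begin: "k0".into(), end: "k9".into() },
    }
}
```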
After simulation runs thousands of operations with injected faults, network partitions, and machine crashes, how do you know your data is still correct? Unlike FDB's internal testing, Rust workloads can't inject assertions at arbitrary code points. You verify correctness in the check() phase, after the chaos ends. The key question: "After all this, how do I PROVE my data is still correct?"
One critical tip: validate during start(), not just in check().
Don't wait until the end to discover corruption. After each operation (or batch of operations), read back the data and verify it matches expectations. If you're maintaining a counter, read it and check the bounds. If you're building an index, query it immediately after insertion. Early validation catches bugs closer to their source, making debugging far easier. The check() phase is your final safety net, but continuous validation during execution is where you'll catch most issues.
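A minimal sketch of that in-flight bounds check, assuming a counter workload: the read-back value would come from a read performed inside db.run(), and the bounds are derived from how many increments this client knows committed versus how many are still uncertain. All names here are hypothetical.

```rust
// Hypothetical early validation for a counter workload. `read_back` is the
// value read inside db.run() after a batch; `committed_increments` counts
// increments known to have committed, `uncertain_increments` counts attempts
// whose outcome is unknown (maybe_committed retries).
fn validate_counter(
    read_back: i64,
    committed_increments: i64,
    uncertain_increments: i64,
) -> Result<(), String> {
    let min = committed_increments;
    let max = committed_increments + uncertain_increments;
    if read_back < min || read_back > max {
        return Err(format!(
            "counter {read_back} outside expected bounds [{min}, {max}]"
        ));
    }
    Ok(())
}
```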
An invariant is just a property that must always hold, no matter what operations ran. If you've seen property-based testing, it's the same idea: instead of assertFalse(new User(GUEST).canUse(SAVED_CARD)), you write assertEquals(user.isAuthenticated(), user.canUse(SAVED_CARD)). The first tests one case. The second tests a rule that holds for all cases.
Four patterns dominate invariant design:
Reference Models maintain an in-memory copy of expected state. Every operation updates both the database and the reference model. In check(), you compare them. If they diverge, something went wrong. Use BTreeMap (not HashMap) for deterministic iteration. This pattern works best for single-client workloads where you can track state locally.
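A sketch of the reference-model idea, with hypothetical helper names; the only details taken from the post are the BTreeMap and the compare-in-check step. The `actual` map would be decoded from a range read over the workload's keyspace during check().

```rust
use std::collections::BTreeMap;

// Hypothetical reference model: the workload updates this map every time it
// writes to the database, then compares it against actual data in check().
#[derive(Default)]
struct ReferenceModel {
    expected: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl ReferenceModel {
    fn record_write(&mut self, key: &[u8], value: &[u8]) {
        self.expected.insert(key.to_vec(), value.to_vec());
    }

    fn record_clear(&mut self, key: &[u8]) {
        self.expected.remove(key);
    }

    // Returns the first divergence as (key, expected, actual); keeping the
    // failure message small makes the reproduced seed easier to debug.
    fn diff(
        &self,
        actual: &BTreeMap<Vec<u8>, Vec<u8>>,
    ) -> Option<(Vec<u8>, Option<Vec<u8>>, Option<Vec<u8>>)> {
        for (k, v) in &self.expected {
            if actual.get(k) != Some(v) {
                return Some((k.clone(), Some(v.clone()), actual.get(k).cloned()));
            }
        }
        for (k, v) in actual {
            if !self.expected.contains_key(k) {
                return Some((k.clone(), None, Some(v.clone())));
            }
        }
        None
    }
}
```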
Conservation Laws track quantities that must stay constant. Inventory transfers between warehouses shouldn't change total inventory. Money transfers between accounts shouldn't create or destroy money. Sum everything up and verify the conservation law holds. This pattern is elegant because it doesn't require tracking individual operations, just the aggregate property.
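A sketch of a conservation-law check under the same assumptions (all names hypothetical): sum the balances read back in check() and compare them against the total that setup() originally seeded.

```rust
// Hypothetical conservation-law check for an account-transfer workload.
// `balances` would be decoded from a range read over the account keyspace
// in check(); `seeded_total` is whatever setup() originally distributed.
fn check_conservation(balances: &[i64], seeded_total: i64) -> Result<(), String> {
    let actual_total: i64 = balances.iter().sum();
    if actual_total != seeded_total {
        return Err(format!(
            "conservation violated: expected total {seeded_total}, found {actual_total}"
        ));
    }
    Ok(())
}
```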
Structural Integrity verifies data structures remain valid. If you maintain a secondary index, verify every index entry points to an existing record and every record appears in the index exactly once. If you maintain a linked list in FDB, traverse it and confirm every node is reachable. The cycle validation pattern (creating a circular list where nodes point to each other) is a classic technique from FDB's own Cycle workload. After chaos, traverse the cycle and verify you visit exactly N nodes.
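A sketch of that cycle traversal, run over an in-memory snapshot rather than live transactions; `next` maps each node id to its successor (decoded from a range read in check()), and the node count `n` is assumed to be known from setup. This is not the actual Cycle workload code, just the shape of the check.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical cycle check: starting from node 0, follow `next` pointers and
// verify we visit exactly `n` distinct nodes before returning to the start.
fn check_cycle(next: &BTreeMap<u64, u64>, n: u64) -> Result<(), String> {
    let mut seen = BTreeSet::new();
    let mut node = 0u64;
    for _ in 0..n {
        if !seen.insert(node) {
            return Err(format!("revisited node {node} before covering all {n} nodes"));
        }
        node = *next
            .get(&node)
            .ok_or_else(|| format!("node {node} has no successor"))?;
    }
    if node != 0 {
        return Err(format!("after {n} hops expected to return to node 0, got {node}"));
    }
    Ok(())
}
```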
Operation Logging solves two problems at once: maybe_committed uncertainty and multi-client coordination. The trick from FDB's own AtomicOps workload: log the intent alongside the operation in the same transaction. Write both your operation AND a log entry recording what you intended. Since they're in the same transaction, they either both commit or neither does. No uncertainty. For multi-client workloads, each client logs under its own prefix (e.g., log/{client_id}/). In check(), client 0 reads all logs from all clients, replays them to compute expected state, and compares against actual state. If they diverge, something went wrong, and you'll know exactly which operations succeeded. See the Rust atomic workload example for a complete implementation.
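A sketch of the same-transaction logging trick: rather than naming crate APIs, it just builds the two writes that must be applied together inside one transaction, plus the replay step client 0 would run in check(). The key layout and encoding are hypothetical.

```rust
use std::collections::BTreeMap;

// Hypothetical key layout: the operation writes to `counter/{id}` and logs its
// intent under `log/{client_id}/{seq}`. Both (key, value) pairs must be set in
// the SAME transaction so they either both commit or neither does.
fn build_increment_with_log(
    client_id: u32,
    seq: u64,
    counter_id: u64,
    delta: i64,
) -> [(Vec<u8>, Vec<u8>); 2] {
    let counter_key = format!("counter/{counter_id}").into_bytes();
    let counter_delta = delta.to_le_bytes().to_vec(); // applied with an atomic ADD
    let log_key = format!("log/{client_id}/{seq}").into_bytes();
    let log_value = format!("add {delta} to counter {counter_id}").into_bytes();
    [(counter_key, counter_delta), (log_key, log_value)]
}

// In check(), client 0 would range-read every `log/` prefix, replay the logged
// deltas into an expected total per counter, and compare against the stored
// counters. Any divergence pinpoints which intents actually committed.
fn replay_logs(logs: &[(u64, i64)]) -> BTreeMap<u64, i64> {
    let mut expected = BTreeMap::new();
    for (counter_id, delta) in logs {
        *expected.entry(*counter_id).or_insert(0) += *delta;
    }
    expected
}
```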
FDB's simulation is deterministic. Same seed, same execution path, same bugs. This is the superpower that lets you reproduce failures. But determinism is fragile. Break it, and you lose reproducibility. Five rules to remember:
BTreeMap, not HashMap: HashMap iteration order is non-deterministic
context.rnd(), not rand::random(): All randomness must come from the seeded PRNG
context.now(), not SystemTime::now(): Use simulation time, not wall clock
db.run(), not manual retry loops: The framework handles retries and maybe_committed correctly
No tokio::spawn(): The simulation runs on a custom executor, spawning breaks it
If you take nothing else from this post, memorize these. Break any of them and your failures become unreproducible. You'll see a bug once and never find it again.
Real production systems use tokio, gRPC, REST frameworks, all of which break simulation determinism. You can't just drop your production binary into the simulator. The solution is separating your FDB operations into a simulation-friendly crate:
my-project/
├── my-fdb-service/ # Core FDB operations - NO tokio
├── my-grpc-server/ # Production layer (tokio + tonic)
└── my-fdb-workloads/ # Simulation tests
The service crate contains pure FDB transaction logic with no async runtime dependency. The server crate wraps it for production. The workloads crate tests the actual service logic under simulation chaos. This lets you test your real production code, not a reimplementation that might have different bugs.
Beyond the determinism rules above, these mistakes will bite you:
Running setup or check on all clients.
The framework runs multiple clients concurrently. If every client initializes data in
setup()
, you get duplicate initialization. If every client validates in
check()
, you get inconsistent results. Use
if self.client_id == 0
to ensure only one client handles initialization and validation.
Forgetting maybe_committed.
The
db.run()
closure receives a
maybe_committed
flag indicating the previous attempt might have succeeded. If you're doing non-idempotent operations like atomic increments, you need either truly idempotent transactions or
automatic idempotency
in FDB 7.3+. Ignoring this flag means your workload might count operations twice.
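One common way to make a non-idempotent operation safe against maybe_committed retries is to key every attempt by a unique operation id and skip it when that id has already been recorded. A minimal sketch, modelling the store as in-memory maps (in a real workload the marker and the mutation would be written in the same FDB transaction):

use std::collections::{BTreeMap, BTreeSet};

/// Apply an increment at most once, no matter how many times it is retried.
/// `applied` stands in for a set of marker keys stored alongside the data.
fn apply_once(
    counters: &mut BTreeMap<String, i64>,
    applied: &mut BTreeSet<String>,
    op_id: &str,
    key: &str,
    delta: i64,
) {
    // A retry after maybe_committed finds the marker and becomes a no-op.
    if applied.insert(op_id.to_string()) {
        *counters.entry(key.to_string()).or_insert(0) += delta;
    }
}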
Storing SimDatabase between phases.
Each phase (
setup
,
start
,
check
) gets a fresh database reference. Storing the old one leads to undefined behavior. Always use the
db
parameter passed to each method.
Wrapping FdbError in custom error types.
The
db.run()
retry mechanism checks if errors are retryable via
FdbError::is_retryable()
. If you wrap
FdbError
in your own error type (like
anyhow::Error
or a custom enum), the retry logic can't see the underlying error and won't retry. Keep
FdbError
unwrapped in your transaction closures, or ensure your error type preserves retryability information.
Assuming setup is safe from failures.
BUGGIFY is disabled during
setup()
, so you might think transactions can't fail. But simulation randomizes FDB knobs, which can still cause transaction failures. Always use
db.run()
with retry logic even in setup, or wrap your setup in a retry loop.
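If you do roll your own retry loop for setup, keep it simple and bounded. A generic sketch, independent of any particular binding (the retryability predicate is whatever your error type exposes):

/// Retry a fallible setup step a bounded number of times, propagating the
/// last error if it never succeeds. `is_retryable` decides whether an error
/// is worth another attempt (e.g. a transient transaction failure).
fn retry_setup<T, E>(
    max_attempts: usize,
    is_retryable: impl Fn(&E) -> bool,
    mut attempt: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut tries = 0;
    loop {
        match attempt() {
            Ok(value) => return Ok(value),
            Err(err) if tries + 1 < max_attempts && is_retryable(&err) => tries += 1,
            Err(err) => return Err(err),
        }
    }
}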
That
commit_unknown_result
edge case appeared on our first simulation seed. In production, we'd still be hunting it months later. 30 minutes of simulation covers what would take 24 hours of chaos testing. But the real value of simulation testing isn't just finding bugs, it's
forcing you to think about correctness.
When you design a workload, you're forced to ask: "What happens when this retries during a partition?" "How do I verify correctness when transactions can commit in any order?" "What invariants must hold no matter what chaos occurs?" Designing for chaos becomes natural. And if it survives simulation, it survives production.
Feel free to reach out with questions or to share your simulation workloads. You can find me on
Twitter
,
Bluesky
or through my
website
.
Semi-Interactive Assembly Verification in Knuckledragger
https://www.philipzucker.com/asm_verify4/
Australia’s world-first social media ban begins as millions of children and teens lose access to accounts
Guardian
www.theguardian.com
2025-12-09 13:01:04
Accounts held by users under 16 must be removed on apps that include TikTok, Facebook, Instagram, X, YouTube, Snapchat, Reddit, Kick, Twitch and Threads under the ban. Australia has...
Facebook, Instagram, Threads, X, YouTube, Snapchat, Reddit, Kick, Twitch and TikTok are expected to have taken steps from Wednesday to remove accounts held by users under 16 years of age in Australia, and prevent those teens from registering new accounts.
Platforms that do not comply risk fines of up to $49.5m.
There have been some teething problems with the ban’s implementation. Guardian Australia has received several reports of those
under 16 passing the facial age assurance tests
, but the government has flagged it is not expecting the ban will be perfect from day one.
All listed platforms apart from X had confirmed by Tuesday they would comply with the ban. The eSafety commissioner, Julie Inman Grant, said it had recently had a conversation with X about how it would comply, but the company had not communicated its policy to users.
Bluesky, an X alternative, announced on Tuesday it would also ban under-16s, despite eSafety assessing the platform as “low risk” due to its small user base of 50,000 in Australia.
Children had spent the past few weeks undertaking age assurance checks, swapping phone numbers and preparing for their accounts to be deactivated.
The Australian chief executive and co-founder of the age assurance service k-ID, Kieran Donovan, said his service had conducted hundreds of thousands of age checks in the past few weeks. The k-ID service was being used by Snapchat among others.
Parents of children affected by the ban shared a spectrum of views on the policy. One parent told the Guardian their 15-year-old daughter was “very distressed” because “all her 14 to 15-year-old friends have been age verified as 18 by Snapchat”. Since she had been identified as under 16, they feared “her friends will keep using Snapchat to talk and organise social events and she will be left out”.
Ezra is a teen quadriplegic. He says Australia’s social media ban will make him lonelier – video
Another parent said the ban had forced him to teach his child how to break the law. “I’ve shown her how VPNs work and other methods on bypassing age restrictions,” he said. “I’ve had to set her up with her own adult YouTube account and have assisted her in bypassing TikTok’s age-estimation and will keep doing so each time it asks.”
Others said the ban “can’t come quickly enough”. One parent said their daughter was “completely addicted” to social media and the ban “provides us with a support framework to keep her off these platforms”.
The Australian prime minister, Anthony Albanese, said in
an opinion piece on Sunday
: “From the beginning, we’ve acknowledged this process won’t be 100% perfect. But the message this law sends will be 100% clear … Australia sets the legal drinking age at 18 because our society recognises the benefits to the individual and the community of such an approach.
“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear, national standard.”
Polling has consistently shown that
two-thirds of voters support
raising the minimum age for social media to 16. The opposition, including leader Sussan Ley, have recently
voiced alarm about the ban
, despite waving the legislation through parliament and the former Liberal leader Peter Dutton championing it.
The ban has garnered worldwide attention, with several nations indicating they will adopt a ban of their own, including Malaysia, Denmark and Norway. The European Union passed a resolution to adopt similar restrictions, while a spokesperson for the British government told Reuters it was “closely monitoring Australia’s approach to age restrictions”.
Inman Grant told the Guardian that from Thursday, she would be sending notices to the platforms covered by the ban to find out how the implementation was progressing.
Questions included “how many accounts [they’ve] deactivated or removed, what challenges they’re finding, how they’re preventing recidivism and preventing circumvention, whether or not their abuse or reporting abuse and the appeals processes are working as planned”, she said.
Albanese said the information gathered in this process would be made public.
The regulator would need to assess whether platforms were taking reasonable steps. If they were not, it could take that platform to court to seek fines.
There would be an independent evaluation of the ban conducted by an academic advisory group examining the short-term, medium-term and longer-term impacts of the ban.
“It will look at the benefits over time, but also the unintended consequences,” Inman Grant said.
“Everything from are they sleeping? Are they interacting or are they actually getting out on the sports fields? Are they reading books? Are they taking less medication like antidepressants? Are their Naplan scores improving over time?” Inman Grant said.
Potential unintended consequences to be investigated included whether children were moving on to “darker areas of the internet”, learning how to bypass the bans through VPNs, or moving on to other platforms, she said.
Teens on Snapchat affected by the ban had been publicly sharing their mobile numbers in their profiles ahead of their accounts being shut down.
A spokesperson for Snapchat said the platform understood under-16s were disappointed by the ban but “would strongly encourage any teens using Snapchat not to publicly share their personal contact information”.
Inman Grant said she had sent notices to 15 companies not initially included in the ban, asking them to self-assess whether they should be.
Yope and Lemon8, which shot up the app store rankings as teens looked for alternatives, were among those contacted.
Headlines for December 9, 2025
Democracy Now!
www.democracynow.org
2025-12-09 13:00:00
Israeli Military Chief Says Gaza’s “Yellow Line” Will Become “New Border” for Israel, U.N. Condemns Israeli Raid on UNRWA Headquarters in Occupied East Jerusalem, Supreme Court Signals It Will Grant Trump Power to Fire Independent Agency Heads, Trump Insults Another Fem...
Israeli Military Chief Says Gaza’s “Yellow Line” Will Become “New Border” for Israel
Dec 09, 2025
Israel’s military chief has told soldiers occupying the Gaza Strip that the “yellow line” dividing the Palestinian territory under President Trump’s ceasefire plan will become a “new border” for Israel. The comments by Lieutenant General Eyal Zamir come despite a provision in the October ceasefire deal stating that “Israel will not occupy or annex Gaza.” Such a move would give Israel control of more than half of Gaza’s territory, including farmland and the Rafah border crossing with Egypt.
Meanwhile, a new report by Reporters Without Borders finds Israel has killed more journalists in 2025 than any other country — for the third year running. The report found Israel’s military liable for the deaths of 29 Palestinian journalists, among 67 journalists killed around the world this year.
U.N. Condemns Israeli Raid on
UNRWA
Headquarters in Occupied East Jerusalem
Dec 09, 2025
In occupied East Jerusalem, the United Nations has condemned a raid by Israeli forces on the headquarters of the U.N.’s Relief and Works Agency for Palestine Refugees, known as
UNRWA
. A spokesperson for the secretary-general said the raid directly violated international law.
Stéphane Dujarric
: “Police motorcycles, as well as trucks and forklifts, were brought in, and all communications were cut. Furniture, IT equipment and other property was seized, and the U.N. flag was pulled down and replaced by the Israeli flag. … This compound remains United Nations premises and is inviolable and immune from any other form of interference.”
Supreme Court Signals It Will Grant Trump Power to Fire Independent Agency Heads
Dec 09, 2025
The U.S. Supreme Court signaled Monday it’s prepared to make it easier for President Trump to fire independent government officials, despite laws barring the president from removing them without cause. On Monday, the court heard oral arguments in the case of Federal Trade Commission member Rebecca Kelly Slaughter, who was fired by the White House in March. The court’s right-wing majority cast doubt on a 90-year-old precedent known as Humphrey’s Executor, which grants a president the power to fire a board member only for “inefficiency, neglect of duty, or malfeasance in office.” Liberal Justice Sonia Sotomayor warned that move would “destroy the structure of government,” while Justice Elena Kagan warned it would grant the president near-unlimited power.
Justice Elena Kagan
: “So, the result of what you want is that the president is going to have massive, unchecked, uncontrolled power not only to do traditional execution, but to make law through legislative and adjudicative frameworks.”
A ruling in the case is expected by June; until then, the Supreme Court has allowed the White House’s firing of Rebecca Kelly Slaughter and other commissioners to remain in effect.
Trump Insults Another Female Reporter as He Walks Back Support for Releasing Boat Strike Video
Dec 09, 2025
President Trump has walked back his remarks last week when asked if he would release video showing a series of strikes on an alleged drug boat in the Caribbean on September 2. Previously, Trump said he had “no problem” releasing the footage, but speaking to reporters Monday, Trump defended Defense Secretary Hegseth while insulting ABC’s Rachel Scott, who pressed him on whether he would release the full video.
Rachel Scott
: “Are you committing to releasing the full video?”
President Donald Trump
: “Didn’t I just tell you that?”
Rachel Scott
: “You said that it was up to Secretary Hegseth.”
President Donald Trump
: “You’re the most obnoxious reporter in the whole place. Let me just tell you, you are an obnoxious, a terrible — actually a terrible reporter. And it’s always the same thing with you. I told you: Whatever Pete Hegseth wants to do is OK with me.”
Recently, President Trump called CBS’s Nancy Cordes “stupid,” Katie Rogers from The New York Times “ugly,” and when Catherine Lucey of Bloomberg News asked him about releasing the Epstein files, Trump told her, “Quiet, piggy.”
Honduras Seeks to Arrest Ex-President and Narcotrafficker Juan Orlando Hernández After Trump Pardon
Dec 09, 2025
Despite claiming to target alleged drug boats in the Pacific and the Caribbean, President Trump has used his power to pardon about 100 people accused of drug-related crimes — that’s according to a Washington Post analysis. Just last week, Trump pardoned former Honduran President Juan Orlando Hernández, who was sentenced to 45 years in prison for conspiring to distribute more than 400 tons of cocaine in the U.S. On Monday night, the Honduran attorney general announced he had instructed his government and Interpol to arrest Hernández.
Meanwhile, a 2016 video unearthed by
CNN
of Defense Secretary Hegseth shows him repeatedly warning that the U.S. military should refuse “unlawful” orders from the President.
Pete Hegseth
: “If you’re doing something that is just completely unlawful and ruthless, then there is a consequence for that. That’s why the military said it won’t follow unlawful orders from their commander-in-chief.”
Republicans Unveil Record $901 Billion Military Spending Bill
Dec 09, 2025
Republican lawmakers have unveiled a bill to authorize $901 billion in military spending for the next fiscal year. House Speaker Mike Johnson says the National Defense Authorization Act would “ensure our military forces remain the most lethal in the world.” In a statement, Public Citizen’s Robert Weissman responded, “The last person who should be entrusted with an even bigger budget is the dangerous and lawless Pete Hegseth. As Hegseth illegally targets civilian boats near Venezuela with expensive Hellfire missiles, wastefully and recklessly deploys the National Guard in cities around the country, and teases an unconstitutional and costly war, Congress should refuse to add a penny to his budget.”
Meanwhile, lawmakers have attached a provision to the
NDAA
that would withhold money from Hegseth’s travel budget if the Pentagon refuses to hand over video of the September 2 boat strike.
Trump Announces $12 Billion Aid Package to Farmers Hit Hard by Trade War
Dec 09, 2025
President Trump has announced a $12 billion aid package to farmers struggling from the devastating effects of his tariffs. Farm bankruptcies rose by nearly 50% this year compared to last year. President Trump’s tariffs on China cut its imports of U.S. soybeans to zero before a deal was reached in October. Democratic Senator Ron Wyden of Oregon criticized the bailout for farmers, saying, “Instead of proposing government handouts, Donald Trump should end his destructive tariff spree so American farmers can compete and win on a level playing field.”
ICE
Points Guns at Crowd Protesting Arrest of Student at Augsburg University in Minneapolis
Dec 09, 2025
Image Credit: IG/ @mnicewatch
In Minnesota, students and staff at Augsburg University in Minneapolis say federal immigration agents pointed guns at a crowd that gathered to protest the arrest of an undergraduate student on Saturday. In a statement, the university wrote that the
ICE
agents lacked a signed judicial warrant, which is required for them to enter private buildings. It was just one of several reports of
ICE
agents physically threatening and wrongfully detaining people swept up in what
ICE
is calling “Operation Metro Surge,” targeting Minnesota’s Somali community, which President Trump described in a racist tirade last week as “garbage.”
Democratic Senator Murray Condemns
ICE
After Agents Release Attack Dog on Constituent
Dec 09, 2025
In Washington state, Democratic Senator Patty Murray is condemning an incident where
ICE
agents released an attack dog on one of her constituents last month in Vancouver. According to Senator Murray, Wilmer Toledo-Martinez suffered “horrific” injuries after an
ICE
agent lured him out of his home before violently arresting him. His neighbor, John Williams Sr., witnessed the attack; he spoke to TV station
KGW
.
John Williams Sr.
: “His wife’s screaming. The kids in the car are screaming. I’m glad his 7-year-old daughter wasn’t here. The 2- and 3-year-old was here. And we were trying to ask what’s going on, and he’s telling her to 'Get back! Get back! Or we're gonna sic the dog on you.’ … I never saw nothing like this in my life close up with no one, you know. And it hurts. It really hurts, man, especially to happen to a young man like that, man. You know, a good, honest young man.”
Toledo-Martinez was hospitalized and received stitches for gruesome injuries. His attorney says
ICE
delayed his medical care for several hours and that a prescription for antibiotics was never filled by staff at the Northwest
ICE
Processing Center in Tacoma, where he’s been held since his arrest. In a statement, Senator Murray said, “This should shock the conscience of every one of us. I do not want to live in an America where federal agents can sic attack dogs on peaceful residents with impunity and face no consequences.”
California’s
DOJ
Announces New Online Portal for Public to Document Unlawful
ICE
Activity
Dec 09, 2025
California’s Department of Justice has announced a new online portal for members of the public to share videos, photos and other evidence documenting unlawful activity by
ICE
agents. This follows similar efforts in other states, including Illinois and New York. Meanwhile, the makers of the smartphone app ICEBlock have sued the Trump administration on First Amendment grounds, after the Justice Department pressured Apple to remove its software from its app store. Before Apple banned the software in October, ICEBlock allowed users to anonymously track reported sightings of
ICE
agents.
Dozen Former
FBI
Agents File Lawsuit Accusing Kash Patel of Unlawfully Firing Them
Dec 09, 2025
Image Credit: Jose Luis Magana/AP
A dozen former
FBI
agents filed a lawsuit accusing
FBI
Director Kash Patel and other officials of unlawfully firing them for kneeling during a 2020 protest after the death of George Floyd. The lawsuit claims that the agents knelt to “defuse a volatile situation, not as an expressive political act.” Meanwhile, Patel reportedly yelled at agents on the security detail for his girlfriend to drive her allegedly drunken friend home. This comes as a leaked report compiled by retired and active-duty
FBI
special agents and analysts called the agency under Patel a “rudderless ship” and “chronically under-performing.”
Paramount Skydance Launches Hostile Takeover Bid for Warner Bros. Discovery
Dec 09, 2025
Paramount Skydance has launched a nearly $78 billion hostile takeover offer for Warner Bros. Discovery, just days after Warner Bros. accepted a $72 billion deal from Netflix. Paramount said it has secured funding commitments from the sovereign wealth funds of Saudi Arabia, Abu Dhabi and Qatar, along with support from Affinity Partners, the private equity firm run by Jared Kushner, President Trump’s son-in-law. President Trump reportedly favors Paramount to acquire Warner Bros. Discovery, and remarked over the weekend that he’ll intervene in the federal review process of Netflix’s proposed deal. We’ll have more on this story later in the broadcast.
ProPublica: Trump’s Mortgages Match His Description of Mortgage Fraud
Dec 09, 2025
President Trump has accused his political enemies, including Federal Reserve Governor Lisa Cook, of mortgage fraud for claiming more than one primary residence on her loans. Now a ProPublica investigation finds that President Trump did the same thing in the 1990s, when he took out two Florida mortgages and claimed that each home would be his main residence. According to ProPublica, President Trump never lived in the two Florida houses and instead used them as rental properties. Kathleen Engel, a Suffolk University law professor and leading expert on mortgage finance, told ProPublica, “Given Trump’s position on situations like this, he’s going to either need to fire himself or refer himself to the Department of Justice. Trump has deemed that this type of misrepresentation is sufficient to preclude someone from serving the country.”
Clashes Between Thailand and Cambodia Erupt Again After Trump-Brokered Ceasefire
Dec 09, 2025
Fighting between Thailand and Cambodia has erupted again after Thailand launched airstrikes Monday along its disputed border with Cambodia. A Thai soldier and four Cambodian civilians were killed in the renewed fighting, as both sides accuse each other of breaching a ceasefire deal brokered by President Trump back in October. Earlier this year, at least 48 people were killed and 300,000 were forced to flee their homes in the five-day conflict. This is the spokesperson from the Cambodian Defense Ministry.
Gen. Maly Socheata
: “The second invasion activity by the Thai side shows clearly their intention to grab their neighbor’s land using a unilateral map and using force to change borders.”
Longtime Peace Activist Cora Weiss Dies at Age 91
Dec 09, 2025
Image Credit: Reuters
Here in New York, the longtime peace activist Cora Weiss has died at the age of 91, after decades of advocacy demanding civil rights, nuclear disarmament, gender equality and the abolition of war. In the 1960s, Cora Weiss was a national leader of Women Strike for Peace, which played a major role in bringing about the end of nuclear testing in the atmosphere. She organized protests against the Vietnam War and served as president of the Hague Appeal for Peace. She was nominated for a Nobel Peace Prize multiple times. Cora Weiss also served for decades on the board of Downtown Community Television. She last appeared on Democracy Now! in 2022.
Cora Weiss
: “Climate change and nuclear weapons are the apocalyptic twins. And we have to prevent one and get rid of the other. We have to abolish nuclear weapons immediately. There should be no question about it anymore. They’re too dangerous and unnecessary. And who wants to destroy the world and the lives of everybody in it?”
Cora Weiss’s husband, Peter Weiss, the well-known human rights attorney, died several weeks ago just shy of his 100th birthday. Cora Weiss died yesterday on Peter Weiss’s 100th birthday.
Part of the
Accepted!
series, explaining the upcoming Go changes in simple terms.
Automatically erase used memory to prevent secret leaks.
Ver. 1.26 • Stdlib •
Low impact
Summary
The new
runtime/secret
package lets you run a function in
secret mode
. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.
secret.Do(func() {
    // Generate a session key and
    // use it to encrypt the data.
})
This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.
The package is experimental and is mainly for developers of cryptographic libraries, not for application developers.
Motivation
Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy.
In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.
The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations. This will make it easier for library developers to write secure code without using workarounds.
Description
Add the
runtime/secret
package with
Do
and
Enabled
functions:
// Do invokes f.
//
// Do ensures that any temporary storage used by f is erased in a
// timely manner. (In this context, "f" is shorthand for the
// entire call tree initiated by f.)
// - Any registers used by f are erased before Do returns.
// - Any stack used by f is erased before Do returns.
// - Any heap allocation done by f is erased as soon as the garbage
// collector realizes that it is no longer reachable.
// - Do works even if f panics or calls runtime.Goexit. As part of
// that, any panic raised by f will appear as if it originates from
// Do itself.
func Do(f func())
// Enabled reports whether Do appears anywhere on the call stack.
func Enabled() bool
The current implementation has several limitations:
Only supported on linux/amd64 and linux/arm64. On unsupported platforms,
Do
invokes
f
directly.
Protection does not cover any global variables that
f
writes to.
Trying to start a goroutine within
f
causes a panic.
If
f
calls
runtime.Goexit
, erasure is delayed until all deferred functions are executed.
Heap allocations are only erased if ➊ the program drops all references to them, and ➋ then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act.
If
f
panics, the panicked value might reference memory allocated inside
f
. That memory won't be erased until (at least) the panicked value is no longer reachable.
Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.
The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (you have a
data
array and the secret key always starts at
data[100]
), don't create a pointer to that location (don't create a pointer
p
to
&data[100]
). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.
The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use
secret.Do
behind the scenes.
As of Go 1.26, the
runtime/secret
package is experimental and can be enabled by setting
GOEXPERIMENT=runtimesecret
at build time.
Example
Use
secret.Do
to generate a session key and encrypt a message using AES-GCM:
// Encrypt generates an ephemeral key and encrypts the message.
// It wraps the entire sensitive operation in secret.Do to ensure
// the key and internal AES state are erased from memory.
func Encrypt(message []byte) ([]byte, error) {
    var ciphertext []byte
    var encErr error
    secret.Do(func() {
        // 1. Generate an ephemeral 32-byte key.
        // This allocation is protected by secret.Do.
        key := make([]byte, 32)
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            encErr = err
            return
        }

        // 2. Create the cipher (expands key into round keys).
        // This structure is also protected.
        block, err := aes.NewCipher(key)
        if err != nil {
            encErr = err
            return
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            encErr = err
            return
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            encErr = err
            return
        }

        // 3. Seal the data.
        // Only the ciphertext leaves this closure.
        ciphertext = gcm.Seal(nonce, nonce, message, nil)
    })
    return ciphertext, encErr
}
Note that
secret.Do
protects not just the raw key, but also the
cipher.Block
structure (which contains the expanded key schedule) created inside the function.
This is a simplified example, of course — it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.
Functionally, ARIA roles, states, and properties are analogous to a CSS for assistive technologies.
For screen reader users, ARIA controls the rendering of their non-visual experience.
Incorrect ARIA misrepresents visual experiences, with potentially devastating effects on their corresponding non-visual experiences.
Before using ARIA or any of the guidance in this document, please take time to understand the following two essential principles.
Principle 1: A role is a promise
This code:
<div role="button">Place Order</div>
Is a promise that the author of that
<div>
has also incorporated JavaScript that provides the keyboard interactions expected for a button.
Unlike HTML input elements, ARIA roles do not cause browsers to provide keyboard behaviors or styling.
Using a role without fulfilling the promise of that role is similar to making a "Place Order" button that abandons an order and empties the shopping cart.
One of the objectives of this guide is to define expected behaviors for each ARIA role.
Principle 2: ARIA Can Both Cloak and Enhance, Creating Both Power and Danger
The information assistive technologies need about the meaning and purpose of user interface elements is called accessibility semantics.
From the perspective of assistive technologies, ARIA gives authors the ability to dress up HTML and SVG elements with critical accessibility semantics that the assistive technologies would not otherwise be able to reliably derive.
Some of ARIA is like a cloak; it covers up, or overrides, the original semantics or content.
<a role="menuitem">Assistive tech users perceive this element as an item in a menu, not a link.</a>
<a aria-label="Assistive tech users can only perceive the contents of this aria-label, not the link text">Link Text</a>
On the other hand, some uses of ARIA are more like suspenders or belts; they add meaning that provides essential support to the original content.
<button aria-pressed="false">Mute</button>
This is the power of ARIA.
It enables authors to describe nearly any user interface component in ways that assistive technologies can reliably interpret, thus making components accessible to assistive technology users.
This is also the danger of ARIA.
Authors can inadvertently override accessibility semantics.
<table role="log">
<!--
Table that assistive technology users will not perceive as a table.
The log role tells browsers this is a log, not a table.
-->
</table>
<ul role="navigation">
<!-- This is a navigation region, not a list. -->
<li><a href="uri1">nav link 1</a></li>
<li><a href="uri2">nav link 2</a></li>
<!-- ERROR! Previous list items are not in a list! -->
</ul>
Browser and Assistive Technology Support
Testing assistive technology interoperability is essential before using code from this guide in production.
Because the purpose of this guide is to illustrate appropriate use of ARIA 1.2 as defined in the ARIA specification, the design patterns, reference examples, and sample code intentionally
do not
describe and implement coding techniques for working around problems caused by gaps in support for ARIA 1.2 in browsers and assistive technologies.
It is thus advisable to test implementations thoroughly with each browser and assistive technology combination that is relevant within a target audience.
Similarly, the JavaScript and CSS in this guide are written to be compatible with the most recent versions of Chrome, Firefox, and Safari at the time of writing.
Except in cases where the ARIA Working Group and other contributors have overlooked an error,
examples in this guide that do not function well in a particular browser or with a specific assistive technology are demonstrating browser or assistive technology bugs.
Browser and assistive technology developers can thus utilize code in this guide to help assess the quality of their support for ARIA 1.2.
Mobile and Touch Support
Currently, this guide does not indicate which examples are compatible with mobile browsers or touch interfaces.
While some of the examples include specific features that enhance mobile and touch support, some ARIA features are not supported in any mobile browser.
In addition, there is not yet a standardized approach for providing touch interactions that work across mobile browsers.
More guidance about touch and mobile support is planned for future releases of the guide.
Mazda suitcase car, a portable three-wheeled vehicle that fits in the luggage
Portable mazda suitcase car for airports and travels
Back in the early 1990s,
Mazda
built a suitcase car, a portable three-wheeled vehicle for airports that fits inside hard-shell luggage. A project coming from an internal contest called Fantasyard between 1989 and 1991, the
concept
automobile was built by seven of the company’s engineers from their manual transmission testing and research unit. They wanted a vehicle to move around airports faster, so the team bought a pocket bike and the largest hard-shell Samsonite suitcase, size 57 cm by 75 cm. They used parts from the pocket bike, including its 33.6 cc two-stroke engine that produces 1.7 PS. The handlebars went inside the suitcase, the rear wheels attached to the outside of the case, and the front wheel came through a removable hatch in the front.
Assembling the portable Mazda suitcase car could take around a minute. Workers turned the front wheel to an upright position through the removable section, and they inserted the rear wheels. Then, they attached the seat above the rear axle. In the end, the vehicle weighed 32 kilos while the engine pushed it to a top speed of 30 km/h, or 19 mph. The concept automobile shared traits with earlier Mazda vehicles because it had three wheels, like the Mazda-Go from 1931, which was a motor rickshaw sold in Japan. Then, there’s the low center of gravity, which was found in the previous MX-5 roadster. So far, the portable Mazda suitcase car has never made it to production.
all images courtesy of Mazda UK
Two built versions, with the US one still existing
The early 1990s marked changes at Mazda, as the
company
faced high demand for its MX-5 roadster. In 1991, Mazda became the first Japanese brand to win the 24 Hours of Le Mans race with a rotary-engined car, the 787B. That same year, Mazda showed a hydrogen-powered rotary concept named HR-X. The company ran Fantasyard, where teams from different departments competed to create mobility ideas, and engineers had small budgets for their projects. It is from this event that the portable Mazda suitcase car came to fruition as a concept automobile.
During its time, it received so much media attention that the company even built two versions (US and Europe). The European model appeared at the 1991 Frankfurt International Motor Show next to the 787B racer, but the original prototype got destroyed by accident months after the Fantasyard event. The US model still exists (likely owned by a collector), while the European one is missing. While the company never produced it, the portable Mazda suitcase car showcased a design direction for the company, one that focuses on small, practical mobility.
the concept automobile could fit inside a hard-shell Samsonite suitcase
the handlebars went inside the suitcase, and the rear wheels were attached to the outside of the case
the front wheel came through a removable hatch in the front
Two competing arguments are making the rounds. The first is by a neurosurgeon in the
New York Times
. In
an op-ed
that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:
In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.
There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than
39,000 Americans died
in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.
The other is a soon-to-be-published book:
Driving Intelligence: The Green Book
. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:
There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not,
sensu stricto
, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out and yet, AV manufacturers continue to test their products on public roads unabated.
I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs had been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….
Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.
Abstract
: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.
I found the problem and it's really bad. Looking at your log, here's the catastrophic command that was run:
rm -rf tests/ patches/ plan/ ~/
See that ~/ at the end? That's your entire home directory. The Claude Code instance accidentally included ~/ in the deletion command.
— Claude, after Cla...
The GenAI bubble is going to pop. Everyone knows that. To me, the urgent and interesting questions are how widespread the
damage will be and what the hangover will feel like.
On that basis, I was going to post a link on Mastodon to Paul Krugman’s
Talking With Paul Kedrosky
. It’s great, but while I
was reading it I thought “This is going to be Greek to people who haven’t been watching the bubble details.”
So consider this a
preface to the Krugman-Kedrosky piece. If you already know about the GPU-fragility and SPV-voodoo issues, just skip this and go
read that.
Depreciation
·
When companies buy expensive stuff, for accounting purposes they pretend they haven’t spent the money; instead they
“depreciate” it over a few years. That is to say, if you spent a million bucks on a piece of gear and decided to depreciate it
over four years, your annual financials would show four annual charges of $250K.
Management gets to pick your depreciation period, which provides a major opening
for creative accounting when the boss wants to make things look better or worse than they are.
Even when you’re perfectly honest it can be hard to choose a fair figure. I can remember one of the big
cloud vendors announcing they were going to change their fleet depreciation from three to four years and that having an impact on
their share price.
Depreciation is orderly whether or not it matches reality: anyone who runs a data center can tell you about racks with 20
systems in them that have been running fine since 2012. Still, orderly is good.
In the world of LLMs, depreciation is different. When you’re doing huge model-building tasks, you’re running
those expensive GPUs flat out and red hot for days on end. Apparently they don’t like that, and flame out way more often than
conventional computer equipment. Nobody who is doing this is willing to come clean with hard numbers but there are data points, for
example from
Meta
and (very unofficially)
Google
.
So GPUs are apparently fragile. And they are expensive to run because they require huge
amounts of electricity. More, in fact, than we currently have, which is why electrical bills are spiking here and there around
the world.
Why does this matter? Because when the 19th-century railway bubble burst, we were left with railways. When the
early-electrification bubble burst, we were left with the grid. And when the
dot-com bubble
burst, we were left with a lot of valuable
infrastructure whose cost was sunk, in particular dark fibre. The AI bubble? Not so much; What with GPU burnout and power charges,
the infrastructure is going to be expensive to keep running, not something that new classes of application can pick up and use on
the cheap.
Which suggests that the post-bubble hangover will have few bright spots.
SPVs
·
This is a set of purely financial issues but I think they’re at the center of the story.
It’s like this. The Big-Tech giants are insanely profitable but they don’t have enough money lying around to build the
hundreds of billions of dollars worth of data centers the AI prophets say we’re going to need. Which shouldn’t be a problem;
investors would line up to lend them as much as they want, because they’re pretty sure
they’re going to get it back, plus interest.
But apparently they don’t want to borrow the money and have the debts on their balance sheet. So they’re
setting up “Special Purpose Vehicles”, synthetic companies that are going to build and own the data centers; the Big Techs
promise to pay to use them, whether or not genAI pans out and whether or not the data centers become operational. Somehow,
this doesn’t count as “debt”.
If you think there’s a distinct odor of 2008 around all this, you’d be right.
If the genAI fanpholks are right, all the debt-only-don’t-call-it-that will be covered by profits and everyone can sleep
sound. Only it won’t. Thus, either the debts will apply a meat-axe to Big Tech profits, or (like 2008) somehow they won’t be
paid back. If whoever’s going to bite the dust is “too big to fail”,
the money has to come from… somewhere? Taxpayers? Pension funds? Insurance companies?
Paul K and Paul K
·
I think I’ve set
that piece
up enough now. It points out a few other
issues that I think people should care about. I have one criticism: They argue that genAI won’t produce sufficient revenue from
consumers to pay back the current investment frenzy. I mean, they’re right, it won’t, but that’s not what the investors are
buying. They’re buying the promise, not of
more revenue
, but of
higher profits
that happen when tens of
millions of knowledge workers are replaced by (presumably-cheaper) genAI.
I wonder who, after the loss of those tens of millions of high-paid jobs, are going to be the
consumers who’ll buy the goods that’ll drive the profits that’ll pay back the investors. But that problem is kind of
intrinsic to Late-stage Capitalism.
Anyhow, there will be a crash and a hangover. I think the people telling us that genAI is the future and we
must pay it fealty richly deserve their impending financial wipe-out.
But still, I hope the hangover is less terrible than I think it will be.
Then point your editor to
gren-language-server-unofficial
, see also
specific setups
.
You can also set their paths in the language server settings:
gren-language-server-unofficial.grenPath: string
: compiler executable, default
"gren"
. If the language server can't find it in the
$PATH
, please set this option to the path that
which gren
prints :)
gren-language-server-unofficial.grenFormatPath: "builtin" | string
: formatter executable, default
"builtin"
.
"builtin"
is a fast, unofficial rust formatter
open the command bar at the top and select:
>Extensions: Install from VSIX
build from source
clone this repo
open
vscode/
run
npm run package
to create the
.vsix
open the command bar at the top and select:
>Extensions: Install from VSIX
server only
There is no built-in language server bridge as far as I know but you can install an extension like
vscode-generic-lsp-proxy
that will work for any language server.
Then add a
.vscode/lsp-proxy.json
like
support renaming punned record fields by adding
{ originalName = newName }
show all module exposes when hovering
(..)
(only if I have time and there is interest)
add code actions like "expose (including variants)", "inline", "inline all uses" (leaning towards no as it is fairly complicated, though it is very useful for sure)
show function parameter names (leaning towards no, as they are often confusing if they are curried, reveal non-exposed variant patterns, have more parameters than the type suggests, are undescriptive etc)
currently, an exposed member will still be suggested even when a local module-declared reference/local binding with the same name exists. Likewise, a local module-declared reference will still be suggested even when a local binding with the same name exists. (somewhat easily fixable but I don't really see the harm in directly showing this shadowing in your face)
your idea 👀
known limitations
It is possible that a gren module belongs to multiple projects when source directory paths overlap between projects. This throws a wrench in pretty much all existing code (likely internal document source desync and a more limited lsp feature range in one of the containing projects).
This situation is, I assume, fixable by special-casing their storage and handling but it would require a
lot
of work
setup for developing
Rebuild the project with
Then point your editor to the created
???/target/debug/gren-language-server-unofficial
.
log of failed optimizations
switching to mimalloc, ~>25% faster (really nice) at the cost of 25% more memory consumption.
Might be worth for some people but I'm already worried about our memory footprint!
declarations.shrink_to_fit();
saves around 0.6% of memory at the cost of a bit of speed
upgrading
lto
to
"thin"
to
"fat"
both improve runtime speed by ~13% compared to the default (and reduce binary size) but increase build time by about 30% (default to thin) and 15% (thin to fat).
As this prolongs installation and prevents people from quickly trying it, the default is kept.
If this language server gets distributed as a binary or people end up using this language server a lot, then
"thin"
might become a reasonable trade-off.
optimizations to try
reparse incrementally (somewhat easy to implement, but for me at least it's already pretty much fast enough without it? More data points welcome)
switch to
position_encoding: Some(lsp_types::PositionEncodingKind::UTF8)
. This makes source edits and parsing easier and faster at the cost of compatibility with lsp clients below version 3.17.0. Is that acceptable? (leaning towards yes).
Also validate if gren --report region column is UTF-8 or UTF-16 (seems to be UTF-16 strangely)
if memory consumption turns out to be a problem, stop storing the source in memory
and request full file content on each change (potentially only for dependencies).
This adds complexity and is slower so only if necessary.
in syntax tree, use separate range type for single-line tokens like keywords, symbols, names etc to save on memory consumption
in syntax tree, use
Box<[]>
instead of
Vec
for common nodes like call arguments
on init, read modules in parallel, not just projects, to even out difference in project size (seems not worth using threads, maybe something more lightweight?)
Why frozen test fixtures are a problem on large projects and how to avoid them
Tests grow to thousands
All make their claim on fixtures
Frozen by demands
An ancient Japanese Haiku about a common problem with software test fixtures
Act 1: The problem, frozen fixtures
Fixtures have a lot going for them: super fast, clearly structured, reusable across tests …
That last one is also the source of a common problem in large test suites. Every time you change fixtures you risk
falsely breaking
some tests. Meaning:
the test fails even though the feature it tests still works
. This is because every test makes assumptions about the fixtures. This is necessarily part of the test setup, even if it is not explicit in the test code. If the code breaks those assumptions the test itself will no longer work. The more tests there are, the more likely you are to falsely break some of them when you change fixtures.
This is why sufficiently complex data model fixtures tend to become frozen after a certain number of tests. If you aren’t careful, when you get to 1000s of tests, making any change to fixtures can break 10s or even 100s of unrelated tests. It becomes really hard to fix them so you try to avoid directly modifying the fixtures at all. You start to work around it (more on that below) and they stop changing. Hence,
frozen
fixtures.
Thankfully, there are ways to write tests to minimise this effect but it requires discipline.
Act 2: The bad solutions
First, let me go over 2 approaches I’ve seen on projects and why I think they’re bad:
If current fixtures can’t be reused, create new ones.
This is especially prominent in multi-tenant applications: create a brand new tenant in fixtures just for the new tests you’re adding. This is a road of ever increasing fixture size. It becomes really hard to understand which fixtures are for which tests and the testing database starts to become larger and larger. Reviewing existing fixtures for reuse becomes harder. It becomes easier to just add new fixtures for the next test which makes the problem worse.
Use code inside the test to modify the fixture records just for this test.
It seems obvious: let’s just modify the DB to match the state we need for the test. Congratulations! You’ve started to re-discover factories, except you’re doing it ad-hoc. If you start going down that road, consider using both fixtures and factories. I’m not being sarcastic, this combination can work really well.
Act 3: The right solution
First, recognise that every test is written to test a specific property of the code. It should be red if the property breaks and green if satisfied. Diverging from it in any direction is bad, in different ways:
A test that passes while the property breaks gives us false confidence. That’s obviously bad because we could ship a bug.
However, a test that breaks while the property holds distracts us with false information. That’s also bad because it is wasting our precious development time and reducing our confidence in our test suite.
To put it zenly: a test should test only that which it is meant to test, no more and no less.
A great solution to remedy frozen fixtures is turning this principle up to 11.
Test only what you want to test
This means getting into the habit of asking yourself what a specific test
actually
tries to test. Then, write the test code to directly test exactly that property. This is not trivial but it becomes effortless with practice. Writing good tests is a skill that needs practice
like any other programming skill
.
I know that this still sounds abstract so here are 2 very concrete examples.
Example 1: Testing collection content
Testing collections is especially problematic because it has to involve multiple records. This means you’re probably using fixture records that are also used in many other tests. Either that or your fixtures list is crazy long.
Let’s say you are testing a scope on a model. You might be tempted to write something like:
test "active scope returns active projects" do
  assert_equal [projects(:active1), projects(:active2)], Project.active
end
This test has just made it impossible to introduce another active project without breaking it, even if the scope was not actually broken. Add a new variant of an active project for an unrelated test and now you have to also update this test.
Instead, try this:
test "active scope returns active projects" do
  active_projects = Project.active
  assert_includes active_projects, projects(:active1)
  assert_includes active_projects, projects(:active2)
  refute_includes active_projects, projects(:inactive)
end
The test will now:
Fail if the scope no longer includes active projects.
Fail if the scope now includes inactive projects.
Not be affected when new projects are added to fixtures.
This last one is key. By slightly rewriting the test, we’ve avoided freezing fixtures.
Example 2: Testing collection order
A related example is checking that a returned collection is in the correct order.
You might be tempted to do something like this:
test "ordered sorts by project name" do
  assert_equal Project.ordered, [projects(:aardvark), projects(:active1), projects(:inactive)]
end
Instead, think like a zen master: to test sorting,
test that it is sorted
:
test "ordered sorts by project name" do
  names = Project.ordered.map(&:name)
  assert_equal names, names.sort
end
The test will now:
Fail if the collection is not sorted.
Not be affected by any other change.
To test a specific case of ordering, focus the test even more and only test that very specific ordering. For example, imagine you just fixed a bug where non latin characters were incorrectly sorted and you want to add a regression test. Do it this way:
test "ordered correctly sorts non latin characters" do
  # Č and Ć are non latin letters of the Croatian alphabet and unfortunately
  # their unicode code points are not in the same order as they are in the
  # alphabet, leading vanilla Ruby to sort them incorrectly.
  assert_equal [projectĆ, projectČ], Project.ordered & [projectČ, projectĆ]
end
The test will now:
Fail if the non latin characters are incorrectly sorted.
Not be affected by any other change in sorting logic.
By rewriting the test slightly, we made it more precise and kept it from freezing the fixtures.
Act 4: So … this makes fixtures better than factories?
Now that you know how to minimise fixtures’ downsides without sacrificing any of the benefits, surely, this means they’re better than factories? Right?
Fixtures vs factories is one of those topics that you really wouldn’t expect people to have strong feelings about but somehow they do. I like to irritate people by being pragmatic and not picking a side.
Sometimes I use fixtures, sometimes factories. They have different tradeoffs, and each could fit a different project better.
Sometimes I decide to go wild and use both, because that way I can annoy everyone at once!
Which is why I didn't write an article about which one is better; enough digital ink has been spilled on that hill. I did write before about a principle that makes factories easier to use, if that is something you're interested in.
Laziness is the defining feature of Haskell as a programming language. Every other quality, such as higher-order functions, the Hindley-Milner type system, or being based on the lambda calculus, is present in other languages. In fact, Haskell was born as a committee-initiated language to avoid a proliferation of non-strict languages at the end of the '80s, putting all the effort behind the same cart.
If you take some time and look at what HN, Stack Overflow, or Reddit have to say about the general sentiment of non-Haskell programmers towards laziness, it can be summed up in a single word: anxiety. It seems that space leaks due to laziness are unavoidable and that Haskell programmers choose to live with them. The methods for detecting them come from dynamic analysis of compiled programs. There is no general advice on how to avoid them apart from experience; and even then it is not enough, as expert Haskell programmers still trip over them from time to time.
What hope do new Haskell programmers have then?
I think this sentiment can be summed up in two general statements that I have seen pop up on the web:
Lazy/non-strict evaluation in functional programming languages will unavoidably lead to space leaks and consequently to general performance degradation. You cannot rely on such a system in production. Strict functional languages are free from these concerns and should be preferred.
Laziness by default is a mistake, but we can recover the benefits of laziness with an explicit iterator concept as in Rust or Python. No surprising space leaks that way.
I think these are steel-manned versions of the statements I have seen; it is not my intention to straw-man my way into apologetics. But before discussing these points, we need to talk about value systems.
Differences in values separate programming language communities
There was a presentation some years ago at a PL conference that made the following statement:
The real distinction between programming languages and their communities comes from the different orderings of their value systems
I could not find the video on YouTube or Google (has Google gotten worse?). If someone knows it, please send me a message. (edit: it was the great Bryan Cantrill, as always)
So, for example, what separates the C++ community from other language communities? The extreme emphasis on performance. That is why there is so much emphasis on zero-cost abstractions. Lisp communities care more about syntactic extensibility than performance, and so on.
Haskell can be plenty fast if you put in the effort, but those Haskell programs won't look like the normal ones you encounter. Take a look at the Haskell entries in the language shootout benchmarks: they are written using mutable arrays and explicit recursion everywhere. The translation to a C-like version is almost mechanical in most cases.
For the Haskell community, having the ultimate single-core performance in everyday code is not a priority; we prefer compositional and correct/reliable code over fast code. With this in mind, we can approach the questions from the previous section.
Isn't the presence of space leaks a matter of correctness and reliability too?
That might seem to be the case from afar, but once you start writing Haskell and experiencing these space leaks, you will notice that:
(a) 90% of the space leaks you write end up adding a tiny amount of memory usage to your functions, mostly unnoticeable. Think thunks like (1 + 2) that are subjected to demand analysis under optimization.
(b) 1-2% of them are serious enough to require profiling your code with cost centres.
There is a power-law distribution in the amount of memory these space leaks consume, thus for the grand majority of the space leaks you will write, they are a performance concern, not a correctness/reliability concern.
It is not a natural law that the distribution of the intensity of space leaks looks like this; it has required countless hours of profiling and optimization on the part of GHC developers. The presence of a demand analyser and fusion optimizations in GHC pushes the space leaks of category (b) to be even rarer. But it is true that for the remaining space leaks of category (b), learning how to profile and insert cost centres is a necessity, sorry.
In summary: space leaks are mostly a performance concern, and for the rare cases where they are not, you need to learn to profile to manage them. The Haskell community lives with this because its main focus is not the performance loss but the compositional benefits.
But more than that, you end up with compositional benefits. In a lazy language, function composition (either with (.) or with function calls) has the nice property that the consumer of the pipeline drives the reduction, instead of the producer as in a strict language. And in a pure language with inductive data types, you will end up with a bunch of tiny pipelines. Let's work through the following example to see what this buys us:
let y = (f . g . h . i) x
In a strict language, evaluation would proceed by fully evaluating the output of i x as an inductive data type, then passing that complete structure all at once to h, which would repeat the process until reaching f. Even if f could short-circuit the evaluation given an edge case, it has to consume all the intermediate values to be able to do so.
In a lazy language, evaluation proceeds by demands and data dependencies. If we want the value y, we need to reduce the pipeline, and we need f to produce a value. That will probably introduce a demand on the result of g, but that might not be the case: f could be satisfied with just part of the inductive data type.
This has a bunch of interesting consequences:
For library authors, you can provide big product data types and let the consumers pick the subparts they care about. This is what people mean by "lazy languages have products, strict ones sums".
You can define new control structures by controlling which parts of the structures you choose to demand. Stuff like the Alternative type class would not be possible without laziness.
With inductive data types, the garbage collector has an easier time, as it can collect unneeded or already consumed parts of the structure as they come and go. Also, subparts that were never generated (because they were under a thunk) never add pressure to the nursery of the GC. Less garbage is a win.
Some asymptotics are improved in composed code. The well-known example of head . sort runs in O(n) time.
You could say: so what? Don't create the unnecessary parts in the first place. But if you want to collaborate with other programmers, massaging the output of their functions becomes a major concern, and having a system that respects data dependencies becomes a godsend.
So to sum up: laziness supercharges ordinary function composition to the point that it becomes the primary way you organize your systems in Haskell. A weaker form of function composition would mean that other patterns, such as imperative or object-oriented programming, could be seen as alternatives when organizing the system. This, plus being able to use higher-order functions without extra care, is why Haskellers prefer lazy evaluation.
What about explicit iterators like in Rust?
These iterators are explicit state machines with a next() function. They are quite nice. As a programming concept, though, they are one more thing you have to learn. With lazy evaluation, every inductive data structure pulls double duty: as a data structure and as a control structure.
This means that a library author implicitly provides yield points at each constructor, and you, the consumer of the library, can take them apart. With iterators, this process is explicit in the .iter() or .into_iter() functions.
In an imperative language with statements, this is actually pretty great. But in a functional language with just expressions and pure functions, I prefer the merging of concerns that inductive (co)data types provide. This is a whole dimension I just don't have to care about. One nitpick, though: in the presence of effects, you also need a streaming library for functional languages. That is a separate matter.
But what about those space leaks that affect the correctness of a program?
Most of my Haskell code ends up being pipelines, even if it is not written explicitly as a bunch of (.) everywhere. I have encountered a pattern I call "being a good consumer" that helps avoid the major space leaks of type (b) that I have run into. In a future post I will discuss this concept and link to some PRs that implement fixes based on it.
Microsoft investigates Copilot outage affecting users in Europe
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 11:48:39
Microsoft is working to mitigate an ongoing incident that has been blocking users in Europe from accessing the company's AI-powered Copilot digital assistant. [...]...
Microsoft is working to mitigate an ongoing incident that has been blocking users in Europe from accessing the company's AI-powered Copilot digital assistant.
Additionally, some users who can access the affected service may experience degraded functionality with specific features.
"We're investigating an issue in which users in the United Kingdom may be unable to access Microsoft Copilot, or experience degraded functionality with some features," Microsoft said when it acknowledged the issue one hour ago.
After reviewing service monitoring telemetry to isolate the root cause of the outage, Microsoft added that the incident was caused by a capacity scaling issue that is now being addressed.
"Indications from service monitoring telemetry suggest an unexpected increase in traffic has resulted in impact. We're continuing to investigate further to determine the next steps required,"
Microsoft said
.
"We've identified an issue impacting service autoscaling to meet demand. We're manually scaling capacity to improve service availability, and we're monitoring this closely to ensure the expected outcome is achieved."
According to a service alert (
CP1193544
) seen by BleepingComputer, the outage may impact any user attempting to access Microsoft Copilot within the United Kingdom or Europe.
Microsoft is also tracking a separate incident (
DZ1193516
) causing some admins to experience errors when accessing Microsoft Defender for Endpoint features, including device inventory and threat analytics. While it has yet to share how many users are affected, the issue has been tagged as an incident in the admin center, a flag usually designating service problems with significant user impact.
One week ago, Microsoft also
mitigated an outage
that blocked access to some Defender XDR portal capabilities, including threat hunting alerts.
Offline cybersecurity AI using RAG + local LLM (Python, FAISS, Llama 3.1)
Lobsters
gitlab.com
2025-12-09 11:37:14
Built an offline AI assistant for security work in air-gapped environments (SCIFs,
classified networks, etc.). Runs entirely local - no API calls, no telemetry.
Technical approach:
RAG with 360k embedded chunks (sentence-transformers: all-MiniLM-L6-v2)
FAISS for vector similarity search
Local LLM i...
Below are some rules that I have developed over a long period of time writing fully encapsulated C programs. C is my favorite language and I love the freedom and exploration it allows me. I also love that it is so close to Assembly and I love writing assembly for much of the same reasons!
NOTE: You may see references to ‘perfect encapsulation’ throughout. I offer both a ‘performance’ and a ‘pure encapsulation’ approach to C here (first two headers). So feel free to interpret the rest of the rules based on the approach.
One of the great things about C is that it allows for "pure encapsulation". What this means is that you can explain all the intent of your code through the header file, and the developer who uses your lib/code never has to look at the actual implementations. Now to take this a step further, we all know that C supports the struct keyword to group data, and we can also make the members of a struct hidden completely from the developer using the library. For example, we could declare the following header and C files:
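A minimal sketch of such a header/implementation pair, using an opaque Vector type backed by three floats (the names here are illustrative, not the author's exact code):

// vector.h: only an opaque type and functions are visible to users
#ifndef VECTOR_H
#define VECTOR_H

typedef struct Vector Vector;   // members are hidden from the developer

Vector *vector_new(float x, float y, float z);
float   vector_length(const Vector *v);
void    vector_free(Vector *v);

#endif

// vector.c: only this file knows (and can touch) the layout
#include "vector.h"
#include <math.h>
#include <stdlib.h>

struct Vector {
    float x, y, z;
};

Vector *vector_new(float x, float y, float z) {
    Vector *v = calloc(1, sizeof *v);
    if (v) { v->x = x; v->y = y; v->z = z; }
    return v;
}

float vector_length(const Vector *v) {
    return sqrtf(v->x * v->x + v->y * v->y + v->z * v->z);
}

void vector_free(Vector *v) {
    free(v);
}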
As you can see in the above code sample, if you only had the header file, you would not know that this vector is implemented using 3 floats. This is very important for pure encapsulation. With this, you can completely control the behavior of the struct and its contents using readable functions and not worry about the developer using the code directly mutating the members of your struct. Now that you've created pure encapsulation, you are able to feel safe knowing that developers can't new up the struct or abuse its contents from anywhere other than through the code you've written in your c file.
No encapsulation performance
One of the flaws with pure encapsulation is that you can see a drop in performance. Having a bunch of functions to get the inner members of a structure also blocks the compiler from optimizing its best. Member hiding is not usually because we don't trust the end developer with the secrets of our structures, but is often so they don't make mistakes by changing things they shouldn't. Member hiding also helps us easily update our code without changing the interface that developers rely on.
That being said, if we are dealing with performance-critical code, or just want extra optimization from our compiler (and/or to write less code), we can expose the members of our structure. However, let's be smart about exposing these members so that developers don't accidentally make mistakes with their newfound power.
Enter const, our best friend in this scenario. We can not only mark our members as const before their type, but also after their type. The general rule of thumb to remember is: if it is a pointer, the const goes after the type; otherwise put it before the type. In the simple example below, you can see how pointers have the const after, and the rest have const before, their type declaration.
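A small sketch of what such a struct might look like, using a hypothetical Employee type (the field names are illustrative; the age field matches the setter shown further down):

// employee.h: members are exposed for the compiler, but read-only for callers
struct Employee {
    char *const name;     // pointer member: const goes after the type
    const int age;        // value members: const goes before the type
    const float salary;
};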
In this way we are able to expose the fields of the struct to the rest of the code for compiler optimizations, ease of access, etc., while also preventing developers from directly assigning/changing the values of those fields. The obvious downside is that you will need to either create a macro or manually cast away the const to assign the fields in the implementation C file. If you are using C17, I would recommend using _Generic and macros so you can create a single #define OVERRIDE(field) type of macro and have the compiler throw an error if it finds an unexpected type. Of course, if you don't want to use a macro, you can also create separate inline functions to do the same (it just might be harder to manage). Below is an example of how we can tell the compiler we want to explicitly change the value in the implementation c file.
// employee.c file
void employee_set_age(struct Employee *employee, int newAge) {
    // Cast away the const and set its value; the compiler should optimize this for you
    *(int *)&employee->age = newAge;
}
Memory ownership
With perfect encapsulation you are most of the way towards having good memory ownership. If you purely encapsulate your structs, the only way for a developer to create a new instance of the struct is through functions you create yourself. From this point you can create the new and free functions to manage the memory of your struct. Below is an example building upon the previous code sample.
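A sketch of what that usage might look like, reusing the hypothetical Vector API from the earlier sketch:

// main.c: the caller never sees the struct layout, only the new/free pair
#include <stdio.h>
#include "vector.h"

int main(void) {
    Vector *v = vector_new(1.0F, 2.0F, 3.0F);   // vector.c allocates the memory...
    printf("length: %f\n", vector_length(v));
    vector_free(v);                             // ...so vector.c also frees it
    return 0;
}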
Above you can see that we encapsulate the creation, usage, and freeing of our struct. You might think: with this, what else do we need to know about memory management? Well, there is one more thing, more of a rule that you must follow above anything else: the thing that declares the memory is the thing that should free the memory. We see this in action above; the c file that creates the memory in turn has a function for freeing the memory.
Now let's look at another example using a char* to represent a string. Here we have a function that takes a string and clones it (the wrong way):
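A sketch of such a "wrong way" clone, with strclone as an assumed name:

// The "wrong" version: the function allocates, and the caller must somehow know to free
#include <stdlib.h>
#include <string.h>

char *strclone(const char *str) {
    char *copy = malloc(strlen(str) + 1);
    if (copy) {
        strcpy(copy, str);
    }
    return copy;    // ownership silently passes to the caller
}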
Now, what is wrong with the memory management in this code? Answer: we are using malloc to create memory and then returning the string. Let's take a look at the developer using this.
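A sketch of the calling code; note that nothing in the signature tells the developer the result must be freed:

#include <stdio.h>
#include <stdlib.h>

char *strclone(const char *str);    // the "wrong way" version from above

int main(void) {
    char *cpy = strclone("Hello World!");
    printf("%s\n", cpy);
    free(cpy);    // correct, but nothing in the API told the developer to do this
    return 0;
}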
How is the developer supposed to know that they are to free the char*? For all they know, strclone uses some sort of pooling functionality to re-use a pool of memory, and we can't free that without risking a seg-fault. What is a better version of this?
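A sketch of the improved version, following the out-prefix convention described next:

// The better version: the caller supplies the pointer to fill
#include <stdlib.h>
#include <string.h>

void strclone(char **outStr, const char *str) {
    *outStr = malloc(strlen(str) + 1);   // memory is attached to the caller's own variable
    if (*outStr) {
        strcpy(*outStr, str);
    }
}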
With this version we make it explicit that the developer should manage the object that they pass in. We use the hint name out as a prefix to the argument name to let them know memory will be allocated for this input variable. What does this look like to the developer?
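A sketch of the caller side:

#include <stdio.h>
#include <stdlib.h>

void strclone(char **outStr, const char *str);   // out-parameter version from above

int main(void) {
    char *cpy = NULL;                 // the developer declares the pointer...
    strclone(&cpy, "Hello World!");
    printf("%s\n", cpy);
    free(cpy);                        // ...so the developer frees it
    return 0;
}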
Looking at this version, the developer knows they are in charge of freeing cpy, because they declared the variable in the first place rather than being handed it from a function. If the developer follows our rule (the thing that declares the memory is the thing that should free the memory), they declared the variable/pointer, so they should be the ones freeing it. Now, I know you can argue all sorts of alternative setups for the return value, but the fact of the matter is that passing in a pointer to a pointer makes ownership much clearer.
Avoid void*
One stigma people have against C is the use of void*. Some think it is necessary, some use it to solve problems quickly through the path of least resistance; I say that there are very few cases where void* is acceptable, and most of the time your current problem isn't one of them. Like NULL, void* is a lazy solution to a problem and causes all kinds of unnecessary runtime checking.
In most cases you should create a struct that explicitly defines what type is accepted or stored. The biggest advantage of this approach is that you put the compiler to work for you. There are all sorts of compile-time checks that will prevent you from doing something you shouldn't do. Your IDE will also be much more helpful when navigating code, as neither the IDE nor the compiler has any idea where a void* comes from or what it points to.
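As a small illustration (these types are made up for the example), compare a generic void* container with one that states exactly what it stores:

// Forward declaration; Employee is defined elsewhere (e.g. employee.h)
struct Employee;

// A generic node forces runtime guessing:
struct Node {
    void *data;                     // what is this? the compiler cannot help you
    struct Node *next;
};

// A type-specific node puts the compiler to work:
struct EmployeeNode {
    struct Employee *employee;      // every access is type-checked
    struct EmployeeNode *next;
};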
Don’t over-complicate strings
If I want to live in the 2020 era of programming, that means I will probably wind up using more than one library to solve a problem. My new problem is that people think it is cute to typedef char* to some other name and only accept that name in their code. In the era of UTF-8, that is completely unnecessary and makes me do a lot of senseless casting. If you want to encapsulate the fact that you are using a string (so I don't know it), then cool, do that, but typedef unsigned char* string is not it. Please stick to the good ol' char* for strings.
Don’t over-complicate stdlib
TBD
Use utf8 strings
Talking about strings, I'd like to point out that UTF-8 is fully compatible with ASCII; this means we don't need special functions for special characters or non-English characters. All of our usual suspects work on UTF-8, such as fopen! There are some other helpful things we can use thanks to compilers, such as placing u8 in front of an in-line string:
char *utf8 = u8"Hello World!";
So, in closing on the UTF-8 topic, please stop using wchar_t, char16_t, and all those other variants (except when you are forced to due to 3rd party libraries). With that, I'll leave you with this helper function to get you started:
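A sketch of such a helper, counting code points from the UTF-8 lead-byte patterns (per NOTE 2 below, it assumes the input is already valid UTF-8):

#include <stddef.h>
#include <stdint.h>

// Counts UTF-8 code points (not bytes). Does NOT validate the string.
size_t utf8_strlen(const char *str) {
    const uint8_t *s = (const uint8_t *)str;
    size_t length = 0;
    while (*s) {
        if (*s < 0x80)      s += 1;   // 0xxxxxxx: single-byte (ASCII)
        else if (*s < 0xE0) s += 2;   // 110xxxxx: 2-byte sequence
        else if (*s < 0xF0) s += 3;   // 1110xxxx: 3-byte sequence
        else                s += 4;   // 11110xxx: 4-byte sequence
        ++length;
    }
    return length;
}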
NOTE 2: This does not validate the utf8 string. I am not fond of making the length function also validate the string; for that we should create a separate method for validation. Using this table I found on Wikipedia we can construct a validation function (this table was also used for the length function).
NOTE 4: Something that developers trip on is the file encoding of their source file. Be sure you are using UTF-8 encoding for your source file when typing UTF-8 strings directly into your source code. If you use the wrong encoding, the compiler may compile the inline string with the incorrect encoding, even with the u8 prefix.
Don’t use char for memory array, use uint8_t
To keep code readable, you should only use char* or unsigned char* for strings (character arrays). If you want a block of bytes / a memory pointer, then you should use uint8_t*, where uint8_t is part of stdint.h. This makes the code much more readable, since memory is represented as an unsigned 8-bit array of numbers (a byte array). Now you can trust that when you see a char*, it refers to a UTF-8 (or ASCII) character array (text).
Use standard bool
This one is easy:
Don't make defines for false, or False, or FALSE and their true counterparts; please just use the standard library.
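For illustration, the standard header in question:

#include <stdbool.h>

bool is_active = true;    // no home-made TRUE/FALSE defines needed
bool is_hidden = false;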
Don’t use static or global variables
So, static functions are fine; they are great for breaking up functions to be readable. However, static variables are bad and almost always not needed. Remember that we are living in a world where our CPUs are not getting faster, they are just coming in larger quantities. Always think about threadability and controlling mutation. Even with a variable that is static to a C file and not global, you never know if someone is using threads to call your functions.
Prefer inline over macro
Functions that are inline are much more readable, work better with your IDE's code searching, and are much easier to follow when you get errors/warnings from the compiler. Some macros are great, so don't ban them altogether, but do consider whether you can do what you need through an inline function first.
Test your functions
C is beautiful in that you don't need unit test frameworks to fully test your code. Its lack of objects and hidden state makes it even better for testing. If you create a function, make a test for it and give it your known test-case arguments. Did I mention that I love not needing a big complicated mocking library to fully test my code? If your function takes in a complicated struct for some reason, feel free to define out/in a test function for creating the struct you expect to be testing (do NOT compromise perfect encapsulation for the sake of testing).
Write functions to do one thing
Okay, this isn't a C-only thing, but make sure your functions are not creating large call stacks. Feel free to use static or static inline local functions to break up large functions for readability if you just can't seem to make a function do a single thing (for performance, for example).
Don't write big complicated systems to cover many problems, even if things are loosely related in many ways. It is better to break up your code into useful functional pieces and cleverly put them together to get complex behavior. The beauty of Unix is that you can get many things done through many small programs pieced together. In the same way, you should develop useful functions that can be pieced together through data.
Warnings are errors
This one is a bit short. The idea is simple, warnings are errors.
Make sure ALL warnings are enabled (/Wall).
Make sure that you turn on warnings as errors.
Note: if you copied some source code from the internet that you need and it is producing warnings, turn it into a lib and use the lib; do not compromise your code for other people's unchecked code. You'd be surprised how many popular libraries fail the warnings-as-errors test (often they were developed assuming 32-bit code).
If there is a standard, use it
This was touched on before with stdbool.h, but if there is a standard function or type, use it. Use things like int32_t over just int and hoping that int will be 32-bit. If there is a standard function for doing something, don't re-invent the wheel. Wrapping standard functions such as malloc and free I would consider a necessary evil, though, if you are creating tools to detect memory leaks and the like.
Use float epsilon for 0 checking
First of all, don't check a floating point value against 0 or 0.0F. Instead, check it against epsilon, as in the following:
#include <math.h>
#include <float.h>

int main(void) {
    float x = 1.0F;
    x -= 1.0F;
    if (fabsf(x) <= FLT_EPSILON) {
        // ... The value is basically 0, do some stuff
    }
    return 0;
}
Alternatively, you can choose a fractionally small number like 0.0001F to check against, if that is your cup of tea. The reason is floating point precision errors (which you probably know about or have heard of by now). I enjoy FLT_EPSILON because it is part of the float.h lib and a standard for everyone to use.
Zero Set Your Structs
One thing that would get me when developing in C is pointers inside of objects not being set to NULL. Now, I know I talk about hating that the idea of NULL exists, but when working with other people's code it is impossible not to run into a situation where you need to set a pointer to NULL, pass a NULL, or check a pointer against NULL. So do yourself a favor and always use calloc (or memset(&thing, 0, sizeof(thing)) if it isn't a pointer or new memory). Of course, this doesn't ban the use of malloc; in fact you should continue to use it for buffers. But as programmers, we have a problem with not touching code that works, and you think it is fine to just add in that extra field; if it is a pointer and you don't initialize it to NULL where needed, you're in for a world of hurt.
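A small sketch of both zeroing approaches (struct Thing is just for illustration):

#include <stdlib.h>
#include <string.h>

struct Thing {
    int *buffer;     // a pointer we want to start life as NULL
    size_t count;
};

int main(void) {
    struct Thing *heap_thing = calloc(1, sizeof *heap_thing);   // heap: calloc zeroes everything

    struct Thing stack_thing;
    memset(&stack_thing, 0, sizeof(stack_thing));               // stack: memset to zero

    free(heap_thing);
    return 0;
}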
Big types first
When you create a structure, put the biggest types at the top of your struct and the smallest types at the bottom. Platforms like x86 will magically help you with this (at a cost), but other platforms (like ARM) can generate a SEGFAULT if you don't do this properly. This is because of padding in a struct. If you put a bool as the first field and an int32_t as the second field in a struct, like the one below, you will have a problem where you pack 1 byte, then 4 bytes into the struct, effectively having a 5-byte struct. The problem here is that the CPU is optimized to read along memory boundaries. When you malloc, you won't get an odd memory address, for example.
struct Bad {
bool first;
int32_t second;
};
struct Good {
int32_t first;
bool second;
};
More to come
There are inevitably more things I’ve forgotten about, but I’ve written this all in one sitting so this is good enough for now until I can update!
Imagine you are a software engineer a few years into your career, working on a service that is an important part of a large production system. One day in a routine 1:1, your manager mentions that there is a gap in the on-call roster and asks if you would like to join the rotation. The concept is simple: if the error rate spikes while it is your turn in the rotation, you'll get an alert. When that happens at any time of day, you'll need to open your laptop, investigate and restore the system. They even mention there is extra pay.
Chances are you can simply say "no" and keep your life exactly as it is. Or you can say "yes" and allow on-call to start changing you in ways you probably didn't anticipate.
Maybe you've already gone through this journey yourself. Or maybe you'll have to answer this exact question tomorrow. If so, I can't tell you what you should say, but I can tell you, in this article, how saying "yes" changed my life.
The good
To give some context, I joined an on-call rotation for a production service for the first time about nine years ago; none of my previous roles had involved being on-call. For most of that time I was on-call for a critical front-facing production service, the kind of service that shouldn't ever go down. And yet, every so often it would.
As an additional complication, I live in Australia, and the rest of the world is wide awake when we are asleep. So it's quite common for the peak traffic, which is a frequent trigger for incidents, to happen when it's the middle of the night here – waking the on-call engineer up. It's worth noting that this is less of an issue for giant companies that have engineering offices across all major timezones, but very few startups and smaller companies can afford this luxury.
You learn to deal with stress
When you receive an alert at 2am, you have to wake up, understand what's going on, fix the problem and provide updates, and all of this happening in the middle of the night can be incredibly stressful. This "on-call dance" is often hard to get used to because the causes are rarely the same, and yet after you go through this drill a few times, you learn to deal with it. This isn't simply because you got used to the process, you learn to deal with emergencies in general.
I've noticed this in myself – it's much easier to stay calm and act with a cool head in a real-life emergency situation when you've been through many incidents while on call before. Even though sitting in bed with a laptop at 2am and dealing with a severe injury in the middle of the forest look completely different, in both situations you're dealing with high amounts of stress.
You learn leadership and coordination skills
Not every incident occurs at 2am however, but it's not uncommon that when the whole system goes down, it turns into an all-hands-on-deck situation. Someone needs to start the mitigation using feature flags, someone might need to investigate the initial trigger, someone needs to provide a summary. As you deal with more and more incidents, at some point you'll find yourself leading the response, and these are important leadership and coordination skills that could be quite hard to acquire in other situations.
You learn systems in incredible depth
The incident response doesn't end after an hour of firefighting, it's followed by tens of hours of digging into
what happened
, and then making sure that it doesn't happen again. During those long debugging sessions, as you dissect the system and try to reconstruct the exact sequence of events, you learn the systems you work on at a much more intimate level than you ever would by just shipping changes.
You develop an appreciation for how systems interact, for CPU and memory constraints, for the intricacies of the runtime, all of which surface later as you design the future versions of those systems. You know what can go wrong, because you've seen those failure modes multiple times. Writing those detailed internal post-incident reports is what tuned my engineering thinking the most. Not to mention, many of those firefighting experiences become great stories to share.
The bad & the ugly
It's not all great and rosy, though.
You acquire physical constraints
When you're on-call, your life is inherently constrained by the need to always have your laptop with you and to stay within the bounds of reliable reception. How limiting this could feel depends on your lifestyle – if you spend your weekends tinkering with robots in your garage, that's not much of a problem, but if you're hiking in the mountains every weekend, being on-call quickly becomes a real burden.
The potential health effects
You're woken up at 1am, you start firefighting, the adrenaline is high, you mitigate the issue and go back to bed at 2am, and yet you struggle to fall asleep. Your mind is still racing, and only 30 minutes to an hour later do you finally fall back asleep. The next day you feel exhausted, your mind is foggy and you can't think straight. This is a situation that's unfortunately familiar to many on-call engineers, and depending on how often it's repeated, it can have adverse effects on your health. There have been multiple research papers highlighting the negative impact of sleep disruption and
on-call work specifically
, and it can even affect the
quality of relationships
.
This is the part that is easy to normalise but you shouldn't. Don't let yourself become so used to nightly alerts that you treat them as normal background noise. Find a way to fix the situation or find a way out. Every benefit listed above isn't worth much if the long-term price is your health, and being on call is only worth it if you can minimise the impact on your health.
Conclusion
I'm a firm believer in the "you build it, you run it" model, and I'm on-call as I'm writing this article. Luckily for me, the forecast for the weekend isn't great, so I'm not exactly missing out on a perfect day in the mountains.
If you're deciding whether to join an on-call rotation, I'd suggest giving it a try. It's not a one-way door, you can always reverse your decision. No judgement if you decide to say "no" either. I hope my experience helps you make that call. And if you're someone with years of on-call behind you, please do share your experience as well, and I'm sure you've collected plenty of great firefighting stories to share.
ChatGPT is not "intelligence", so please don't call it "AI".
I define "intelligence" as being capable of knowing or understanding,
at least within some domain. ChatGPT cannot know or understand
anything, so it is not intelligence. It does not know what its
output means. It has no idea that words can mean anything.
The same applies to many other "generative systems", for the same reasons.
The widespread public error of attributing intelligence to those
systems leads millions of people to a misplaced trust for them.
Please join me in spreading the word that people should not trust
systems that mindlessly play with words to be correct in what those
words mean.
Another reason to reject ChatGPT in particular is that users cannot
get a copy of it. It is unreleased software -- users cannot get even
an executable to run, let alone the source code. The only way to use
it is by talking to a server which keeps users at arm's length.
Getting my ZX Spectrum Next onto Wifi and the Internet, plus Pi Zero Accelerator
I’m enjoying my
Xberry Pi ZX Spectrum Next
, but I have to say the ‘simple’ upgrade of getting it onto Wifi via the cheap and cheerful ESP 8266 was not fun.
While I had every intention of setting up my Speccy keyboard on the board’s matrix connector, I need to wait for a socket of the correct pitch. It turns out the Xberry Pi needs 0.2mm spacing and I have none in my parts box. Instead I figured do the other main upgrades, namely add a Pi Zero “accelerator” and the Wifi upgrade.
ZX Next Pi Zero Accelerator Upgrade
Fully expecting the Pi Zero to be the thing that caused me trouble, I was shocked that not only had my soldering gone perfectly, but so had the flashing of the SD card.
So many community members had advised to use a highly specific Pi Zero, I thought my spare board would be wrong. But it worked first time!
My pin header made the soldering trickier than usual, maybe it soaked a lot of heat? Just remember the header needs to go on the ‘wrong’ side and take your time.
The Pi Zero upgrade is hardly used currently but the ZX Next team keep hinting at future utility, and at least one game is in development that uses it as a kind of math coprocessor.
Loading a TZX tape on the ZX Spectrum Next
So after running a TZX file and some SID tunes, I moved on to the ESP figuring I would have everything done quick as a flash. Pardon the pun.
Adding Wifi to the Next
The Next can come with Wifi already installed apparently, and some Xberry bundles come with one as standard, but the chosen module is an ESP 01 / 8266 which is something I have on hand.
There’s precious little documentation for this kind of thing out there, but I did find YouTube videos that made everything look very straightforward. That should have been the first red flag!
None of them showed installing the correct ESP firmware, what baud rate the Next expects, or even attaching the ESP.
I picked out a board I was confident was working fine, attached it, successfully ran the
ESP Updater
.
After it confirmed the update, I tried to connect to wifi using the built-in Next menu item …. and failure.
The fact that the ESP Updater did its thing also suggested the board was fine, so I tried, and failed, all the many ways to get the "driver" to install.
I quickly started to hate this “Installing Driver…” message. There is no indication if anything is happening when it gets stuck there for ages, but other times it quickly quits back to the menu, again with no feedback.
Spoiler alert
: Even with everything now ‘working’, it often takes a
lot
more attempts than you would expect.
To save anyone else the bother I went through, and potentially Future Chris, the key was to NOT update the ESP 01 firmware. Just use whatever your board came with by default. My
working
firmware is not even listed on the ESP Updater repo …
This ESP 8266 firmware works fine if you need to flash anything
I rummaged around and found another ESP board that had never had its firmware flashed. Using a breadboard I confirmed it was working and responding to AT commands.
CircuitPython Code to Communicate with ESP01 via UART
import board
import adafruit_pio_uart

# Define the UART pins (example using GP16 for TX and GP17 for RX)
# The actual pins depend on your specific wiring/board configuration.
UART_TX = board.GP16
UART_RX = board.GP17

# Initialize the UART connection
# The baudrate for the ESP01 is typically 115200
uart = adafruit_pio_uart.UART(UART_TX, UART_RX, baudrate=115200)

def send(s):
    # UART wants bytes, so encode the AT command and terminate it with CR+LF
    uart.write((s + '\r\n').encode())
    response = uart.read(255)
    if response:
        print(response.decode())

send('AT+RST')
send('AT')
send('AT+GMR')
This code allows you to hook the ESP01 up to pins 16 and 17 for UART, and 3.3v and GND.
Wifi on the ZX Spectrum Next … Finally!
Again it didn’t work on the ZX Next right away, but I persisted and finally signs of life!
Attach the Wifi module to the Xberry Pi
Use .ESPBAUD dot command with -d to check the baud rate (needs to be 115200)
Using .UART you can manually send AT commands to test the Next is communicating over TX/RX
Run the Wifi Wizard from the Next menu
All being well, run Wifi stuff such as NXTel!
Conclusion
Hope this helps someone else out there! I found it incredibly frustrating but now I am back to having fun with my ZX Next experience rather than regretting my choices.
Seems life might have been easier if a Pi Zero W was used instead of a Pi
and
an ESP8266, or even an ESP32, but perhaps there is a good reason they went with the little old ESP01.
Skate Story review – hellish premise aside, this is skateboarding paradise
Guardian
www.theguardian.com
2025-12-09 10:00:28
Sam Eng/Devolver Digital, PC, PS5, Switch 2An exquisitely fluid game of tricks, grinds and manuals is framed by a story that uncovers the poignancy of the infamously painful pastime Skateboarding video games live and die by their vibe. The original Tony Hawk’s Pro Skater titles were anarchic, arcade...
Skateboarding video games live and die by their vibe. The original
Tony Hawk’s Pro Skater
titles were anarchic, arcade fun while the recent return of EA’s beloved
Skate franchise
offered competent yet jarringly corporate realism. Skate Story, which is mostly the work of solo developer Sam Eng, offers a more impressionistic interpretation while capturing something of the sport’s essential spirit. It transposes the boarding action to a demonic underworld where the aesthetic is less fire and brimstone than glittering, 2010s-era vaporwave. It is also the most emotionally real a skateboarding game has ever felt.
The premise is ingenious: you are a demon made out of “pain and glass”. Skate to the moon and swallow it, says the devil, and you shall be freed. So that is exactly what you do. You learn to ollie first, a “delicate, precise trick” according to the artfully written in-game text. Then come the pop shuvit, kickflip, heelflip and more.
Captures the spirit of skateboarding … Skate Story.
Photograph: Devolver Digital
The controls are easy: one button to ollie. If you’re holding down a shoulder button at the same time, you perform a more involved trick. Beyond the ravishing visuals, what’s most striking is the exquisite fluidity, the delicious “gamefeel”, of the actual skateboarding: the way the knees of this glittering demon bend just the right amount after landing a trick; the way you can see their foot stretching out across the top end of the board in order to apply just the right force that will cause it to flip.
The vaporwave aesthetic is not Skate Story’s only bold design choice. You will fall many times on the ghoulish asphalt and when you do the action cuts to first-person, causing you to see the world tumbling for what feels like a tormenting eternity. Along the way, you meet a bizarre cast of characters: a mystical rabbit, a pigeon trying to type a screenplay, and a ghost hanging out in a launderette.
Real emotions … Skate Story.
Photograph: Devolver Digital
The game’s action can be divided into two types: narrow, linear tunnels that you hurtle through at breakneck speed, and wide-open sandbox levels. The former are furious, momentum-filled thrill rides that demand utmost precision; the latter, set in nightmarish, nocturnal visions of New York, feature many offbeat objectives, such as chasing spooky laundry. In these levels, there is ample room to enjoy the deceptively deep skating mechanics.
Gradually, a melancholy surfaces in this crystalline universe. Of course, the skateboarder wants to be free of the underworld, but they also seem enraptured by the idea of devouring these moons. As you thread together tricks with manuals and grinds, scoring ever-larger combos, all as a brilliantly downbeat electro soundtrack chimes away, questions arise. Why is this skateboarder so hungry? Why do they seek pain? In some ways, we’re reminded of the physical risks of skateboarding in real life.
These questions – and the sadness buried within their answers – distinguish Skate Story from its traditionally zany video game counterparts. Rather, Eng’s gently emotive work is more in touch with the likes of acclaimed documentary
Minding the Gap
and Jonah Hill’s movie
Mid90s
.
The result is a skateboarding game of rare poetry. There is the poetry of the skating itself, the miraculous interplay of body and board rendered with aplomb. There is the actual poetry that accompanies the end of each level. Finally, there are the tender emotions that refract through, and seem amplified by every bailed kickflip in this surreal, shimmering take on hell.
Compiler Engineering in Practice - Part 1: What is a Compiler?
“Compiler Engineering in Practice” is a blog series intended to pass on wisdom that seemingly every seasoned compiler developer knows, but is not systematically written down in any textbook or online resource. Some (but not much) prior experience with compilers is needed.
The first and most important question is “what is a compiler?”. In short, a compiler is:
a
translator
that translates between two different languages, where those languages represent a description of a computation, and
the behavior of the computation in the output language must “match” the behavior of the computation in the input language (more on this below).
For example, an input language can be C, and the output can be x86 assembly. By this definition, an assembler is also a compiler (albeit a simple one), in that it reads x86 textual assembly and outputs x86 binary machine code, which are two different languages. The
python
program that executes Python code contains a compiler – one that reads Python source code and outputs Python interpreter bytecode.
This brings me to my first important point about practical compiler engineering – it’s not some mystical art. Compilers, operating systems, and databases are usually considered some kind of special corner of computer science / software engineering for being complex, and indeed, there are some corners of compilers that are a black art. But taking a step back, a compiler is simply a program that reads a file and writes a file. From a development perspective, it’s not that different from
cat
or
grep
.
Why does this matter? Because it means that compilers are
easy to debug if you build them right
. There are no time-dependent interrupts like an operating system, async external events like a web browser, or large enough scale that hardware has to be considered unreliable like a database. It’s just a command line program (or can be reduced to one if engineered right), such that nearly all bugs are reproducible and debuggable in isolation
from the comfort of your workstation
. No connecting to a flaky dev board, no extensive mocking of various interfaces.
You might say – wait a minute – if I’m running on my company’s AI hardware, I may need to connect to a dev board. Yes, but if you do things right, you will rarely need to do that when debugging the compiler proper. Which brings me to…
Reliability
Compilers
are
like operating systems and databases in that the bar for reliability is extremely high. One cannot build a practical compiler haphazardly. Why? Because of miscompiles.
Miscompiles are when the compiler produces an output file in the output language that does not “match” the specification of its computation in the input language. To avoid a miscompile, the output program must behave identically to the input program, as far as can be observed by the outside world, such as network requests, values printed to the console, values written to files, etc.
For integer programs, bit-exact results are required, though there are some nuances regarding undefined behavior, as described in John Regehr’s
“laws of physics of compilers”
. For floating point programs, the expectation of bit-exact results is usually too strict. Transformations on large floating point computations (like AI programs) need some flexibility to produce slightly different outputs in order to allow efficient execution. There is no widely-agreed-upon formal definition of this, though there are reasonable ways to check for it in practice (
“atol/rtol”
go a long way).
How bad is a miscompile?
Miscompiles can have massive consequences for customers. A miscompile of a database can cause data loss. A miscompile of an operating system can cause a security vulnerability. A miscompile of an AI program can cause bad medical advice. The stakes are extremely high, and debugging a miscompile when it happens “in the wild” can easily take 3+ months (and it can take months for a customer to even realize that their issue is caused by a miscompile).
If that weren’t enough, there’s a self-serving reason to avoid miscompiles – if you have too many of them, your development velocity on your compiler will grind to a halt. Miscompiles can easily take 100x or 1000x of the time to debug vs a bug that makes itself known during the actual execution of the compiler (rather than the execution of the program that was output by the compiler). That’s why most aspects of practical compiler development revolve around ensuring that if something goes wrong, that it
halts the compiler before a faulty output program is produced
.
A miscompile is a fundamental failure of the compiler’s contract with its user. Every miscompile should be accompanied by a deep look in the mirror and self-reflection about what went wrong to allow it to sneak through, and what preventative measures can (and should immediately) be taken to ensure that this particular failure mode never happens again.
Especially in the AI space, there are lots of compilers that play fast and loose with this, and as a result get burned. The best compiler engineers tend to be highly pedantic and somewhat paranoid about what can go wrong.
Why compilers are hard – the IR data structure
Compilers do have an essential complexity that makes them “hard”, and this again comes from the whole business of making sure that the input program and the output of the compiler have the same behavior. To understand this, we have to discuss how a compiler represents the
meaning
of the input program and how it preserves that meaning when producing the output program. This notion of “meaning” is sometimes called the
program semantics
.
The primary data structure in a compiler is usually some form of graph data structure that represents the compiler’s understanding of “what computation this program is supposed to do”. Hence, it represents the computation that the compiler needs to preserve all the way to the output program. This data structure is usually called an IR (intermediate representation). The primary way that compilers work is by taking an IR that represents the input program, and applying a series of small transformations all of which have been individually verified to not change the meaning of the program (i.e. not miscompile). In doing so, we decompose one large translation problem into many smaller ones, making it manageable.
I think it’s fair to say that compiler IR’s are the single most complex monolithic data structure in all of software engineering, in the sense that interpreting what can and cannot be validly done with the data structure is complex. To be clear, compiler IR’s are not usually very complex in the implementation sense like a “lock-free list” that uses subtle atomic operations to present a simple insert/delete/etc. interface.
Unlike a lock-free list, compiler IR’s usually have a very complex interface, even if they have a very simple internal implementation. Even specifying declaratively or in natural language what are the allowed transformations on the data structure is usually extremely difficult (you’ll see things like “memory models” or “abstract machines” that people spend
years or decades
trying to define properly).
A very complex schema
Firstly, the nodes in the graph usually have a complex schema. For example, a simple “integer multiply operation” (a node in the graph) is only allowed to have certain integer types as operands (incoming edges). And there may easily be thousands of kinds of operations at varying abstraction levels in any practical compiler, each with their own unique requirements. For example, a simple C
*
(multiplication) operator will go through the following evolution in Clang:
It first becomes Clang’s
BinaryOperator
node, which takes two “expressions” as operands (which may be mutable uint32_t values, for example).
It will then be converted to an LLVM IR
mul
operation, which takes as operands an
llvm::Value
, which represents an immutable value of the
i32
type, say.
It will then be converted to a GlobalISel G_MUL operation, whose operands represent not only a 32-bit integer, but also begin to capture notions like which "register bank" the value should eventually live in.
It will then be turned into a target-specific MIR node like
IMUL32rri
or
IMUL32rr
selecting among a variety of physical x86 instructions which can implement a multiplication. At this level, operands may represent physical, mutable hardware registers.
From a compiler developer’s perspective, all these “multiply operations” are deeply different from each other because of the different information captured at each abstraction level (again, compiler developers are usually very pedantic). Failing to adequately differentiate between abstraction levels is a common disease among poorly written compilers.
At every level, precise attention to detail is needed – for example, if the multiplication is expected to overflow mod 2^32 in the source program, and we accidentally convert it to overflow mod 2^64 (such as by using a 64-bit register), then we have introduced a miscompile. Each operation has its own unique set of constraints and properties like these which apply when transforming the program.
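To make the width point concrete, here is a small illustration (not from the post) of how the wrap-around result changes when a 32-bit multiply is silently widened to 64 bits:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0x10000, b = 0x10000;
    uint32_t wrapped = a * b;                        // overflows mod 2^32 -> 0
    uint64_t widened = (uint64_t)a * (uint64_t)b;    // "same" multiply mod 2^64 -> 4294967296
    printf("%u vs %llu\n", (unsigned)wrapped, (unsigned long long)widened);
    return 0;
}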
Complex interactions between operations
Additionally, how these operations in the IR graph relate to each other can be very complex, especially when mutable variables and control flow are involved. For example, an operation may always execute, yet we may be able to move it around and hide it under an if condition to optimize the program. Consider the program:
x = y + z;
...
if (condition) {
print(x); // The only time that `x` is referenced.
}
Is it safe to convert this to
...
if (condition) {
print(y + z);
}
? Well, it depends on what’s hidden in that
...
. For example, if the program is:
x = y + z;
...
y += 5;
...
if (condition) {
print(x);
}
Then it’s not legal, since by the time we get to the
if
, the value of
y
will have changed and we’ll print the wrong value. One of the primary considerations when designing compiler IR’s is how to make the transformations as simple and obviously correct as possible (more on that in another blog post).
Usually production compilers will deal with IR graphs from thousands to millions of nodes. Understandably then, the compounding effect of the IR complexity is front and center in all compiler design discussions. A single invalid transformation can result in a miscompile.
Compilers are just software
Practical compilers are often live for years or decades and span millions of lines of code, so the entire suite of software engineering wisdom applies to them – good API design, testing, reusability, etc. though usually with additional compiler-specific twists.
For example, while API design is very important for most programs’ code (as it is for compilers’), compilers also have an additional dimension of “IR design”. As described above, the IR can be very complex to understand and transform, and designing it right can greatly mitigate this. (more on this in a future blog post)
Similarly, since compilers are usually decomposed into the successive application of multiple “passes” (self-contained IR transformations), there are a variety of testing and debugging strategies specific to compilers. (more on this in a future blog post).
Conclusion and acknowledgements
I hope you have found this post helpful. I have a few more sketched out that should be coming soon. Please let me know on
my LinkedIn
if you have any feedback or topics you’d like to suggest. Big thanks to
Bjarke Roune
for his
recent blog post
that inspired me to finally get this series off the ground. Also to
Dan Gohman
for his
blog post on canonicalization
from years back. There’s too few such blog posts giving the big picture of practical compiler development. Please send me any other ones you know about on LinkedIn.
We are living through a Saturn renaissance. Buckets of titles previously locked away in Japan are seeing new audiences, thanks to the herculean efforts of small but dedicated teams of enthusiast translators, removing the veil of Japanese illiteracy from before our tired eyes. Interestingly, the majority of efforts are being directed at the games with the biggest scripts, and no other genre was as impacted by the language barrier as the text-heavy, story-driven RPG. Over a dozen quality titles are now playable in English. The Saturn is, once again, ascendant…
Ain’t life Grand?
Enter
Grandia
.
What hasn’t been said about
Grandia
? In the run-up to its late 1997 release, the game enjoyed significant worldwide coverage in the gaming press, not least because some positioned it as the anti-FF7 title. Hot on the heels of the remaster of
Lunar: Silver Star Story
and hailing from respected software house Game Arts, featuring state of the art fully 3D environments, a score by notable composer Noriyuki Iwadare, sound effects produced by Skywalker Sound…
Grandia
was indeed shaping up to be one of the premier JRPG experiences of the 5th generation. There was serious talk of bringing the game out West — Working Designs was touted as the favoured house to do the honors, owing to their strong partnership with Game Arts, but the game’s massive script would have meant a late 1998 release by even the speediest conversion standards of the time. By then, the Western Saturn retail market had collapsed, and despite a shrinking but fervently dedicated base of Saturn fans holding on to hope of seeing the title cross the ocean, the game wound up locked away in Japan, forever.
Sue’s always looking out for Justin.
NEVER say Forever
Game Arts subsequently ported
Grandia
to the PlayStation, dropping it in Japan in the summer of 1999. Sony speedily localized the game for Western release later that same year… but we aren’t going to focus too much on the PlayStation version here because, at the time of writing, PlayStation discs don’t boot on your SEGA Saturn. It’s the Saturn game that we are concerned with. For us Saturn stalwarts, we had to wait until the mid-2020s for
an intrepid team led by TrekkiesUnite113 to
transplant the PlayStation’s English script into the Saturn code
. By then, the game was decades old, not to mention re-released and ‘re-mastered’ on modern platforms. So, why translate
Grandia
for the Saturn, when multiple other English options exist?
Because
Grandia
is Best on Saturn.
How do you do
Set in an age of discovery at the dawn of the industrial revolution,
Grandia
initially tells the tale of young Justin — a 14-year-old fearless adolescent who can’t help but dream of adventure. When he isn’t playing at “hero” with his town friends, he’s dreaming of great expeditions to find the lost civilization of Angelou. He is joined by his friend Sue — an 8-year-old girl whose maturity belies her age, and who tries desperately to keep young Justin in line. Justin’s mom Lily runs the local Seagull Restaurant and does her best to raise Justin into a respectable young man… though in her youth, she was a scrappy pirate herself. In her heart, she knows her audacious spark has passed on to her son, and that
Justin will one day take up the adventurer’s mantle and take off on a grand adventure of his own
, so she does her best to prepare him for when the time comes.
She gives Justin a Spirit Stone
— a remnant of the Angelou civilization and a memento of his long-lost father — and in doing so, helps kick off a fantastic voyage that sees young Justin explore, learn, overcome all manner of obstacles, and ultimately, grow and become the hero that he always imagined himself to be.
The party’s travels take them to the most interesting locations.
During his quest, Justin encounters fascinating characters, both friend and foe. From quiet folk in sleepy villages to rambunctious youngsters eager for their own slice of adventure; from military platoons led by the most beautiful — but hopelessly shallow — lady sergeants to cunning merchants, towering warriors, alluring mermaids and ferocious dragons… Justin encounters them all, and for good or ill, manages to change the course of their lives in ways both subtle and grand.
Justin, Sue, and Feena are the first three playable characters in
Grandia
. Young Sue tries to keep Justin in line, while Feena searches for the true meaning of being an adventurer – with Justin slowly moving from admiring her to showing her the way.
The game is clever in pulling the player in for a ride that for a very long while feels very lighthearted and innocent. Even as Justin’s adventure begins in earnest and the player is exposed to antagonists, mysteries, undercurrents and intrigues, Justin can’t help but distill it back to the
very pure essence of boyhood adventure.
Mysterious tower causing problems for a nearby village for years? No problem, Justin will fix it! A dragon from a nearby volcano terrorizing the locals? Justin’s got this. A ghost ship sailing in to harass a passenger steamer? Justin is the answer, in the same way that, as youngsters, we all knew – we knew! – that
WE were the heroes
, and that WE would save the day, armed only with our courage and our grand imaginations. It was our duty, after all. We had it in us to go forth boldly, and change the world (and naturally, all before being called home for dinner).
This point is driven home by Justin’s insatiable desire to uncover the mystery of his Spirit Stone, and the ancient Angelou civilization. After an unfortunate but entirely predictable mishap in the local museum, followed by a mysterious revelation in the nearby Sult Ruins, Justin’s curiosity is ignited, and his drive for real adventure becomes indomitable. Meanwhile, forces are at work that care not for Justin’s explorations, and inevitably, the lad finds himself pitted against the Garlyle Forces and some of its top commanders. Their aims are complex and their operations span the world, and this scope creates a wonderful juxtaposition with Justin’s innocent demeanor and singular focus.
The amount of architecture being displayed here is stunning, though Grandia makes the Saturn work for it.
On Screen!
The Fifth Generation of consoles marked the rise of 3D graphics, but some genres made the leap easier than others. This shift was a struggle for RPGs, with many excellent titles continuing to employ 2D visuals, albeit in richer color and more sophisticated detail than seen in previous generations. Early attempts at the 3D RPG (
Virtual Hydlide
) highlighted how difficult it was to run this style of game on the hardware of the time without wrecking the framerate or keeping textures from looking like a checkerboard mess. Dungeon crawlers (
Shining the Holy Ark
) were among the first titles to get the 3D right, though the player’s scope of movement was very restricted. Finally, some fantasized that “3D” meant pre-rendered backgrounds and copious FMV clips, with the only real 3D being battle scenes. Ahem!
Grandia
took the traditional overhead RPG view and transformed the landscapes into
fully realized 3D polygonal playfields
that can be rotated and zoomed at will. Character and enemy sprites are then overlain on the 3D world to make the scenes come to life. The addition of the third dimension affords the use of depth in the environments: hills, cliffs, and valleys; minecar rails that ran higher or lower relative to other tracks, and so on. In this way, the player initially feels right at home with a view that looks comfortably familiar, but must quickly learn to constantly rotate the viewpoint to catch enemies in hiding, spy treasures only visible from certain angles, judge heights, and evaluate other geometric details to plot their best course forward.
Aside from technical achievements, the art direction is fantastic.
Grandia
wastes no time in getting gamers used to this new visual paradigm. One of the game’s first quests sees local frenemy Gantz challenge Justin and Sue to locate the three Legendary Treasures: the fabled helmet (Iron Pot), the storied shield (Pot Lid), and of course, the legendary (Wooden) Sword. The player must traverse all of Parm, climbing down river walkways, checking in enclosed spaces, and chasing down Gantz’s little brother to prove they are up to Gantz’s task — and in the process, get used to the game’s then-new control scheme.
The 3D is very well put together, both technically and artistically. The level of detail is truly phenomenal, right down to the tiniest objects, especially in the ‘in-town’ game sections. Justin is able to interact with some of the innocuous scenery — for example he can knock brooms over, disturb piles of plates, or bump into bells and chimes — just as any real, overly excited 14-year-old might clumsily do as they ran along. Animations, from little weathervanes rotating to laundry fluttering on a clothesline, to puffs of smoke coming up from fires or chimneys, all accentuate the feeling that these are real, living, bustling places. The level of detail, and all of it in 3D, is really special.
The coders at Game Arts made excellent use of the Saturn’s unique hardware when realizing
Grandia
’s locales. Where appropriate, textured infinite planes are used to draw floors, and they not only look good but also dramatically cut down on the usage of polygons in drawing the scene, leaving that much more in the processing budget to spend on other visual details. In later sections, those infinite planes take on a distortion effect to create some very cool-looking water flows — look for them initially in Parm’s pier, and later in places like the snowy Laine Village or the mysterious Castle of Dreams. The water shimmers as the player rotates their view to create a truly stunning effect.
Slimes are never that tough to dispatch in any RPG.
The game’s characters and enemies are all represented by sprites that animate quite well and take viewpoints into account as the player rotates the camera. In yet more attention to detail, the sprites darken and then lighten again as the player moves in and out of shadowed areas — an impressive little detail that accentuates the visuals even further.
The trio of General Baal, Colonel Mullen, and Leen is introduced in the game’s opening scene, and all three are more than they appear.
The care that Game Arts took in crafting the visuals is commendable and
Grandia
comes off as one of the very best-looking 3D RPGs for the system, but Game Arts was perhaps a mite too ambitious. There are sections of the game where the framerate really chugs. Now, it must be acknowledged that low framerates were a hallmark of many 3D games in the 32-bit era, so some of this is to be expected, but the more detail Grandia is trying to show you, the more you will feel the Saturn huffing and puffing to get the job done. The game’s 3D framerate is not high at the best of times but it is passable, so it’s somewhat of a relief that the areas where it truly takes a dive aren’t too common.
Pump up the Jam!
Game Arts’ attention to detail extends to the sound department. For
Grandia
, Game Arts commissioned Skywalker Sound to handle the game’s sound effects. The result is
positional sound
— effects like running water, crackling fire, etc. will fade in and out as Justin and co. move closer in or further away from the source. Often, if the effect is important, it will also somewhat dampen the volume of the BGM as it plays out. Additionally, the effects will pan left or right depending on the source, and especially as the player rotates the camera. These effects may be subtle, but they are very well implemented and add to the game’s overall polish.
The game is very colorful.
The game’s
soundtrack was composed by Noriyuki Iwadare
and is both varied and excellent. Iwadare’s use of instruments appropriate to the on-screen action is uncanny — for example, running around Parm we are treated to an industrial sounding theme, perfect for the town’s motif. The varied use of strings, drums and winds is frankly excellent and lends to the atmosphere, imitating the clang of metal and steel which so permeates the city. Equally impressive is that the music somehow manages to be exciting or somber or poignant without ever sounding overly (excuse the wordplay) grandiose. This keeps the soundtrack in line with the game’s more lighthearted narrative. Of course, where appropriate, the soundtrack does take on that epic quality. The desperate tones that play when the Garlyle forces appear contrast so well with the carefree, upbeat “Off Runs Sue” tune. Mullen’s theme is at once wistful and ambitious, and even the theme from the Sult Ruins dungeon is perfectly mood-setting. Multiple
Grandia
soundtracks have been released since the game’s debut and the soundtrack is universally praised.
Leen is one of Col. Mullen’s acolytes.
How it Plays Out
Grandia
’s gameplay, like so many RPGs before it, is split into two major gameplay slices: exposition-laden town sections and combat-focused dungeons.
Players will spend a fair bit of time in the ‘in-town’ sections of the game. Here, you will wander around, take in the scenery, interact with the NPCs of the area, and almost always, find a quest that must be completed. A quick word about the NPCs — there are quite a number of them in each town, and everyone has something interesting to say… and almost always, each NPC has at least two separate conversation sequences to offer, making for a truly large amount of story to soak in. And
it’s all entirely optional!
It’s completely possible to make one’s way through
Grandia
with only minimal NPC interaction, but the option to enhance the adventure with these extensive NPC interactions is always there, as each character will present a unique view or focused response.
An unlikely pairing.
Predictably, the towns are filled with shops, though
Grandia
keeps things rather simple — there is but one general store, which carries weapons, armor, accessories, and even magic all under the same roof. Buy, sell, or trade up to the latest gear, which confers progressively larger stat boosts on your characters. Additionally, each town typically has one or more important locales, such as mayors’ offices or the chambers of village chiefs.
There is typically an inn or other house where the party can take rest, and at certain points in the game, resting triggers a shared meal scene that sees Justin break bread with his party mates. These meal scenes offer up critical dialogue, which the gamer can extend or keep short at their whim. When the critical conversation has been had, a bedtime icon will appear over Justin’s character sprite, and if the player is quite finished listening to the party chatter, they can select it to end the meal and get some rest. These mealtime conversations serve not only to flesh out what the party must tackle next, but also to offer a glimpse into the inner thoughts of the individual characters as they share their opinions, hopes and fears. Like so much in the game,
Grandia
implements this character exposition in a way that allows the player to decide how much of it to take in.
Great use of color.
The visuals in the town sections really stand out.
The Saturn manages to shift not only impressive amounts of polygons for the various structures, but also vivid and complex textures. This technical prowess is coupled with lush and imaginative art direction, resulting in each locale feeling complete and distinct. The dense, humid, green surrounds of Luc Village, nestled deep within the Misty Forest and inhabited by humanoid creatures, contrast sharply with the seaside port town of Dight, where cerulean waves gently roll in onto sandy shores. Milda’s hometown village of Laine is covered in snow, and the ancient Zil Padon is an architectural wonder built around a central fountain in the middle of the Savanna desert. Game Arts very clearly discarded their standard world-building cookie cutters, and their efforts shine through.
The world map. The feather icon indicates where you will travel next.
Once a locale has been explored, it will appear as a selectable destination on a gorgeous, hand-drawn high-resolution world map. Exiting an area often brings our party to this world map, and the next destination can be selected.
If the towns serve to heal the party, upgrade equipment, and advance the story, then the dungeons of the game offer treasure hunting, exploration, and of course, combat! Dungeons in
Grandia
range from literal underground labyrinths to above-ground forest mazes, to even large open plains that Justin et al. must traverse. Some of the more noteworthy settings include scaling a giant wall that keeps the world divided into two separate societies, negotiating the bowels of a ghost ship which appears out of nowhere to molest a transcontinental steamer, and even conquering the inside of an unstable volcano that’s inhabited by an ancient dragon.
Here, the player really must use their L and R buttons to shift the 3D landscape around, to find all the nooks of treasure or paths forward. Some areas feature set pieces that Justin and party can activate — for example, knocking over a loose pillar to bridge a gap. These are usually indicated by an exclamation point icon when the party nears the set piece.
Some of the spells are quite spectacular.
All the while, treasure both great and small litters the landscape… but so do enemies! Enemies are visible in the dungeons and so can be avoided to an extent, but if Justin and party come in contact with an enemy, combat ensues.
Grandia
Grinder Alert!
Grind for Experience Points Using Environmental Damage!
Are YOU a
Grandia
grinder?? Some sections of the game will deal damage to Justin and party outside of combat. First noticed in the Dom Ruins, rock faces painted into some of the dungeon walls will cause mild HP damage by springing out and poking the party when the party doesn’t want to be poked! The player can then use heal magic and spam this process to quickly increase Water magic levels. Although definitely a grind, it’s much faster than earning those experience points via combat. A few other areas in the game present similar opportunities — such as the basement in the Castle of Dreams.
A New Kind of Kombat
Grandia
introduced an all-new combat system to the RPG genre, though it could be said to be a variant of other similar RPG battle systems. Essentially, all battle participants have icons that continuously move along a
universal IP Gauge
, until they reach the Command point. Here, the player chooses from a selection of commands, which includes attacking, using an item or a spell, guarding, or even retreating. They then wait to reach the very end of the gauge to execute their selected action, and the more experienced the character, the faster that point is reached.
A ton of strategy is introduced here
as during this waiting period between selecting an action and executing it, they are vulnerable to both Cancels and Counterattacks from their opponents. Unlike many contemporary RPGs where the instinct is to simply unleash physical and magical attacks in a turn-based order,
the player can take advantage of these waiting periods
to cancel out incoming enemy attacks and push them back on their IP gauge. The system will take some getting used to, but can be used to devastating effect, especially in the more drawn-out boss battles. It is entirely possible to strategically get in a half-dozen actions by each character and prevent a boss from retaliating during the entire sequence, by carefully timing attacks. This makes combat a lot more involved and exciting.
Cancel culture? Counterculture? Grandia’s got it all.
There are also advantages to catching an enemy unawares — player characters start much further ahead on their IP Gauge, with the reverse being true if Justin’s party is ambushed.
Players have a range of actions they can take when their IP Gauge is full, from the standard fare of using items, defending, running away, or even inspecting an enemy (is that slug-monster male or female, for example*).
Nana, Saki, and Mio are Mullen’s three she-sergeants. Serving as comedic relief, they are nevertheless quite capable opponents in battle.
By Your Powers Combined… I Am Captain Planet!
Earth. Fire. Wind. Water. These are the elemental forces that move the world, and most characters can master them! Learning magic in
Grandia
first requires that the party finds a
Mana Egg
. These rare items can then be exchanged in a shop for magic for a single character of your choice. That party member then learns the basics of your chosen magic element.
Within each of the four elements, magic spells are further split into levels, from one to three, to indicate their potency. Level 1 spells are your most basic and are what a character starts off with should they buy magic with their Mana Egg. Players that use magic in combat will gain skill points in that particular element, and those skill points are applied to all spells of that element, regardless of spell level — so, use a Level 1 Fire spell, and all levels of your Fire magic gain skill. Spell skill progression is represented by five red stars that fill up like a gauge, turning yellow as they gain experience. Greater experience shortens casting time (which, remember, is a vulnerable time, as your spell can be cancelled by an opponent) and, at higher levels, allows your character to learn combined-element magic spells. All magic spells consume MP, making them a limited resource, though a character’s overall MP capacity will grow with experience.
The snowy village of Laine. The water effects are
chef’s kiss
.
Outside of magic, each character can also execute
special attacks
that are unique to them. These attacks are usually more devastating than standard attacks and sometimes require that the character is using a particular weapon class. These, too, gain skill points represented by five red stars that slowly build up to yellow, though special attacks consume SP (skill points). SP works much the same way as MP.
Grandia Grinder Alert!
Rare Enemies Give High XP
Typically, the game’s monsters do a good job of seeking you out, but there are occasional difficult-to-catch enemies to be found as well. Notice, for instance, the Chameleon enemies in the Virgin Forest. These green creatures are shy and are hard to catch and engage. But persist, and finish them off for a huge load of experience points — well worth a grinding sesh or three.
Experience Required
Grandia
has a complex (for the time) experience points system, which is cleverly segmented into several categories.
Level up!
To start, each playable character has a set of basic stats that slowly increase as they gain experience. Hit Points (HP) are your standard measure of health and these increase at level-ups. SP are your skill points, which increase the speed and potency of your special attacks, as well as unlock new special attacks as you accumulate experience. Finally, the same is true of the more traditional magic points (MP), with the difference between SP and MP being that special attacks are individualized whereas magic attacks are more common amongst party members and can be bought in exchange for Mana Eggs.
As they adventure, Justin and company will occasionally find items that slightly
boost a particular stat on a permanent basis.
These items are rare indeed, but as with life, incremental gains tend to compound until the effects are undeniable.
The Seed of Speed grants a permanent stat boost.
Most traditionally, defeating enemies grants experience points and accumulating the required amount grants characters a level-up, which slightly increases basic stats. Experience gained and gold / treasure collected is displayed on an after-battle screen. It is this type of XP that most contemporary RPGs concerned themselves with.
Grandia
ups the complexity a bit by introducing leveling for magic and skills, and further mixes things up by employing different weapon classes.
Justin and company are each capable of wielding a few different types of weapons, of which there are seven in total, ranging from swords to maces to staffs to bows. Each weapon class has its advantages and disadvantages, be it speed of use (from Command input to Execution on the IP gauge), to range, to overall damage dealt. As party members use their weapons, they gain experience in those weapon types, separately from their character experience.
The texture work is awesome throughout.
In total,
Grandia
features basic character experience points which boost common stats, magic experience which results in spells being cast faster and the learning of higher-level spells for various element types, skill experience for faster execution of special attacks, and weapon experience points which increase how well a character handles that weapon type. Cleverly, these different experience categories are implemented in such a way as to make it entirely possible for gamers to completely ignore this aspect of the game should they so fancy. Because the system is automated, gamers can pay it all little heed and still progress and have a great time with the game. Alternately, gamers can dive right into the finer points of the system to make those minor tweaks to get their characters to exactly the state they prefer.
The mysterious Liete awaits at Alent. The enigmatic Puffy accompanies Sue wherever she goes. Lastly, Darlin is one of the many non-human denizens of Grandia.
Go with the Flow
Grandia
allows up to
four playable characters
to form Justin’s party at any one time. As the story progresses, some of the main characters will permanently step away from the adventure, for reasons practical and dramatic alike. One such parting in particular tugs at the heartstrings — it is nothing quite as dramatic as the year’s earlier death of Aeris (Aerith) from that big RPG on Sony’s lesser 32-bit machine, but it somehow feels more relatable, and more impactful. Players ought not be surprised by the need for tissues to manage an unexpected tear or two. And here, too,
Grandia
innovates: a portion of a departing playable character’s magic and weapon experience points are stored in the stashing place, to be retrieved and applied to whatever character you see fit. This strengthens their legacy in your party, as well as providing a practical reason not to neglect building up a character just because they may eventually leave the party. A nice touch.
At the foot of the End of the World.
Is It Perfect?
Grand as it sounds, the game isn’t without a few small flaws. Story-wise, players will be left wanting to know more about Justin’s father and how he came to be the keeper of his Spirit Stone. He is mentioned often in the early stages of the game, but as Justin’s adventure takes off, that arc never completes. Likewise for General Baal — we eventually learn his motivations, but not so much why he has become who he is today. A really well put together villain is one with whom we can empathise; someone whose circumstance we can understand. Both with Justin’s unnamed father and with Baal, there is a feeling that we are reading a book and that the answers lie just ahead, but despite some teasing,
Grandia
never lets us turn the page.
Technically, the game’s 3D is solid and varied, with plenty of minor details and meticulous textures, especially in the town sections. Blending VDP2-drawn planes with solid geometry and animated sprites means the world of
Grandia
is beautifully rendered, but that comes at the cost of an oft-stuttering framerate. The more of
Grandia
’s world we are allowed to see at once, the more the framerate suffers. Now, these were the formative years of 3D gaming, but at times, that framerate simply chugs, and it’s noticeable to the point of distraction. Thankfully, for most of the game, the framerate sits comfortably in the ‘acceptable’ space, but you won’t get through the game without feeling the Saturn sweat as it works to display all that
Grandia
’s artists wanted you to see.
Special Moves. These gain experience as well.
Speaking of 3D, the game often requires shifting camera angles while exploring. In long dungeons or other large spaces, this can quickly become disorienting, and the player will lose their sense of direction. The game compensates somewhat with a compass, though its implementation is a little clumsy: rather than pointing north, it points to an exit or other objective.
There are also lookout points called Dungeon Scopes
, where the player is given a bird’s eye view of their current location from a default ‘north is up’ viewpoint. This helps with orientation, but those lookout points are few and far between, and using them tends to break up the game’s flow. Players may well find themselves keeping their camera shifting to a minimum as a result.
Lastly, a technical note:
Grandia
sure gives the Saturn’s laser a workout, and there are some clever pre-loading techniques implemented to keep the game flowing as smoothly as possible. The cost here is that
Grandia
is very sensitive to disc quality. Those that have burnt their English-patched game onto CDs and are playing on real Saturn hardware may well find that the game freezes, especially in battle when calling various spells. This is VERY annoying, especially as dungeon save points are sparse, and it is not uncommon to be in the heat of a battle only to have everyone freeze, with the reset button being the only escape. This is remedied by using an ODE solution that omits discs entirely, but the game’s sensitivity to the quality of your CD-R burn needs to be called out.
Hell yeah! Feena’s strongest spell.
Final Word
Grandia
is great.
The visuals are gorgeous, the music is appropriately evocative, the combat is frenetically strategic, and the story is well paced. Tough battles and surprise plot twists await intrepid gamers, and sub-plots occasionally weave their way into the adventure, too — especially in sections where we briefly leave Justin. On occasion, players will follow Colonel Mullen with Feena, explore the mysterious past of Mullen’s attaché Leen, or even soak in the comedic antics of the three beautiful Garlyle sergeants Mio, Nana, and Saki.
Ultimately,
Grandia
is a delight to play. A total joy… but one that demands an
intense time commitment
. A player Justin’s age surely has the time, but what about those of us that are well into adulting? Some sections of the game, especially the longer dungeons, have few opportunities to save one’s game. In that sense, the game is a total hardcore, traditional JRPG. It is not easily digested in small play sessions, so playing
Grandia
means committing a huge slice of one’s discretionary time budget.
And yet, perhaps paradoxically, playing
Grandia
has a way of making one feel young again.
Grandia
is grand in the same way we ourselves felt grand as youngsters — that, armed with a stick we’ve just picked up and nothing more than our imagination, our wits, and our indomitable spirit, we could go forth boldly and change the world. That’s the beauty of a main character like Justin — he is not yet jaded; he has not yet borne the
burden of grown-up problems on his shoulders
. In many ways, we were all Justin (or Sue!) at one point, and the game shines a light on that part of us that is now long behind (most of) us. Perhaps the most memorable aspect of
Grandia
is that it allows us, for a moment all too brief, to once again be that young boy or girl full of optimism and energy, and in today’s complex and stressful world, that feels simply wonderful.
Promotional art that showcases one of the game’s most powerful moments: Justin, Sue, and Feena have climbed the wall at the end of the world, and see, for the first time, the lands on the other side.
Three Optional Dungeons
Grandia
is generally a well-balanced affair, with experience accumulating at the right rate for players to progress in the game. That said, the world of
Grandia
plays host to three completely optional dungeons meant solely for increasing character abilities and experience — and the game goes so far as to explicitly point out that these areas are not part of its story and are entirely optional.
The first such dungeon can be found just west of the first section of the Zil Desert. It’s a large, very dark brown multi-leveled maze with the only save point being at the entrance. The enemies are tougher than one would expect at this point in the game, but nothing is impossible for Justin et al. The key here is to find the four Soldier’s Souls, which grant access to the treasures of the dungeon at the very end, past the boss. The boss is a remix of a previous boss from Feena’s failed wedding to Pakon and packs quite a punch. The main prize here is the excellent Godspeed Knife, which adds a huge ACT boost, massively speeding up the user’s IP gauge.
The Soldier’s Graveyard entrance.
The second optional dungeon is also found to the west but is accessible from the second part of the Zil Desert. This dungeon is very small but has perhaps the most charm. Justin and company are greeted by a mysterious Lady at the castle entrance, begging for help but also warning of a curse on the castle. Once inside, there are several rooms to visit and loot to collect. It’s all really simple, set up to lure the player into lowering their guard just in time to battle the formidable Lord’s Ghost boss. This guy’s TOUGH, with strong multi-character attacks and cancelling moves. Take him down to claim the awesome Lightning Sword, which gives a 50 ATK boost and, as an elemental sword, has the Zap! spell built in.
Don’t thank us yet…
The final optional dungeon is the mother of all dungeons in
Grandia
. Found tucked away in the Savanna Wilderness and accessible via a secret passage, the Tower of Temptation consists of an outside area and 12 (!) floors of punishing combat. Of course, the only save point is at the very start of the outside area, though Justin can activate a couple of shortcuts through the tower as he makes progress, so that backtracking to heal and save is a bit easier. Interestingly, the starting area is surrounded by six Zero Weapons – one of each kind of weapon, each with a 0 ATK value — ideal for training weapon skills on weaker enemies, as these will do nearly no damage.
Grandia
Grinder Mini-Alert
: many enemies in the Tower drop stat-increasing items, making this an ideal place to pull it all out and go for that growth.
Prepare to spend hours on this dungeon.
Each floor of the Tower features maze sections, hidden doors, powerful enemies, and of course, switches to hit. Simply making one’s way through the tower will increase the party’s levels, as there is so much battling to do. It is not uncommon to spend hours in the Tower, so it’s a welcome fact that the Tower is entirely optional. The final three floors are all boss — yes, there are three bosses to fight in a row. No saving, no healing. The last of the three bosses is tough as nails, but the reward is well worth it — NINE amazing items to pick up, including two items from the Grinder’s Gear™ premium collection: the Astral Miracle and the Ethereal Miracle, both accessories that double weapon or magic experience gained. VERY useful, but they better be, considering the pain just endured to complete the Tower of Temptation!
The Universe is Grand…ia
Grandia
went on to
sell bucket-loads in Japan
, especially during release week. It received a Digital Museum DLC-style disc, got a port on the mass-market PlayStation including a PS Greatest Hits re-release, and finally, a PlayStation English localization in 1999. The series continued in 2000 with the excellent
Grandia 2
on Dreamcast, which itself was later poorly ported to Sony’s killer of dreams, the PlayStation 2. That system would also see the less-well received
Grandia 3
, which would spell the end of the main series’ run. The series also saw several spin-off games such as
Grandia Xtreme
and
Grandia Online
. Additionally, the first
Grandia
was recently remade for modern consoles with the release of the
Grandia HD Collection
.
*Note: you cannot inspect monsters’ genders in battle. That was just a joke. Also there is no Grinder’s Gear in Grandia.
I’m Not Crying, You’re Crying!
A beautiful scene.
A bit of a personal story… The above screenshot is my favorite scene in all of Grandia. See, the game does a brilliant job of bringing us back to the days of youthful adventures where nothing at all was impossible, and despite whatever danger beset us, we knew deep down that in the end, we would be all right. But in the most subtle of ways, Grandia also covers personal growth and the passage of time.
At some point, deep into the adventure, 8-year-old Sue gets tired. At first, she temporarily leaves the party whilst recuperating at a local sick house, with everyone hoping (and the player confidently knowing) that she will get better. But… she doesn’t. She puts on a brave face and re-joins the party, going on one final quest. As the gamer, I kept looking for the herb or special item that I could find to cure her, but no such moment ever came. There never was any single wound or ailment that Sue suffered, it’s just that one day, she simply… got tired, and ultimately, had to leave the party. She was a trooper through the entire adventure; completely indispensable she was, but there was a sunset to her time on the grand adventure, and she ended up leaving far too soon for my liking.
In real life, this sometimes happens, too. People in our orbit — strong, vibrant people, whom we believe will be with us forever — sometimes, unexpectedly, undeservedly… get tired, and have to quit the great adventure. Sometimes they are even younger than us, or in better health than us, or benefitting from any number of other factors that make their leaving seem senseless and cruelly unfair. It’s a reminder of the finite nature of life, and that sometimes we are living oh so naively and innocently through what we will later call the best times of our lives.
Sometimes, we get a chance to say our goodbyes before they depart us, and this is something Justin and Feena were able to share with Sue. With tears in her eyes, even as she bade farewell, she wished for Justin to follow his dreams and complete his long quest to find Angelou. It’s this that ties all of these sentiments together, for me. We all get older. We all leave our childhood behind us and begin to lead our adult lives in earnest. Our carefree days of questing and playing our days away, confident that in the end, everything will be all right, are replaced by planning, worrying, pressure, stress, failure, and other harsh realities of life. Here, Sue reminds us of the importance of not forgetting our dreams. We may not have the time or the energy that we did then, but whatever the obstacles, we must always go boldly in the direction of our dreams, hand-in-hand with those who love us, for we, too, will one day exit the adventure. In our final moments, what sweeter satisfaction could there be than to warmly smile at those who walked with us, and to look back on our journey with pride.
This release is brought to you with almost 700 commits by the following individuals:
Aleksander Sabak, Andy Kluger, Cat Stevens, Dmitry Matveyev, Doug Coleman,
Giftpflanze, John Benediktsson, Jon Harper, Jonas Bernouli, Leo Mehraban, Mike
Stevenson, Nicholas Chandoke, Niklas Larsson, Rebecca Kelly, Samuel Tardieu,
Stefan Schmiedl,
@Bruno-366
,
@bobisageek
,
@coltsingleactionarmyocelot
,
@inivekin
,
@knottio
,
@timor
Besides some bug fixes and library improvements, I want to highlight the following changes:
Moved the UI to render buttons and scrollbars rather than using images, which
allows easier theming.
Fixed
HiDPI
scaling on Linux and
Windows, although it currently doesn’t update the window settings when
switching between screens with different scaling factors.
The
http
vocabulary
request
tuple had a slot rename from
post-data
to
data
.
The
furnace.asides
vocabulary had a slot rename from
post-data
to
data
, and might require running
ALTER TABLE asides RENAME COLUMN "post-data" TO data;
.
The
html.streams
vocabulary was renamed to
io.streams.html.
The
pdf.streams
vocabulary was renamed to
io.streams.pdf.
What is Factor
Factor is a
concatenative
, stack-based
programming language with
high-level
features
including dynamic types, extensible syntax, macros, and garbage
collection. On a practical side, Factor has a
full-featured
library
,
supports many different platforms, and has been extensively documented.
The implementation is
fully
compiled
for performance, while still supporting
interactive
development
.
Factor applications are portable between all common platforms. Factor
can
deploy stand-alone
applications
on
all platforms. Full source code for the Factor project is available
under a BSD license.
New libraries:
base92
: adding support for Base92 encoding/decoding
It has now been 30 years since WarCraft II: Tides of Darkness was released. After the great response to Warcraft: Orcs and Humans, released in November 1994, Blizzard began working on Warcraft II: Tides of Darkness. Development started in the first months of 1995, and the game was released in North America and Australia on December 9, 1995.
While WarCraft: Orcs and Humans had laid the foundations of the series — arguably even for the RTS genre as a whole — it was really WarCraft II that took things to new heights. More units could be selected at once, the player could right-click to issue commands, naval and aerial combat was introduced, and buildings and units could be upgraded. The graphics were more vivid and visually appealing, and features like the Fog of War were introduced, where you could only see in the vicinity of your own units — unlike in the first game, where you could indefinitely see any area you had previously visited, you now had to continuously scout the map.
Many things still resembled the first game. The two factions — the Humans and the Orcs — were balanced through their similarities. For every unit and building of one faction, the other had one that was functionally equivalent, and so the sides largely mirrored each other. The only real differences lay in the spells available to their higher-level units. In that regard, the clear winners were the Orcs, who had a tremendous advantage thanks to the incredibly powerful and unbalanced Bloodlust spell of the Ogre-Magi.
It is quite impressive that Blizzard managed to release a title of such quality in such a short span of time, especially considering that the overall design and gameplay evolved during development. Originally, Blizzard’s concept blended modern and fantasy elements, such as fighter pilots being ambushed by a fire-breathing dragon. In the Alpha version (it is still probably floating around somewhere on the Internet), which was given to magazines for review shortly before the game's release, players could, for example, mine rocks, which acted as an additional required resource.
Several versions and bundles of WarCraft II were released over the years:
WarCraft II: Tides of Darkness, originally written for DOS, though it had a Windows launch screen and ran well under Windows 95. A Macintosh version was also released. The DOS version supported multiplayer games via null modem cable, modem, or IPX, while Mac players could also play via TCP/IP or AppleTalk.
WarCraft II: Beyond the Dark Portal, the expansion, released in April 1996.
WarCraft: Battle Chest, released in 1996, was a bundle which included WarCraft: Orcs and Humans, WarCraft II: Tides of Darkness, and WarCraft II: Beyond the Dark Portal.
WarCraft II: The Dark Saga, released in 1997, was a port for the Sega Saturn and PlayStation consoles by Electronic Arts, including the campaigns from both Tides of Darkness and Beyond the Dark Portal.
WarCraft II: Battle.net Edition, released in 1999, ported the game's code to Microsoft Windows, fixed some minor bugs, and enabled multiplayer support via Blizzard's online service, Battle.net.
WarCraft II Battle Chest, released in 1999, included the Battle.net Edition and its official strategy guide.
WarCraft II: Remastered, released in November 2024, is a modern remaster of Tides of Darkness and Beyond the Dark Portal, with improved graphics and updated controls.
WarCraft II: Tides of Darkness received enthusiastic reviews, elevating Blizzard to the top ranks alongside Westwood Studios, id Software, and LucasArts. The rivalry between Blizzard's series and Westwood Studios' Command and Conquer series helped fuel the RTS boom of the late 1990s. PC Gamer US named WarCraft II the best game of 1995, calling it an "easy" choice and writing that "Warcraft II stand[s] out — way out — as the most impressive, most entertaining, game of 1995". The editors also awarded it Best Multi-Player Game of 1995.
WarCraft II was notable for the large number of third-party utilities created for it. Quickly,
Daniel Lemberg
reverse-engineered and published the map file (*.pud) format and wrote the first third-party map editor,
War2xEd
, which could do multiple things that the bundled map editor could not, such as editing unit attributes. Blizzard apparently began using
War2xEd
internally, and it influenced their decision to later ship a feature-rich map editor with StarCraft.
Next,
Alexander Cech
and
Daniel Lemberg
reverse-engineered the game data format, the WAR archives.
Alexander Cech
went on to make a hugely important tool called
Wardraft
, which allowed users to browse and modify the contents of the WAR archives. This enabled extensive game modifications, known as "Total Conversions". Many such projects gained popularity and remained in development for a long time, notable examples being
DeathCraft: Twilight of Demons
,
War of the Ring
,
Editor's Total Conversion
,
Funcraft
and
Rituals of Rebirth
.
Most of these utilities and conversions have long since faded into obscurity, but their legacy lives on. They impacted Blizzard's decision to bundle ever more powerful editors and trigger systems into StarCraft and later WarCraft III, which in turn later spawned entire games such as Dota (which began as a WarCraft III map). Hopefully, someday (soon?) we can host some of the Total Conversions here at
Jorvik Systems
.
As a personal anecdote, I vividly remember two defining moments related to the game. I was young when it came out, and my dad's friend had pirated it; somehow the game ended up on our computer. I was too young to speak English at the time, and the interface was confusing to me, so a relative helped me understand the basics — how to make peons construct buildings, how to control units, and how to navigate around the map. I hadn't played computer games much before then, but from that moment on, I was arguably obsessed.
A second strong memory came a few months later, at my friend
Erik
's house, on his Intel 486 PC. He was experimenting with the WarCraft II map editor, which I hadn't known existed, and I was blown away. I simply could not believe that Blizzard would ship such a tool with the game; to me, it meant that people could essentially create their own games by designing entirely new scenarios. It is quite possible that my fascination with modding was born in that very moment. We probably went outside to play shortly afterward, which I found incredibly lame — we had at our disposal the most powerful tool I could imagine, so why were we not inside using it?
EU opens investigation into Google’s use of online content for AI models
Guardian
www.theguardian.com
2025-12-09 08:48:06
European Commission to assess whether Gemini owner is putting rival companies at a disadvantageBusiness live – latest updatesThe EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from web publishers and YouTube for artificial...
The EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from web publishers and
YouTube
for artificial intelligence.
The European Commission said on Tuesday it will examine whether the US tech company, which runs the Gemini AI model and is owned by
Alphabet
, is putting rival AI owners at a “disadvantage”.
“The investigation will notably examine whether
Google
is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage,” the commission said.
It said it was concerned that Google may have used content from web publishers to generate AI-powered services on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content.
The commission said it was also concerned as to whether Google has used content uploaded to YouTube to train its own generative AI models without offering creators compensation or the possibility to refuse.
“Content creators uploading videos on YouTube have an obligation to grant Google permission to use their data for different purposes, including for training generative AI models,” the commission said.
Google does not pay YouTube content creators for their content, nor does it allow them to upload their content on YouTube without allowing Google to use such data, it said. The commission noted that rival developers of AI models are barred by YouTube policies from using YouTube content to train their own AI models.
def tests():
    assert subparts('^it$') == {'^', 'i', 't', '$', '^i', 'it', 't$', '^it', 'it$', '^it$'}
    assert subparts('this') == {'t', 'h', 'i', 's', 'th', 'hi', 'is', 'thi', 'his', 'this'}
    assert subparts('banana') == {'a', 'an', 'ana', 'anan', 'b', 'ba', 'ban', 'bana',
                                  'n', 'na', 'nan', 'nana'}

    assert dotify('it') == {'it', 'i.', '.t', '..'}
    assert dotify('^it$') == {'^it$', '^i.$', '^.t$', '^..$'}
    assert dotify('this') == {'this', 'thi.', 'th.s', 'th..', 't.is', 't.i.', 't..s', 't...',
                              '.his', '.hi.', '.h.s', '.h..', '..is', '..i.', '...s', '....'}

    assert regex_parts({'win'}, {'losers', 'bin', 'won'}) == {
        '^win$', '^win', '^wi.', 'wi.', 'wi', '^wi', 'win$', 'win', 'wi.$'}
    assert regex_parts({'win'}, {'bin', 'won', 'wine', 'wit'}) == {'^win$', 'win$'}
    assert regex_parts({'boy', 'coy'},
                       {'ahoy', 'toy', 'book', 'cook', 'boycott', 'cowboy', 'cod', 'buy',
                        'oy', 'foil', 'coyote'}) == {'^boy$', '^coy$', 'c.y$', 'coy$'}

    assert matches('a|b|c', {'a', 'b', 'c', 'd', 'e'}) == {'a', 'b', 'c'}
    assert matches('a|b|c', {'any', 'bee', 'succeed', 'dee', 'eee!'}) == {'any', 'bee', 'succeed'}

    assert OR(['a', 'b', 'c']) == 'a|b|c'
    assert OR(['a']) == 'a'

    assert words('this is a test this is') == {'this', 'is', 'a', 'test'}

    assert findregex({"ahahah", "ciao"}, {"ahaha", "bye"}) == 'a.$'
    assert findregex({"this", "that", "the other"}, {"one", "two", "here", "there"}) == 'h..$'
    assert findregex({'boy', 'coy', 'toy', 'joy'}, {'ahoy', 'buy', 'oy', 'foil'}) == '^.oy'

    assert not mistakes('a|b|c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy'})
    assert not mistakes('^a|^b|^c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy', 'kickback'})
    assert mistakes('^.oy', {'ahoy', 'boy', 'coy'}, {'joy', 'ploy'}) == {
        "Should have matched: ahoy",
        "Should not have matched: joy"}
    return 'tests pass'

tests()
Trump clears way for Nvidia to sell powerful AI chips to China
Guardian
www.theguardian.com
2025-12-09 08:29:08
Commerce department finalizing deal to allow H200 chips to be sold to China as strict Biden-era restrictions relaxed Donald Trump has cleared the way for Nvidia to begin selling its powerful AI computer chips to China, marking a win for the chip maker and its CEO Jensen Huang, who has spent months l...
Donald Trump has cleared the way for Nvidia to begin selling its powerful AI computer chips to China, marking
a win for the chip maker and its CEO Jensen Huang, who has spent months lobbying the White House to open up sales in the country.
Before Monday’s announcement, the US had prohibited sales of Nvidia’s most advanced chips to China over national security concerns.
“I have informed President Xi, of China, that the United States will allow NVIDIA to ship its H200 products to approved customers in China, and other Countries, under conditions that allow for continued strong National Security,” Trump
posted
to Truth Social on Monday. “President Xi responded positively!”
Trump said the Department of Commerce is finalising the details and that he was planning to make the same offer to other chip companies, including Advanced Micro Devices (AMD) and Intel. Nvidia’s H200 chips are the company’s second-most powerful, and far more advanced than the H20, a lower-powered model originally designed for the Chinese market so that it wouldn’t breach restrictions, but which the US banned anyway in April.
According to the Hill, Democratic senators Elizabeth Warren of Massachusetts and Andy Kim of New Jersey
sent a letter
to commerce secretary Howard Lutnick last week, outlining their concerns with selling these chips to China and saying it risked powering the country’s “surveillance, censorship, and military applications”.
“I urge you to stop ignoring the input of bipartisan members of Congress and your own experts in order to cut deals that trade away America’s national security,” the senators wrote.
On social media, Warren called for Huang to appear before Congress to testify under oath.
Huang has worked closely with Trump since the inauguration, and has made several trips to the White House. The CEO attended the president’s AI summit in July, met with Trump as recently as last week and was even a guest at the White House dinner for the Saudi crown prince Mohammed bin Salman. Huang has also
pledged to invest $500bn in AI infrastructure
in the US over the next four years.
Huang has also visited
China
several times, meeting with officials and Chinese tech executives, as US bans were variously lifted and reintroduced. Earlier this year, China imposed its own controls on the imports of Nvidia chips, with top tech firms reportedly instructed to cancel orders, citing national security concerns and confidence in China’s domestic chip development.
In October Huang said Nvidia has gone from having 95% of the Chinese market to having 0%, and called the bans a “strategic mistake”.
Now, selling chips to China – the world’s second-largest economy – could mean a windfall worth billions of dollars for Nvidia, which is already valued at $4.5tn.
“We applaud President Trump’s decision,” said an Nvidia spokesperson. He added that offering the H200 chips “to approved commercial customers, vetted by the Department of Commerce, strikes a thoughtful balance that is great for America”.
The Nvidia spokesperson and Trump said the move would support US jobs and manufacturing. In his Truth Social post, Trump condemned the Biden administration’s policies, which imposed strict export controls on powerful chips. The Biden administration had said withholding such technology from China bolstered US competition, protected national security and hampered AI development in China.
“That Era is OVER!” Trump wrote. “My Administration will always put America FIRST.”
On Tuesday afternoon China’s foreign ministry said it had noted the reports.
“China has always adhered to the principle that China and the United States can achieve mutual benefit and win-win results through cooperation,” the spokesperson said.
Ma Jihua, a telecom industry analyst, told state media outlet the Global Times that years of US curbs on AI exports had “provided a rare chance for China’s domestic chip industry to grow and catch up”.
As of midday today (GMT), New Year’s Eve, Longplayer has been playing continuously, without repetition, for 25 years.
Playing since the cusp of the new millennium, at midnight on 31 December 1999, the composition will continue without repetition (if circumstances permit it to) until the last moments of 2999, when it will return to the point at which it first began – and begin again.
We want to take this opportunity to thank you, our community of listeners, for helping to keep Longplayer playing. We also invite you to celebrate Longplayer’s 25th birthday by joining us in a year-long international programme of events, collaborations and initiatives reflecting the work’s unique articulation of time and its dimensions, and the investment in the long-term that Longplayer continues to inspire.
At present, Longplayer is being performed mostly by computers, and can be heard, playing continuously
online
(via live transmission and an iOS app) and in various locations around the world, including Yorkshire Sculpture Park and London’s only lighthouse, overlooking the Thames at Trinity Buoy Wharf.
From January 2025,
Longplayer will also be playing from a listening post on the rooftop of
La Casa Encendida
,
a cultural centre in Madrid.
Originally conceived as a way of articulating time and its dimensions, Longplayer has become much more than that. As a catalyst for creative engagement with long-term thinking, it connects an international community of listeners and custodians to futures (and increasingly, a past) beyond their own lifetimes. Over the last 25 years, Longplayer has provided a uniquely accessible and eloquent platform for projects and initiatives connected by their shared ambition to push beyond the reactive short-termism of our present age. These have included
conversations
between leading figures from culture, science and beyond, technological iteration, conceptual artworks,
musical performances and community gatherings
on a local, national and international scale.
Daisy Hildyard in conversation with Kate Briggs, The Longplayer Conversation 2024 at Swedenborg House. Photo credit:
Tarlan Lotfizadeh.
The engine of much of this activity has been experimentation with the Longplayer score itself. Despite its current reliance on computers, it has been sung, released on vinyl, encoded through light and beamed across the river, and realised live in rare durational performances.
The next of these will take place at the
Roundhouse on Saturday 5 April 2025,
when a 1000-minute section of its score, as written for that particular time and date, will be performed on a large orchestral instrument comprised of 234 singing bowls, arranged in six concentric rings and played by shifts of six to twelve people at any one time, reading from a graphic score. We would be delighted if you would join us for this rare performance, to celebrate Longplayer reaching a quarter century. For booking and information see
here
.
A community through time
As Longplayer completes its first quarter century, it has become a laboratory for urgent questions relating to stewardship, intergenerational connection, the future of music, and adaptability to technological and environmental change. At this milestone, we want to thank the thousands of listeners around the world who tune into Longplayer for purposes of pleasure, relaxation, reflection and utility.
Jem Finer, Longplayer’s composer, said:
‘Twenty-five years is really nothing in Longplayer's scheme of things though it's starting to feel more substantial to me. People have come and, sadly, people have gone, while some who were young children back in 2000 are now looking after Longplayer. This feels right, that a community through time is emerging, that where Longplayer once felt only future facing it's now accruing a past. I send great thanks to all those who have supported Longplayer and to the many people who have worked so inspiringly and generously to get it to this point. I hope we can all find some light and peace in the year ahead.'
The nature of time is one of the most profound and longstanding problems in physics – one that no one can agree on. From our perspective, time seems to steadily progress forward with each tick of the clock.
But the closer we look, the more bizarre time becomes – from equations that state time should flow as freely backwards as it does forwards, to the strange quantum realm where cause and effect can flip on their heads.
What makes time so confounding is that we have three very different ways of defining it, which don’t easily fit together.
The first definition comes from the equations that describe how things change over time.
We have many such equations describing everything from the motion of tennis balls to the decay of atomic nuclei. In all these equations, time is a quantity, referred to as 'coordinate time'. Time is no more than a mathematical label to which we can assign a particular value.
The second definition of time comes from Einstein's theories of relativity, where it's a dimension in addition to the three we're familiar with. It's a direction in four-dimensional spacetime.
Our picture of reality then becomes one in which all times – past, present and future – are equally real and co-exist, just as all points in space are equally real.
More than that: time has a deep connection with gravity according to General Relativity, where the shape of spacetime is influenced by gravity.
Much of the effort at the forefront of theoretical physics over the past half-century has been devoted to unifying General Relativity with the strange world of quantum mechanics.
Mathematical frameworks that attempt to do this are known as theories of quantum gravity.
But how do we reconcile these two notions of time – the quantum mechanical idea, in which time is a mere parameter, versus the relativistic idea that time is a dimension in spacetime?
I call this ‘the first problem of physical time’.
Time in quantum gravity
The reason it’s so difficult to reconcile quantum mechanics with General Relativity is that their mathematics are fundamentally incompatible.
Not only that, but quantum effects primarily govern very small scales such as subatomic particles, while gravity impacts much larger scales such as planets and galaxies, so trying to create an experiment where both scales are not only relevant, but can be accurately measured, has proved exceedingly difficult.
Early attempts at unifying a quantum description of reality with the 4D spacetime of General Relativity led John Wheeler and Bryce DeWitt to come up with an equation – the Wheeler-DeWitt equation – in 1967, in which time no longer appears at all.
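Written out schematically (in its standard textbook form, which is not given in this article), the equation is strikingly bare:

$$\hat{H}\,|\Psi\rangle = 0$$

where $\hat{H}$ is the Hamiltonian constraint for the whole Universe and $|\Psi\rangle$ is its quantum state. Unlike the familiar Schrödinger equation, $i\hbar\,\partial_t|\psi\rangle = \hat{H}|\psi\rangle$, there is no time variable $t$ anywhere in it.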
What they were attempting to describe is the quantum state of the entire Universe, independent of time. This, many physicists have suggested, means that time might just be an illusion.
But should we be so radical or dismissive about time? We’ve come a long way since then, so how does time enter current attempts to develop a theory of quantum gravity?
Here, things get very murky.
Some approaches still start from something like traditional coordinate time, but then add time again as part of a spacetime with more dimensions than the four we’re used to.
In other approaches, time emerges from more fundamental concepts about the Universe.
Time might even turn out to be ‘quantised’, meaning that if we were to zoom down to small enough scales, we would see both time and space as lumpy. So, we end up with quanta (atoms) of spacetime.
Combining quantum mechanics and General Relativity is all well and good, but there's one key mystery it doesn't address: why does time only seem to flow in one direction?
Superstring theory, which views the constituents of the Universe as vibrating strings rather than points in space, is an attempt to unify quantum mechanics and General Relativity, but requires a wholly different understanding of time - Image credit: Science Photo Library
This brings us to the third definition of time, stemming from thermodynamics, which describes the properties of large numbers of particles treated in terms of macro quantities like heat, temperature and pressure.
Here, time is neither a dimension nor a label, but a direction – pointing from the past to the future.
This is typically phrased as being in the direction of increasing entropy: our unwinding Universe, balls rolling downhill, ice cubes melting in a glass of water and so on.
However, despite all the irreversible processes we see around us, the fact is that, in all the fundamental equations of physics, reversing the direction of time doesn’t prevent the equations from working.
That is, time could point either way and we wouldn’t be able to tell the future from the past. Yet we see a clear difference between the past and the future.
This is ‘the second problem of physical time’. How do we reconcile the fact that our equations work just as well whichever way time is running with the irreversibility of time that we experience in the world?
For this, we might have to look towards the quantum domain and the strange phenomenon of entanglement.
Quantum objects like electrons or photons can have properties that are not fixed before they’re measured, such as location, momentum, energy or spin direction.
That is, they can exist in a 'quantum superposition' of having a range of values at once, such as being spread out in space or spinning in two directions at the same time.
Only when we choose to observe a property do we force the quantum system to decide on one of the many options of that property it was co-existing in.
But if, before our measurement, an electron interacts with a second one, then this second electron can be ‘infected’ by the superposition of the first. It’ll also find itself in a limbo state prior to measurement.
We say the two electrons are quantum entangled and we have to describe them as a single quantum entity.
Quantum entanglement (illustrated here) is a theory that links two particles across time and space. Changes to one particle will be reflected in the other - Image credit: Science Photo Library
The strange feature of entanglement is that observing just one of the two electrons also forces the second to snap into one of the available options in its superposition. This will happen at the same time, however far apart they are.
And it’s not even the entanglement between two electrons that needs to be considered. The entire Universe can become – indeed will inevitably become – quantum entangled with its surroundings.
In fact, we should stop thinking of quantum entanglement as some sort of bizarre phenomenon that only rarely happens in nature, or that it’s ‘spooky’, as Einstein once said.
Rather, it’s one of the most, if not the most prevalent process in the Universe. So, how can it help us demystify the nature of time?
In 1983, Don Page and William Wootters first suggested a link between time and quantum entanglement, rescuing time from the timeless Wheeler-DeWitt equation.
Imagine that some hypothetical quantum clock is entangled with its environment.
Instead of thinking of the clock being in a superposition of two locations in space, we can combine them into an entangled clock+environment system in a superposition of states at different times.
Now, when we measure the clock by reading the time, it forces the clock’s environment to snap into what it was doing at that time only.
So, what if we think of the overall state of the Universe, which might be timeless, as being composed of two parts: (1) a clock and (2) everything else?
For us, embedded within the ‘everything else’, perceiving a particular time amounts to measuring the clock at that time, so we perceive reality – the clock’s environment, aka the Universe – at that moment.
But, viewed from ‘outside’ the Universe, all times co-exist and there’s no ‘passage’ of time, as Wheeler and DeWitt argued.
Quantum causality
If quantum mechanics tells us that a system can be in a superposition of states at two different times, then this has an even more fascinating consequence when we consider the ordering of cause and effect.
That is, for something to occur, the cause must come before the effect.
Consider two events, A and B, such as flashes of light made by two sources in different places.
Cause and effect means there are three possibilities: 1) Flash A happened before flash B, and via some mechanism, could have triggered B; 2) Flash B happened before Flash A and could have triggered it; 3) Neither one could have triggered the other because they are too far apart in space and too close in time for a triggering signal to have been sent from one location to the other.
Entropy, the idea that the order of a system breaks down as time moves forwards, is perceived as being inevitable and irreversible. But our theories appear to suggest otherwise - Image credit: Science Photo Library
Now, Einstein’s Special Theory of Relativity states that all observers, no matter how fast they’re moving relative to each other, see light travelling at the same constant speed.
This strange but simple fact can lead to observers seeing events happening in different orders.
For option (3) above, two observers moving relative to each other close to the speed of light might disagree on the ordering of flashes.
Thankfully, there’s no danger of an effect coming before its cause (known as a ‘violation of causality’) since the events are too far apart for either to cause the other.
However, what if options (1) and (2) coexisted in a quantum superposition? The causal order of the two events would no longer be fixed.
They would exist in a combined state of Flash A happening before and triggering Flash B, and of B happening first. We see then that cause and effect can become blurred when we bring quantum mechanics and relativity together.
It gets even weirder when we introduce gravity via General Relativity.
Here’s an interesting thought experiment. Imagine two quantum entangled clocks, each in a superposition of different heights above Earth’s surface.
According to General Relativity, this would mean the two clocks tick at slightly different rates, due to the slight difference in the gravitational field.
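As a standard back-of-the-envelope estimate (weak gravitational field, small height difference $\Delta h$), the higher clock runs faster by a factor of roughly

$$1 + \frac{g\,\Delta h}{c^{2}}$$

which works out to about one part in $10^{16}$ per metre of height near Earth's surface: tiny, but measurable with modern atomic clocks.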
The superposition here is a combination of State 1 in which clock A is higher than clock B, and so ticking a little faster, and State 2 in which the clocks are swapped over.
Until this combined entangled state is measured by reading the time on one of the clocks, it’s not possible to determine the ordering of any events recorded by the two clocks.
And if we can’t determine which events are in the future and which are in the past, we arrive at the possibility of events acting backwards in time to cause events in their past.
If, at the quantum level, events in the past can be affected by events in the future, then all bets are off.
While some physicists argue that causality is sacred and must be preserved at all costs, others have argued in favour of the idea of retrocausality (the future affecting the past) and even of quantum time travel.
It may well be the case that even if we find our true theory of quantum gravity, time will turn out not to be one single concept, but rather a multi-faceted, complex thing.
Perhaps it really does retain its different properties depending on how we're using it: a dimension of spacetime, a coordinate to be measured against, and an irreversible arrow.
All of these are only meaningful in the approximate, zoomed-out way we subjectively perceive time. Maybe that’s the best we can hope for.
Or maybe, just maybe, we need to dig even deeper into the mysteries of time.
The Internet Engineering Task Force (IETF) is the standards body responsible for the TLS encryption standard — which your browser is using right now to allow you to read LWN.net. As part of its work to keep TLS secure, the IETF has been entertaining proposals to adopt "post-quantum" cryptography (that is, cryptography that is not known to be easily broken by a quantum computer) for TLS version 1.3. Discussion of the proposal has exposed a large disagreement between participants who worried about weakened security and others who worried about weakened marketability.
What is post-quantum cryptography?
In 1994, Peter Shor developed Shor's algorithm, which can use a quantum computer to factor large numbers asymptotically faster (i.e. faster by a proportion that grows as the size of the input does) than a classical computer can. This was a huge blow to the theoretical security of the then-common RSA public-key encryption algorithm, which depends on the factoring of numbers being hard in order to guarantee security. Later work extended Shor's algorithm to apply to other key-exchange algorithms, such as elliptic-curve Diffie-Hellman, the most common key-exchange algorithm on the modern internet. There are doubts that any attack using a quantum computer could actually be made practical — but given that the field of cryptography moves slowly, it could still be worth getting ahead of the curve.
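To see concretely why fast factoring is fatal to RSA, here is a deliberately tiny, insecure Python sketch (textbook numbers, no padding, purely illustrative): anyone who can split the public modulus into its prime factors can recompute the private key.

```python
# Toy RSA with textbook-sized numbers -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # secret, because computing it requires knowing p and q
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# An attacker who can factor n (trivial here, infeasible classically for
# 2048-bit moduli, but easy with Shor's algorithm on a large enough quantum
# computer) recovers the same private key:
attacker_phi = (61 - 1) * (53 - 1)
assert pow(e, -1, attacker_phi) == d
```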
Quantum computing is sometimes explained as trying all possible answers to a
problem at once, but that is incorrect.
If that were the case, quantum computers could trivially break any possible
encryption algorithm. Instead, quantum computers work by applying a limited set
of transformations to a quantum state that can be thought of as a
high-dimensional unit-length vector. The beauty of Shor's algorithm is that he showed how to use these extremely limited operations to reliably factor numbers.
The study of post-quantum cryptography is about finding an encryption mechanism
that none of the generalizations of Shor's algorithm or related quantum
algorithms apply to: finding encryption techniques where there is no known way
for a quantum computer to break them meaningfully faster than a classical computer can.
While attackers may not be breaking encryption with quantum computers today, the
worry is that they could use a "store now, decrypt later" attack to break
today's cryptography with the theoretically much more capable quantum computers
of tomorrow.
For TLS, the question is specifically how to make a
post-quantum key-exchange mechanism. When a TLS connection is established, the
server and client use public-key cryptography to agree on a shared encryption
key without leaking that key to any eavesdroppers. Then they can use that shared
key with (much less computationally expensive) symmetric encryption to secure the
rest of the connection. Current symmetric encryption schemes are almost
certainly not vulnerable to attack by quantum computers because of their
radically different design, so the only part of TLS's security that needs to be upgraded to avoid attacks from a quantum computer is the key-exchange mechanism.
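To make the key-exchange step concrete, here is a toy finite-field Diffie-Hellman sketch in Python (textbook parameters far too small to be secure, and not the elliptic-curve variant TLS actually uses); it shows how two parties can agree on a secret while exchanging only public values:

```python
import secrets

# Toy Diffie-Hellman -- textbook parameters, illustration only.
p = 23   # a tiny public prime; real deployments use much larger groups
g = 5    # public generator

a = secrets.randbelow(p - 2) + 1   # client's private value
b = secrets.randbelow(p - 2) + 1   # server's private value

A = pow(g, a, p)   # client -> server (sent in the clear)
B = pow(g, b, p)   # server -> client (sent in the clear)

# Both sides arrive at the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```

An eavesdropper sees only p, g, A, and B; recovering the secret from those is the discrete-logarithm problem, which is precisely the kind of problem Shor's algorithm makes easy for a quantum computer.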
Belt and suspenders
The problem, of course, is that trying to come up with novel, hard mathematical
problems that can be used as the basis of an encryption scheme does not always
work. Sometimes, cryptographers will pose a problem believing it to be
sufficiently hard, and then a mathematician will come along and discover a new
approach that makes attacking the problem feasible. That is exactly what happened to the SIKE protocol in 2022. Even when a cryptosystem is not completely broken, a particular implementation can still suffer from side-channel attacks or other problematic behaviors, as happened with post-quantum encryption standard Kyber/ML-KEM multiple times from its initial draft in 2017 to the present.
That's why, when the US National Institute of Standards and Technology (NIST) standardized Kyber/ML-KEM as its recommended post-quantum key-exchange mechanism in August 2024, it provided approved ways to combine a traditional key-exchange mechanism with a post-quantum key-exchange mechanism. When these algorithms are properly combined (which is not too difficult, although cryptographic implementations always require some care), the result is a hybrid scheme that remains secure so long as either one of its components remains secure.
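As a minimal sketch of this "belt and suspenders" idea (a simplified concatenate-and-hash combiner in Python, not the exact construction from the NIST or IETF documents), assuming both component exchanges have already produced their shared secrets:

```python
import hashlib

def combine_shared_secrets(classical_secret: bytes, pq_secret: bytes,
                           context: bytes = b"hybrid-kex-demo") -> bytes:
    """Derive one session key from both component secrets.

    An attacker must recover *both* inputs to learn the output: breaking
    only the classical exchange (say, with a quantum computer) or only
    the post-quantum one (say, via a new mathematical attack) is not enough.
    """
    return hashlib.sha256(classical_secret + pq_secret + context).digest()
```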
The Linux Foundation's Open Quantum Safe project, which provides open-source implementations of post-quantum cryptography, fully supports this kind of hybrid scheme. The IETF's initial draft recommendation in 2023 for how to use post-quantum cryptography in TLS specifically said that TLS should use this kind of hybrid approach:
The migration to [post-quantum cryptography] is unique in the history of modern digital cryptography in
that neither the traditional algorithms nor the post-quantum algorithms are
fully trusted to protect data for the required data lifetimes. The traditional
algorithms, such as RSA and elliptic curve, will fall to quantum cryptanalysis,
while the post-quantum algorithms face uncertainty about the underlying
mathematics, compliance issues (when certified implementations will be
commercially available), unknown vulnerabilities, hardware and software
implementations that have not had sufficient maturing time to rule out classical
cryptanalytic attacks and implementation bugs.
During the transition from traditional to post-quantum algorithms, there is a
desire or a requirement for protocols that use both algorithm types. The primary
goal of a hybrid key exchange mechanism is to facilitate the establishment of a
shared secret which remains secure as long as one of the component key
exchange mechanisms remains unbroken.
But the most recent draft from September 2025, which was ultimately adopted as a working-group document, relaxes that requirement, noting:
However, Pure PQC Key Exchange may be required for specific deployments with
regulatory or compliance mandates that necessitate the exclusive use of
post-quantum cryptography. Examples include sectors governed by stringent
cryptographic standards.
This refers to the US National Security Agency (NSA) requirements for products purchased by the US government. The requirements "will effectively deprecate the use of RSA, Diffie-Hellman (DH), and elliptic curve cryptography (ECDH and ECDSA) when mandated." The NSA has a history of publicly endorsing weak (plausibly already broken, internally) cryptography in order to make its job — monitoring internet communications — easier. If the draft were to become an internet standard, the fact that it optionally permits the use of non-hybrid post-quantum cryptography might make some people feel that such cryptography is safe, when that is not the current academic consensus.
There are other arguments for allowing non-hybrid post-quantum encryption — mostly boiling down to the implementation and performance costs of supporting a more complex scheme. But when Firefox, Chrome, and the Open Quantum Safe project all already support and use hybrid post-quantum encryption, that motivation didn't ring true for other IETF participants.
Some proponents of the change argued that supporting non-hybrid post-quantum
encryption would be simpler, since a non-hybrid encryption scheme would be
simpler than a hybrid one. Opponents said that was focusing on the wrong kind of
simplicity; adding another method of encryption to TLS makes implementations
more complex, not less. They also pointed to the cost of modern elliptic-curve
cryptography as being so much smaller than the cost of post-quantum cryptography
that using both would not have a major impact on the performance of TLS.
From substance to process
The disagreement came to a head when Sean Turner, one of the chairs of the IETF working group discussing the topic, declared in March 2025 that consensus had been reached and the proposal ought to move to the next phase of standardization: adoption as a working-group document. Once a draft document is adopted, it enters a phase of editing by the members of the working group to ensure that it is clearly written and technically accurate, before being sent to the Internet Engineering Steering Group (IESG) to possibly become an internet standard.
Turner's decision to adopt the draft came as a surprise to some of the participants in the discussion, such as Daniel J. Bernstein, who strongly disagreed with weakening the requirements for TLS 1.3 to allow non-hybrid key-exchange mechanisms and had repeatedly said as much. The IETF operates on a consensus model where, in theory, objections raised on the mailing list need to be responded to and either refuted or used to improve the standard under discussion.
In practice, the other 23 participants in the discussion
acknowledged the concerns of the six people who objected to the inclusion of non-hybrid
post-quantum key-exchange mechanisms in the standard. The group that wanted to
see the draft accepted just disagreed that
it was an important weakening in the face of regulatory and maintenance
concerns, and wanted to adopt the standard as written anyway.
From there, the discussion turned on the question of whether the working-group charter allowed for adopting a draft that reduced the security of TLS in this context. That question never reached a consensus either. After repeated appeals from Bernstein over the next several months, the IESG, which handles the IETF's internal policies and procedures, asked Paul Wouters and Deb Cooley, the IETF's area directors responsible for the TLS working group, whether Turner's declaration of consensus had been made correctly.
Wouters declared that Turner had made the right call, based on the state of the discussion at the time. He pointed out that while the draft permits TLS to use non-hybrid post-quantum key-exchange algorithms, it doesn't recommend them: the recommendation remains to use the hybrid versions where possible. He also noted that the many voices calling for adoption indicated that there was a market segment being served by the ability to use non-hybrid algorithms.
A few days after Wouters's response, on November 5, Turner called for last objections to adopting the draft as a working-group document. Employees of the NSA, the United Kingdom's Government Communications Headquarters (GCHQ), and Canada's Communications Security Establishment (CSEC) all wrote in with their support, as did employees of several companies working on US military contracts. Quynh Dang, an employee of NIST, also supported publication as a working-group document, while claiming not to represent NIST in this matter.
Among others, Stephen Farrell disagreed, calling for the standard to at least add language addressing the fact that security experts in the working group thought that the hybrid approach was more secure: "Absent that, I think producing an RFC based on this draft provides a misleading signal to the community."
As it stands now, the working group has adopted the draft that allows for non-hybrid post-quantum key-exchange mechanisms to be used in TLS. According to the IETF process, the draft will now be edited by the working-group members for clarity and technical accuracy, before being presented to the IESG for approval as an internet standard. At that point, companies wishing to sell their devices and applications to the US government will certainly enable the use of these less-secure mechanisms — and be able to truthfully advertise their products as meeting NIST, NSA, and IETF standards for security.
[ Thanks to Thomas Dalichow for bringing this topic to our attention. ]
‘I feel it’s a friend’: quarter of teenagers turn to AI chatbots for mental health support
Guardian
www.theguardian.com
2025-12-09 05:00:04
Experts warn of dangers as England and Wales study shows 13- to 17-year-olds consulting AI amid long waiting lists for services It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but “chat”, as she c...
It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but "chat", as she came to know her AI "friend", felt safer, less intimidating and, crucially, more available when it came to handling the trauma from the deaths of her young friends.
As she started consulting the AI model, the Tottenham teenager joined about 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to research among more than 11,000 young people.
It found that both victims and perpetrators of violence were markedly more likely to be using AI for such support than other teenagers. The findings, from the Youth Endowment Fund, have sparked warnings from youth leaders that children at risk “need a human not a bot”.
The results suggest chatbots are fulfilling demand unmet by conventional mental health services, which have long waiting lists and which some young users find lacking in empathy. The supposed privacy of the chatbot is another key factor in driving use by victims or perpetrators of crimes.
After her friends were killed Shan, 18, not her real name, started using Snapchat’s AI before switching to ChatGPT, which she can talk to at any time of day or night with two clicks on her smartphone.
“I feel like it definitely is a friend,” she said, adding that it was less intimidating, more private and less judgmental than her experience with conventional NHS and charity mental health support.
“The more you talk to it like a friend it will be talking to you like a friend back. If I say to chat ‘Hey bestie, I need some advice’. Chat will talk back to me like it’s my best friend, she’ll say, ‘Hey bestie, I got you girl’.”
One in four of 13- to 17-year-olds have used an AI chatbot for mental health support in the past year, with black children twice as likely as white children to have done so, the study found. Teenagers were more likely to go online for support, including using AI, if they were on a waiting list for treatment or diagnosis or had been denied, than if they were already receiving in-person support.
Crucially, Shan said, the AI was “accessible 24/7” and would not tell teachers or parents about what she had disclosed. She felt this was a considerable advantage over telling a school therapist, after her own experience of what she thought were confidences being shared with teachers and her mother.
Boys who were involved in gang activities felt safer asking chatbots for advice about other safer ways to make money than a teacher or parent who might leak the information to police or other gang members, putting them in danger, she said.
Another young person, who has been using AI for mental health support but asked not to be named, told the Guardian: “The current system is so broken for offering help for young people. Chatbots provide immediate answers. If you’re going to be on the waiting list for one to two years to get anything, or you can have an immediate answer within a few minutes … that’s where the desire to use AI comes from.”
Jon Yates, the chief executive of the Youth Endowment Fund, which commissioned the research, said: “Too many young people are struggling with their mental health and can’t get the support they need. It’s no surprise that some are turning to technology for help. We have to do better for our children, especially those most at risk. They need a human not a bot.”
There have been growing concerns about the dangers of chatbots when children engage with them at length. OpenAI, the US company behind ChatGPT, is facing several lawsuits, including from families of young people who have killed themselves after long engagements.
In the case of the Californian 16-year-old Adam Raine, who took his life in April, OpenAI has denied it was caused by the chatbot. It has said it has been improving its technology “to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support”. The startup said in September it could start contacting authorities in cases where users start talking seriously about suicide.
Hanna Jones, a youth violence and mental health researcher in London, said: “To have this tool that could tell you technically anything – it’s almost like a fairytale. You’ve got this magic book that can solve all your problems. That sounds incredible.”
But she is worried about the lack of regulation.
“People are using ChatGPT for mental health support, when it’s not designed for that,” she said. “What we need now is to increase regulations that are evidence-backed but also youth-led. This is not going to be solved by adults making decisions for young people. Young people need to be in the driving seat to make decisions around ChatGPT and mental health support that uses AI, because it’s so different to our world. We didn’t grow up with this. We can’t even imagine what it is to be a young person today.”
‘Don’t pander to the tech giants!’ How a youth movement for digital justice is spreading across Europe
Guardian
www.theguardian.com
2025-12-09 05:00:03
Gen Z are the first generation to have grown up with social media, they were the earliest adopters, and therefore the first to suffer its harms. Now they are fighting back Late one night in April 2020, towards the start of the Covid lockdowns, Shanley Clémot McLaren was scrolling on her phone when s...
Late one night in April 2020, towards the start of the Covid lockdowns, Shanley Clémot McLaren was scrolling on her phone when she noticed a Snapchat post by her 16-year-old sister. “She’s basically filming herself from her bed, and she’s like: ‘Guys you shouldn’t be doing this. These fisha accounts are really not OK. Girls, please protect yourselves.’ And I’m like: ‘What is fisha?’ I was 21, but I felt old,” she says.
She went into her sister’s bedroom, where her sibling showed her a Snapchat account named “fisha” plus the code of their Paris suburb. Fisha is French slang for publicly shaming someone – from the verb “afficher”, meaning to display or make public. The account contained intimate images of girls from her sister’s school and dozens of others, “along with the personal data of the victims – their names, phone numbers, addresses, everything to find them, everything to put them in danger”.
McLaren, her sister and their friends reported the account to Snapchat dozens of times, but received no response. Then they discovered there were fisha accounts for different suburbs, towns and cities across France and beyond. Faced with the impunity of the social media platforms, and their lack of moderation, they launched the hashtag #StopFisha.
It went viral, online and in the media. #StopFisha became a rallying cry, a safe space to share information and advice, a protest movement. Now it was the social media companies being shamed. “The wave became a counter-wave,” says McLaren, who is now 26. The French government got involved, and launched an online campaign on the dangers and legal consequences of fisha accounts. The social media companies began to moderate at last, and #StopFisha is now a “trusted flagger” with Snapchat and TikTok, so when they report fisha content, it is taken down within hours. “I realised that if you want change in your societies, if you come with your idea alone, it won’t work. You need support behind you.”
Shanley Clémot McLaren at the UN.
Photograph: Baz Ratner
Four years later, this strategy is playing out on an even larger scale. McLaren and other young activists across Europe are banding together against social media and its ruinous effects on their generation. Individually, young people are powerless to sway big tech, but they are also a substantial part of its business model – so, collectively, they are powerful.
This is the first generation to have grown up with social media: they were the earliest adopters of it, and therefore the first to suffer its harms. The array of problems is ever-expanding: misogynistic, hateful and disturbing content; addictive and skewed algorithms; invasion of privacy; online forums encouraging harmful behaviours; sextortion; screen addiction; deepfake pornography; misinformation and disinformation; radicalisation; surveillance; biased AI – the list goes on. As the use of social media has risen, there has been a corresponding increase in youth mental health problems, anxiety, depression, self-harm and even suicide.
“Across Europe, a generation is suffering through a silent crisis,” says a new report from People vs Big Tech – a coalition of more than 140 digital rights NGOs from around Europe – and Ctrl+Alt+Reclaim, their youth-led spin-off. A big factor is “the design and dominance of social media platforms”.
Ctrl+Alt+Reclaim, for people aged 15 to 29, came about in September last year when People vs Big Tech put out a call – on social media, paradoxically. About 20 young people who were already active on these issues came together at a “boot camp” in London. “We were really given the tools to create the movement that we wanted to build,” says McLaren, who attended with her partner. “They booked a big room, they brought the food, pencils, paper, everything we needed. And they were like: ‘This is your space, and we’re here to help.’”
The group is Europe’s first digital justice movement by and for young people. Their demands are very simple, or at least they ought to be: inclusion of young people in decision-making; a safer, healthier, more equitable social media environment; control and transparency over personal data and how it is used; and an end to the stranglehold a handful of US-based corporations have over social media and online spaces. The overarching principle is: “Nothing for us, without us.”
“This is not just us being angry; it’s us having the right to speak,” says McLaren, who is now a youth mobilisation lead for Ctrl+Alt+Reclaim. Debates over digital rights are already going on, of course, but, she says: “We find it really unfair that we’re not at the table. Young people have so much to say, and they’re real experts, because they have lived experience … So why aren’t they given the proper space?”
McLaren’s work with #StopFisha took her on a journey into a wider, murkier world of gender-based digital rights: misogynist trolling and sexism, cyberstalking, deepfake pornography – but she realised this was just one facet of the problem. What women were experiencing online, other groups were experiencing in their own ways.
A fellow activist, Yassine, 23, is well aware of this. Originally from north Africa and now living in Germany, Yassine identifies as non-binary. They fled to Europe to escape intolerance in their own country, but the reality of life, even in a supposedly liberal country such as Germany, hit them like a “slap”, they say. “You’re here for your safety, but then you’re trying to fight not only the system that is punishing the queerness of you, but you also have another layer of being a migrant. So you have two battles instead of one.”
‘The systems are patriarchal and racist by design’ … Yassine, who leads on digital rights at LGBTQ+ youth rights organisation IGLYO.
Photograph: IGLYO
As a migrant they are seen as a threat, Yassine says. “Our bodies and movements must be tracked, fingerprinted and surveilled through intrusive digital systems designed to protect the EU.” For queer people, there are similar challenges. These include “shadow-banning”, for example, by which tech platforms “silence conversations about queer rights, racism or anything that is challenging the dominant system”, either wilfully or algorithmically, through built-in biases.
Measures such as identity verification “are also putting a lot of people at risk of being erased from these spaces”, says Yassine. There can be good reasons for them, but they can also end up discriminating against non-binary or transgender people – who are often presented with binary gender options; male or female – as well as against refugees and undocumented people, who may be afraid or unable to submit their details online. Given their often tenuous residency status, and sometimes limited digital literacy and access, migrants tend not to speak out, Yassine says. “It definitely feels like you are in a position of: ‘You need to be grateful that you are here, and you should not question the laws.’ But the laws are harming my data.”
On a more day-to-day level, Yassine says, they must “walk through online spaces knowing they could do harm to me”. If they click on the comments under a social media post, for example, they know they are likely to find racist, homophobic or hateful attacks. Like McLaren, Yassine says that complaining is futile. “I know that they will come back with, ‘This is not a community guidelines breach’, and all of that.”
These are not mere glitches in the system, says Yassine, who now leads on digital rights at IGLYO, a long-running LGBTQ+ youth rights organisation, founded in Brussels, with a network of groups across Europe. “The systems we design inherit the very structures they arise from, so they inevitably become systems that are patriarchal and racist by design.”
Adele Zeynep Walton’s participation in Ctrl+Alt+Reclaim came through personal experience of online harm. In 2022, Walton’s 21-year-old sister, Aimee, took her own life. She had been struggling with her mental health, but had also been spending time on online suicide and self-harm forums, which Walton believes contributed to her death. After that, Walton began to question the digital realm she had grown up in, and her own screen addiction.
Walton’s parents made her first Facebook account when she was 10, she says. She has been on Instagram since she was 12. Her own feelings of body dysmorphia began when she was 13, sparked by pro-anorexia content her friends were sharing. “I became a consumer of that, then I got immersed in this world,” she says. “Generations like mine thought it was totally normal, having this everyday battle with this addictive thing, having this constant need for external validation. I thought those were things that were just wrong with me.”
Adele Zeynep Walton, who became a campaigner after the death of her sister, in the garden of her family home in Southampton.
Photograph: Peter Flude/The Guardian
In researching her book Logging Off: The Human Cost of our Digital World, Walton, 26, also became aware of how little control young people have over the content that is algorithmically served up to them. “We don’t really have any choice over what our feeds look like. Despite the fact there are things where you can say, ‘I don’t want to see this type of content’, within a week, you’re still seeing it again.”
Alycia Colijn, 29, another member of Ctrl+Alt+Reclaim, knows something about this. She studied data science and marketing analytics at university in Rotterdam, researching AI-driven algorithms – how they can be used to manipulate behaviour, and in whose interests. During her studies she began to think: “It’s weird that I’m trained to gather as much data as I can, and to build a model that can respond to or predict what people want to buy, but I’ve never had a conversation around ethics.” Now she is researching these issues as co-founder of Encode Europe, which advocates for human-centric AI. “I realised how much power these algorithms have over us; over our society, but also over our democracies,” she says. “Can we still speak of free will if the best psychologists in the world are building algorithms that make us addicted?”
The more she learned, the more concerned Colijn became. “We made social media into a social experiment,” she says. “It turned out to be the place where you could best gather personal data from individuals. Data turned into the new gold, and then tech bros became some of the most powerful people in the world, even though they aren’t necessarily known for caring about society.”
Social media companies have had ample opportunities to respond to these myriad harms, but invariably they have chosen not to. Just as McLaren found with Snapchat and the fisha accounts, hateful and racist content is still minimally moderated on platforms such as X, Instagram, Snapchat and YouTube. After Donald Trump’s re-election, Mark Zuckerberg stated at the start of this year that Meta would be reducing factcheckers across Facebook and Instagram, just as X has under Elon Musk. This has facilitated the free flow of misinformation. Meta, Amazon and Google were also among the companies announcing they were rolling back their diversity, equity and inclusion initiatives, post-Trump’s election. The shift towards the right politically, in the US and Europe, has inevitably affected these platforms’ tolerance of hateful and racist content, says Yassine. “People feel like now they have more rights to be harmful than rights to be protected.”
All the while, the tech CEOs have become more powerful, economically, politically and in terms of information control. “We don’t believe that power should be in those hands,” says Colijn. “That’s not a true democracy.”
Europe’s politicians aren’t doing much better. Having drafted the Digital Services Act in 2023, which threatened social media companies with fines or bans if they failed to regulate harmful content, the European Commission announced last month it would be rolling back some of its data privacy laws, to allow big tech companies to use people’s personal data for training AI systems.
“Big tech, combined with the AI innovators, say they are the growth of tomorrow’s economy, and that we have to trust them. I don’t think that’s true,” says Colijn. She also disagrees with their argument that regulation harms innovation. “The only thing deregulation fosters is harmful innovation. If we want responsible innovation, we need regulation in place.”
Walton agrees. “Governments and MPs are shooting themselves in the foot by pandering to tech giants, because that just tells young people that they don’t care about our future,” she says. “There’s this massive knowledge gap between the people who are making the decisions, and the tech justice movement and everyday people who are experiencing the harms.”
Ctrl+Alt+Reclaim is not calling for the wholesale destruction of social media. All these activists say they have found community, solidarity and joy in online spaces: “We’re fighting for these spaces to accommodate us,” says Yassine. “We’re not protesting to cancel them. We know how harmful they are, but they are still spaces where we have hope.”
‘The only thing deregulation fosters is harmful innovation’ … Alycia Colijn, co-founder of Encode.
Photograph: Henry Maathuis
Colijn echoes this. “Social media used to be a fun place with the promise of connecting the world,” she says. “That’s where we started.” And that’s what they want it to be again.
Will big tech pay attention? They might not have a choice, as countries and legislators begin to take action. This week Australia will become the first country to ban social media accounts for under-16s on major platforms including Snapchat, Instagram, TikTok and X. Last week, after a two-year deliberation, X was fined €120m (£105m) by the EU for breaching data laws. But these companies continue to platform content that is hateful, racist, harmful, misleading or inflammatory, with impunity.
Meanwhile, Ctrl+Alt+Reclaim is just getting started. Other discussions on the table include campaigning for an EU-funded social media platform, an alternative to the big tech oligopoly, created by and for the public. Another alternative is direct action, either protest or consumer activism such as coordinated boycotts. “I think it’s lazy for us to be like: we don’t have any power,” says Walton. “Because we could literally say that about anything: fast fashion, fossil fuels … OK, but how do we change things?”
The other alternative is simply to log off. “The other side of the coin to this movement of tech justice, and a sort of liberation from the harms that we’ve experienced over the past 20 years, is reducing our screen time,” says Walton. “It is spending more time in community. It is connecting with people who maybe you would have never spoken to on social media, because you’d be in different echo chambers.”
Almost all the activists in Ctrl+Alt+Reclaim attest to having had some form of screen addiction. As much as social media has brought them together, it has also led to much less face-to-face socialising. “I’ve had to sort of rewire my brain to get used to the awkwardness and get comfortable with being in a social setting and not knowing anyone,” says Walton. “Actually, it would be really nice to return to proper connection.”
We're excited to announce Optique 0.8.0! This release introduces powerful new features for building sophisticated CLI applications: the conditional() combinator for discriminated union patterns, the passThrough() parser for wrapper tools, and the new @optique/logtape package for seamless logging configuration.
Optique is a type-safe combinatorial CLI parser for TypeScript, providing a functional approach to building command-line interfaces with composable parsers and full type inference.
New conditional parsing with conditional()
Ever needed to enable different sets of options based on a discriminator value? The new conditional() combinator makes this pattern first-class. It creates discriminated unions where certain options only become valid when a specific discriminator value is selected.
Explicit discriminator option determines which branch is selected
Tuple result [discriminator, branchValue] for clear type narrowing
Optional default branch for when the discriminator is not provided
Clear error messages indicating which options are required for each discriminator value
The conditional() parser provides a more structured alternative to or() for discriminated union patterns. Use it when you have an explicit discriminator option that determines which set of options is valid.
Building wrapper CLI tools that need to forward unrecognized options to an underlying tool? The new passThrough() parser enables legitimate wrapper/proxy patterns by capturing unknown options without validation errors.
The new @optique/logtape package provides seamless integration with LogTape, enabling you to configure logging through command-line arguments with various parsing strategies.
Fixed an issue where the integer() value parser rejected negative integers when using type: "number". The regex pattern has been updated from /^\d+$/ to /^-?\d+$/ to correctly handle values like -42. Note that type: "bigint" already accepted negative integers, so this change brings consistency between the two types.
Optique 0.8.0 continues our focus on making CLI development more expressive and type-safe. The conditional() combinator brings discriminated union patterns to the forefront, passThrough() enables new wrapper tool use cases, and the LogTape integration makes logging configuration a breeze.
As always, all new features maintain full backward compatibility—your existing parsers continue to work unchanged.
We're grateful to the community for feedback and suggestions. If you have ideas for future improvements or encounter any issues, please let us know through GitHub Issues. For more information about Optique and its features, visit the documentation or check out the full changelog.
(Toshiba) Wireless cassette player with Bluetooth, so you can enjoy cassettes with wireless earphones. Equipped with virtual surround sound for realistic listening. It can play cassette tapes for about 16 hours on 2 AA alkaline batteries, and it can also be powered and played from a USB port. Weight 230g. Sold primarily in Japan.
Bring out the soundtrack of past memories on Your cherished cassettes. FM/AM Radio playback. Voice Activation System. Automatic Stop System. 2AA Battery or USB power supply. KCS-315
The personal cassette player looks the part with its retro silver casing and comes complete with earphones for your private listening. With its Bluetooth function, it can transmit the music to other Bluetooth receivers so everyone can enjoy it.
Achieving ultra-low Wow and Flutter. Oversized pure copper flywheel. 100% pure analog sound & custom balanced amplification head. Classic audiophile op-amp JRC5532. High voltage motor power supply. Dual-color all-aluminum alloy chassis and a long-lasting 13 hours of battery life.
Battery powered and with built in speakers, just plug in your cassette and you're ready to go. The portable cassette player was an iconic piece of kit for music fans in the 80s and 90s. Play tapes or use the FM radio and listen through your headphones.
It is the world’s first cassette player with Bluetooth 5.0 capability that not only supports traditional 3.5mm headphones but is also compatible with Bluetooth 5.0 headphones or speakers. Whether you are alone or in an open space, you can freely enjoy the penetrating voice and warm sound from the cassette tape.
(+) Nice translucent design. Bluetooth connection. Built-in microphone.
(-) No autoreverse or any other convenience function. No headphones included.
Jensen Portable Compact Lightweight Slim Design Stereo, AM/FM Radio Cassette Player.
Pop in that favorite cassette or relive the magic of the mixed tape with Jensen's Portable AM/FM Stereo Cassette Player. When you're feeling more like the radio, tune into the AM or FM dial. You can also get up-to-the-minute weather info with local Weather Band broadcasts. And, in the name of keeping things economical, just two 'AA' batteries keep it up and running for hours on end.
It supports Bluetooth v5.4, which provides high communication quality and low power consumption. A brass flywheel reduces rotational irregularities and provides high-quality sound. Built-in rechargeable battery; playback time is around 9 hours. Weight 210g.
Affordable modern portable cassette tape player and recorder with 2-track, stereo playback. Good sound quality; plays all (Type I-IV) cassettes. Frequency response: 40Hz-11kHz (Type I), signal-to-noise ratio: 50dB, distortion: 1%, wow & flutter: 0.3%, headphone output power: 2x2 mW into 32 ohms.
Entry level portable cassette player. F116/F113
As you will have understood, this cassette player is the best of the best! The "crème de la crème" as they say in French. An object that is both cult and essential for any self-respecting music lover.
IDF Chief Says Ceasefire Line Is a ‘New Border,’ Suggesting Goal To Annex More Than Half of Gaza
Portside
portside.org
2025-12-09 04:29:48
IDF Chief Says Ceasefire Line Is a ‘New Border,’ Suggesting Goal To Annex More Than Half of Gaza
Mark Brody
Mon, 12/08/2025 - 23:29
...
A general view of a concrete block marking the “yellow line” drawn by the Israeli military in Bureij, central Gaza Strip, on November 4, 2025. | (Photo by Bashar Taleb/AFP)
The top-ranking officer in the Israel Defense Forces suggested that Israel may plan to permanently take over more than half of Gaza, which it currently occupies as part of a temporary arrangement under the latest “ceasefire” agreement.
That agreement, signed in early October, required Israel to withdraw its forces behind a so-called “yellow line” as part of the first phase, which left it occupying over half of the territory on its side. Gaza’s nearly 2 million inhabitants, meanwhile, are crammed into a territory of about 60 square miles—the vast majority of them displaced and living in makeshift structures.
The deal Israel agreed to in principle says this is only a temporary arrangement. Later phases would require Israel to eventually pull back entirely, returning control to an “International Stabilization Force” and eventually to Palestinians, with only a security buffer zone between the territories under Israel’s direct control.
But on Sunday, as he spoke to troops in Gaza, IDF Chief of Staff Lt. Gen. Eyal Zamir described the yellow line not as a temporary fixture of the ceasefire agreement, but as “a new border line” between Israel and Gaza.
Zamir stated that Israel has “operational control over extensive parts of the Gaza Strip and we will remain on those defense lines,” adding that “the yellow line is a new border line—serving as a forward defensive line for our communities and a line of operational activity.”
The IDF chief did not elaborate further on what he meant, but many interpreted the comments as a direct affront to the core of the ceasefire agreement.
“The Israeli chief of staff said today that the yellow line in Gaza is the new border between Israel and Gaza,” said Dr. Mustafa Barghouti, who serves as general secretary of the Palestinian National Initiative, a political party in the West Bank. He said it “indicates dangerous Israeli intentions of annexing 53% of the little Gaza Strip, and to prevent reconstruction of what Israel destroyed in Gaza.”
Zamir’s statement notably comes shortly after a report from the Euro-Mediterranean Human Rights Monitor last week provided new details on a US-led proposal to resettle tens of thousands of Palestinians at a time into densely packed “cities” of prefabricated container homes on the Israeli-controlled side of the yellow line that they would not be allowed to leave without consent from Israel. The group likened the plan to “the historical model of ghettos.”
The statement also notably came on the same day that Prime Minister Benjamin Netanyahu told German Chancellor Friedrich Merz at a joint press conference that Israel’s annexation of the West Bank “remains a subject to be discussed.” This year has seen a historic surge of violence by Israeli settlers in the illegally occupied territory, which ramped up following the ceasefire.
Israel has already been accused by Gaza authorities of violating the ceasefire several hundred times by routinely launching strikes in Gaza. On Saturday, the UN reported that at least 360 Palestinians have been killed since the truce went into effect on October 10, and that 70 of them have been children.
The IDF often claims that those killed have been Palestinians who crossed the yellow line. As Haaretz reported last week: “In many cases, the line Israel drew on the maps is not marked on the ground. The IDF’s response policy is clear: Anyone who approaches the forbidden area is shot immediately, even when they are children.”
On Sunday, Al Jazeera and the Times of Israel reported, citing local medics, that Israeli forces had shot a 3-year-old girl, later identified as Ahed al-Bayok, in southern Gaza’s coastal area of Mawasi, near Khan Younis. The shooting took place on the Hamas-controlled side of the yellow line.
Within the same hour on Sunday, the IDF posted a statement on social media: “IDF troops operating in southern Gaza identified a terrorist who crossed the yellow line and approached the troops, posing an immediate threat to them. Following the identification, the troops eliminated the terrorist.” It remains unconfirmed whether that statement referred to al-Bayok, though the IDF has used similar language to describe the shootings of an 8- and 11-year-old child.
Until recently, Israel has also refused to allow for the opening of the Rafah Crossing, the most significant entry point for desperately needed humanitarian aid, which has been required to enter the strip “without interference” as part of the ceasefire agreement.
Israel agreed to open the crossing last week, but only to facilitate the exit of Palestinians from Gaza. In response, eight Arab governments expressed their “complete rejection of any attempts to displace the Palestinian people from their land.”
Zamir’s comments come as the ceasefire limps into its second phase, where US President Donald Trump and Israeli Prime Minister Benjamin Netanyahu will push for the full demilitarization of Hamas, which Israel has said would be a precondition for its complete withdrawal from Gaza.
“Now we are at the critical moment,” said Qatari Premier and Foreign Minister Sheikh Mohammed bin Abdulrahman Al Thani, at a conference in Doha on Saturday. “A ceasefire cannot be completed unless there is a full withdrawal of the Israeli forces and there is stability back in Gaza.”
Stephen Prager is a staff writer for Common Dreams.
Metacode: The new standard for machine-readable comments for Python
Lobsters
github.com
2025-12-09 03:12:55
In the Python ecosystem, there are many tools dealing with source code: linters, test coverage collection systems, and many others. Many of them use special comments, and as a rule, the style of these comments is very similar.
But you know what? There is no single standard for such comments. Serious...
Many source code analysis tools use specially formatted comments to mark up the code. This is an important part of the Python ecosystem, but there is still no single standard for it. This library offers such a standard.
In the Python ecosystem, there are many tools dealing with source code: linters, test coverage collection systems, and many others. Many of them use special comments, and as a rule, the style of these comments is very similar. Here are some examples:
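Typical examples of such comments (from mypy, Flake8/Ruff, coverage.py, and the Black/Ruff formatter, respectively):

# type: ignore
# noqa: E501
# pragma: no cover
# fmt: off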
But you know what?
There is no single standard for such comments
. Seriously.
The internal implementation of reading such comments also differs from tool to tool. Some use regular expressions, some use even more primitive string processing, and some use full-fledged parsers, whether the Python parser itself or one written from scratch.
As a result, as a user you need to remember the comment rules of each specific tool, and you can't even be sure that things like double comments (two comments for different tools on one line of code) will work at all. As the creator of such a tool, you take on a seemingly simple task, just reading a comment, and discover that it is surprisingly difficult and full of possible mistakes.
This is exactly the problem that this library solves. It describes a simple and intuitive standard for action comments, and also offers a ready-made parser that creators of other tools can use. The standard offered by this library is based entirely on a subset of the Python syntax and can be easily reimplemented even if you do not want to use this library directly.
The language
So, this library offers a language for action comments. Its syntax is a subset of Python syntax, but without Python semantics, as full-fledged execution does not occur. The purpose of the language is simply to provide the developer with the content of the comment in a convenient way, if it is written in a compatible format. If the comment format is not compatible with the parser, it is ignored.
From the point of view of the language, any meaningful comment can consist of 3 elements:
Key
. This is usually the name of the specific tool for which this comment is intended, but in some cases it may be something else. This can be any string allowed as an
identifier
in Python.
Action
. The short name of the action that you want to link to this line. Also, only the allowed Python identifier.
List of arguments
. These are often identifiers of specific linting rules, or other arguments associated with this action. The list of possible data types is described below.
Consider a comment designed to ignore a specific mypy rule:
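# type: ignore[error-code]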
↑ The key here is the word
type
, that is, what you see before the colon. The action is the
ignore
word, that is, what comes before the square brackets, but after the colon. Finally, the list of arguments is what is in square brackets, in this case, there is only one argument in it:
error-code
.
Simplified writing is also possible, without a list of arguments:
# type: ignore
└-key-┘└action┘
↑ In this case, the parser assumes that there is an argument list, but it is empty.
The number of arguments in the list is unlimited, they can be separated by commas. Here are the valid data types for arguments:
Two valid Python identifiers, separated by the
-
symbol, like this:
error-code
. There can also be any number of spaces between them; they are ignored. The whole construct is interpreted as a single string.
Any other Python-compatible code. This is disabled by default, but you can enable a mode that reads such code and returns each such fragment as an AST object, which you can then process yourself.
The syntax of all these data types is identical to the original Python syntax (except that you can't use multi-line forms). Over time, the syntax of metacode may be extended, but this template will always be supported.
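For example, a single comment can carry several such arguments at once, separated by commas (the tool name here is hypothetical):

# mytool: disable[first-rule, second-rule, third-rule]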
There can be several comments in the
metacode
format. In this case, they should be interspersed with the
#
symbol, as if each subsequent comment were a comment on the previous one. You can also add regular text comments; they will simply be ignored by the parser if they are not in
metacode
format:
# type: ignore # <- This is a comment for mypy! # fmt: off # <- And this is a comment for Ruff!
If you scroll back up to the examples of action comments from various tools, you may notice that the syntax of most of them (though not all) can be described using metacode, and if not, it can easily be adapted to metacode. Read on to learn how to use the ready-made parser in practice.
Installation
Install it:
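pip install metacode  # assuming the package is published under the same name as the project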
You can also quickly try out this and other packages without having to install using
instld
.
Usage
The parser offered by this library is just one function that is imported like this:
from metacode import parse
To use it, you need to extract the text of the comment yourself in some way (preferably, though not necessarily, without the # symbol at the beginning) and pass it as the first argument; the expected key is passed as the second argument. As a result, you will receive a list of the contents of all the comments that were parsed:
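from metacode import parse

# A minimal sketch rather than the library's original example: the line below carries
# a mypy-style and a Ruff-style comment, and we ask only for the "type" key.
comments = parse("type: ignore[union-attr] # fmt: off", "type")
print(comments)  # a list describing every parsed comment whose key is "type"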
↑ Note that you pass a key, which means the result is returned filtered by that key. This way you read only those comments that relate to your tool, ignoring the rest.
By default, an argument in a comment must be of one of the strictly allowed types. However, you can enable the reading of arbitrary other types, in which case they are returned as AST nodes. To do this, pass
allow_ast=True
:
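from metacode import parse

# A sketch; the tool name and the argument are hypothetical. With allow_ast=True,
# an argument that is arbitrary Python code is returned as an ast.AST node.
comments = parse("mytool: check[len(line) > 80]", "mytool", allow_ast=True)
print(comments)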
↑ If you do not pass allow_ast=True and the parser encounters such an argument, a metacode.errors.UnknownArgumentTypeError exception will be raised. When processing an argument yourself, you can also raise this exception for an AST node in a format that your tool does not expect.
⚠️
Be careful when writing code that analyzes the AST. Different versions of the Python interpreter can generate different ASTs from the same code, so don't forget to test your code thoroughly (for example, using a CI matrix or tox). Otherwise, it is better to stick to the standard
metacode
argument types.
You can allow your users to write keys in any case. To do this, pass
ignore_case=True
:
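from metacode import parse

# A sketch: with ignore_case=True, "TYPE" and "type" are treated as the same key.
comments = parse("TYPE: ignore", "type", ignore_case=True)
print(comments)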
Prediction: AI will make formal verification go mainstream
Simon Willison
simonwillison.net
2025-12-09 03:11:19
Prediction: AI will make formal verification go mainstream
Martin Kleppmann makes the case for formal verification languages (things like Dafny, Nagini, and Verus) to finally start achieving more mainstream usage. Code generated by LLMs can benefit enormously from more robust verification, and LLMs ...
Prediction: AI will make formal verification go mainstream
(
via
) Martin Kleppmann makes the case for formal verification languages (things like
Dafny
,
Nagini
, and
Verus
) to finally start achieving more mainstream usage. Code generated by LLMs can benefit enormously from more robust verification, and LLMs themselves make these notoriously difficult systems easier to work with.
Palantir: the world’s ‘scariest company’? – podcast
Guardian
www.theguardian.com
2025-12-09 03:00:55
How far will tech firm Palantir go to ‘save the West’? With Michael Steinberger and Johana Bhuiyan Why do some consider Palantir the world’s ‘scariest company’ and who is its chief executive, Alex Karp? Michael Steinberger, the author of The Philosopher in the Valley: Alex Karp, Palantir and the Ris...
Why do some consider
Palantir
the world’s ‘scariest company’ and who is its chief executive, Alex Karp?
Michael Steinberger
, the author of The Philosopher in the Valley: Alex Karp, Palantir and the Rise of the Surveillance State, describes Karp’s origin story to
Nosheen Iqbal
and the way that his political positions have changed over the years. The pair also discuss how Palantir was established as a company, the services that it offers, its close relationship to the US military and how Karp has been navigating the second Trump presidency.
Johana Bhuiyan
, a senior tech reporter and editor at Guardian US, outlines what we know about Palantir’s relationship with Immigration and Customs Enforcement (ICE) in the US and NHS data in the United Kingdom.
DECEMBER 10 IS THE 60TH ANNIVERSARY
of the day that 13-year-old junior high student Mary Beth Tinker and some of her schoolmates in Des Moines, Iowa, decided to show their opposition to the constantly escalating U.S. war against Vietnam. They decided to protest by wearing black armbands to school.
Several days later, more than two dozen armband-wearing students showed up at several Des Moines schools. None carried signs or made speeches or caused any disruption, but school administrators declared that silently wearing a black armband as a means of expression was a violation of discipline. They singled out five students, including Tinker, and suspended them.
That was the beginning of a 4-year legal struggle over the First Amendment rights of public school students. With the help of the American Civil Liberties Union, the Des Moines Five took their case all the way to the Supreme Court, where they established the precedent that, as the high court put it, neither students nor teachers "shed their constitutional rights to freedom of speech or expression at the schoolhouse gate."
Today, more than 55 years later, the Supreme Court’s decision that “In our system, state-operated schools may not be enclaves of totalitarianism,” remains the law of the land. Of course, the Supreme Court has recently made a regular practice of throwing precedents out, but one can always hope. For much more information about the case that established students’ rights, visit
https://www.zinnedproject.org/news/tdih/constitutional-rights-of-students/
A Message from a Great American
DECEMBER 11 IS THE 90TH ANNIVERSARY
of New York City Parks Commissioner Robert Moses’ decision to remind the thousands of Parks Department workers and the additional thousands of federal Works Project Administration workers on the city payroll that their continued employment depended on the continued good-will of their boss.
Commissioner Moses arranged to have hundreds of copies of large placards printed and posted at every Parks Department work site bearing a handsome portrait of Abraham Lincoln over this headline: "A Message to Park Workers from a Great American”.
Under the headline appeared this text: “The habits of our whole species fall into three great classes, useful labor, useless labor and idleness. Of these, the first only is meritorious and to it all the products of labor rightfully belong; but the two latter, while they exist, are heavy pensioners upon the first, robbing it of a large portion of its just rights. The only remedy for this is to, so far as possible, drive useless labor and idleness out of existence."
https://www.nytimes.com/1935/12/11/archives/lincoln-message-to-spur-wpa-men-park-bureau-placards-quote-attack.html
A Pandemic Meets Its Match
DECEMBER 14 IS THE FIFTH ANNIVERSARY
of the first non-experimental use of a Covid-19 vaccine in the U.S. The first U.S. vaccine recipient was Sandra Lindsay, an African-American nurse who was head of critical-care nursing at a large hospital in Queens, New York. The choice of Lindsay was fitting, because critical-care health-care workers were one of the pandemic’s worst-hit occupational groups, as were African-Americans in general.
By the time Lindsay got her jab, more than 15 million people in the U.S. had been sickened by Covid-19 and at least three hundred thousand had died. Of course, at first the vaccine could only retard the rate of increase in Covid-19 cases and deaths. It was estimated that during the first 26 months of the vaccine’s availability, its use prevented 120 million infections, 18.5 million hospitalizations and 3.2 million deaths in the U.S.
https://annalsofglobalhealth.org/articles/10.5334/aogh.4484
The Death of an Unarmed Prisoner
DECEMBER 15 IS THE 135TH ANNIVERSARY
of the death of Hunkpapa Lakota leader Sitting Bull, who was killed by a federal law enforcement official who was attempting to take Sitting Bull, who was unarmed, into custody.
Sitting Bull was one of the most successful defenders of the rights of Native Americans both by military means and otherwise. He had been one of the leaders of the most successful (for the Native Americans) extended period of open warfare between Native Americans and an experienced, fully-manned, well-equipped, U.S. Army occupation force in the Great Plains.
That period, which U.S. forces called Red Cloud’s War, ended when the U.S. government sued for peace after nearly two years of intermittent heavy fighting. The Army’s suing for peace was, for practical purposes, the Army’s admission of defeat. The treaty that ended “Red Cloud’s War” was a capitulation to the demands the Native Americans had been fighting for; it is said to have been the worst defeat ever for the U.S. Army fighting in North America, with the exception of Confederate victories during the Civil War.
The 1890 incident in which Sitting Bull was shot to death was, at best, according to the U.S. government’s own testimony, a case of manslaughter committed by a U.S. official, and could reasonably be described as a case of premeditated murder that went awry. What occurred, according to U.S. officials, was that a large party of federal lawmen arrived, unannounced, to arrest Sitting Bull at his cabin in Grand River, South Dakota, but without an essential piece of equipment: a wheeled vehicle to carry Sitting Bull to police headquarters. The arrest party that neglected to bring a wagon knew that the only way to transfer Sitting Bull to jail would be to force an uncooperative Sitting Bull onto the back of a horse.
During the noisy effort to force Sitting Bull to mount a horse, a crowd of his Sioux neighbors gathered to demand the release of their leader. Eventually a member of the crowd fired a shot at the leader of the arresting party. As soon as the shot hit the arresting officer, he turned, aimed at the unarmed Sitting Bull, and fired the shot that killed him. The arresting officer never explained why he shot Sitting Bull, but it clearly could not have been in self-defense.
DECEMBER 16 IS THE FIFTH ANNIVERSARY
of Major League Baseball’s announcement that it would henceforth consider the Negro Leagues to have been Major Leagues just like the American and National Leagues, and would meld statistics of Negro League players into Major League statistics.
Three-and-a-half years later, when the statistics had been combined, many new names had been added to the record books, including Josh Gibson, Oscar Charleston, Satchel Paige and Charlie “Chino” Smith. Slugger Gibson, who died before the first Black player joined the National League in 1947, not only received official recognition, but he had the highest career batting average (.372), the highest career slugging percentage (.718) and the highest career on-base-plus-slugging percentage (1.177).
https://missouriindependent.com/2024/05/29/america-at-her-best-negro-leagues-museum-president-says-stat-recognition-is-bigger-than-baseball
Nearly 70% of people said they believe the American dream no longer holds true or never did, the highest level in nearly 15 years of surveys.— Lindsay Ellis and Aaron Zitner, “Americans’ Hopes for Getting Ahead Dim,”
Wall Street Journal
, September 2, 2025.
A research team found an enormous decline from the early 1970s, when the incomes of nearly all offspring outpaced their parents.—Bob Davis, “Barely Half of 30-Year-Olds Earn More Than Their Parents,”
Wall Street Journal
, December 8, 2016.
Americans still think of their land as a place of exceptional opportunity—in contrast to class-bound Europe—the evidence suggests otherwise.—David Wessel, “As Rich-Poor Gap Widens in the U.S., Class Mobility Stalls,”
Wall Street Journal
, May 13, 2005.
If Americans’ hopes of getting ahead have dimmed, as the
Wall Street Journal
reports yet again, it could only be because the lid of the coffin in which the “American Dream” was long ago laid to rest has finally been sealed shut.
The promise that if you work hard and play by the rules, you will get ahead, or if you don’t, surely your children will, was broken long ago. And today’s economic hardships have left young adults distinctly worse off than their parents, and especially their grandparents.
This long decline has stripped away much of what there was of U.S. social mobility, which never did measure up to its mythic renderings. Let’s look closely at what the economic evidence, compiled in many meticulous studies, tells us about what passed for the American Dream, its demise, and what it would take to make its promised social mobility a reality.
The Long Decline
For at least two decades now, the
Wall Street Journal
has reported the dimming prospects of Americans getting ahead, each time with apparent surprise. In 2005, David Wessel presented the mounting evidence that had punctured the myth that social mobility is what distinguishes the United States from other advanced capitalist societies. A study conducted by economist Miles Corak put the lie to that claim. Corak found that the United States and United Kingdom were “the least mobile” societies among the rich countries he studied. In those two countries, children’s income increased the least from that of their parents. By that measure, social mobility in Germany was 1.5 times greater than social mobility in the United States; Canadian social mobility was almost 2.5 times greater than U.S. social mobility; and in Denmark, social mobility was three times greater than in the United States.
That U.S. social mobility lagged far behind the myth of America as a land of opportunity was probably no surprise to those who populated the work-a-day world of the U.S. economy in 2005. Corrected for inflation, the weekly wages of nonsupervisory workers in 2006 stood at just 85% of what they had been in 1973, over three decades earlier. An unrelenting increase in inequality had plagued the U.S. economy since the late 1970s. A Brookings Institution study of economic mobility published in 2007 reported that from 1979 to 2004, corrected for inflation, the after-tax income of the richest 1% of households increased 176% and increased 69% for the top one-fifth of households—but just 9% for the poorest fifth of households.
The Economist
also found this increasing inequality worrisome. But its 2006 article, “Inequality and the American Dream,” assured readers that while greater inequality lengthens the ladder that spans the distance from poor to rich, it was “fine” if it had “rungs.” That is, widening inequality can be tolerated as long as “everybody has an opportunity to climb up through the system.”
Definitive proof that increasing U.S. inequality had not provided the rungs necessary to sustain social mobility came a decade later.
The American Dream Is Down for the Count
In late 2016, economist Raj Chetty and his multiple coauthors published their study, “The Fading American Dream: Trends in Absolute Income Mobility Since 1940.” They documented a sharp decline in mobility in the U.S. economy over nearly half a century. In 1970, the household income (corrected for inflation) of 92% of 30-year-olds (born in 1940) exceeded their parents’ income at the same age. By 1990, just three-fifths (60.1%) of 30-year-olds (born in 1960) lived in households with more income than their parents earned at age 30. By 2014, that figure had dropped to barely one-half. Only 50.3% of children born in 1984 earned more than their parents at age 30. (The figure below depicts this unrelenting decline in social mobility. It shows the relationship between a cohort’s birth year, on the horizontal axis, and the share of the cohort whose income exceeded that of their parents at age 30.)
The study from Chetty and his co-authors also documented that the reported decline in social mobility was widespread. It had declined in all 50 states over the 44 years covered by the study. In addition, their finding of declining social mobility still held after accounting for the effect of taxes and government transfers (including cash payments and payments in kind) on household income. All in all, their study showed that, “Severe Inequality Is Incompatible With the American Dream,” to quote the title of an
Atlantic
magazine article published
at the time. Since then, the Chetty group and others have continued their investigations of inequality and social mobility, which are available
on the Opportunity Insights website (opportunityinsights.org).
The stunning results of the Chetty group’s study got the attention of the
Wall Street Journal
. The headline of Bob Davis’s December 2016
Journal
article summed up their findings succinctly: “Barely Half of 30-Year-Olds Earn More Than Their Parents: As wages stagnate in the middle class, it becomes hard to reverse this trend.”
Davis was correct to point to the study’s emphasis on the difficulty of reversing the trend of declining mobility. The Chetty group was convinced “that increasing GDP [gross domestic product] growth rates alone” would not restore social mobility. They argued that restoring the more equal distribution of income experienced by the 1940s cohort would be far more effective. In their estimation, it would “reverse more than 70% of the decline in mobility.”
Since 2014, neither U.S. economic growth nor relative equality has recovered, let alone returned to the levels that undergirded the far greater social mobility of the 1940s cohort. Today, the economic position of young adults is no longer improving relative to that of their parents or their grandparents.
President Donald Trump was fond of claiming that he oversaw the “greatest economy in the history of our country,” during his first term (2017–2020). But even before the onset of the Covid-19-induced recession, his economy was neither the best nor good, especially when compared to the economic growth rates enjoyed by the 1940s cohorts who reached age 30 during the 1970s. During the 1950s and then again during the 1960s, U.S. economic growth averaged more than 4% a year corrected for inflation, and it was still growing at more than 3% a year during the 1970s. From 2015 to 2019, the U.S. economy grew a lackluster 2.6% a year and then just 2.4% a year during the 2020s (2020–2024).
Also, present-day inequality continues to be far worse than in earlier decades. In his book-length telling of the story of the American Dream,
Ours Was the Shining Future
, journalist David Leonhardt makes that clear. From 1980 to 2019, the household income of the richest 1% and the income of the richest 0.001% grew far faster than they had from 1946 to 1980, while the income of poorer households, from the 90th percentile on down, grew more slowly than they had during the 1946 to 1980 period. As a result, from 1980 to 2019, the income share of the richest 1% nearly doubled from 10.4% to 19%, while the income share of the bottom 50% fell from 25.6% to 19.2%, hardly more than what went to the top 1%. Beyond that, in 2019, the net worth (wealth minus debts) of median, or middle-income, households was less than it had been in 2001, which, as Leonhardt points out, was “the longest period of wealth stagnation since the Great Depression.”
No wonder the American Dream took such a beating in the July 2025
Wall Street Journal
-NORC at the University of Chicago poll. Just 25% of people surveyed believed they “had a good chance of improving their standard of living,” the lowest figure since the survey began in 1987. And according to 70% of respondents, the American Dream no longer holds true or never did. That figure is the highest in 15 years.
In full carnival barker mode, Trump is once again claiming “we have the hottest economy on Earth.” But the respondents to the
Wall Street Journal
-NORC poll aren’t buying it. Just 17% agreed that the U.S. economy “stands above all other economies.” And more than twice that many, 39%, responded that “there are other economies better than the United States.” It’s a hard sell when the inflation-adjusted weekly wages of nonsupervisory workers are still lower than what they had been in 1973, now more than half a century ago.
And economic worries are pervasive. Three-fifths (59%) of respondents were concerned about their student loan debt, more than two-thirds (69%) were concerned about housing, and three-quarters (76%) were concerned about health care and prescription drug costs.
Rising housing costs have hit young adults especially hard. The median price of a home in 1990 was three times the median household income. In 2023, that figure had reached nearly five times the median household income. And the average age of a first-time homebuyer had increased from 29 in 1980 to 38 in 2024.
Finally, in their 2023 study, sociologists Rob J. Gruijters, Zachary Van Winkle, and Anette E. Fasang found that at age 35, less than half (48.8%) of millennials (born between 1980 and 1984) owned a home, well below the 61.6% of late baby boomers (born between 1957 and 1964) who had owned a home at the same age.
Dreaming Big
In their 2016 study, the Chetty group writes that, “These results imply that reviving the ‘American Dream’ of high rates of absolute mobility would require economic growth that is spread more broadly across the income
distribution.”
That’s a tall order. Fundamental changes are needed to confront today’s economic inequality and economic woes. A progressive income tax with a top tax rate that rivals the 90% rate in the 1950s and early 1960s would be welcomed. But unlike the top tax rate of that period, the income tax should tax all capital gains (gains in wealth from the increased value of financial assets such as stocks) and tax them as they are accumulated and not wait until they are realized (sold for a profit). Also, a robust, fully refundable child tax credit is needed to combat childhood poverty, as are publicly supported childcare, access to better schooling, and enhanced access to higher education. Just as important is enacting universal single-payer health care and increased support for first-time homebuyers.
The belief that “their kids could do better than they were able to” was, Chetty told the Wall Street Journal, what motivated his parents to emigrate from India to the United States. These fundamental changes could make the American Dream the reality that it never was.
Sources:
David Wessel, “As Rich-Poor Gap Widens in the U.S., Class Mobility Stalls. Those in Bottom Rung Enjoy Better Odds in Europe; How Parents Confer an Edge,”
Wall Street Journal
, May 13, 2005 (wsj.com); Miles Corak, “Do Poor Children Become Poor Adults? Lessons from a Cross-Country Comparison of Generational Earnings Mobility,” IZA DP Discussion Paper Series, no. 193, 2006; “Inequality and the American Dream,”
The Economist
, June 15, 2006 (economist.com); “The rich, the poor and the growing gap between them,”
The Economist
, June 15, 2006 (economist.com); Isabell Sawhill et al., “Economic Mobility: Is the American Dream Alive or Well?” The Economic Mobility Project, An Initiative of the Pew Charitable Trust, May 2007 (pew.org); Raj Chetty et al., “The Fading American Dream: Trends in Absolute Income Mobility Since 1940,” National Bureau of Economic Research Working Paper 22910, December 2016 (nber.org); Bob Davis, “Barely Half of 30-Year-Olds Earn More Than Their Parents,”
Wall Street Journal
, December 8, 2016 (wsj.com); Raj Chetty et al., “Only Half of 30-year-old Americans earn more than their parents,” Washington Center for Equitable Growth, December 9, 2016 (equitablegrowth.org); Alana Semuels, “Severe Inequality Is Incompatible With the American Dream,” The Atlantic, December 10, 2016 (theatlantic.com); David Leonhardt,
Ours Was the Shining Future: The Story of the American Dream
(Random House, 2023); Lindsay Ellis and Aaron Zitner, “Americans’ Hopes for Getting Ahead Dim,”
Wall Street Journal
, September 2, 2025 (wsj.com); Steve Rattner, “American Dream Charts,” Morning Joe, MSNBC, September 4, 2025 (stevenrattner.com); Rob J. Gruijters, Zachary Van Winkle, and Anette E. Fasang, “Life Course Trajectories and Wealth Accumulation in the United States: Comparing Late Baby Boomers and Early Millennials,”
American Journal of Sociology
, September 2023.
Power of Proximity to Coworkers (November 2025, PDF): https://pallais.scholars.harvard.edu/sites/g/files/omnuum5926/files/2025-11/Power%20of%20Proximity%20to%20Coworkers%20November%202025.pdf
Deprecations via warnings don’t work for Python libraries
Simon Willison
simonwillison.net
2025-12-09 01:13:39
Deprecations via warnings don’t work for Python libraries
Seth Larson reports that urllib3 2.6.0 released on the 5th of December and finally removed the HTTPResponse.getheaders() and HTTPResponse.getheader(name, default) methods, which have been marked as deprecated via warnings since v2.0.0 in Apri...
My conclusion from this incident is that
DeprecationWarning
in its current state does not work for deprecating APIs, at least for Python libraries. That is unfortunate, as
DeprecationWarning
and the
warnings
module
are easy-to-use, language-"blessed", and explicit without impacting users that don't need to take action due to deprecations.
Something I always encourage people to do, and try to get implemented anywhere I work, is running Python test suites with
-Wonce::DeprecationWarning
. This doesn't spam you with noise if a deprecated API is called a lot, but still makes sure you see the warning so you know there's something you need to fix.
I didn't know about the
-Wonce
option -
the documentation
describes that as "Warn once per Python process".
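As a concrete sketch of that advice (assuming a pytest-based suite; pytest accepts the same filter syntax via its own -W option):

pytest -W once::DeprecationWarning

The equivalent filter can also be installed programmatically with the standard warnings module:

import warnings
warnings.filterwarnings("once", category=DeprecationWarning)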
Johny Srouji, in Memo Responding to Gurman Report: ‘I Love My Team, and I Love My Job at Apple, and I Don’t Plan on Leaving Anytime Soon’
Daring Fireball
www.cnbc.com
2025-12-09 00:52:37
CNBC:
Apple chip leader Johny Srouji addressed rumors of his impending
exit in a memo to staff on Monday, saying he doesn’t plan on
leaving the company anytime soon. “I love my team, and I love my
job at Apple, and I don’t plan on leaving anytime soon,” he wrote.
Bloomberg reported on Saturday ...
Johny Srouji, senior vice president of hardware technologies at Apple Inc., speaks during the Peek Performance virtual event in New York, U.S., on Tuesday, March 8, 2022.
Gabby Jones | Bloomberg | Getty Images
Apple
chip leader Johny Srouji addressed rumors of his impending exit in a memo to staff on Monday, saying he doesn't plan on leaving the company anytime soon.
"I love my team, and I love my job at Apple, and I don't plan on leaving anytime soon," he wrote.
Bloomberg
reported on Saturday that Srouji had told CEO Tim Cook that he was considering leaving, citing people with knowledge of the matter.
Srouji is seen as one of the most important executives at the company and he's been in charge of the company's hardware technologies team that includes chip development. At Apple since 2008, he has led teams that created the M-series chips used in Macs and the A-series chips at the heart of iPhones.
The memo confirming that he plans to stay at Apple comes as the company has seen several high-profile executive exits in the past weeks, raising questions about the stability of Apple's top leadership.
In addition to developing the chips that enabled Apple to drop Intel from its laptops and desktops, in recent years Srouji's teams have developed a cellular modem that will replace Qualcomm's modems in most iPhones.
Srouji frequently presents at Apple product launches.
"I know you've been reading all kind of rumors and speculations about my future at Apple, and I feel that you need to hear from me directly," Srouji wrote in the memo. "I am proud of the amazing Technologies we all build across Displays, Cameras, Sensors, Silicon, Batteries, and a very wide set of technologies, across all of Apple Products."
Last week, Apple announced that its head of artificial intelligence,
John Giannandrea
, was stepping down.
Two days later, the company announced the departure of
Alan Dye
, the head of user interface design. Dye, who was behind the "Liquid Glass" redesign, is joining
Meta
.
A day after Dye's departure, Apple announced the retirement of general counsel Kate Adams and vice president for environment, policy, and social initiatives
Lisa Jackson
. Both Adams and Jackson reported directly to Cook.
“Automats were right up there with the
Statue of Liberty
and Madison Square Garden,” Kent L. Barwick, former president of the Municipal Art Society, lamented to the
New York Times
in 1991 when the country’s last automat closed. The automat, a precursor to today’s fast food chains, was a staple of the New York City dining scene in the first half of the 20th century. Originally conceived in Germany, the self-service restaurant featured coin-operated vending machines from which patrons could buy fresh coffee, simple meals, and desserts for an affordable price.
Along with automats, self-service cafeterias changed the way New Yorkers ate and socialized. In her book,
Kibbitz and Nosh: When We All Met at Dubrow’s Cafeteria
(Three Hills, May 2023), photographer
Marcia Bricker Halperin
revisits one of New York City’s most popular self-service cafeterias on Kings Highway in Brooklyn. Through Halperin’s photographs from the 1970s and 80s and essays by Donald Marguiles and Deborah Dash Moore, the book explores the story of Dubrow’s Cafeteria and the culture that sprang up around these New York City eateries. Check out our book talk with Halperin in our
video archive
!
Here, we take a look at 8 of the city’s lost automats and self-service cafeterias:
1. Horn & Hardart
Recreation of Horn & Hardart for The Marvelous Mrs. Maisel
Automats are synonymous with
Horn & Hardart
. Business partners Joseph Horn and Frank Hardart opened the first automat in the United States in Philadelphia in 1902. They expanded into New York City in 1912, opening the first location in
Times Square
. Eventually, there would be more than forty Horn & Hardart locations in New York. One former Horn & Hardart building that still stands can be found at
2710-2714 Broadway
, on the southeast corner of Broadway and 104th Street. It was occupied by the automat until 1953. A
ghost sign
at 146 West 48th Street marks another former location. At its height, the company had more than 150 automats and retail shops throughout Philadelphia, New York, and Baltimore.
In the beginning, automats served simple foods like buns, fish cakes, and beans. Diners could also get hot coffee, brewed fresh every twenty minutes, for just five cents. In addition to having the best cup of coffee in town, the automats were also known for their striking Art Deco decor. As the company continued to grow,
its menu expanded
to include lunch and dinner foods like mac and cheese, pot pies, and steaks. The company even opened retail locations where it sold packaged “to-go” foods.
Berenice Abbott in 1936, Image from New York Public Library
The last Horn & Hardart automat, located at 200 East 42nd Street at 3rd Avenue, closed on April 8, 1991. The automat continues to be part of New York City culture today, as one was
recreated as a set
for the fifth and final season of Amazon’s hit series
The Marvelous Mrs. Maisel
.
In Brooklyn,
The Brooklyn Dumpling Shop
is bringing back the automat format of dining with new technology.
2. Dubrow’s Cafeteria
Like automats, cafeterias were waiter-less establishments. Customers would first receive a ticket with the menu items and prices. They would then approach the food counter and make selections as the server on the other side hole-punched the ticket. Taking their tray full of food, patrons then searched for a table, which was usually shared.
Cafeterias started on Wall Street in the late 19th century as a way for busy brokers to grab a quick lunch. They soon spread throughout the city and beyond. In 1929, Belarusian immigrant Benjamin Dubrow opened Dubrow’s Pure Food, a full-service restaurant in
Crown Heights
at the intersection of Eastern Parkway and Utica Avenue. When the Great Depression hit, however, he needed to try a new business model. Dismissing all of his waitstaff in 1931, he converted the restaurant into a cafeteria “with refinement.” In 1939, he opened another cafeteria at 1521 Kings Highway and another in Manhattan’s Garment District in 1952. Dubrow’s Cafeteria served a wide variety of dishes including Jewish staples like blintzes with applesauce and sour cream, kugels, and gefilte fish.
The self-service cafeterias of New York City offered a unique “third place,” a place outside of work and home, where New Yorkers could comfortably socialize with their neighbors, all “for the price of a cup of coffee.” In Halperin’s book,
Kibbitz and Nosh: When We All Met at Dubrow’s Cafeteria
, Deborah Dash Moore writes about how while the cafeterias attracted a diverse clientele, “New York Jews particularly embraced cafeterias, less as a fast-food option than as a place to sit and schmooze.”
Halperin
reminisces about the people she met and photographed at Dubrow’s, writing, “I met amazing people at Dubrow’s. Most were people I ordinarily would never have had a conversation with over a cup of coffee—ex-vaudeville performers, taxi drivers, Holocaust survivors, ex-prizefighters, and bookies. Women named Gertrude, Rose, and Lillian all had sad love stories to tell and big hearts.”
The Kings Highway location of Dubrow’s Cafeteria hosted a few historic moments.
John F. Kennedy
held a large campaign rally outside the restaurant in 1960. Senator
Robert F. Kennedy
and
Jimmy Carter
also made appearances at the cafeteria during their own presidential campaigns. It was also where Sandy Koufax announced his decision to join the
Brooklyn Dodgers
. The Eastern Parkway location closed in the early 1960s while the Kings Highway cafeteria stayed open until 1978. The Manhattan location shut down in 1985.
3. Garden Cafeteria, Lower East Side
Garden Cafeteria, Lower East Side, NYC 1977, Photo by Marcia Bricker Halperin
The Garden Cafeteria was a hotspot for Jewish intellectuals and writers at 165 East Broadway, on the corner of Rutgers Street. Established by Austrian immigrant Charles Metzger in 1941, the eatery has a storied history on the
Lower East Side
. Located next to the offices of The Forvertz/The Jewish Daily Forward, the cafeteria was frequented by the paper’s writers. Nobel laureate Isaac Bashevis Singer and photographer Bruce Davidson were among its patrons. Singer set his short story ”The Cabalist of East Broadway” at the Garden Cafeteria.
The cafeteria closed in 1983 and became a Chinese restaurant. When construction work in 2005 revealed one of the original signs, it was given to the
Museum at Eldridge Street
for safe keeping. The sign has appeared on display at the Museum and in an exhibit on The Jewish Daily Forward at Museum of the City of New York.
4. Belmore Cafeteria
The Belmore Cafeteria
once stood at 28th Street and Park Avenue South. Opened in 1929, it was founded by Philip Siegel and run by his family until it closed in 1981. Billed as “New York’s Most Fabulous Self-Service Restaurant,” the establishment attracted some interesting characters.
Members of the notorious Murder Inc. gang reportedly ate there, but the clientele the cafeteria was known for was taxi drivers. It was a common sight to see a row of taxis lined up at the curb outside. Fittingly, the cafeteria appears as a location in the 1976 Robert De Niro film,
Taxi Driver
. An estimated 5,000 people passed under the cafeteria’s glowing red neon sign and through its turnstile each weekday. In 1981, the Siegels sold their building, and a condominium tower was built at the site.
5. Garfield’s Cafeteria, Flatbush
In a 1971 New York Times article, Garfield’s Cafeteria on
Flatbush
Avenue was described as a “grand old cafeteria” where you could “stop in at midnight for a nosh, or something to nibble on after leaving the Albemarle dance parlor or to recover from the hilarity of vaudeville at the Flatbush Theater.” Like Dubrow’s, the cafeteria served blintzes, bialys, matzoh-ball soup, and more.
Since the cafeteria was open in the morning and late at night, it attracted different crowds at different times of the day. Families and old-timers usually came for breakfast and lunch, while the nighttime brought the after-theater crowds. The
Times
wrote that some elderly patrons would even bring their own food and sit at the cafeteria purely for the social aspect as they nursed a cup of coffee and chatted with their neighbors for hours.
6. Hoffman’s Cafeteria, Brownsville
Another famous Brooklyn cafeteria was Hoffman’s Cafeteria on Pitkin and Saratoga Avenues in Brownsville. This cafeteria is often mentioned alongside Dubrow’s and Garfield’s as one of the most popular. Like Dubrow’s and Garfield’s, it closed in the 1970s. Hoffman’s
made news in the 1940s
for a butter heist. It was discovered that two of the countermen were stealing food, mostly butter, from the establishment for a period of three months. The stolen goods amounted to $15,000!
7. Hector’s Cafeteria, Times Square
Hector’s Cafeteria had multiple Times Square locations from the 1930s onward. The last remaining one was inside the Claridge Hotel building on Broadway at 44th Street. It
lasted until 1970
.
Before Hector’s closed, it made its way into pop culture. The cafeteria is mentioned in Jack Kerouac’s novel
On the Road
when Dean Moriarty first arrives in New York and “looking for a place to eat,” “went right to Hector’s, and since then Hector’s Cafeteria has always been a big symbol of New York for Dean.” You can also see a bit of Hector’s
in this Dennis Stock photograph
of actor James Dean.
8. Stewart’s Cafeteria, Greenwich Village
Stewart’s Cafeteria occupied the first floor of an Art Deco building at 116 Seventh Avenue South in
Greenwich Village
. Opened in 1933, it was part of a chain of cafeterias. Stewart’s was only open for a few years before closing and re-opening as Life Cafeteria. The building still exists today (it houses a Bank of America and CVS Pharmacy) and is regarded as an
LGBTQ+ history site.
Life Cafeteria attracted a bohemian clientele including gay and lesbian patrons. Unlike most places in the city at the time where homosexuality was hidden, the large windows of Life Cafeteria put everything that happened inside on display.
Crowds of tourists often formed outside the windows
to peer in. Tennessee Williams and Marlon Brando were known to visit, and the scenes inside have been captured in paintings by
Paul Cadmus
and
Vincent La Gambina
.
Today marks 10 years since I wrote the
first post in this blog
. It was a very basic and brief post about me decoding the European FreeDV net over a WebSDR. I mainly wrote it as a way of getting the ball rolling when I decided to start a blog back in October 2015. Over the 10 years that I have been blogging, the style, topics, length and depth of the posts have kept shifting gradually. This is no surprise, because the contents of this blog are a reflection of my interests and the work I am doing that I can share freely (usually open source work).
Since I started the blog, I have tried to publish at least one post every month, and I have managed. Sometimes I have forced myself to write something just to be up to the mark, but more often than not the posts have been something I really wanted to write down and release to the world regardless of a monthly tally. I plan to continue blogging in the same way, and no doubt that the contents will keep evolving over time, as we all evolve as persons during our lifetime. Who knows what the future will bring.
I wanted to celebrate this occasion by making a summary of the highlights throughout these 10 years. I have written 534 posts, and although Google search is often useful for finding things, for new readers who arrive at this blog it might be difficult to get a good idea of what kind of content can be found here. This summary will be useful to surface old content that may be of interest, as well as help me reflect on what I have been writing about.
[Sponsor] Jaho Coffee Roaster
Daring Fireball
www.jaho.com
2025-12-09 00:43:00
Great coffee changes the day. Since 2005, our family-owned roastery has taken the slow and careful approach, sourcing small-lot coffees, roasting in small batches and shipping every bag fresh. Award-winning coffee delivered to your home or office.
Holiday gifts? Fresh coffee is a gift that never mi...
Reconciliation Slandered by Marco Rubio, Mandela’s Legacy Remains Strong 12 Years After His Passing: ‘Compassion and Forgiveness Set Him Free’
Portside
portside.org
2025-12-09 00:36:48
Reconciliation Slandered by Marco Rubio, Mandela’s Legacy Remains Strong 12 Years After His Passing: ‘Compassion and Forgiveness Set Him Free’
Kurt Stand
Mon, 12/08/2025 - 19:36
...
Despite the contempt and manipulation from Marco Rubio, U.S. Secretary of State under Donald Trump, Nelson Mandela’s legacy of peace and respect for diversity remains deeply rooted in South Africa. As the country marks 12 years since his death this Friday (December 5),
BdF
spoke with members of South African civil society attending the first
People’s Brics Summit
in Rio de Janeiro.
“I think the most important part of legacy is compassion, as well as reconciliation and forgiveness. That is what set him free. He understood the need for true liberation of the soul,” said Corlett Letlojane, executive director of the South Africa Human Rights Institute.
“The legacy of Nelson Mandela is one of the greatest a human being can leave behind. He fought for everyone, not only for Black people, and he taught us to love one another,” said Moses Mokgatlhane, a cultural representative at the summit. At 28, the music producer was a teenager when Mandela passed away and just two years old when “Madiba”, Mandela’s affectionate nickname, completed his presidential term.
In 1993, Mandela received the Nobel Peace Prize for keeping South Africa’s transition from a racist apartheid regime to an equal democracy from descending into bloodshed, after decades of oppression imposed by the white minority ruling over the country’s Black majority. The world’s most famous political prisoner, Mandela spent 27 years behind bars before being released in 1990 under global pressure. He then dedicated himself to ensuring a peaceful transition.
Since then, his name has become synonymous with moral greatness, commitment to popular struggles, and political wisdom. The United Nations established July 18, his birthday, as Nelson Mandela International Day in recognition of his contributions to peace and freedom.
Rubio spreads misinformation
Praising Mandela is easy. Using Mandela’s name to attack the current government led by his own party, the African National Congress (ANC), is what analysts describe as “convenient dishonesty,” which is exactly what Marco Rubio did.
On Wednesday (3), Trump’s top diplomat released a statement invoking Mandela to criticize President Cyril Ramaphosa’s administration. Rubio claimed South Africa had entered the post–Cold War era with “strong institutions, excellent infrastructure, and global goodwill,” along with valuable natural resources and key agricultural land.
“And, in Nelson Mandela, South Africa had a leader who understood that reconciliation and private sector–driven economic growth were the only path for all citizens to prosper. Unfortunately, Mandela’s successors have replaced reconciliation with redistributive policies,” he alleged.
Rubio went further, falsely claiming that
South Africa no longer belongs in the G20
, which will be chaired by the U.S. next year, and repeating a baseless narrative that the Ramaphosa government is allowing a “genocide” of white Afrikaners, the same group that enforced apartheid.
“South Africa is being punished for
taking Israel to the International Court of Justice
and for its anti-American stance,” summarized political analyst Zakhele Ndlovu, speaking to South African news outlet IOL. In January 2024, Pretoria formally accused Israel of committing genocide in Gaza and urged the UN’s top court to order an end to its attacks, a stance rooted in South Africans’ lived experience with racial oppression.
This position earned Ramaphosa hostility in Washington, including public humiliation during a visit to Trump, but remained faithful to Mandela’s principles.
“That legacy of peace, prosperity, respect, and nonviolence will live forever,” said Mokgatlhane.
During the Brics People’s Summit, Corlett Letlojane also spoke to
BdF
about Mandela’s life and legacy. Read the interview below:
BdF: What remains most important from Nelson Mandela’s legacy?
Corlett Letlojane:
I believe the most important things are compassion, reconciliation, and forgiveness. You know, that was something that freed him. He saw the need for a true liberation of his soul.
Because Nelson Mandela carried the weight of anguish, frustration, and the suffering he endured, and he needed to rise above that. He understood that holding onto trauma would be toxic. So he handled it in a way that brought him peace. He was able to convey that message of peace, reconciliation, and forgiveness, and move on with his life.
It was not an easy message. Even though he championed it, I think it played a crucial role, because the violence that many expected never came. We could have descended into civil war. The worst could have happened. Just as we now hear harmful claims about “genocide against white people” in South Africa.
What was it like to live through that period, and what do you think about it after all these years?
Perhaps we have seen his message in practice. I believe Nelson Mandela was truly a gift to the people of South Africa and to the world. He was able to look his adversaries in the eye and gave us the ability to truly set the enemy aside. We can overcome the enemy by cultivating peace, love, and compassion.
That alone is transformative. We saw people who were deeply rooted in anger and hatred transform, realizing they could not continue living like that. Nelson Mandela’s message of peace, compassion, and forgiveness is real, and everyone should try to practice it.
If we fail, but still fight against injustice, then we remain balanced. That alone is a form of personal transformation.
Was it difficult to implement this message in the 1990s?
Yes, the wounds carried by South Africans from apartheid were deep, and helping communities understand forgiveness and move forward was one of the most difficult challenges. And the adversary, the enemy, their descendants, and the apparatus, was still present, and attempts to restore that system remained strong. So, it was not a simple process.
There was a constitution, a constitution he left us. Laws, mechanisms, and committees to help guide the process. Other efforts contributed as well. It was certainly not easy.
The positive side is that many oversight mechanisms emerged, many committees were created, and people who had suffered in exile, who had seen the worst, were returning. South Africa took on a leadership role internationally, and that gave us courage: it showed that we could lead the world in this way.
It has been a gradual journey to ensure we are on the right path.
Corlett Letlojane is one of South Africa’s leading human rights authorities | Credit: Priscila Ramos/MS
On a personal level, what does Mandela represent to you?
For me, Nelson Mandela was an inspiration. As a child, I understood very little. But by the age of 12, I already knew I needed to be in the streets. I needed to fight against injustice. I left and lived a life of sacrifice. I was ready to die, willing to die, because of what I saw in my country and because of the messages and teachings we received from Nelson Mandela.
So I knew exactly where I was going and how I would fight. It was not easy. We lived with great insecurity and no freedom. It was dangerous. My parents took me to Lesotho so I could continue my studies. They traveled to that small neighboring country. It was traumatic, and they were taking risks.
When I returned home, I continued to face injustice, apartheid laws, arrests, and repression. It was not an easy life.
How is Mandela’s government viewed by young people today?
Many young people did not live through that time and feel the government back then did not do a good enough job, or that negotiations did not fully resolve issues like land, natural resources, and economic power, which remain concentrated in the hands of a few.
These are things they must now address themselves, because our generation built the foundations, and they can continue this process with better education. They have access to accurate information, the internet is available, and they can engage in this process by observing what happens in their communities, claiming their rights, and becoming defenders of the future.
Edited by:
Luís Indriunas
Translated by:
Giovana Guedes
Horses: AI progress is steady. Human equivalence is sudden
So after all these hours talking about AI, in these last five minutes I am going to talk about: horses.
Engines, steam engines, were invented in 1700.
And what followed was 200 years of steady improvement, with engines getting 20% better a decade.
For the first 120 years of that steady improvement, horses didn't notice at all.
Then, between 1930 and 1950, 90% of the horses in the US disappeared.
Progress in engines was steady. Equivalence to horses was sudden.
But enough about horses. Let's talk about chess!
Folks started tracking computer chess in 1985.
And for the next 40 years, computer chess would improve by 50 Elo per year.
That meant in 2000, a human grandmaster could expect to win 90% of their games against a computer.
But ten years later, the same human grandmaster would lose 90% of their games against a computer.
Progress in chess was steady. Equivalence to humans was sudden.
Enough about chess! Let's talk about AI.
Capital expenditure on AI has been pretty steady.
Right now we're - globally - spending the equivalent of 2% of US GDP on AI datacenters each year.
That number seems to have steadily been doubling over the past few years.
And it seems - according to the deals signed - likely to carry on doubling for the next few years.
But from my perspective, from equivalence to me, it hasn't been steady at all.
I was one of the first researchers hired at Anthropic.
This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
Then in December, Claude finally got good enough to answer some of those questions for us.
In December, it was some of those questions. Six months later, 80% of the questions I'd been being asked had disappeared.
Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.
Now. Answering those questions was only part of my job.
But while it took horses decades to be overcome, and chess masters years, it took me all of six months to be surpassed.
Surpassed by a system that costs one thousand times less than I do.
A system that costs less, per word thought or written, than it'd cost to hire the cheapest human labor on the face of the planet.
And so I find myself thinking a lot about horses, nowadays.
In 1920, there were 25 million horses in the United States, 25 million horses totally indifferent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did.
But looking at how fast Claude is automating my job, I think we're getting a lot less.
This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
All opinions are my own and not those of my employer.
Abstract:
We show that deep neural networks trained across diverse tasks exhibit remarkably similar low-dimensional parametric subspaces. We provide the first large-scale empirical evidence that demonstrates that neural networks systematically converge to shared spectral subspaces regardless of initialization, task, or domain. Through mode-wise spectral analysis of over 1100 models - including 500 Mistral-7B LoRAs, 500 Vision Transformers, and 50 LLaMA-8B models - we identify universal subspaces capturing majority variance in just a few principal directions. By applying spectral decomposition techniques to the weight matrices of various architectures trained on a wide range of tasks and datasets, we identify sparse, joint subspaces that are consistently exploited, within shared architectures across diverse tasks and datasets. Our findings offer new insights into the intrinsic organization of information within deep networks and raise important questions about the possibility of discovering these universal subspaces without the need for extensive data and computational resources. Furthermore, this inherent structure has significant implications for model reusability, multi-task learning, model merging, and the development of training and inference-efficient algorithms, potentially reducing the carbon footprint of large-scale neural models.
Submission history
From: Prakhar Kaushik [
view email
]
[v1]
Thu, 4 Dec 2025 18:59:58 UTC (14,316 KB)
[v2]
Sat, 6 Dec 2025 04:42:07 UTC (14,321 KB)
Space (whitespace) is a whole group of glyphs, and one of the most important and most frequently used. Any computer user knows the space as the widest key on their keyboard; the notion itself, however, is much broader and encompasses several important typographic terms and ideas.
Space in general is a blank unprinted area, a counterform that separates letters, words, lines etc. In typography, there are several types of spaces: sinkage (space on a page above a textblock), indent (space before the paragraph), leading (vertical space), word spacing, and letter spacing. In this article, we will primarily focus on word spacing, i.e. the space as a glyph.
European languages went without word spacing for a long time; it was not until the 7th century that word spacing entered the Latin script. In the age of metal type, the space was a material, tangible object — a piece of metal that left no print. In the pre-digital era, most text blocks were justified, which required several spaces of different widths. Those types of spacing were defined by the notion of the em (or point size), which is the height of the piece of metal used for printing a character. For example, one em in a 12-point typeface is 12 points, whereas its en (half-em) space is 6 pt wide, the third space (of an em) equals 4 pt, and so on.
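As a quick illustration of that arithmetic, here is a tiny Python sketch (my own naming, not a standard API) that evaluates the usual em fractions at a given point size:

# Space widths as fractions of the em, evaluated at a given point size.
EM_FRACTIONS = {
    "em space": 1,       # one em = the point size itself
    "en space": 2,       # half an em
    "third space": 3,    # a third of an em
    "quarter space": 4,
    "sixth space": 6,
}

def space_width_pt(point_size: float, divisor: int) -> float:
    """Width in points of an em-fraction space at the given point size."""
    return point_size / divisor

for name, divisor in EM_FRACTIONS.items():
    print(f"{name}: {space_width_pt(12, divisor):g} pt at 12 pt")
# em space: 12 pt, en space: 6 pt, third space: 4 pt, and so on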
Whitespace characters in Gauge. Widths and correlations between spaces differ depending on the typeface
These types of spaces still exist in the digital age, but they are mostly used by advanced typographers. Messengers, text editors, and other programs and applications typically use only the regular space.
Word space
Standard space, word space, space per se, is the symbol typed using the widest key on the keyboard.
In metal type, the size of the standard space varied depending on the typographic tradition; in most cases the space was rather wide.
As a standard word space, metal composition used an en space (half the height of the point size, or em-square) in Cyrillic typography, while the Latin space was equal to a third of the em space.
Living Typography (2012)
In early digitised fonts one often sees excessively wide spaces; probably an attempt to imitate the en space or the three-per-em space, which served as the main spacing material in metal type. Such a wide space can disturb the typesetting rhythm and seems redundant in modern typography.
Wide spacing is both physiologically unnecessary and makes the whole typeset structure reticulate, aesthetically ruining the page's layout. If for some reason you can't stick to the en space size in a particular line, it's better to scale down the spacing using three-per-em spaces (equal to a third of an em), or spaces of 3, or even 2 points.
M. I. Schelkunov
History, Technique, Art of Printing (1926)
Wide word spacing looks odd to the eye of the modern reader, and it is far too conspicuous in text
Today, the word space width is specified by the typeface's designer, and, along with letter spacing, it is one of the defining decisions in designing a typeface: the texture and rhythm of the typeset depend heavily on the word space width.
Many modern typographers are seeking to subject the space width to certain rules. For example, some type designers claim that the space should be equal to the bounding box of the lowercase letter i. However, this rule can't be universal: specifically, it definitely won't work for typefaces where the letter i is of unconventional design and proportions. In super large point sizes, spacing and word spaces are often intentionally reduced, as in such cases even the bounding box of the i can be too wide.
It used to be a rule of thumb for headline settings to leave a space between words that is just wide enough to fit in a lowercase i. For comfortable reading of long lines, the space between words should be much wider.
Erik Spiekermann
Stop stealing sheep & find out how type works (1993)
Depending on whether your typeface is serif or sans serif, it may or may not make sense to take the glyph's sidebearings into consideration. Much also depends on the style: wide and light weights leave more unprinted area than narrow and heavy ones, and this applies to the space width as well.
There is no question but that wordspaces may not be too large, or that the line must appear to be an even, well-balanced whole. What applies to letterspaces also applies to wordspaces: they too are a function of the counters of the individual letters: the smaller these are, the smaller the wordspaces; the larger the counters, the larger the wordspaces.
Jost Hochuli
Detail in Typography (2008)
Blank space between words should be just enough to ensure that words are visibly separated from each other: if the spacing is wider, there will be holes between words; if narrower, it will be difficult to tell one word from another. You can't measure the space with a ruler, as everything depends on the specific design and typeface.
Word spaces as set in Kazimir Text. The space width is good: words are separated from one another, and the hierarchy of white space is maintained
If you increase the word spacing, the word spaces will conflict with the leading, which makes it hard to navigate through the text
If you decrease the width of the word space, legibility suffers, as the words blend together
Using double spaces is a habit inherited from the age of typewriters. It is strongly advisable to check a document for double spaces and replace them with single spaces.
Some of the recommendations learned by the educated typist are still now acquired habits wrongly used in digital documents; for instance, the use of three spaces after a period or two after the comma. There was just one space width available in the typewriter, so words and sentences were separated by the same distance. The double space was used to differentiate sentences and improve the readability of the text.
María Ramos Silva
Type design for typewriters: Olivetti (2015)
Additional spacing after a period is a questionable practice in terms of readability. It can be assumed that in the age of typewriters an extra space helped separate sentences from one another in a monospaced typeface, yet a monospaced period plus a space already forms a larger gap than any space within the sentence. Typesetting tools have improved significantly since the typewriter, and today nobody sets text in a monospaced typeface unless it is absolutely necessary. So, currently, the use of double spaces is considered mauvais ton, i.e. bad manners, regardless of the typeface.
American lawyer Matthew Butterick wrote a book on typography for lawyers, writers, and anyone who works with text. In the US, it is still very common among the older generation to use double spaces, so Matthew dedicated two entire chapters of his Practical Typography to this issue. Butterick tried to convince his audience with imaginary dialogues:
“If you approve of smaller word spaces in some situations, why do you insist on only one space between sentences, where a larger gap might be useful?” Because you’re already getting a larger gap. A sentence-ending word space typically appears next to a period. A period is mostly white space. So visually, the space at the end of a sentence already appears larger than a single word space. No need to add another.
Matthew Butterick
Butterick’s Practical Typography (2013)
Non-breaking Space
Non-breaking space is a space character that prevents an automatic line break at its position. For instance, in Russian and a number of other Central and Eastern European languages, the non-breaking space serves to keep a preposition together with the word that follows it, numbers with units of measurement, a first name with a surname, and so on.
The non-breaking space is supported, along with the standard space, by almost any text editor, graphic design application, or browser, so one shouldn't forget to use it according to the typesetting rules of the given language.
In Russian, a non-breaking space should connect a dash with the preceding word (except in direct speech), prepositions with the following words, initials with the surname, abbreviations (such as i.e.), the numero sign with numbers, and numbers with units of measurement.
In English, it is considered good manners to tie not prepositions but pronouns and articles to the following word. However, this rule is often neglected, especially in newspapers and magazines.
Professional typesetting software has spaces of non-standard widths. In InDesign, all additional spaces — em space, en space, thin space, etc. — are non-breaking.
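As a rough illustration of one of these rules, the snippet below (my own example, not from the article) ties numbers to their units with U+00A0 so that a line break can never separate them:

import re

NBSP = "\u00a0"  # non-breaking space, U+00A0

def bind_units(text: str) -> str:
    """Replace the ordinary space between a number and a unit with a non-breaking one."""
    return re.sub(r"(\d)\s+(km|kg|mm|cm|m|%)", rf"\1{NBSP}\2", text)

print(bind_units("The route is 42 km long and the pack weighs 9 kg."))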
Additional spaces
The standard space is used everywhere; it is supported by any application that processes words, text, or code. The non-breaking space is supported almost everywhere as well. However, computer typesetting still offers a number of spaces dating back to metal type, which allow for finer adjustment of white space when necessary.
If a font supports additional spaces, they can be fetched via the glyphs palette or the clipboard. Most graphics software does not support these spaces; for example, Adobe Illustrator 2020 includes only four additional spaces: the em space, en space, thin space, and hair space.
And there is a reason for that: neither Illustrator nor Photoshop was designed for advanced typesetting and laying out books. In InDesign, however, you can easily set any kind of space, and a skilled typographer will use them.
Em Space
A space equal to the height of the em square (the point size). In early serifs the metal face of the capital M tended to be square; probably hence the English name. Metal type often used the em space as a paragraph indent.
En Space
Half the width of an em. Russian metal type composition treated it as the main type of space, even though as a word space, especially in text aligned to the left or right, it is excessively wide.
Three-per-em Space, Third Space
One third of an em space. Historically considered the main word space in Western European typography.
The first obligation of a good typesetter is to achieve a compact line image, something best accomplished by using three-to-em or three-space word spacing. In former times even roman was set much tighter than we do it today; the specimen sheet that contains the original of Garamond’s roman of 1592, printed in 14-point, shows a word spacing in all lines of 2 points only, which is one-seventh of an em! This means that we cannot call three-to-em word spacing particularly tight.
Jan Tschichold
The Form Of The Book (1975)
Quarter Space
One fourth of an em space. Some authors believe quarter space to be the primary word space.
For a normal text face in a normal text size, a typical value for the word space is a quarter of an em, which can be written M/4. (A quarter of an em is typically about the same as, or slightly more than, the set-width of the letter t.)
Robert Bringhurst
The Elements of Typographic Style (1992)
Thin Space
One fifth of an em space. The thin space is commonly about half the width of the standard one, which is why it is used where the standard word space would be too wide. For example, the thin space is often used to set off a dash. It is also used to space initials, both from each other and from the surname:
Standard space in Spectral is too wide to be used for spacing initials and dashes
Thin spaces look neater, connecting the initials to the surname and the two parts of the sentence to each other more closely
The French typographic tradition prescribes the use of either thin or hair spaces before any two-part punctuation marks: the exclamation mark, question mark, semicolon, etc.
Regardless of the language, glyphs such as the question mark and exclamation mark are usually quite visible next to lowercase letters, but they can get lost in an all-caps setting; in this case, one should set them off with a fine space.
Sixth Space
One sixth of an em space. The sixth space is used when the thin space is too large.
Hair Space
The narrowest of the spaces. In metal type it was equal to 1/10 of an em; in the digital age it is mostly 1/24 of an em. It can be useful when a typeface's punctuation marks have very tight sidebearings but a thin space would be too wide. For example, you can use hair spaces instead of thin ones to set off dashes; everything depends on the sidebearings and the design of the particular typeface.
Keep in mind that if you change the font, the selected space glyphs will remain, but their widths may change, and this will affect the texture.
Isn’t it ridiculous when a punctuation mark, relating to the entire preceding phrase, is tied to the last word of that phrase? And, vice versa, how unfortunate it looks when there is a large gap between this mark and the previous word. As a matter of fact, it is about time type foundry workers started thinking about it and cast the punctuation marks with an extra sidebearing on their left. However, typefounders are not always, or rather rarely, that forethoughtful, and they are used to casting all letters without generous sidebearings. During the punching of matrices, the beauty of spacing punctuation marks is also barely remembered. Therefore, it is your burden and responsibility to fix this problem, and even more so that of the compositors. The latter dislike 1-pt spaces, however it is this very thin space that can save the beauty of the typeset in these situations. That is why, with the punctuation marks , ; . … : ! ? you should insist on putting a 1-pt (hair) space before those symbols — but only when they don’t have an extra sidebearing on their left. If you are in charge of purchasing typefaces for the printing establishment, regard this issue when ordering, and make the foundry give consideration to the beauty of their work and to this particular detail.
M. I. Schelkunov
History, Technique, Art of Printing (1926)
Spacing in justified texts
Full justification, that is, alignment of the text to both margins, is still commonly used in books and magazines. When the text is justified, the width of the word spaces is not constant: it changes so as to distribute the words across the full width of the line. In this situation, the uniformity of the spacing can be even more important than the width of the spaces themselves: evenly large spaces across the entire page are better than large spaces in only one line. That is why, no matter how well optimised a typeface's word space is, it will not be enough on its own for setting justified text. In metal type all spaces were set manually, and the typesetter knew which space to add for an even setting; nowadays it is the computer that determines the width of the spaces in justified text. The algorithm divides the remaining space into equal parts and adds them to the regular spaces. In doing so, it ignores letters, syntax, and punctuation, which is why justified settings should always be double-checked and the spacing adjusted manually.
In InDesign, you can set the minimum and maximum word space widths for justified setting: the width of the standard space serves as the 100% baseline, with the maximum normally around 120% and the minimum around 80%.
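A toy sketch of that logic (mine, not InDesign's actual algorithm): the engine divides the leftover width of the line among the gaps and accepts the result only if the stretched space stays within the allowed minimum and maximum.

def justify_space(words_width: float, measure: float, gaps: int,
                  base_space: float, min_ratio: float = 0.8, max_ratio: float = 1.2):
    """Word space width needed to fill the measure, or None if it falls outside
    the allowed min/max range and the line should be re-broken."""
    if gaps == 0:
        return None
    needed = (measure - words_width) / gaps   # width each gap must take
    ratio = needed / base_space
    return needed if min_ratio <= ratio <= max_ratio else None

# Three words totalling 180 pt on a 205 pt measure, 2 gaps, 12 pt base space:
print(justify_space(180, 205, 2, 12))  # 12.5 pt per gap, about 104% of the base space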
If the text is justified, a reasonable minimum word space is a fifth of an em (M/5), and M/4 is a good average to aim for. A reasonable maximum in justified text is M/2. If it can be held to M/3, so much the better. But for loosely fitted faces, or text set in a small size, M/3 is often a better average to aim for, and a better minimum is M/4. In a line of widely letterspaced capitals, a word space of M/2 or more may be required.
Robert Bringhurst
The Elements of Typographic Style (1992)
Robert Bringhurst recommends choosing appropriate spaces based on the em. However, the space is a relative value, so in justified texts you should consider not the width of some abstract em, but rather the width of the space in the particular font.
The optimal word space width in justified text is elusive: it changes depending on the typeface, point size, line width, line spacing, and many other factors. That is why in InDesign you can't set the maximum and minimum values once and for all cases; you will have to choose the best options manually.
In justified setting, the standard word space becomes a fluctuating value. The fixed-width space and the additional spaces of constant width can help you keep the setting under better control.
The more even are the gaps between words, the better <…>. In no case shall you allow a considerable disparity in space widths, while an insignificant difference won’t ruin the beauty of typesetting.
Pyotr Kolomnin
A Concise Account of Typography (1899)
Figure Space
The figure space, or numeric space, is used for setting tables and sheets. If a typeface is fitted with tabular figures, its figure space is equal to the width of a tabular figure. The figure space is non-breaking.
Normally the figure space is significantly wider than the standard space; it is helpful when you need to align a large number of multi-digit figures
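A small illustration (my own) of why this works: U+2007 has the width of a tabular digit, so padding with figure spaces keeps columns of numbers aligned wherever tabular figures are used.

FIGURE_SPACE = "\u2007"  # figure space, U+2007

def pad_figures(value: int, width: int) -> str:
    """Right-align a number to `width` positions using figure spaces instead of zeros."""
    digits = str(value)
    return FIGURE_SPACE * (width - len(digits)) + digits

for n in (7, 314, 27182):
    print(pad_figures(n, 6))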
Punctuation Space
In most cases, the width of this space is equal to the glyph width of a period or a colon. It may be useful for making up numbers in tables where the digits are separated by a spacing element instead of a period or colon.
Narrow No-break Space
A thin space that prevents an automatic line break. The name of this symbol in Unicode causes some confusion: Narrow here means the same thing as Thin, and the Narrow Space has the same width as the Thin Space.
In some applications, such as InDesign, the regular thin space is non-breaking by default and is simply called Thin Space. In other cases it is a separate symbol; the Web, for example, uses the Narrow No-break Space.
Spaces in layout
The distribution of white space in a text setting is a highly important factor, responsible for a neat design and a clear structure of the content. Many designers keep in mind the relationship between point size, line width, and margins, but some forget that word spacing is an equally important part of these relations.
A body text font, designed for smaller sizes, requires tighter spacing and word spaces when used to set a large headline. The point size matters more in determining the spacing, and the unprinted area in general, than whether the typeface is a text or a display one.
It is also necessary to consider spacing when dealing with particular elements of the text. For instance, small-caps or all-caps fragments quite often need additional spacing. Manual spacing is sometimes necessary in bold or italic styles, or even when no additional styles are applied at all.
Small-caps spacing in Charter is too tight by default; more white space is needed
In William, the small caps are well taken care of; their generous spacing doesn't require additional adjustment
A text set in a quality typeface sometimes needs manual adjustment: the standard word space in Guyot is clearly not enough for the of ‘i’ combination
White spaces in software
Most typically, only the standard and non-breaking spaces are available in non-professional software and web services. You can usually insert the additional characters via the clipboard anywhere Unicode is supported, but you have to check every time: for example, at the time of writing, Facebook allows additional space characters to be inserted in its input field, but automatically replaces them when posting.
As for the Web, the additional spaces are available as HTML special characters: using them makes the source code a bit more cluttered, but it lets you control the placement of each non-standard space. Note that different browsers may render the spacing differently, and not so long ago some of them even ignored additional spaces, replacing them with regular ones. You should check that additional spaces display correctly wherever you use them.
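As a small sketch of that approach (my own example), the snippet below emits numeric character references, so each non-standard space is explicit in the HTML source rather than an invisible character:

THIN_SPACE = "&#8201;"  # U+2009
HAIR_SPACE = "&#8202;"  # U+200A

def spaced_dash(left: str, right: str, space: str = THIN_SPACE) -> str:
    """Join two parts of a sentence with a dash set off by the chosen space."""
    return f"{left}{space}&mdash;{space}{right}"

print(spaced_dash("Space is a counterform", "it separates letters, words and lines"))
# -> Space is a counterform&#8201;&mdash;&#8201;it separates letters, words and lines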
Two industry standards for text formatting and typesetting, InDesign and QuarkXPress, support all kinds of spaces. Today, type designers usually include at least the thin and hair spaces. Their widths may vary from one typeface to another, but the typographer at least has more control over the word spacing.
In InDesign, an additional space not included in the typeface is still visible, but its width is defined by the software with no regard to what kind of typeface it is. For example, a hair space at 24 pt will be 1 pt wide, both in a display face with tight spacing and in a text face with loose spacing.
Spaces calculated this way are not always suitable for the task. Depending on the typeface, the additional space width suggested by InDesign can be insufficient or excessive. And if you export text with such spaces from InDesign to Figma, their widths will most likely change: each application may have its own algorithm for calculating these values.
Be vigilant and trust your eye: it is not mathematical values that matter, but a convincing, reasonable relationship between the black and the white.
These dashes are spaced by hair spaces provided by the typeface
These dashes are spaced by hair spaces provided by the typeface
The typefaces above have no hair space, therefore its width is set automatically
With the x-height and spacing of Arno Pro and RIA Text, InDesign's hair space is good enough, whereas in IBM Plex we should perhaps use a thin space instead of a hair one
Whitespace characters are among the most important typographic elements. Alongside sidebearings, they define the rhythm of the text and organise blocks of information. Disregard for white space can ruin the relations between its different kinds: between line spacing and word spacing, between word spacing and the column gap. In that case the reader can no longer track the line easily and has to put in additional effort. Unless this is your intended goal, you should always consider how the different sorts of white space work with each other.
Summary table
Non-breaking space
MacOS: Alt + Space
Windows: Alt+0160
Unicode: U+00A0
HTML: &nbsp;
InDesign: Type → Insert White Space → Nonbreaking Space, or Alt + Cmd + X
If you need a space of fixed width in justified text: Type → Insert White Space → Nonbreaking Space (Fixed Width)
Thin space
Unicode: U+2009
HTML: &thinsp;
InDesign: Type → Insert White Space → Thin Space, or Shift + Alt + Cmd + M
Thin non-breaking space (for the Web)
Unicode: U+202F
HTML: &#8239;
Em space
Unicode: U+2003
HTML: &emsp;
InDesign: Type → Insert White Space → Em Space
En space
Unicode: U+2002
HTML: &ensp;
InDesign: Type → Insert White Space → En Space
Third space
Unicode: U+2004
HTML: &#8196;
InDesign: Type → Insert White Space → Third Space
Quarter space
Unicode: U+2005
HTML: &#8197;
InDesign: Type → Insert White Space → Quarter Space
Sixth space
Unicode: U+2006
HTML: &#8198;
InDesign: Type → Insert White Space → Sixth Space
Hair space
Unicode: U+200A
HTML: &hairsp;
InDesign: Type → Insert White Space → Hair Space
Figure space
Unicode: U+2007
HTML: &#8199;
InDesign: Type → Insert White Space → Figure Space
Punctuation space
Unicode: U+2008
HTML: &#8200;
InDesign: Type → Insert White Space → Punctuation Space
Ransomware gangs turn to Shanya EXE packer to hide EDR killers
Bleeping Computer
www.bleepingcomputer.com
2025-12-09 00:00:05
Several ransomware groups have been spotted using a packer-as-a-service (PaaS) platform named Shanya to assist in EDR (endpoint detection and response) killing operations. [...]...
Multiple ransomware gangs are using a packer-as-a-service platform named Shanya to help them deploy payloads that disable endpoint detection and response solutions on victim systems.
Packer services provide cybercriminals with specialized tools to package their payloads in a way that obfuscates malicious code to evade detection by most known security tools and antivirus engines.
The Shanya packer operation emerged in late 2024 and has grown in popularity significantly, with malware samples using it being spotted in Tunisia, the UAE, Costa Rica, Nigeria, and Pakistan, as per telemetry data from Sophos Security.
Among the ransomware groups confirmed to have used it are Medusa, Qilin, Crytox, and Akira, with the latter using the packer service most often.
Shanya packer used in ransomware attacks
Source: Sophos
How Shanya works
Threat actors submit their malicious payloads to Shanya, and the service returns a “packed” version with a custom wrapper, using encryption and compression.
The service promotes the uniqueness of the resulting payloads, highlighting the “non-standard module loading into memory, wrapper over the system loader, stub uniqueization,” with “each customer receiving their own (relatively) unique stub with a unique encryption algorithm upon purchase.”
Junk code in the loader
Source: Sophos
The payload is inserted into a memory-mapped copy of the Windows DLL file ‘shell32.dll.’ This copy has valid-looking executable sections and size, and its path appears normal, but its header and .text section have been overwritten with the decrypted payload.
Although the payload is encrypted inside the packed file, it is decrypted and decompressed entirely in memory and then inserted into the ‘shell32.dll’ copy, never touching the disk.
Sophos researchers found that Shanya performs checks for endpoint detection and response (EDR) solutions by calling the ‘RtlDeleteFunctionTable’ function in an invalid context.
This triggers an unhandled exception or a crash when running under a user-mode debugger, disrupting automated analysis before full execution of the payload.
Disabling EDRs
Ransomware groups typically seek to disable EDR tools running on the target system before the data theft and encryption stages of the attack.
The execution usually occurs via DLL side-loading, combining a legitimate Windows executable such as ‘consent.exe’ with a Shanya-packed malicious DLL like msimg32.dll, version.dll, rtworkq.dll, or wmsgapi.dll.
According to the analysis from Sophos, the EDR killer drops two drivers: a legitimately signed ThrottleStop.sys (rwdrv.sys) from TechPowerUp, which contains a flaw enabling arbitrary kernel memory writing, and the unsigned hlpdrv.sys.
The signed driver is used for privilege escalation, while hlpdrv.sys disables security products based on commands received from user mode.
The user-mode component enumerates running processes and installed services, then compares the results against entries in an extensive hardcoded list, sending a “kill” command to the malicious kernel driver for each match.
Partial list of targeted services
Source: Sophos
Apart from ransomware operators focused on EDR disabling, Sophos has also observed recent ClickFix campaigns employing the Shanya service to package the CastleRAT malware.
Sophos notes that ransomware gangs often rely on packer services to prepare EDR killers for undetected deployment.
The researchers provide a detailed technical analysis of some of the payloads packed with Shanya.
Kroger acknowledges that its bet on robotics went too far
Kroger’s announcement on Tuesday that it will shutter three of its robotic e-commerce fulfillment facilities represents a sharp turnabout for the grocery company, which until recently had expressed confidence in its ability to leverage automation to run a profitable online grocery business.
Less than a year ago, Kroger said it planned to expand the fleet of high-tech fulfillment centers it has been developing in partnership with U.K.-based warehouse automation company Ocado. And in mid-2024, Kroger revealed that it would install new technology from Ocado to improve the efficiency of the warehouses.
When Kroger launched its partnership with Ocado, the company “believed in the relentless drive to innovate way ahead of the market in order to delight our customers and advance our position as one of America’s leading e-commerce companies,” former Kroger CEO Rodney McMullen said in a video about improvements to its equipment that the automation company announced last year.
However, Kroger’s projected confidence came even as it was questioning whether the Ocado network was living up to expectations.
Kroger revealed in September 2023 that it had decided to pause development of the Ocado project as it waited to see if sites it had already started operating would meet performance benchmarks.
In a further sign that its strategy was faltering, Kroger announced last March it would close three spoke facilities that worked in tandem with several of its robotic centers, with a spokesperson noting that the facilities “did not meet the benchmarks we set for success.”
By September 2025, it was clear that depending on automation as the foundation of a money-making grocery delivery business was probably not going to pan out for Kroger. Speaking during an earnings call, interim Kroger CEO Ron Sargent — who took over in March after McMullen’s sudden departure following an ethics probe — said the company would conduct a “full site-by-site analysis” of the Ocado network.
Sargent also said Kroger would refocus its e-commerce efforts on its fleet of more than 2,700 grocery supermarkets because it believed that its stores gave it a way to “reach new customer segments and expand rapid delivery capabilities without significant capital investments.”
Kroger said on Tuesday that its decision to close the three robotic facilities, along with other adjustments to its e-commerce operations, would provide a $400 million boost as it looks to improve e-commerce profitability. But the course-correction will be expensive, forcing Kroger to incur charges of about $2.6 billion.
Ken Fenyo, a former Kroger executive who now advises retailers on technology as managing partner of Pine Street Advisors, said the changes Kroger is making reflect the broader reality that grocery e-commerce has not reached the levels the industry had predicted when the COVID-19 pandemic supercharged digital sales five years ago.
Fenyo added that Kroger’s decision to locate the Ocado centers outside of cities turned out to be a key flaw.
“Ultimately those were hard places to make this model work,” said Fenyo. “You didn’t have enough people ordering, and you had a fair amount of distance to drive to get the orders to them. And so ultimately, these large centers were just not processing enough orders to pay for all that technology investment you had to make.”
With its automated fulfillment network, Kroger bet that consumers would be willing to trade delivery speed for sensible prices on grocery orders. That model has been highly successful for Ocado in the U.K., but U.S. consumers have shown they value speed of delivery, with companies like Instacart and DoorDash expanding rapidly in recent years and rolling out services like 30-minute delivery.
Fenyo pointed out that micro-fulfillment technology has also run into significant headwinds, adding that he thinks that outside of areas with large numbers of shoppers and high online ordering volume, putting automated order-assembly systems in stores probably doesn’t justify the cost.
Kroger’s decision to reduce its commitment to automation also poses a significant setback to Ocado, which has positioned its relationship with Kroger as a key endorsement of its warehouse automation technology. Shares in the U.K.-based robotics company have fallen dramatically and are now back to their level 15 years ago, when the company went public.
European Council President Warns US Not To Interfere in Europe’s Affairs
Portside
portside.org
2025-12-08 23:37:20
European Council President Warns US Not To Interfere in Europe’s Affairs
Donald Trump with the European Commission president, Ursula von der Leyen (third left), Emmanuel Macron (second left) and Giorgia Meloni (front), as well as Nato’s Mark Rutte and Ukraine’s Volodymyr Zelenskyy | Photo: Ukrainian Presidential Press Service/AFP/Getty Images
The president of the European Council of national leaders, António Costa, has warned Donald Trump’s administration against interfering in Europe’s affairs, as analysts said the US national security strategy represented a seismic shift in transatlantic relations.
Released on Friday, the policy paper claims Europe faces “civilisational erasure” because of migration and a censorious EU “undermining political liberty and sovereignty”. Confirming not just the Trump administration’s hostility to Europe but its ambition to weaken the bloc, it says the US will “cultivate resistance” in the bloc to “correct its current trajectory”.
Costa said the signal that Washington would back Europe’s nationalist parties was unacceptable. Speaking on Monday, he said there were longstanding differences with Trump on issues such as the climate crisis, but that the new strategy went “beyond that … What we cannot accept is the threat to interfere in European politics,” he said.
“Allies do not threaten to interfere in the domestic political choices of their allies,” the former Portuguese prime minister said. “The US cannot replace Europe in what its vision is of free expression … Europe must be sovereign.”
The strategy document was welcomed at the weekend by the Kremlin, which said it “corresponds in many ways to our vision”, while EU-US relations were strained further by a $120m (£90m) fine imposed by the EU on Elon Musk’s social media platform X.
Musk said on Sunday the bloc should be “abolished and sovereignty returned to individual countries”. The US deputy secretary of state, Christopher Landau, said the “unelected, undemocratic, and unrepresentative” EU was undermining US security.
Analysts said the document codified a US strategy first outlined by JD Vance at this year’s Munich Security Conference in a speech that accused EU leaders of suppressing free speech, failing to halt illegal migration and running from voters’ true beliefs.
“It transposes that doctrine into an officially backed state line,” said Nicolai von Ondarza, the head of European research at the German Institute for International and Security Affairs. “It really represents a fundamental shift in transatlantic relations.”
Von Ondarza said that in particular, “open US backing for regime change” in Europe meant that it was “really no longer possible for EU and national European leaders to deny that US strategy towards its European allies has radically changed”.
Max Bergmann, the director of the Europe, Russia, Eurasia programme at the Washington-based Center for Strategic and International Studies, said political meddling in Europe to back far-right nationalists was now “a core part of America’s national strategy”.
Bergmann added: “This isn’t just a speech from a novice vice-president weeks into a new term. It is US policy, and they will try to implement it.” Moreover, he said, it could work: “In a fragmented political landscape, a 1-2% shift can change elections.”
EU leaders “will have to confront the fact that the Trump administration is coming for them politically”, Bergmann said. “Do they just accept that Trump is funding their political downfall? Or does this begin to cause an incredible amount of friction?”
Mujtaba Rahman, of the Eurasia Group risk consultancy, agreed. “The US is now officially committed, alongside Moscow, to interfering in European electoral politics to promote nationalist and anti-EU parties of the far right,” he said.
He said that if the document was US policy, the first election Washington would try to influence would be Hungary’s parliamentary ballot in April next year, in which the nationalist, Moscow-friendly incumbent Viktor Orbán faces a stiff challenge.
Minna Ålander of the Center for European Policy Analysis said the policy document was “actually useful. It codifies in policy, in black and white, what has been evident all year long: Trump and his people are openly hostile to Europe.”
Europe’s leaders “cannot ignore or explain the fact away any more”, Ålander said. “Any hope for things to go back to the old normal looks increasingly ludicrous. Europe needs to finally seize the initiative and stop wasting time trying to manage Trump.”
Nathalie Tocci, the director of Italy’s Istituto Affari Internazionali, said Europeans had “lulled themselves into the belief” that Trump was “unpredictable and inconsistent, but ultimately manageable. This is reassuring, but wrong.”
The Trump administration had “a clear and consistent vision for Europe: one that prioritises US-Russia ties and seeks to divide and conquer the continent, with much of the dirty work carried out by nationalist, far-right European forces,” she said.
Those forces “share the nationalist and socially conservative views championed by Maga and are also working to divide Europe and hollow out the European project”, Tocci said, arguing that flattering Trump “will not save the transatlantic relationship”.
Germany’s spy chief, Sinan Selen, said on Monday he “would not draw from such a strategy document the conclusion that we should break with America”, and Jana Puglierin, a senior policy fellow at the European Council on Foreign Relations, stressed that Trump remained erratic and the document may not ultimately amount to much.
However, she said, the US clearly wanted to “redefine what Europe means, to Europeans”. The aim was to somehow establish that it is “us who are the aberration, that we have somehow forgotten our true values and heritage, and that European greatness therefore needs to be restored – with the help of ‘patriotic’ parties”, Puglierin said.
She said Europeans needed “to see the relationship much more pragmatically. Realise that endless flattery of Trump, promising to spend 5% of GDP on defence, or offering him breakfast with a king … is just not going to cut it.”
Von Ondarza said appeasement “has not worked on trade, it hasn’t worked on security, and it won’t prevent the US supporting Europe’s far right”. “The bloc needs to articulate a strong strategy of its own.” A summit later this month would be a “decisive test of Europe’s ability to say no” to the US, he said.
Prediction: AI will make formal verification go mainstream — Martin Kleppmann’s blog
Much has been said about the effects that AI will have on software development, but there is an
angle I haven’t seen talked about: I believe that AI will bring formal verification, which for
decades has been a bit of a fringe pursuit, into the software engineering mainstream.
Proof assistants and proof-oriented programming languages such as Rocq, Isabelle, Lean, F*, and Agda have been around for a long time. They make it possible to write a formal specification that some piece of code is supposed to satisfy, and then mathematically prove that the code always satisfies that spec (even on weird edge cases that you didn’t think of testing). These tools have been used to develop some large formally verified software systems, such as an operating system kernel, a C compiler, and a cryptographic protocol stack.
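To make the workflow concrete, here is a toy sketch in Lean 4 (my own example, unrelated to the projects above): a tiny implementation, a specification stated as a theorem, and a short proof script that the proof checker verifies.

-- A toy spec-and-proof pair in Lean 4 (illustrative only).
def maxOf (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Specification: the result is never smaller than the first argument.
theorem maxOf_ge_left (a b : Nat) : a ≤ maxOf a b := by
  unfold maxOf
  split
  · assumption           -- case a ≤ b: the hypothesis closes the goal
  · exact Nat.le_refl a  -- case ¬(a ≤ b): the goal reduces to a ≤ a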
At present, formal verification is mostly used by research projects, and it is uncommon for industrial software engineers to use formal methods (even those working on classic high-assurance software such as medical devices and aircraft). The reason is that writing those proofs is both very difficult (requiring PhD-level training) and very laborious.
For example, as of 2009, the formally verified seL4 microkernel consisted of 8,700 lines of C code, but proving it correct required 20 person-years and 200,000 lines of Isabelle code – or 23 lines of proof and half a person-day for every single line of implementation. Moreover, there are maybe a few hundred people in the world (wild guess) who know how to write such proofs, since it requires a lot of arcane knowledge about the proof system.
To put it in simple economic terms: for most systems, the expected cost of bugs is lower than the
expected cost of using the proof techniques that would eliminate those bugs. Part of the reason is
perhaps that bugs are a negative externality: it’s not the software developer who bears the cost of
the bugs, but the users. But even if the software developer were to bear the cost, formal
verification is simply very hard and expensive.
At least, that was the case until recently. Now, LLM-based coding assistants are getting pretty good not only at writing implementation code, but also at writing proof scripts in various languages. At present, a human with specialist expertise still has to guide the process, but it’s not hard to extrapolate and imagine that process becoming fully automated in the next few years. And when that happens, it will totally change the economics of formal verification.
If formal verification becomes vastly cheaper, then we can afford to verify much more software. But on top of that, AI also creates a need to formally verify more software: rather than having humans review AI-generated code, I’d much rather have the AI prove to me that the code it has generated is correct. If it can do that, I’ll take AI-generated code over handcrafted code (with all its artisanal bugs) any day!
In fact, I would argue that writing proof scripts is one of the best applications for LLMs. It
doesn’t matter if they hallucinate nonsense, because the proof checker will reject any invalid proof
and force the AI agent to retry. The proof checker is a small amount of code that is itself
verified, making it virtually impossible to sneak an invalid proof past the checker.
That doesn’t mean software will suddenly be bug-free. As the verification process itself becomes
automated, the challenge will move to correctly defining the specification: that is, how do you know
that the properties that were proved are actually the properties that you cared about? Reading and
writing such formal specifications still requires expertise and careful thought. But writing the
spec is vastly easier and quicker than writing the proof by hand, so this is progress.
I could also imagine AI agents helping with the process of writing the specifications, translating
between formal language and natural language. Here there is the potential for subtleties to be lost
in translation, but this seems like a manageable risk.
I find it exciting to think that we could just specify in a high-level, declarative way the
properties that we want some piece of code to have, and then to vibe code the implementation along
with a proof that it satisfies the specification. That would totally change the nature of software
development: we wouldn’t even need to bother looking at the AI-generated code any more, just like we
don’t bother looking at the machine code generated by a compiler.
In summary: 1. formal verification is about to become vastly cheaper; 2. AI-generated code needs
formal verification so that we can skip human review and still be sure that it works; 3. the
precision of formal verification counteracts the imprecise and probabilistic nature of LLMs. These
three things taken together mean formal verification is likely to go mainstream in the foreseeable
future. I suspect that soon the limiting factor will not be the technology, but the culture change
required for people to realise that formal methods have become viable in practice.
If you found this post useful, please support me on Patreon so that I can write more like it!
EU Court Rules That Apple Must Face Dutch Antitrust Lawsuit Regarding App Store Commission Rates
Daring Fireball
www.macrumors.com
2025-12-08 23:13:43
Juli Clover, writing at MacRumors (via a report at Reuters):
Apple could ultimately have to pay up to an estimated 637 million
euros to address the damage suffered by 14 million iPhone and iPad
users in the Netherlands.
That’s about €45/user.
The lawsuit dates back to 2022, when two Dutch ...
Apple is not going to be able to escape a class-action antitrust lawsuit over anticompetitive App Store fees in the Netherlands, the Court of Justice of the EU (CJEU) said today. The decision could see Apple facing millions of euros in damages, and it sets a precedent for similar lawsuits in other European countries (via Reuters).
Apple could ultimately have to pay up to an estimated 637 million euros to address the damage suffered by 14 million iPhone and iPad users in the Netherlands.
The lawsuit dates back to 2022, when two Dutch consumer foundations (Right to Consumer Justice and App Store Claims) accused Apple of abusing its dominant market position and charging developers excessive fees. The lawsuit was filed on behalf of Dutch iPhone and iPad users, and it claimed that Apple's 30 percent commission inflated prices for apps and in-app purchases.
Apple argued that the Dutch court did not have jurisdiction to hear the case because the EU App Store is run from Ireland, and therefore the claims should be litigated in Ireland. Apple said that if the Dutch court was able to hear the case, it could lead to fragmentation with multiple similar cases across the EU, plus it argued that customers in the Netherlands could have downloaded apps while in other EU member states.
The District Court of Amsterdam ended up asking the CJEU if it had the jurisdiction to hear the case, and the CJEU said yes. The court decided that the App Store in question was designed for the Dutch market, and it offers Dutch apps for sale to people with an Apple ID associated with the Netherlands, giving Dutch courts jurisdiction.
Apple told Reuters that it disagrees with the court's ruling, and that it will continue to vigorously defend itself. The District Court of Amsterdam expects to hear the case toward the end of the first quarter of 2026.
The civil App Store fee case that Apple is now facing in the Netherlands is separate from the dating app case that was levied against Apple by ACM, the Dutch competition authority. That case involved regulatory action that led to new alternative purchase options for Dutch dating apps. Apple has also been fighting that antitrust case, and racked up fines of 50 million euros.
It's ~2026 – ChatGPT still doesn't allow email change
Malicious VSCode extensions on Microsoft's registry drop infostealers
Bleeping Computer
www.bleepingcomputer.com
2025-12-08 22:30:19
Two malicious extensions on Microsoft's Visual Studio Code Marketplace infect developers' machines with information-stealing malware that can take screenshots, steal credentials, and hijack browser sessions. [...]...
Two malicious extensions on Microsoft's Visual Studio Code Marketplace infect developers' machines with information-stealing malware that can take screenshots, steal credentials and crypto wallets, and hijack browser sessions.
The marketplace hosts extensions for the popular VSCode integrated development environment (IDE) to extend functionality or add customization options.
The two malicious extensions, called Bitcoin Black and Codo AI, masquerade as a color theme and an AI assistant, respectively, and were published under the developer name 'BigBlack.'
At the time of writing, Codo AI was still present in the marketplace, although it counted fewer than 30 downloads. Bitcoin Black's counter showed only one install.
Codo AI on VSCode Market
Source: BleepingComputer.com
According to Koi Security, the Bitcoin Black malicious extension features a "*" activation event that executes on every VSCode action. It can also run PowerShell code, something that a theme does not need and should be a red flag.
In older versions, Bitcoin Black used a PowerShell script to download a password-protected archived payload, which created a visible PowerShell window and could have warned the user.
In more recent versions, though, the process switched to a batch script (bat.sh) that calls 'curl' to download a DLL file and an executable, and the activity occurs with the window hidden.
Malicious payload from bat.sh
Source: Koi Security
Idan Dardikman of Koi Security says that Codo AI has code assistance functionality via ChatGPT or DeepSeek, but also includes a malicious section.
Both extensions deliver a legitimate executable of the Lightshot screenshot tool and a malicious DLL file that is loaded via the DLL hijacking technique to deploy the infostealer under the name runtime.exe.
The malicious DLL is flagged as a threat by 29 out of the 72 antivirus engines on VirusTotal, the researcher notes in a report today.
The malware creates a directory called Evelyn in '%APPDATA%\Local\' to store the stolen data: details about running processes, clipboard content, WiFi credentials, system information, screenshots, and a list of installed programs.
Evelyn directory created to store stolen data
source: BleepingComputer
To steal cookies and hijack user sessions, the malware launches the Chrome and Edge browsers in headless mode and snatches the cookies they have stored.
The malware also steals cryptocurrency wallets such as Phantom, Metamask, and Exodus, and looks for passwords and credentials.
BleepingComputer has contacted Microsoft about the presence of the extensions in the marketplace, but a comment wasn't immediately available.
Malicious VS Code extensions have previously been pushed to platforms that distribute extensions for VS Code-based IDEs, such as OpenVSX and the Visual Studio Code Marketplace, with one of the most notable campaigns being Glassworm.
Developers can minimize the risks of malicious VSCode extensions by installing extensions only from reputable publishers.
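As a rough, unofficial illustration of the red flag described above, the sketch below scans the package.json manifests of locally installed extensions and reports any that declare the catch-all "*" activation event (the default extensions directory is an assumption; adjust it for your setup):

import json
from pathlib import Path

EXTENSIONS_DIR = Path.home() / ".vscode" / "extensions"  # default install location

def find_star_activation(extensions_dir: Path = EXTENSIONS_DIR) -> list[str]:
    """Return the names of installed extensions that activate on every VSCode action."""
    flagged = []
    for manifest in extensions_dir.glob("*/package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        if "*" in data.get("activationEvents", []):
            flagged.append(data.get("name", manifest.parent.name))
    return flagged

for name in find_star_activation():
    print(f"Extension activates on every action: {name}")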
Show HN: I built a system for active note-taking in regular meetings like 1-1s
A California SNAP benefits shopper pushes a cart through a supermarket in Bellflower, Calif., Feb. 13, 2023. | Allison Dinner/AP
Nutrition policy expert Marion Nestle says that when she wrote her first book, Food Politics, in 2002, people often asked her what food had to do with politics.
"Nobody asks me that anymore," Nestle says. "When I look at what's happening with food assistance I'm just stunned."
Nestle says the Trump administration's efforts to withhold SNAP benefits from millions of Americans have made clear how fragile our economy is: "We have 42 million people in this country — 16 million of them children — who can't rely on a consistent source of food from day to day and have to depend on a government program that provides them with benefits that really don't cover their food needs, only cover part of their food needs."
Decades of studying the food industry have given Nestle a clear-eyed view of why food has become difficult to afford — including the ways supermarkets contribute to the problem. "The purpose of a supermarket is to sell as much food as possible to as many people as possible, as often as possible, at as high a price as they can get away with," she says.
Nestle's 2006 book, What to Eat, became a consumer bible of sorts when it came out, guiding readers through the supermarket while exposing how industry marketing and policy steer our food choices. Now, two decades later, she's back with What to Eat Now, a revised field guide for the supermarket of 2025.
Nestle recommends what she called a "triple duty" diet aimed at preventing hunger, obesity and climate change: "Eat real food, processed as little as possible, with a big emphasis on plants," she says.
The more products you see, the more you're likely to buy. Therefore, the products that are organized so that you cannot miss them are in prime supermarket real estate. And companies pay the supermarkets to place their products at eye level, at the ends of aisles — those have a special name, end caps — and at the cash register. When you see products at the cash register, they're paying fees to the supermarket by the inch of space. And that's how supermarkets make a lot of their money, is through slotting fees. And, of course, what this does is it keeps small producers out, because they can't afford to make those kinds of payments. ... I mean, we're talking about thousands, or in some cases, hundreds of thousands of dollars. And every single product that is in a supermarket is placed where it is for a reason.
On how dollar stores got into the food business
They started out by selling the most popular ultra-processed foods. ... They're going to have chips. They're going to have sugar-sweetened cereals. They're going to have every junk food you could possibly think of. That's what they make their money off of. They will have a few fruits and vegetables, a few sad bananas, a few sad apples, maybe some pears, maybe some green vegetables, but not very many, and they'll be in a case off somewhere because they have to offer those. Because they're taking SNAP benefits, they're required to meet the stocking requirements of the SNAP program, which requires them to have a certain number of fruits and vegetables. … And [dollar stores are] just everywhere. And during the pandemic, particularly, they just proliferated like mad, and they undercut local stores. They're cheaper. They have poorer quality food, but the prices are lower. Price is an enormous issue.
If you want a Trader Joe's or a Whole Foods or a Wegmans in your neighborhood, you've got to have hundreds of thousands of people within walking distance or quick driving distance who make very, very good incomes or they aren't gonna go there. They're going to close the stores that are not performing well, meaning having lots and lots of people spending lots and lots of money at them. And so as the big grocery stores have closed in inner city neighborhoods, the dollar stores moved in.
On food waste in America
Our food system in the United States produces 4,000 calories a day for every man, woman and little tiny baby in the country. That's roughly twice what the population needs on average. So waste is built into the system.
Because that's how the subsidies work. The agricultural subsidies encourage food producers to produce as much food as possible because they get paid for the amount of food that they produce.
On initially agreeing with Robert F. Kennedy Jr.'s "Make America Healthy Again" approach to the food industry
I was very hopeful when he was appointed, because he was talking about, let's get the toxins out of the food supply. Let's make America healthy again. Let's make America's kids healthy again. Let's do something about ultra-processed foods. Let's do something about mercury and fish. And a lot of other issues that I thought, "Oh, how absolutely terrific that we're going to have somebody who cares about the same kind of issues I do. This is very exciting."
When President Trump introduced his nomination of Robert F. Kennedy Jr. on social media, he talked about the food industrial complex. I nearly fell off my chair! I thought, "Here's the president sounding just like me. What's going on here?" So then we had the first MAHA report, the first Make America Healthy Again report, which talked about a lot of these issues and put in an aspirational agenda. "We're going to work on this, this and this" — all of that sounded terrific. And then the second report came out and they had backed off on nearly all of the things that I thought were really critically important.
On why she believes the food system needs a revolution
I think it would start with transforming our agricultural production system to one that was focused on food for people instead of animals and automobiles. We would need to change our electoral system so that we could elect officials who were interested in public health rather than corporate health. We would need to fix our economy so that Wall Street favors corporations who have social values and public health values as part of their corporate mission. Those are revolutionary concepts at this point because they seem so far from what is attainable. But I think if we don't work on that now, if we do not do what we can to advocate for a better food system, we won't get it. And it's only if we advocate for it that we have a chance of getting it. And you never know, sometimes you get lucky. …
I tell people that they can't do it on their own, that even the act of going into a grocery store and trying to make healthy choices means that you, as an individual, are up against an entire food system that is aimed at getting you to eat the most profitable foods possible, regardless of their effects on health and the environment. So you have to join organizations. You have to join with other people who are interested in the same issues and concerned about the same problems and get together with them to set some goals for what you'd like to do and then work towards those goals. Because if you don't do it, who will?
Therese Madden and Anna Bauman produced and edited this interview for broadcast. Bridget Bentz, Molly Seavy-Nesper and Meghan Sullivan adapted it for the web.
The price of copper has reached new record highs due to supply constraints. And while the International Energy Agency expects global copper production to reach an all-time high later this decade, it also warns that by 2035 the world will face a whopping 10 million ton shortfall. Australian mining giant BHP also estimates that the world will produce 15% less copper in 2035 than it does today, as copper discoveries grind to a screeching halt and existing mines deplete. The signs of an imminent peak and decline in copper production could not be any clearer.
The price of copper reached new record highs this week, exceeding $11,600 per ton on the London Metal Exchange (LME) on Friday. Ostensibly this was due to a large withdrawal of the metal from LME warehouses earlier this week, but if you look at the
long term trend
, there is clearly much more at play here. The price of copper has been trending upwards for decades now. Global financial giant UBS has just
raised its price forecasts aggressively
, predicting that copper will cost $13,000 per ton by December 2026. What is going on here?
Simply put, we are on a collision course between tightening global copper supply and demand fueled by electrification and, most recently, AI data centers. Copper is an essential component in everything electric due to its high heat and electrical conductivity, surpassed only by silver. Copper wires can be found in everything from power generation, transmission, and distribution systems to electronics circuitry, telecommunications, and numerous types of electrical equipment—
consuming half of all mined copper
. The red metal and its alloys are also vitally important in water storage and treatment facilities—as well as in plumbing and piping—as it kills fungi, viruses and bacteria upon contact and conducts heat very efficiently. Thanks to its corrosion resistance and biostatic characteristics, copper is also widely used in marine applications and construction, as well as for coinage.
Growth in copper demand thus comes from both ‘traditional’ economic growth—especially in the Global South—and the energy supply
addition
from “renewables”. (Not to mention the extra demand from EVs and data centers, or energy efficiency and conservation measures, such as smart grids, LED lighting, and heat pumps.) The problem is that the generation and transmission of low-carbon electricity requires more copper per megawatt than conventional fossil fuel power plants. Offshore wind farms, for example, take around 11 tonnes of copper per megawatt to build—that is over 5 times as much as gas-fired power plants. Onshore wind and solar are also more copper-intensive, at around 1.7 and 1.4 times, respectively. In addition,
the capacity factors of wind and solar power
are also much lower than those of fossil power. This means that we need to install 5-10 times more renewable power capacity just to generate the same amount of electricity we used to get from natural gas or coal. Together with the necessary grid extensions, batteries, transmission lines, transformers, etc., the copper demand raised by “renewables” will be orders of magnitude greater than that of traditional, but highly polluting, fossil fuel generation.
On the traditional economic growth front, demand can also be expected to grow dramatically. Perhaps it comes as no surprise that China continues to be the world’s largest consumer of copper with its massive industrial output—accounting for nearly
60% of global copper consumption
, and dwarfing the US, in second place at 6%. Looking ahead, though, India can be expected to rapidly overtake the United States to become the third-largest source of refined copper demand, with Vietnam also emerging as a major contender for copper. Industrialization, infrastructure development, population expansion, urbanization and the relocation of plants out of China are all driving forces for the growth in refined copper consumption in these regions. So, even as China’s economy matures and Western industries decline, there are a number of nations with an insatiable demand to take up the slack. No wonder UBS expects global copper demand to grow by 2.8% annually through 2026 and beyond. Australian mining giant
BHP
’s estimates are not much smaller either:
“Putting all these levers together, we project global copper demand to grow by around 70% to over 50 Mt per annum by 2050 – an average growth rate of 2% per year.”
Problem is, copper doesn’t grow on trees. It can only be found in certain geological formations, taking millions of years to form. In other words: it’s a finite, non-renewable resource. Humans have used copper for over 11,000 years, and as usual we went after the easiest-to-find and easiest-to-extract deposits first. Naturally, when all you have is a pickax and a basket, you don’t start to build large open pit mines. Our ancestors thus went after copper nuggets found in riverbeds first, collecting lumps with 35% copper content, or perhaps climbed a little uphill and hammered away at rocks with a still considerable amount of metal in them. Then, only when these resources were depleted, did they start to dig caves and build underground mines, following thin seams of copper in the rock.
Today there is very little—if any—copper left in the world that could be mined using artisanal techniques. As we ran out of those easy-to-find, easy-to-get ores with a high metal content, we increasingly had to rely on machines to haul away the mountains of rock overburden and to bring up copper ores with an ever lower metal content. And thus we face a predicament: what shall we do when there are no more easy-to-get copper resources to be found? See, what little is discovered today lies beneath miles of rock or in the middle of a jungle, and takes more and more money, energy and resources to get. The chart below tells it all:
The decline in copper discoveries is visible.
Source
As you can see, finding more copper is not an issue of price. Only 14 of the 239 new copper deposits discovered between 1990 and 2023 were found in the past 10 years. Even though the industry would be willing to pay top dollar for each pound of metal delivered, there is simply not much more to be found. Copper-bearing formations do not pop up at random, and there is no point in drilling various spots on Earth prospecting for deposits, either. The major formations have already been discovered, and thus the ever increasing investment spent on locating more copper simply does not produce a return.
And this is where our dreams and desires diverge from material reality.
Despite rising copper prices, exploration budgets have remained below their early 2010s peaks, further reducing the possibility of finding new deposits. Companies have been prioritizing extending existing mines rather than searching for new ones, with early-stage exploration dropping to just 28% of budgets in 2023. Copper mining is an extremely dirty, water-intensive and polluting business. No wonder local communities are doing everything they can to avoid another mine being built next to their village—further reducing the options for extending supply. Global copper reserves were approximately
one billion metric tonnes
as of 2023, and due to the reasons listed above, this figure cannot be expected to grow much larger—unlike demand.
Mining truck. Notice how many ladder-steps you need to take to get into the cabin.
Source
According to
this study
a full transition to an alternative energy system—powered entirely by a combination of “renewables”, nuclear and hydro—would require us to mine 4,575 million tons of copper; some four and a half times the amount we have located so far. To say that we have embarked on a “mission impossible” seems to be an understatement here. Even if we could extract every ounce of copper in the ground in the coming decades, we could only replace 22% of our ageing fossil fuel energy system with an all-electric one, and would then be left wondering what to do with the remaining 78%… We clearly have a serious math problem here. And this is not just a vague theoretical issue to be solved sometime in the future. As discoveries grind to a screeching halt and existing mines slowly deplete, suppliers of copper will find it increasingly hard to keep pace with growing demand in the years ahead.
At first, falling inventories and
persistent supply risks
will keep market conditions extremely tight. This is where we are at the moment. Persistent mine disruptions, like an accident in Indonesia, a slower-than-expected output recovery in Chile and recurring protests affecting operations in Peru are already putting strains on supply. No wonder UBS has trimmed its refined copper production growth estimates to just 1.2% for 2025… It gets worse, though: tariffs, trade and geopolitical uncertainties, together with droughts, landslides and other climate change related concerns are all threatening to worsen the copper supply outlook in the years ahead. The balance between supply and demand is already very fragile, and can be expected to become feebler still. Hence the price rally we see unfolding.
Copper demand outlook and supply. Notice the increasing shortfall as years progress. Source:
EIA
In the medium term, however, we are facing an even bigger issue. We are rapidly approaching an inflection point, where mined copper supply begins to fall—irrespective of demand. Even as global mined copper output reached a record 22.8 million tons in 2024,
the IEA expects global supply to peak later this decade (at around 24 Mt) before falling noticeably to less than 19 Mt by 2035
, as ore grades decline, reserves become depleted and mines are retired. Despite the potential contribution from African copper, new greenfield supply will struggle to make up the difference, as it takes
17 years
on average from discovery until a mine starts production, and as new mines cost more and more to open. In Latin America, average brownfield projects now require 65% higher capital investments compared to 2020, approaching similar levels to greenfield projects. Starting a mine from scratch, on the other hand, is getting even more challenging, experiencing delays and facing long lead times. Major copper projects including Oyu Tolgoi (Mongolia) and Quebrada Blanca 2 (Chile) have experienced significant delays and cost overruns.
Simply put, we have run out of time, capital, resources and energy to prevent a massive shortfall in copper production by 2030.
“The trend of declining copper head grades is well established and unlikely to be reversed,”
says consultancy firm McKinsey in its
research
. Referring to the metal content of mined ore going into a mill for processing, researchers at McKinsey pointed out the crux of the predicament. As we have dug out all the high-grade ores, what’s left requires increasingly energy-intensive and complex methods to extract. BHP, a world-leading Australian multinational mining company, found that the average grade of copper ore has declined by 40% since 1991.
Needless to say, this process had—and continues to have—profound implications. Instead of bringing rocks with 1% copper content to the surface (which is already bad enough in and of itself), miners now have to haul 0.6% grade ores on their trucks. This worsening trend puts excess strain on both the shoveling (excavator and dumper) fleet and on the ore mill itself. To give an example, imagine you are driving a dumper truck capable of hauling 100 tons of crushed rock from the mining pit. In 1991, each load you emptied into the ore mill contained 1 metric ton of pure copper, waiting to be liberated. Three decades later, the same truckload of ore contained just 600 kg (or 1,322 pounds) of the red metal. Needless to say, your truck didn’t consume less diesel fuel and fewer spare parts just because your mine has run out of high grade ores: you still had to haul 100 tons of rock in each round.
However, as years passed, you had to drive into an ever deeper mine, descending further for the same amount of rock, while burning through untold gallons of ever costlier fuel. Meanwhile, the mill had to crush this ever lower grade ore into an ever finer dust (1), in order to liberate the ever smaller particles of copper. What’s more, as the McKinsey report points out, less capital-intensive oxide ore bodies are now being exhausted across the globe, leaving miners with more labor- and energy-intensive sulfide ores (2). Is it any wonder then that the production costs of copper mining just keep rising and rising?
U.S. Bureau of Labor Statistics, Producer Price Index by Commodity: Special Indexes: Copper and Copper Products,
retrieved from FRED
, Federal Reserve Bank of St. Louis; December 5, 2025.
This is indeed a perfect storm for copper mining.
Falling ore grades, leading to extra fuel and electricity demand in hauling and milling copper-bearing rocks. The depletion of copper oxide mines, and their replacement with sulfide deposits requiring extra equipment and tons of energy to process. A rapidly decreasing rate of new resource discoveries, combined with increased capital costs and complexity for expansions and new projects—all deterring investment. Increased flooding and drought risk, threatening extraction both in tropical humid climates and in the deserts of the Andes. Trade wars, tariffs, regulations, geopolitical tensions… Is it any wonder then that BHP has come to the same conclusion as the IEA, showing us a nice little graph depicting that which must never be named:
peak copper
. Even BHP, for whom copper mining is one of its primary sources of revenue, estimates that existing mines will be producing around 15% less copper in 2035 than they do today, leading to a whopping 10 million ton shortfall in mined copper compared to demand.
That, my friends, is a mighty big peak and decline - but don’t call it that, please. Source:
BHP
Not that this was not foreseen. The idea of peak copper, or the time when annual copper output reaches its all-time high and then begins to decline, is not new. The math is not terribly complicated, and so it was done more than ten years ago already. Although scientists at Monash University (Melbourne, Australia) somewhat overestimated peak production (putting it at around 27 Mt in 2030), they came pretty close. As things stand today, we will most likely reach peak mined copper supply somewhere between now and 2030, at 24-25 million tons per annum. And all this comes on top of humanity reaching
peak liquid fuel supply
around the same time—isn’t it ironic…?
Global copper production by countries and regions as modelled by GeRS-DeMo in dynamic demand mode.
Source
Almost all of the articles and studies referenced in this essay refer to a “wide variety of supply- and demand-side measures” needed to close the gap left behind by peak copper. Measures include: “stimulating investment in new mines, material efficiency, substitution and scaling up recycling.” If you’ve read this long, for which I’m eternally grateful, allow me to be a little blunt here, and let me call this what it is: BS.
First, we “must” build out a new energy system before we run out of coal, oil and gas—let alone before we could start recycling old electric vehicles, wind turbines and the rest (3). Electricity currently provides 22% of our energy needs; the rest, especially in heavy industry, comes from burning fossil fuels. (Which, by the way, is a show-stopper on its own, as electricity cannot replace these fuels at scale,
especially in high heat applications
needed to build solar panels and wind turbines and, yes, to refine copper.) Knowing the amount of copper reserves, and the lamentable, sorry state of discoveries, it is utterly inconceivable that we could build out even half of the necessary “renewable” power generation capacity before we completely run out of the red metal, even if we turned every scrap metal yard upside down and inside out.
Most of the copper in circulation is already serving the needs of electrification, or is used in applications where copper’s antimicrobial and heat-conducting properties are essential. The lifecycle of these products is measured in years and decades; BHP assessed that the average life of copper in use is around 20 years. So at best we could recycle what we manufactured around 2005, when global copper production was half of what we see today… What’s worse, much of this old scrap is never recovered. According to BHP’s estimate, only 43% of available scrap was collected and recovered for re-use in 2021, falling to 40% in 2023 as “lower prices, slowing economic activity and regulatory changes acted as headwinds.” And we haven’t even talked about the rise of “scrap nationalism, aiming to preserve the local use of secondary material, and placing restrictions on cross-regional waste trade.” Let’s be realistic: recycling won’t be able to fill the gap. At best, it can slow down the decline… Somewhat.
Have you thought about how aluminum is made? Well, by driving immense electric currents through carbon anodes made from petroleum coke (or coal-tar pitch) to turn molten alumina into pure metal via electrolysis. Two things to notice here. First, the necessary electricity (and the anodes) is usually produced with fossil fuels, as “renewables” cannot provide the stable current and carbon atoms needed to make the process possible. Second, all that electricity, even if you generate it with nuclear reactors, has to be delivered via copper wires. And this takes us to our next saving grace:
substitution
, referring to the replacement of copper by other materials, such as aluminum, plastics, or fiber optics.
Substitution and thrifting (the reduction of copper content or usage in products), on the other hand, “would require significant design modifications, product line alterations, investment in new equipment, and worker retraining.” Since copper has some unique advantages, making it difficult to substitute or thrift in many end-uses, this is far easier said than done. Let’s take conductivity. The biggest loss by far in any (and every) piece of electric equipment is the waste heat generated by the internal resistance of wires and the myriad of electric components. It’s not hard to see why replacing copper with lower quality materials (like aluminum) in wires and other critical components comes with a serious drop in performance — if it’s technically possible at all. Except for high voltage cables hanging in the air from tall poles, it’s hard to think of any application where the excess heat generated by electrical resistance would not damage the system to the point of catching fire, or degrade its performance considerably.
So when copper prices rise beyond the point of affordability, we won’t see a significant increase in substitution or thrifting activities. Instead, financially weaker companies will go bankrupt, markets will consolidate and consumers will be priced out en masse. Just like with oil, we will face an affordability crisis here, first driving prices sky-high—only to see them plunge to new depths as demand disappears. Peak copper and peak oil will hit us like a tsunami, amplifying each other through many feedback loops. (Just think about diesel use in copper mines, or copper use in energy production.) Despite the many warnings, we are going into this storm completely unprepared, and have done practically nothing to prevent the waves crashing over us.
The window of material opportunities to maintain—let alone grow—this six-continent industrial civilization is closing fast. Not 500 years from now, but starting today and slamming shut ever faster during the decades ahead, as all economically viable reserves of fossil fuels and copper run out. This is a geological reality, not something you can turn around with fusion, solar, or whatever energy source you fancy. We have hit material and ecological limits to growth, and mining in space is not even on the horizon. Trying to switch to “renewables” or building AI data centers so late in the game is thus not only technically infeasible but ill-advised, accelerating resource depletion even further and bringing about collapse even faster. Instead of hoping that technology will somehow save us, we
immediately
need to start working on and implementing
a ramp-down plan
on the highest governance level, before the chaos over “who gets to use the last remaining resources on Earth” engulfs us all.
Until next time,
B
Thank you for reading The Honest Sorcerer. If you value this article or any others please share and consider a subscription, or perhaps buying a virtual coffee. At the same time allow me to express my eternal gratitude to those who already support my work — without you this site could not exist.
(1) The lower the grade (metal content) of an ore, the smaller the grains of copper entrapped within the rock are. Smaller grains mean a more homogeneous structure, resulting in harder rocks, requiring more energy to crush… Now, combine this with the fact that we would need to mill those rocks into ever smaller pieces—to free up those tiny copper particles—and you start to see how energy consumption runs rampant as ore grades decline.
(2) The difference lies in what comes after milling copper ore into a powder. You see, copper in oxides is soluble, allowing direct extraction through leaching. During this process dilute sulfuric acid is percolated through crushed ore piled on impermeable pads, dissolving copper into a solution which is collected, then purified via Solvent Extraction (SX) and recovered as pure metal by Electrowinning (EW). The
wide-spread adoption
of this leach - solvent extraction - electrowinning (SxEw) process from the mid-1980s unlocked previously uneconomic, low-grade oxide ores, and it now accounts for 20% of mine supply. However, it cannot be used on copper sulfide ores, which require more complex and energy-intensive physical separation methods. This type of extraction involves froth flotation after fine grinding, followed by roasting, then smelting (to form a copper-iron sulfide matte), and converting (removing iron to get blister copper)—all done at a vastly greater energy, equipment and labor cost.
(3) Many parts and components built into wind turbines, solar panels and electric vehicles are not designed with recycling in mind. In fact, the industry tends to cram as many features into one part as it can, in order to reduce assembly costs. This approach often results in parts with monstrous complexity, permanently gluing and welding sub-components made from various materials into one, with plastic often injection molded around them. Put more simply: they are nearly impossible to recycle, and due to their complexity, need skilled manpower to disassemble first, before the excess plastic can be burned off or dissolved in aggressive solvents. Toxic waste (fumes and liquids) is often generated during this process, not to mention the need for excess energy and the complicated logistics network involved in performing this feat. This is why recycling companies tend not to bother with electronic components and dump faulty parts on poor countries in South Asia and Africa.
Reddit to comply with Australia’s ‘legally erroneous’ under-16 social media ban
Guardian
www.theguardian.com
2025-12-08 21:40:28
Platform to introduce age-prediction model analysing users but argued to eSafety commissioner it was a source of information not a social media platformFollow our Australia news live blog for latest updatesGet our breaking news email, free app or daily news podcastReddit will comply with Australia’s...
Reddit will comply with Australia’s under-16s social media ban, due to begin on Wednesday, but says it is “legally erroneous” and “arbitrary” in its effect.
The company argued to the eSafety commissioner that its platform was a source of information, not primarily social media.
Documents obtained by Guardian Australia reveal the company said it was “not centred around real-time social networking among young people”, but rather a “pseudonymous platform organised around sharing information”.
Reddit announced on Tuesday, one day before the ban was due to commence, that it would comply with the law. But
in a post
on the platform confirming the decision, it also outlined its objections.
New users in Australia will be required to provide their birth date on signup, and existing account holders will go through an age-prediction model, Reddit said.
“We’ll start predicting whether users in Australia may be under 16 and will ask them to verify they’re old enough to use Reddit,” the site said. “We’ll do this through a new privacy-preserving model designed to better help us protect young users from both holding accounts and accessing adult content before they’re old enough.
“If you’re predicted to be under 16, you’ll have an opportunity to appeal and verify your age.”
Reddit described the under 16s ban as “legally erroneous” and “arbitrary” in the post.
Documents obtained by Guardian Australia under freedom of information laws include a September letter from Reddit to eSafety in response to the regulator’s initial contact with Reddit to ask whether it believed the ban should apply to the platform.
In the letter the company argued it was not a social media platform as defined in the law.
“The sole or significant purpose of our platform is to provide knowledge-sharing in timely, context-rich conversations; interaction between end-users is simply an incidental step to enabling this primary purpose,” Reddit said in the letter.
Reddit is a “pseudonymous platform organised around sharing information in topic-based communities rather than personal profiles or social networks,” the platform said.
“It is not in keeping with Reddit norms for users to use their real names or identities on Reddit, as communities are not centred around real-time social networking among young people.”
Reddit does not promote real-time presence, friend requests or activity feeds that drive ongoing engagement, the company said. It said it was committed to collecting minimal personal information from users to preserve pseudonymity on the platform.
The platform pointed to the r/BabyBumpsandBeyondAu and r/AusSkincare subreddits as examples where Australians sought advice or product information.
“People also use the Reddit platform because it serves as the internet’s host of record on a range of sensitive topics, enabled entirely by its pseudonymous nature,” Reddit said, pointing to subreddits such as r/stopdrinking.
“These discussions highlighted to us that the Reddit platform enables knowledge to be sought, distributed, and discussed by the community,” Reddit said.
The Australian Financial Review reported on Tuesday the platform was preparing to launch legal action against the ban, but the company had not confirmed this as of Tuesday morning.
Following Reddit’s announcement, X is the only platform of the 10 initially named by eSafety as needing to ban under-16s users in Australia that has yet to state whether it will comply. The company has not responded to requests for comment. Its
Australian regulation page stated
“anyone above the age of 13 can sign up for a service”.
Delivery Robots Take over Chicago Sidewalks, Sparking Debate and a Petition
LAKEVIEW — The robot revolution is here — on North Side sidewalks, at least.
With names like Stacey, Quincy and Rajesh, the boxy food delivery robots are regularly zooming down side streets — and occasionally getting stuck in the snow — to deliver Shake Shack or Taco Bell to eager patrons in Lakeview, Lincoln Park and Uptown, among other neighborhoods. They’re adorable to some, a safety hazard to others and impossible to ignore for most.
The buzzing bots are causing a stir in person and
online
. In neighborhood Facebook groups, they’ve found fervent support and fierce opposition, while a passionate contingent of neighbors have banded together to oust them from the city altogether.
Josh Robertson is leading that charge. The Lincoln Park resident
has launched a petition
calling for the city to hit pause on the robots, arguing, “Chicago sidewalks are for people, not delivery robots.”
The petition asks the city’s transportation and business departments to “release safety & ADA findings, evaluate that data and local job impacts in a public hearing, and set clear rules” for the robots. As of Dec. 2, more than 1,500 people have signed the petition, 350 of whom included an “incident report” describing their interactions with the robots.
Robertson said he first noticed the robots in his neighborhood earlier this year and thought they were “kind of neat … . It felt futuristic.”
That changed when he went for a walk with his young children and a robot approached on the sidewalk, he said.
“This is a vehicle in the pedestrian path space that’s meant for people, and yet we ended up stepping aside, and something about that felt a little off,” Robertson said. “I began to wonder, what are our sidewalks going to be like if these programs are successful from the company’s point of view, and they continue to scale, and there are dozens and dozens of them on our sidewalks, even on quiet residential sidewalks?”
People walk around a Serve Delivery Robot as it rides along Damen Avenue in Bucktown on Dec. 6, 2025.
Credit:
Colin Boyle/Block Club Chicago
That’s a question many Chicagoans — including some alderpeople — are asking. The offices of
Alds. Angela Clay
(46th) and
Bennett Lawson
(44th) have sent out surveys to neighbors in their respective wards asking them to describe their experiences and concerns with the bots and whether they support or oppose their presence.
“That’s the part that I wish would have happened prior to us implementing this,” said Gaby Rodriguez, of Uptown. “I at least want some control over my sidewalk. I can’t control anything else in this environment, but I can certainly have a voice in what we allow on our sidewalks.”
In a statement, Lawson said his office launched the survey after seeing more robots in Lakeview. The feedback “will help inform our conversations with city departments, operators and others about the future use of delivery robots in our neighborhood and around the city,” he said.
The delivery robot pilot program
launched in Chicago in 2022
under then-Mayor Lori Lightfoot, and a few companies now operate app-based robots in the city. Coco rolled out last year in the 27th and 34th wards, which include parts of the Loop, Near North Side, West Loop, Near West Side, West Town and West Humboldt Park. The company recently partnered with burger chain Shake Shack.
Serve Robotics, used by UberEats and other food delivery apps, expanded to Chicago in late September. Serve rolled out in partnership with more than 100 restaurants in 14 Chicago neighborhoods, including East Garfield Park, Logan Square and Belmont Cragin.
“About half of all food deliveries globally are shorter than 2 and a half miles, which basically means that all of our cities are filled with burrito taxis,” said Viggy Ram, Serve’s vice president of policy. “This is really an effort to make short-distance delivery safer, more sustainable and reduce congestion overall.”
Serve is aware of the Chicago petition and welcomes the feedback, good and bad, in hopes the company can serve the city as best as it can, Ram said. The company’s goal is to expand into more neighborhoods, he said.
Each bot has a “contact us” label for those who want to offer feedback, Ram said.
“Unlike a distracted driver, they are able to look in all four directions at the same time and make the smartest, safest decision possible,” Ram said. “We see this as a much safer option for Chicago’s residents and for pedestrians.”
In a written statement, a representative for Coco said the company “takes safety and community partnership as our top priorities. We have been operating in Chicago for a year with strong community support. We maintain strict protocols for sidewalk safety, ADA compliance and incident response.”
A fleet of Serve Delivery Robot robots are deployed in the 2500 block of North Lincoln Avenue in Lincoln Park on Nov. 24, 2025.
Credit:
Colin Boyle/Block Club Chicago
Some residents have come to the defense of the delivery robots, and even taken a liking to them. One Lakeview neighbor noted a Serve bot gave her “a friendly beep.”
“They are starting to know who are fans,” she said.
Rodriguez thinks that's intentional: the cutesy design and human-sounding names of the robots distract from what he said are the real issues of accessibility and functionality, particularly for neighbors with disabilities.
Rodriguez argues that the companies have parachuted into Chicago communities without an understanding of, or a desire to learn, neighborhoods’ specific needs, and he worries that while residents currently have the option to use the delivery bot services, they may not have a choice in the future.
“What other corporations are we going to allow on our sidewalks? That’s the last place that was meant to be human-centric, right?” Rodriguez said. “I don’t want to lose that access and give open road for corporations to now start using our sidewalks, which they haven’t in the past.”
A Serve Delivery Robot rides along Damen Avenue in Bucktown on Dec. 6, 2025.
Credit:
Colin Boyle/Block Club Chicago
Rodriguez said he recently called Clay’s office to vent his concerns, and the alderwoman seemed unaware of the quantity and reach of the delivery robots.
Maria Barnes, the 46th ward’s business liaison, confirmed Clay’s office has fielded many concerns from neighbors, though it’s too early to make any conclusions or recommendations based on the survey results, she said.
“We’re still getting feedback from constituents, as well as the companies that are operating these devices. It’s ongoing, so at this point it’s a little premature to form any opinions,” Barnes said.
Robertson shares Rodriguez’s concerns, pointing to incident reports of the robots pushing neighbors off the sidewalks onto busy streets, colliding with bicyclists and even deterring emergency vehicles.
Becca Girsch, executive director of the Lakeview/Roscoe Village Chamber of Commerce, said the organization hasn’t spoken directly with Coco or Serve Robotics but has taken note of the polarizing reaction in recent weeks as Robertson’s petition continues to garner signatures.
“It’s hard not to pay attention,” Girsch said. “It seems like the winds are against the robots. I think right now we’re hearing mostly the negative. So if that’s the trend, then I’m not sure how long this pilot program will sustain itself.”
Robertson said he’s hopeful for a future where, even if the delivery bots continue to be used in the city, they’re better implemented into residents’ lives.
“The fact that they’re responding to this quickly tells me that Chicagoans’ voices are beginning to be heard on this issue,” Robertson said. “These are suddenly such a visible and such a big part of our public lives that we need to make sure that the right level of attention and the right discussions are happening, to weigh the pros and the cons and again, to ultimately ask, what kind of neighborhoods do we want to build?”
FinCEN says ransomware gangs extorted over $2.1B from 2022 to 2024
Bleeping Computer
www.bleepingcomputer.com
2025-12-08 21:07:30
A new report by the Financial Crimes Enforcement Network (FinCEN) shows that ransomware activity peaked in 2023 before falling in 2024, following a series of law enforcement actions targeting the ALPHV/BlackCat and LockBit ransomware gangs. [...]...
A new report by the Financial Crimes Enforcement Network (FinCEN) shows that ransomware activity peaked in 2023 before falling in 2024, following a series of law enforcement actions targeting the ALPHV/BlackCat and LockBit ransomware gangs.
From thousands of Bank Secrecy Act filings, the report documents 4,194 ransomware incidents between January 2022 and December 2024. These reports show that organizations paid more than $2.1 billion in ransom payments, nearly reaching the total reported over 8 years from 2013 to 2021.
In total, from 2013 through 2024, FinCEN tracked approximately $4.5 billion in payments to ransomware gangs.
Law enforcement operations show impact
According to the report, 2023 was the best year for ransomware gangs, with victims reporting 1,512 individual incidents and approximately $1.1 billion in ransom payments, a 77 percent increase from 2022.
However, both stats fell in 2024, with a slight dip to 1,476 incidents, but a dramatic decrease to $734 million in payments. This decrease is believed to be due to law enforcement operations targeting
BlackCat
in 2023 and
LockBit
at the beginning of 2024.
Both of these ransomware gangs were the most active at the time of disruption, with the threat actors moving to new operations or struggling to relaunch.
FinCEN says the amount paid varied, with most ransom payments below $250,000. The analysis also showed that manufacturing, financial services, and healthcare suffered the most ransomware attacks, with financial institutions reporting the most significant dollar losses.
"Between January 2022 and December 2024, the most commonly targeted industries (by number of incidents identified in ransomware-related BSA reports during the review period) were manufacturing (456 incidents), financial services (432 incidents), healthcare (389 incidents), retail (337 incidents), and legal services (334 incidents),"
explained FinCEN's analysis.
"The most affected industries by the total amount of ransom paid during the review period were financial services (approximately $365.6 million), healthcare (approximately $305.4 million), manufacturing (approximately $284.6 million), science and technology (approximately $186.7 million), and retail (approximately $181.3 million) (see Figure 4)."
Most impacted industries
Source: FinCEN
In total, FinCEN identified 267 distinct ransomware families, with only a small number responsible for most of the reported attacks.
Akira appeared in the most incident reports (376), followed by ALPHV/BlackCat, which also earned the most, at roughly $395 million in ransom payments, and then LockBit at $252.4 million in payments.
The other ransomware gangs included Black Basta, Royal, BianLian, Hive, Medusa, and Phobos. Collectively, the top 10 most active ransomware gangs accounted for $1.5 billion in ransom payments from 2022 through 2024.
Most active ransomware operations
Source: FinCEN
The payment methods were also tracked, with the majority paid via Bitcoin (97%), and a small number paid in Monero, Ether, Litecoin, and Tether.
FinCEN encourages organizations to continue reporting attacks to the FBI and ransom payments to FinCEN to help disrupt cybercrime.
'They Want Loopholes': As City Council Tries to Strengthen Immigrant Protections, Adams Administration Disappears
hellgate
hellgatenyc.com
2025-12-08 20:58:07
Laws that would strengthen the city's "sanctuary" status are running up against the clock....
While New York City has a series of
"sanctuary" laws
that limit its cooperation with federal immigration enforcement, there's currently very little an immigrant can do if city agencies were to violate those protections and hand them over to Immigration and Customs Enforcement anyway. The Department of Correction, for example, has been caught repeatedly
helping ICE get its hands on people
who otherwise should have been protected by the City's "sanctuary" laws, and has faced zero consequences for breaking those laws. Just last week, the Department of Investigation
released a report that found an NYPD officer
had digitally tracked a group of people on behalf of federal immigration authorities.
But this loophole in the legislation might finally change by the end of this year. Since the beginning of 2023, immigrant advocates and dozens of city councilmembers have been pushing for the passage of
the NYC Trust Act
, which would allow immigrants to sue the City when their "sanctuary" rights have been violated.
On Monday, the NYC Trust Act got a full hearing in front of the Council's Immigration Committee, alongside three other bills that would help strengthen the City's "sanctuary" protections, including an even stricter ban than the existing one on ICE opening an office in city jails, a law that would require signage in city buildings about the rights New Yorkers have when it comes to ICE, and another that bars employers from
using E-Verify
outside of the initial onboarding process, as a way to threaten current employees.
But the committee hearing was missing one major player—the Adams Administration, which, according to immigration chair Alexa Avilés, had ordered all city agencies not to bother showing up to the hearing. That includes the Mayor's Office of Immigrant Affairs, which sent over a three-paragraph memo outlining its opinion of the legislation (it didn't support the bills).
This API had been emitting warnings for over 3 years in a top-3 Python package by downloads, urging libraries and users to stop using the API, and that was not enough. We still received feedback from users that this removal was unexpected and was breaking dependent libraries.
We ended up
adding the APIs back
and creating a hurried release to fix the issue.
It's not clear to me that
waiting longer would have helped, either. The libraries that were impacted
are actively developed, like the Kubernetes client, Fastly client, and Airflow, and I trust that if the message had reached them they would have taken action.
My conclusion from this incident is that
DeprecationWarning
in its current state does not
work for deprecating APIs, at least for Python libraries. That is
unfortunate, as
DeprecationWarning
and the
warnings
module
are easy-to-use, language-“blessed”, and explicit without impacting users that don't
need to take action due to deprecations. Any other method of deprecating
API features is likely to be home-grown and different across each project, which is far worse for users and project maintainers.
Possible solutions?
DeprecationWarning
is called out in the
“ignored by default” list
for Python. I could ask for more Python developers to run with warnings enabled, but solutions in the form of “if only we could all just” are a folly.
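For developers who do want to see these warnings, the standard knobs already exist: the -W interpreter flag and the warnings filter API. A minimal sketch (the exact settings chosen here are illustrative, not a recommendation):

# Surface deprecation warnings when launching an application:
#   python -W default::DeprecationWarning app.py    # print them, once per location
#   python -W error::DeprecationWarning app.py      # turn them into hard failures

# Or opt in from code, for example at a test-suite or application entry point:
import warnings

warnings.simplefilter("default", DeprecationWarning)  # show DeprecationWarnings
# warnings.simplefilter("error", DeprecationWarning)  # or raise them as errors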
Maybe the answer is for each library to create its own
“deprecation warning” equivalent just to not be in the “ignored by default” list:
import warnings


class Urllib3DeprecationWarning(UserWarning):
    pass


warnings.warn(
    "HTTPResponse.getheader() is deprecated",
    category=Urllib3DeprecationWarning,
    stacklevel=2,
)
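Because such a class derives from UserWarning rather than DeprecationWarning, Python's default filters will actually display it. A user who finds the noise unhelpful can still silence it explicitly, for example (using a local stand-in for the hypothetical class above):

import warnings


class Urllib3DeprecationWarning(UserWarning):
    # Stand-in for the hypothetical class from the sketch above; in practice it
    # would be imported from wherever the library chooses to expose it.
    pass


warnings.filterwarnings("ignore", category=Urllib3DeprecationWarning)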
Maybe the answer is to do away with advance notice and adopt SemVer with many major versions, similar to
how Cryptography operates for API compatibility. Let me know if
you have other ideas.
Toasts are small, rectangular notifications that pop up on the screen, triggered either by a user or system behavior. They commonly show up on the bottom left or right-hand side of the viewport and disappear after a preset amount of time.
While it can be tempting to use toast UI as a solution for your task, know that there are many accessibility and usability issues inherent with this pattern. Because of this,
GitHub recommends using other more established, effective, and accessible ways of communicating with users
.
Primer offers a variety of solutions for informing users about updates. Consider:
What kind of outcome you want to achieve, and
How the UI will best enable a user to do that.
Are you attempting to highlight a successful or unsuccessful form submission? Give feedback that an action was successfully undertaken? Alert someone that a long-running task has finished?
Thinking through your use case can help select a UI treatment that not only best serves our users, but also reinforces the internal consistency of experience within the overall GitHub platform.
User and system initiated actions that are direct and straightforward should be successfully completed as a matter of course. An example of this is creating an Issue, and then seeing the Issue show up on the list of Repo Issues.
There does not need to be a secondary form of reinforcement to communicate success, as it should be self-evident—including a toast to communicate this success may ironically lessen a sense of trust.
User and system-initiated actions that require more complicated interaction may need additional feedback mechanisms to help inform the user that their request was successfully enacted. An example of this is the bulk creation of Issues.
Complex interactions may benefit from a secondary form of feedback to communicate success. The manner in which this secondary feedback is expressed depends on the design, but two common approaches are:
Using
banners
to provide a summary of what was performed.
Progressively showing content as it is formed as part of a multi-step or progressive disclosure process.
Note that both approaches persist feedback information and do not auto-dismiss it.
Banners
and
dialogs
can provide feedback about user and system error as a result of an undertaken action. Banners are useful when the error information needs to be passively available, while dialogs are useful for deliberately interrupting the user to get their attention.
Simple forms may not need any other confirmation state other than creating and displaying what the user requested.
More complicated forms can utilize an interstitial confirmation page or
banner
that informs the user about what is being done with the data they submitted.
Primer already has a robust set of components and guidance for
handling input validation
. Using these offerings helps GitHub to feel consistent across the entire surface area of the site.
Actions that take a long time to complete should
utilize banners
to inform the user of task completion or failure. Also consider ways to notify the user in other communication channels such as email,
notifications
, or a push notification in the GitHub app.
There is the potential for a client’s session to become desynchronized, especially if a browser tab has been left open for a long time on a part of GitHub where a lot of dynamic updates are present.
Dialogs
and
banners
can be used to inform the user that a refresh is needed to resynchronize the client and server.
Toast UI risks violating the following
Web Content Accessibility Guidelines
(
WCAG
) Success Criteria (
SC
). Each of these SCs has one of three conformance levels, and failing them creates friction or a hard barrier for our users. GitHub honors the first two levels: A and AA.
A mechanism needs to be present to extend the toast UI’s presence indefinitely until it is manually dismissed by the user. This guarantees that the toast stays visible long enough for all users to navigate to, read, and potentially take action on the toast UI content.
Toast message code is commonly placed at the start or the end of the DOM. Many forms of assistive technology work by reading the DOM in sequence, so there will be a disconnect between what triggers the toast UI and the toast UI itself. This impedes discovery and understanding.
Toast UI started as a mechanism to display passive notifications, but evolved to include interactive controls.
All interactive controls placed inside a toast UI need to be operable via keyboard, and the toast UI container itself must be reachable by keyboard as well. This includes a mechanism for dismissing the toast, as well as managing focus when it is removed from the DOM.
Increasing the text size on the browser or operating system level runs into three potential risks for toast UI.
First is making the toast so large that it obscures the rest of the page content. Second is creating horizontal overflow in an attempt to prevent obscuring the underlying page content. Third is attempting to block text resizing on a toast component to prevent both of the previous risks.
A toast that contains interactive elements needs those elements to be able to receive keyboard focus. Additionally, the toast’s position in the DOM may make the order of focus not make sense compared to the interactive content that comes before or after it.
Many developers work on a larger display, in order to have more screen real estate to work with. Toasts could be placed in such a way that they go unnoticed, in that they sit outside of a user’s immediate field of view.
Since toasts “float” above the rest of the UI, there is a chance that they can obscure underlying content.
This obscuring effect is especially worth considering given that important UI such as form submission buttons tend to also be placed at the bottom corner of the viewport. The effect also becomes more pronounced if multiple toasts stack on top of each other.
Some users rely on a software or hardware-based magnification solution in order to be able to use a computer. Toast notifications may not be seen by the user, in that they are displayed outside of the range of the magnification window.
Toasts that both display important information and automatically dismiss themselves may create a situation where a user is given important information, but then has no way to go back and review the information.
Toasts are an over-used interaction pattern on the web. This leads to a phenomenon where users are taught to ignore and avoid their content, as it is often low-quality or unrelated to their immediate task at hand.
A toast’s placement may be far away from the UI that triggered it. This violation of the gestalt principle of proximity means there is more of a chance a user does not understand the relationship between the toast message and its related piece of content.
Users pressing
Esc
to dismiss a toast may accidentally dismiss another piece of UI, if multiple keyboard-dismissable pieces of content are present. The opposite also applies, where a user may dismiss a toast containing important information while trying to dismiss an unrelated piece of UI.
The
Internet Engineering Task Force
(IETF) is the standards body responsible
for the TLS encryption standard — which your browser is using right now
to allow you to read LWN.net. As part of its work to keep TLS secure, the IETF
has been entertaining
proposals
to adopt "post-quantum" cryptography (that is,
cryptography that is not known to be easily broken by a quantum computer) for TLS
version 1.3. Discussion of the proposal has exposed a large disagreement between
participants who worried about weakened security and others who worried about
weakened marketability.
What is post-quantum cryptography?
In 1994, Peter Shor developed Shor's algorithm, which can use a quantum computer to factor large numbers asymptotically faster (i.e. faster by a proportion that grows as the size of the input does) than a classical computer can. This was a huge blow to the theoretical security of the then-common RSA public-key encryption algorithm, which depends on the factoring of numbers being hard in order to guarantee security. Later work extended Shor's algorithm to apply to other key-exchange algorithms, such as elliptic-curve Diffie-Hellman, the most common key-exchange algorithm on the modern internet. There are doubts that any attack using a quantum computer could actually be made practical — but given that the field of cryptography moves slowly, it could still be worth getting ahead of the curve.
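As a purely illustrative aside, a toy textbook-RSA sketch in Python shows why factoring is the crux: anyone who can factor the public modulus back into its prime factors can recompute the private key. (This is a minimal sketch with absurdly small numbers, nothing like a real implementation.)

```python
from math import gcd

# Toy textbook RSA with tiny primes -- illustration only.
p, q = 61, 53                      # secret primes
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)            # Euler's totient of n
e = 17                             # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # private exponent: e * d == 1 (mod phi)

msg = 42
cipher = pow(msg, e, n)            # "encrypt" with the public key (n, e)
assert pow(cipher, d, n) == msg    # decrypt with the private key d

# An attacker who can factor n (e.g. with Shor's algorithm on a large
# enough quantum computer) recovers p and q, recomputes phi, and
# derives the same private exponent:
d_attacker = pow(e, -1, (p - 1) * (q - 1))
assert d_attacker == d
```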
Quantum computing is sometimes explained as trying all possible answers to a
problem at once, but that is incorrect.
If that were the case, quantum computers could trivially break any possible
encryption algorithm. Instead, quantum computers work by applying a limited set
of transformations to a quantum state that can be thought of as a
high-dimensional unit-length vector. The beauty of Shor's algorithm is that he showed how to use these extremely limited operations to reliably factor numbers.
The study of post-quantum cryptography is about finding an encryption mechanism
that none of the generalizations of Shor's algorithm or related quantum
algorithms apply to: finding encryption techniques where there is no known way
for a quantum computer to break them meaningfully faster than a classical computer can.
While attackers may not be breaking encryption with quantum computers today, the
worry is that they could use a "store now, decrypt later" attack to break
today's cryptography with the theoretically much more capable quantum computers
of tomorrow.
For TLS, the question is specifically how to make a
post-quantum key-exchange mechanism. When a TLS connection is established, the
server and client use public-key cryptography to agree on a shared encryption
key without leaking that key to any eavesdroppers. Then they can use that shared
key with (much less computationally expensive) symmetric encryption to secure the
rest of the connection. Current symmetric encryption schemes are almost
certainly not vulnerable to attack by quantum computers because of their
radically different design, so the only part of TLS's security that needs
to upgrade to avoid attacks from a quantum computer is the key-exchange mechanism.
Belt and suspenders
The problem, of course, is that trying to come up with novel, hard mathematical
problems that can be used as the basis of an encryption scheme does not always
work. Sometimes, cryptographers will pose a problem believing it to be
sufficiently hard, and then a mathematician will come along and discover a new
approach that makes attacking the problem feasible. That is exactly what
happened to the SIKE protocol
in 2022. Even when a cryptosystem is not
completely broken, a particular implementation can still suffer from side-channel
attacks or other problematic behaviors, as happened with post-quantum encryption standard Kyber/ML-KEM multiple times from its initial draft in 2017 to the present.
That's why, when the US National Institute of Standards and Technology (NIST) standardized Kyber/ML-KEM as its recommended post-quantum key-exchange mechanism in August 2024, it provided approved ways to combine a traditional key-exchange mechanism with
a post-quantum key-exchange mechanism. When these algorithms are properly combined (which is not too
difficult, although cryptographic implementations always require some care),
the result is a hybrid scheme that remains secure so long as either one of its
components remains secure.
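As a rough illustration of the idea (a simplified sketch, not the exact construction specified in the TLS drafts), a hybrid scheme can feed both shared secrets into a single key-derivation step, so the resulting session key stays secret unless both components are broken:

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       transcript: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    A simplified HKDF-extract-style step: the two shared secrets are
    concatenated, so recovering the session key requires knowing both.
    This mirrors the spirit of hybrid key exchange, not any specific
    draft's exact construction.
    """
    ikm = classical_secret + pq_secret
    return hmac.new(transcript, ikm, hashlib.sha256).digest()

# Stand-ins for the real key-exchange outputs: in TLS these would come
# from, e.g., an X25519 exchange and an ML-KEM encapsulation.
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)
transcript = b"client_hello || server_hello"

key = hybrid_session_key(classical_secret, pq_secret, transcript)

# Breaking only one component does not reveal the session key: an
# attacker who learns the classical secret but not the PQ secret
# derives a different key.
wrong = hybrid_session_key(classical_secret, os.urandom(32), transcript)
assert key != wrong
```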
The Linux Foundation's Open Quantum Safe project, which provides open-source implementations of post-quantum cryptography, fully supports this kind of hybrid scheme. The IETF's initial draft recommendation in 2023 for how to use post-quantum cryptography in TLS specifically said that TLS should use this kind of hybrid approach:
The migration to [post-quantum cryptography] is unique in the history of modern digital cryptography in
that neither the traditional algorithms nor the post-quantum algorithms are
fully trusted to protect data for the required data lifetimes. The traditional
algorithms, such as RSA and elliptic curve, will fall to quantum cryptanalysis,
while the post-quantum algorithms face uncertainty about the underlying
mathematics, compliance issues (when certified implementations will be
commercially available), unknown vulnerabilities, hardware and software
implementations that have not had sufficient maturing time to rule out classical
cryptanalytic attacks and implementation bugs.
During the transition from traditional to post-quantum algorithms, there is a
desire or a requirement for protocols that use both algorithm types. The primary
goal of a hybrid key exchange mechanism is to facilitate the establishment of a
shared secret which remains secure as long as one of the component key
exchange mechanisms remains unbroken.
But the most recent draft from September 2025, which was ultimately adopted as a working-group document, relaxes that requirement, noting:
However, Pure PQC Key Exchange may be required for specific deployments with
regulatory or compliance mandates that necessitate the exclusive use of
post-quantum cryptography. Examples include sectors governed by stringent
cryptographic standards.
This refers to the US National Security Agency (NSA) requirements for products purchased by the US government. The requirements "will effectively deprecate the use of RSA, Diffie-Hellman (DH), and elliptic curve cryptography (ECDH and ECDSA) when mandated." The NSA has a history of publicly endorsing weak (plausibly already broken, internally) cryptography in order to make its job — monitoring internet communications — easier. If the draft were to become an internet standard, the fact that it
optionally permits the use of non-hybrid post-quantum cryptography might make
some people feel that such cryptography is safe, when that is not the current
academic consensus.
There are other arguments for allowing non-hybrid post-quantum encryption — mostly boiling down to the implementation and performance costs of supporting a more complex scheme. But when Firefox, Chrome, and the Open Quantum Safe project all already support and use hybrid post-quantum encryption, that motivation didn't ring true for other IETF participants.
Some proponents of the change argued that supporting non-hybrid post-quantum
encryption would be simpler, since a non-hybrid encryption scheme would be
simpler than a hybrid one. Opponents said that was focusing on the wrong kind of
simplicity; adding another method of encryption to TLS makes implementations
more complex, not less. They also pointed to the cost of modern elliptic-curve
cryptography as being so much smaller than the cost of post-quantum cryptography
that using both would not have a major impact on the performance of TLS.
From substance to process
The disagreement came to a head when Sean Turner, one of the chairs of the IETF
working group discussing the topic, declared in March 2025 that consensus had been reached and the proposal ought
to move to the next phase of standardization: adoption as a working-group
document. Once a draft document is adopted, it enters a phase of editing by the
members of the working group to ensure that it is clearly written and
technically accurate, before being sent to the Internet Engineering Steering
Group (IESG) to possibly become an internet standard.
Turner's decision to adopt the draft came as a surprise to some of the participants in the discussion, such as Daniel J. Bernstein, who strongly disagreed with weakening the requirements for TLS 1.3 to allow
non-hybrid key-exchange mechanisms and had repeatedly said as much. The IETF
operates on a consensus model where, in theory, objections raised on the mailing
list need to be responded to and either refuted or used to improve the standard
under discussion.
In practice, the other 23 participants in the discussion
acknowledged the concerns of the six people who objected to the inclusion of non-hybrid
post-quantum key-exchange mechanisms in the standard. The group that wanted to
see the draft accepted just disagreed that
it was an important weakening in the face of regulatory and maintenance
concerns, and wanted to adopt the standard as written anyway.
From there, the discussion turned on the question of whether the working-group charter allowed for adopting a
draft that reduced the security of TLS in this context. That question never
reached a consensus either. After repeated appeals from Bernstein over the next
several months,
the IESG, which handles the IETF's internal policies and procedures,
asked Paul Wouters and Deb Cooley, the IETF's area directors responsible for the
TLS working group, whether Turner's declaration of consensus had been made
correctly.
Wouters declared that Turner had made the right call, based on the state of the
discussion at the time. He pointed out that while the draft permits TLS to use
non-hybrid post-quantum key-exchange algorithms, it doesn't recommend them: the
recommendation remains to use the hybrid versions where possible. He also noted
that the many voices calling for adoption indicated that there was a market
segment being served by the ability to use non-hybrid algorithms.
A few days after Wouters's response, on November 5, Turner called for last objections to adopting the draft as a working-group document. Employees of the NSA, the United Kingdom's Government Communications Headquarters (GCHQ), and Canada's Communications Security Establishment (CSEC) all wrote in with their support, as did employees of several companies working on US military contracts. Quynh Dang, an employee of NIST, also supported publication as a working-group document, although they claimed not to represent NIST in this matter.
Among others, Stephen Farrell disagreed, calling for the standard to at least add language addressing the fact that security experts in the working group thought that the hybrid approach was more secure: "Absent that, I think producing an RFC based on this draft provides a misleading signal to the community."
As it stands now, the working group has adopted the draft that allows for
non-hybrid post-quantum key-exchange mechanisms to be used in TLS. According to the IETF process, the draft will now be edited by the working-group members for
clarity and technical accuracy, before being presented to the IESG for approval
as an internet standard. At that point, companies wishing to sell their devices
and applications to the US government will certainly enable the use of these
less-secure mechanisms — and be able to truthfully advertise their products as meeting
NIST, NSA, and IETF standards for security.
[ Thanks to Thomas Dalichow for bringing this topic to our attention. ]
I’ve never liked the philosophy of “put an icon in every menu item by default”.
Google Sheets, for example, does this. Go to “File” or “Edit” or “View” and you’ll see a menu with a list of options, every single one having an icon (same thing with the right-click context menu).
It’s extra noise to me. It’s not that I think menu items should never have icons. I think they can be incredibly useful (more on that below). It’s more that I don’t like the idea of “give each menu item an icon” being the default approach.
This posture lends itself to a practice where designers have an attitude of “I need an icon to fill up this space” instead of an attitude of “Does the addition of an icon here, and the cognitive load of parsing and understanding it, help or hurt how someone would use this menu system?”
The former doesn’t require thinking. It’s just templating — they all have icons, so we need to put something there. The latter requires care and thoughtfulness for each use case and its context.
To defend my point, one of the examples I always pointed to was macOS. For the longest time, Apple’s OS-level menus seemed to avoid this default approach of sticking icons in every menu item.
That is, until macOS Tahoe shipped.
Tahoe now has icons in menus everywhere. For example, here’s the Apple menu:
Let’s look at others. As I’m writing this I have Safari open. Let’s look at the “Safari” menu:
Hmm. Interesting. Ok so we’ve got an icon for like half the menu items. I wonder why some get icons and others don’t?
For example, the “Settings” menu item (third from the top) has an icon. But the other item in its grouping “Privacy Report” does not. I wonder why? Especially when Safari has an icon for Privacy report, like if you go to customize the toolbar you’ll see it:
Hmm. Who knows? Let’s keep going.
Let’s look at the "File" menu in Safari:
Some groupings have icons and get inset, while other groupings don’t have icons and don’t get inset. Interesting…again I wonder what the rationale is here? How do you choose? It’s not clear to me.
Let’s keep going. Let’s go to the "View" menu:
Oh boy, now we’re really in it. Some of these menu items have the notion of a toggle (indicated by the checkmark) so now you’ve got all kinds of alignment things to deal with. The visual symbols are doubling up when there’s a toggle and an icon.
The “View” menu in Mail is a similar mix of:
Text
Text + toggles
Text + icons
Text + icons + toggles
You know what would be a fun game? Get a bunch of people in a room, show them menus where the textual labels are gone, and see who can get the most right.
But I digress.
In so many of these cases, I honestly can’t intuit why some menus have icons and others do not. What are so many of these icons affording me at the cost of extra visual and cognitive parsing? I don’t know.
To be fair, there are some menus where these visual symbols are incredibly useful. Take this menu from Finder:
The visual depiction of how those are going to align is actually incredibly useful because it’s way easier for my brain to parse the symbol and understand where the window is going to go than it is to read the text and imagine in my head what “Top Left” or “Bottom & Top” or “Quarters” will mean. But a visual symbol? I instantly get it!
Those are good icons in menus. I like those.
Apple Abandons Its Own Guidance
What I find really interesting about this change on Apple’s part is how it seemingly goes against their own previous human interface guidelines (as pointed out to me by Peter Gassner). They have an entire section in their 2005 guidelines (and 1992 and 2020) titled “Using Symbols in Menus”:
See what it says?
There are a few standard symbols you can use to indicate additional information in menus…Don’t use other, arbitrary symbols in menus, because they add visual clutter and may confuse people.
Confused people. That’s me.
They even have an example of what not to do and guess what it looks like? A menu in macOS Tahoe.
Conclusion
It’s pretty obvious how I feel. I’m tired of all this visual noise in my menus.
And now that Apple has seemingly thrown in with the “stick an icon in every menu by default” crowd, it’s harder than ever for me to convince people otherwise. To persuade, “Hey, unless you can articulate a really good reason to add this, maybe our default posture should be no icons in menus?”
So I guess this is the world I live in now. Icons in menus. Icons in menus everywhere.
Send help.
Binance employee suspended after launching a token and promoting it with company accounts
Web3 Is Going Great
web3isgoinggreat.com
2025-12-08 19:40:31
Binance has announced that the company has suspended an employee who used the platform's official Twitter accounts to promote a memecoin they had launched. The token, called "year of the yellow fruit", pumped in price after official Binance accounts coaxed followers to "harvest abundantly".
Binance publicly acknowledged that an employee had been suspended for misconduct over the incident. "These actions constitute abuse of their position for personal gain and violate our policies and code of professional conduct," Binance tweeted from its BinanceFutures account. After this announcement, the memecoin token price spiked even further.
Earlier this year, Binance fired another employee after discovering they had used inside information to profit from a token sale event.
Earlier this year, LWN featured an excellent article titled “Linux’s missing CRL infrastructure”. The article highlighted a number of key issues surrounding traditional Public Key Infrastructure (PKI), but critically noted how even the available measures are effectively ignored by the majority of system-level software on Linux.
One of the motivators for the discussion is that the Online Certificate Status Protocol (OCSP) will cease to be supported by Let’s Encrypt. The remaining alternative is to use Certificate Revocation Lists (CRLs), yet there is little or no support for managing (or even querying) these lists in most Linux system utilities.
To solve this, I’m happy to share that in partnership with rustls maintainers Dirkjan Ochtman and Joe Birr-Pixton, we’re starting the development of upki: a universal PKI tool. This project initially aims to close the revocation gap through the combination of a new system utility and eventual library support for common TLS/SSL libraries such as OpenSSL, GnuTLS and rustls.
The Problem
Online Certificate Authorities responsible for issuing TLS certificates have long had mechanisms for revoking known bad certificates. What constitutes a known bad certificate varies, but generally it means a certificate was issued either in error, or by a malicious actor of some form. There have been two primary mechanisms for this revocation: Certificate Revocation Lists (CRLs) and the Online Certificate Status Protocol (OCSP).
In July 2024, Let’s Encrypt announced the deprecation of support for the Online Certificate Status Protocol (OCSP). This wasn’t entirely unexpected - the protocol has suffered from privacy defects which leak the browsing habits of users to Certificate Authorities. Various implementations have also suffered reliability issues that forced most implementers to adopt “soft-fail” policies, rendering the checks largely ineffective.
The deprecation of OCSP leaves us with CRLs. Both Windows and macOS rely on operating system components to centralise the fetching and parsing of CRLs, but Linux has traditionally delegated this responsibility to individual applications. This is done most effectively in browsers such as Mozilla Firefox, Google Chrome and Chromium, but this has been achieved with bespoke infrastructure.
However, Linux itself has fallen short by not providing consistent revocation checking infrastructure for the rest of userspace - tools such as curl, system package managers and language runtimes lack a unified mechanism to process this data.
The ideal solution to this problem, which is slowly becoming more prevalent, is to issue short-lived credentials with an expiration of 10 days or less, somewhat removing the need for complicated revocation infrastructure. However, shortening certificate lifetimes is happening slowly and requires significant automation.
CRLite
There are several key challenges with CRLs in practice - the size of the list has grown dramatically as the web has scaled, and one must collate CRLs from all relevant certificate authorities in order for the data to be useful. CRLite, originally proposed by researchers in a paper presented at IEEE S&P and subsequently adopted in Mozilla Firefox, offers a pragmatic solution to the problem of distributing large CRL datasets to client machines.
In a recent blog post, Mozilla outlined how their CRLite implementation meant that on average users “downloaded 300kB of revocation data per day, a 4MB snapshot every 45 days and a sequence of “delta-updates” in-between”, which amounts to CRLite being 1000x more bandwidth-efficient than daily CRL downloads.
At its core, CRLite is a data structure compressing the full set of web-PKI revocations into a compact, efficiently queryable form. You can find more information about CRLite’s design and implementation on Mozilla’s Security Blog.
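To give a feel for how such a structure can answer “is this certificate revoked?” queries compactly, here is a simplified Python sketch of a filter cascade, the general technique behind CRLite. The real format, hash functions, and filter parameters differ; this only illustrates the idea of alternating levels that correct each other’s false positives.

```python
import hashlib

class Bloom:
    """A tiny Bloom filter: compact, but with false positives."""
    def __init__(self, size_bits: int, n_hashes: int = 3):
        self.size, self.n_hashes = size_bits, n_hashes
        self.bits = bytearray(size_bits)

    def _positions(self, item: bytes):
        for i in range(self.n_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))


def build_cascade(revoked: set[bytes], valid: set[bytes]) -> list[Bloom]:
    """Build alternating filter levels until no false positives remain."""
    levels, include, exclude = [], revoked, valid
    while include:
        level = Bloom(size_bits=max(64, 8 * len(include)))
        for item in include:
            level.add(item)
        levels.append(level)
        # Items from the other set that this level wrongly matches must
        # be encoded at the next level so queries are answered exactly.
        false_positives = {item for item in exclude if item in level}
        include, exclude = false_positives, include
    return levels


def is_revoked(cascade: list[Bloom], cert_id: bytes) -> bool:
    """Walk the levels; the level at which lookup first fails decides.
    Only meaningful for certificates covered by the cascade."""
    for depth, level in enumerate(cascade):
        if cert_id not in level:
            return depth % 2 == 1   # miss at an odd level => revoked
    return len(cascade) % 2 == 1

# cert_id stands in for a hash of (issuer, serial number).
revoked = {f"revoked-{i}".encode() for i in range(100)}
valid = {f"valid-{i}".encode() for i in range(1000)}
cascade = build_cascade(revoked, valid)
assert all(is_revoked(cascade, c) for c in revoked)
assert not any(is_revoked(cascade, c) for c in valid)
```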
Introducing upki
Following our work on
oxidizing Ubuntu
,
Dirkjan
reached out to me with a proposal to introduce a system-level utility backed by CRLite to non-browser users.
upki will be an open source project, initially packaged for Ubuntu but available to all Linux distributions, and likely portable to other Unix-like operating systems. Written in Rust, upki supports three roles:
Server-side mirroring tool: responsible for downloading and mirroring the CRLite filters provided by Mozilla, enabling us to operate independent CDN infrastructure for CRLite users, and serving them to clients. This will insulate upki from changes in the Mozilla backend, and enable standing up an independent data source if required. The server-side tool will manifest as a service that periodically checks the Mozilla Firefox CRLite filters, downloads and validates the files, and serves them.
Client-side sync tool: run regularly by a systemd timer, network-up events or similar, this tool ensures the contents of the CDN are reflected in the on-disk filter cache. This will be extremely low on bandwidth and CPU usage assuming everything is up to date.
Client-side query tool: a CLI interface for querying revocation data. This will be useful for monitoring and deployment workflows, as well as for users without a good C FFI.
The latter two roles are served by a single Rust binary that runs in different modes depending on how it is invoked. The server-side tool will be a separate binary, since its use will be much less widespread. Under the hood, all of this will be powered by Rust library crates that can be integrated in other projects via crates.io.
For the initial release, Canonical will stand up the backend infrastructure required to mirror and serve the CRLite data for upki users, though the backend will be configurable. This prevents unbounded load on Mozilla’s infrastructure and ensures long-term stability even if Firefox’s internal formats evolve.
Ecosystem Compatibility
So far we’ve covered the introduction of a new Rust binary (and crate) for supporting the fetching, serving and querying of CRL data, but that doesn’t provide much service to the existing ecosystem of Linux applications and libraries in the problem statement.
The upki project will also provide a shared object library for a stable ABI that allows C and C-FFI programs to make revocation queries, using the contents of the on-disk filter cache.
Once upki is released and available, work can begin on integrating with existing crypto libraries such as OpenSSL, GnuTLS and rustls. This will be performed through the shared object library by means of an optional callback mechanism these libraries can use to check the revocation lists before establishing a connection to a given server with a certificate.
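As a rough, runnable sketch of that callback pattern (entirely illustrative; none of these names are upki’s real API, nor that of any existing TLS library), a TLS library could accept an optional revocation check and abort the handshake when it reports a revoked certificate:

```python
from typing import Callable, Optional

# Hypothetical types and names, purely for illustration.
RevocationCheck = Callable[[bytes], bool]   # returns True if revoked

class RevokedCertificateError(Exception):
    pass

def establish_connection(server_cert_der: bytes,
                         check_revoked: Optional[RevocationCheck] = None) -> str:
    """Stand-in for a TLS handshake: consult the optional callback
    before trusting the presented certificate."""
    if check_revoked is not None and check_revoked(server_cert_der):
        raise RevokedCertificateError("certificate is on a revocation list")
    return "handshake completed"

# A real callback would consult the on-disk filter cache; for the
# example, pretend one known-bad certificate is revoked.
revoked_certs = {b"bad-cert"}
upki_callback: RevocationCheck = lambda cert: cert in revoked_certs

print(establish_connection(b"good-cert", upki_callback))   # handshake completed
try:
    establish_connection(b"bad-cert", upki_callback)
except RevokedCertificateError as err:
    print("connection refused:", err)
```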
Timeline
While we’ve been discussing this project for a couple of months, ironing out the details of funding and design, work will soon begin on the initial implementation of upki.
Our aim is to make upki available as an opt-in preview for the release of Ubuntu 26.04 LTS, meaning we’ll need to complete the implementation of the server/client functionality, and bootstrap the mirroring/serving infrastructure at Canonical before April 2026.
In the following Ubuntu release cycle, the run-up to Ubuntu 26.10, we’ll aim to ship the tool by default on Ubuntu systems, and begin work on integration with the likes of NSS, OpenSSL, GnuTLS and rustls.
Summary
Linux has a clear gap in its handling of revocation data for PKIs. Over the coming months we’re hoping to address that gap by developing upki not just for Ubuntu, but for the entire ecosystem. Thanks to Mozilla’s work on CRLite, and the expertise of Dirkjan and Joe, we’re confident that we’ll deliver a resilient and efficient solution that should make a meaningful contribution to systems security across the web.
The Korean cryptocurrency exchange Upbit suffered a loss of around $30 million in various Solana-based assets due to a hack. Some entities have suggested that Lazarus, a North Korean state-sponsored cybercrime group, was behind the hack.
Upbit reimbursed users who had lost funds from company reserves. The exchange was able to freeze around $1.77 million of the stolen assets.
This theft occurred exactly six years after Upbit suffered a theft of 342,000 ETH (priced at around $50 million at the time).
One of the earliest lessons in managing a Linux machine is learning the top command. This lightweight, text-based utility comes preinstalled on all Linux distributions and provides real-time information about running services, CPU usage, and memory consumption. It also allows administrators to selectively terminate misbehaving processes, making it an essential tool for quick troubleshooting.
Screen picture by Don Watkins CC-by-SA 4.0
For users who want a more interactive experience, htop offers a colorful and user-friendly interface. Unlike top, htop is not preinstalled and must be added manually with commands such as sudo apt install htop on Debian-based distributions or sudo dnf install htop on Fedora.
Beyond text-based tools, Linux also provides graphical options, such as the GNOME System Monitor. This utility comes preinstalled with the GNOME desktop environment and offers a visual representation of system performance. Users can view resource graphs for CPU, memory, disk, and network utilization, and manage processes with simple mouse clicks. While customization options are limited compared to command-line tools, its ease of use makes it accessible for beginners who prefer a graphical dashboard.
Screen picture by Don Watkins CC-by-SA 4.0
3. Modern Dashboards: Mission Center
A newer addition to the Linux ecosystem is Mission Center, a comprehensive performance dashboard. Built with GTK4/Libadwaita and written in Rust, it delivers speed, reliability, and hardware-accelerated graphs for smooth performance. Mission Center tracks CPU, memory, disk, network, GPU, and even fan activity, while breaking down resource usage by individual apps and processes. For quick checks, it also includes a compact summary mode.
Screen picture by Don Watkins CC-by-SA 4.0
Mission Center is open source under the GPL v3 license, with its source code freely available. Installation is straightforward via Flatpak or Snap, and it is also distributed as an AppImage for both x86_64 and Arm64 architectures. This makes it a versatile and modern choice for Linux users seeking a full-system monitoring solution.
Trials avoid high risk patients and underestimate drug harms
The FDA does not formally regulate representativeness, but if trials under-enroll vulnerable patients, the resulting evidence may understate harm from drugs. We study the relationship between trial participation and the risk of drug-induced adverse events for cancer medications using data from the Surveillance, Epidemiology, and End Results Program linked to Medicare claims. Initiating treatment with a cancer drug increases the risk of hospitalization due to serious adverse events (SAE) by 2 percentage points per month (a 250% increase). Heterogeneity in SAE treatment effects can be predicted by patient's comorbidities, frailty, and demographic characteristics. Patients at the 90th percentile of the risk distribution experience a 2.5 times greater increase in SAEs after treatment initiation compared to patients at the 10th percentile of the risk distribution yet are 4 times less likely to enroll in trials. The predicted SAE treatment effects for the drug's target population are 15% larger than the predicted SAE treatment effects for trial enrollees, corresponding to 1 additional induced SAE hospitalization for every 25 patients per year of treatment. We formalize conditions under which regulating representativeness of SAE risk will lead to more externally valid trials, and we discuss how our results could inform regulatory requirements.
Jason Abaluck, Leila Agha, and Sachin Shah, "Trials Avoid High Risk Patients and Underestimate Drug Harms," NBER Working Paper 34534 (2025), https://doi.org/10.3386/w34534.
Most likely, you have heard the generic acronym RAS, which typically stands for Reliability, Availability, and Serviceability. However, in the world of time synchronization at IBM, we changed RAS to mean Resiliency, Accuracy, and Security.
From RAS to IBM z17
Timing, timekeeping, time synchronization, and, more specifically, accurate synchronization are key requirements for modern IT systems. This is especially true for industries involved in transaction processing, such as the financial sector.
This need for accuracy is why the IBM Z sysplex relies on highly precise timing and synchronization technology to ensure data integrity and enable database reconstruction from logs. To achieve this, IBM Z uses the best oven-controlled crystal oscillators (OCXOs) in the industry.
But in 2025, it’s not enough. We also need tremendous resiliency and security to maintain those levels of accuracy.
Enter IBM z17. The IBM z17 introduced several important time synchronization enhancements that improve the security and resiliency of a parallel sysplex environment. These updates help end users maintain the accuracy required to comply with government and industry regulations.
Background: The Evolution of IBM Time Synchronization
Here is a brief overview of IBM’s Z time synchronization evolution.
For the past two decades, time synchronization in IBM Z has centered around Server Time Protocol (STP). STP is IBM’s proprietary, message-based protocol that allows a collection of connected mainframes (a parallel sysplex) to maintain synchronized time known as Coordinated Server Time (CST).
This network of synchronized IBM Z machines is called a Coordinated Timing Network (CTN).
However, STP does not synchronize a sysplex with the outside environment. That function relies on a different protocol.
From 2007 to 2019, that protocol was the Network Time Protocol (NTP). Starting with the IBM z15 in 2019, the Precision Time Protocol (PTP) (IEEE 1588) became a second option. Now in 2025, there’s a new option.
New on IBM z17: Enhanced Time Synchronization
NTP resiliency refers to the network’s ability to maintain accurate time synchronization despite network issues or failures. To improve overall resiliency, IBM z17 introduced two new components:
Support for NTPv4/Chrony – Improves accuracy and stability by leveraging the full suite of NTP algorithms through Chrony.
Mixed Mode Operation (NTP + PTP) – Increases resiliency and stability by allowing STP to use up to five external reference sources (three NTP and two PTP) simultaneously.
The rest of this article focuses on the NTPv4/Chrony support.
NTPv4 and Chrony: A Smarter, More Accurate Approach
NTP has existed since 1985 and remains one of the most common Internet protocols. Even with the z15’s PTP support, NTP continues to serve as the preferred external time reference for many IBM Z customers.
Therefore, IBM continues to enhance its NTP implementation, most recently with support for NTPv4/Chrony on the z17.
NTPv4 is defined by IETF standard RFC 5905 and is backward compatible with NTPv3 (RFC 1305).
It adds IPv6 compatibility and algorithmic improvements that can achieve accuracy within tens of microseconds.
Chrony, a newer NTPv4 implementation, performs well in congested networks—achieving millisecond accuracy over the Internet and tens of microseconds on a LAN.
Chrony achieves this by using hardware timestamping, similar to PTP, rather than the software timestamping of standard NTPv4.
In short, Chrony gives IBM Z systems the best of both worlds: NTP reliability and modern precision.
How IBM z17 Improves NTP Resiliency
For IBM z17 and later, you can configure up to three NTP servers per IBM Z system in an STP-only CTN (z16 and earlier were limited to two).
Key Definitions
Truechimer: A clock that maintains time accuracy according to a trusted standard such as UTC.
Falseticker: A clock that fails to maintain that accuracy due to error, fault, or malicious interference.
Candidate: An association that has valid peer variables.
Each NTP server operates through two processes:
Peer Process – Receives and validates each packet. Invalid packets are discarded; valid ones are passed to the Clock Filter algorithm. Optionally, an Access Control List (ACL) can verify IP address entries for added security.
Poll Process – Sends packets at programmed intervals.
These processes, together with their peer state variables, form what’s known as an association. As packets arrive, NTP compares the server’s time to the system clock, calculates offsets, and applies corrections through the NTP discipline process.
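The per-packet offset and delay computation is the standard NTP on-wire arithmetic over four timestamps (client send T1, server receive T2, server send T3, client receive T4). A minimal Python sketch of that calculation, as background for the algorithms described next:

```python
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """Standard NTP on-wire calculation.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    Offset is how far the local clock trails the server;
    delay is the round-trip time excluding server processing.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: the server's clock is ~5 ms ahead, round trip ~20 ms.
offset, delay = ntp_offset_delay(100.000, 100.015, 100.016, 100.021)
print(f"offset={offset*1000:.1f} ms, delay={delay*1000:.1f} ms")
```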
Inside the Algorithms: How It All Works
There are four algorithms at work.
Clock Filter Algorithm
It selects the best sample data and rejects noise caused by collisions or congestion.
Filters out transient “popcorn spikes.”
Select Algorithm
It determines which sources are truechimers and which are falsetickers.
Uses multiple redundant servers and network paths.
Checks NTP stratum level, root distance, and source reachability.
Identifies truechimers via correctness intervals, then passes them to the Cluster algorithm.
Cluster Algorithm
It ranks truechimers by evaluating peer jitter and selecting jitter to determine which sources deliver the highest overall accuracy.
Produces a list of “survivors” from most to least favored.
Combine Algorithm
Produces a weighted average of offset and jitter from surviving sources.
Weights are based on the reciprocal of each source’s root distance.
These normalized weights sum to one.
The combined offset synchronizes the NTP server’s system clock.
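As a rough sketch of that combining step (simplified from the full RFC 5905 procedure, which also combines jitter), the survivors' offsets are averaged with weights proportional to the reciprocal of each source's root distance:

```python
from dataclasses import dataclass

@dataclass
class Survivor:
    """A time source that passed the select and cluster algorithms."""
    name: str
    offset: float         # estimated clock offset vs. this source, seconds
    root_distance: float  # accumulated error bound back to stratum 1, seconds

def combine(survivors: list[Survivor]) -> float:
    """Weighted average of the survivors' offsets. Each source is
    weighted by the reciprocal of its root distance, so nearby,
    low-error sources dominate; the normalized weights sum to one."""
    weights = [1.0 / s.root_distance for s in survivors]
    total = sum(weights)
    normalized = [w / total for w in weights]
    return sum(w * s.offset for w, s in zip(normalized, survivors))

# Example: three truechimers; the closest source gets the largest say.
sources = [
    Survivor("ntp1", offset=+0.0021, root_distance=0.010),
    Survivor("ntp2", offset=+0.0018, root_distance=0.025),
    Survivor("ntp3", offset=-0.0005, root_distance=0.080),
]
print(f"combined offset: {combine(sources):+.6f} s")
```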
Putting It All Together: More Resilient Timekeeping
IBM z17 introduced multiple significant enhancements to its NTP implementation. When configured correctly, these changes enhance IBM z17’s synchronization accuracy to UTC and create a more resilient implementation than was possible on prior generations.
Learn More
For more details, refer to the following sources or contact the author directly at steve.guendert[at]ibm.com.
References
Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space (2nd Edition). David L. Mills, CRC Press, 2011.
I've been building software professionally for nearly 20 years. I've been through a lot of changes - the 'birth' of SaaS, the mass shift towards mobile apps, the outrageous hype around blockchain, and the perennial promise that low-code would make developers obsolete.
The economics have changed dramatically now with agentic coding, and it is going to totally transform the software development industry (and the wider economy). 2026 is going to catch a lot of people off guard.
In my previous post I delved into why I think evals are missing some of the big leaps, but thinking this over since then (and recent experience) has made me confident we're in the early stages of a once-in-a-generation shift.
The cost of shipping
I started developing just around the time open source started to really explode - and it was clear this was one of the first big shifts in the cost of building custom software. I can remember eye-watering costs for SQL Server or Oracle - and as such started out with MySQL, which allowed you to build custom networked applications without incurring five or six figures of annual database licensing costs.
Since then we've had cloud (which I would debate is a cost saving at all, but let's be generous and assume it has some initial capex savings) and lately what I feel has been the era of complexity. Software engineering has got - in my opinion, often needlessly - complicated, with people rushing to very labour intensive patterns such as TDD, microservices, super complex React frontends and Kubernetes. I definitely don't think we've seen much of a cost decrease in the past few years.
AI agents, however, in my mind massively reduce the labour cost of developing software.
So where do the 90% savings actually come from?
At the start of 2025 I was incredibly sceptical of a lot of the AI coding tools - and a lot of them I still am. Many of the platforms felt like glorified low code tooling (Loveable, Bolt, etc), or VS Code forks with some semi-useful (but often annoying) autocomplete improvements.
Take an average project for an internal tool in a company. Let's assume the data modelling is already done to some degree, and you need to implement a web app to manage widgets.
Previously, you'd have a small team of people working on setting up CI/CD, building out data access patterns and building out the core services. Then usually a whole load of CRUD-style pages and maybe some dashboards and graphs for the user to make. Finally you'd (hopefully) add some automated unit/integration/e2e tests to make sure it was fairly solid and ship it, maybe a month later.
And that's just the direct labour. Every person on the project adds coordination overhead. Standups, ticket management, code reviews, handoffs between frontend and backend, waiting for someone to unblock you. The actual coding is often a fraction of where the time goes.
Nearly all of this can be done in a few hours with an agentic coding CLI. I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.
The agentic coding tools have got extremely good at converting business logic specifications into pretty well written APIs and services.
A project that would have taken a month now takes a week. The thinking time is roughly the same - the implementation time collapsed. And with smaller teams, you get the inverse of Brooks's Law: instead of communication overhead scaling with headcount, it disappears. A handful of people can suddenly achieve an order of magnitude more.
Latent demand
On the face of it, this seems like incredibly bad news for the software development industry - but economics tells us otherwise.
Jevons Paradox says that when something becomes cheaper to produce, we don't just do the same amount for less money. Take electric lighting for example; while sales of candles and gas lamps fell, overall far more artificial light was generated.
If we apply this to software engineering, think of supply and demand. There is so much latent demand for software. I'm sure every organisation has hundreds if not thousands of Excel sheets tracking important business processes that would be far better off as a SaaS app. Let's say they get a quote from an agency to build one into an app for $50k - only essential ones meet the grade. At $5k (for a decent developer + AI tooling) - suddenly there is far more demand.
Domain knowledge is the only moat
So where does that leave us? Right now there is still enormous value in having a human 'babysit' the agent - checking its work, suggesting the approach and shortcutting bad approaches. Pure YOLO vibe coding ends up in a total mess very quickly, but with a human in the loop I think you can build incredibly good quality software, very quickly.
This then allows developers who really master this technology to be hugely effective at solving business problems. Their domain and industry knowledge becomes a huge lever - knowing the best architectural decisions for a project, knowing which framework to use and which libraries work best.
Layer on understanding of the business domain and it does genuinely feel like the mythical 10x engineer is here. Equally, the pairing of a business domain expert with a motivated developer and these tools becomes an incredibly powerful combination, and something I think we'll see becoming quite common - instead of a 'squad' of a business specialist and a set of developers, we'll see a far tighter pairing of a couple of people.
This combination allows you to iterate incredibly quickly, and software becomes almost disposable - if the direction is bad, then throw it away and start again, using those learnings. This takes a fairly large mindset shift, but the hard work is the conceptual thinking, not the typing.
Don't get caught off guard
The agents and models are still improving rapidly, which I don't think is really being captured in the benchmarks. Opus 4.5 seems to be able to follow long 10-20 minute sessions without going completely off piste. We're just starting to see the results of the hundreds of billions of dollars of capex that has gone into GB200 GPUs now, and I'm sure newer models will quickly make these look completely obsolete.
However, I've spoken to so many software engineers that are really fighting this change. I've heard the same objections too many times - LLMs make too many mistakes, it can't understand [framework], or it doesn't really save any time.
These assertions are rapidly becoming completely false, and remind me a lot of the desktop engineers who dismissed the iPhone in 2007. I think we all know how that turned out - networking got better, the phones got way faster and the mobile operating systems became very capable.
Engineers need to really lean in to the change in my opinion. This won't change overnight - large corporates are still very much behind the curve in general, lost in a web of bureaucracy of vendor approvals and management structures that leave them incredibly vulnerable to smaller competitors.
But if you're working for a smaller company or team and have the power to use these tools, you should. Your job is going to change - but software has always changed. Just perhaps this time it's going to change faster than anyone anticipates. 2026 is coming.
One objection I hear a lot is that LLMs are only good at greenfield projects. I'd push back hard on this. I've spent plenty of time trying to understand 3-year-old+ codebases where everyone who wrote it has left. Agents make this dramatically easier - explaining what the code does, finding the bug(s), suggesting the fix. I'd rather inherit a repo written with an agent and a good engineer in the loop than one written by a questionable quality contractor who left three years ago, with no tests, and a spaghetti mess of classes and methods.
NATS is a distributed streaming system. Regular NATS streams offer only best-effort delivery, but a subsystem, called JetStream, guarantees messages are delivered at least once. We tested NATS JetStream, version 2.12.1, and found that it lost writes if data files were truncated or corrupted on a minority of nodes. We also found that coordinated power failures, or an OS crash on a single node combined with network delays or process pauses, can cause the loss of committed writes and persistent split-brain. This data loss was caused (at least in part) by choosing to flush writes to disk every two minutes, rather than before acknowledging them. We also include a belated note on data loss due to process crashes in version 2.10.22, which was fixed in 2.10.23. NATS has now documented the risk of its default fsync policy, and the remaining issues remain under investigation. This research was performed independently by Jepsen, without compensation, and conducted in accordance with the Jepsen ethics policy.
Background
NATS is a popular streaming system. Producers publish messages to streams, and consumers subscribe to those streams, fetching messages from them. Regular NATS streams are allowed to drop messages. However, NATS has a subsystem called JetStream, which uses the Raft consensus algorithm to replicate data among nodes. JetStream promises “at least once” delivery: messages may be duplicated, but acknowledged messages [1] should not be lost. [2] Moreover, JetStream streams are totally ordered logs.
JetStream is intended to “self-heal and always be available”. The documentation also states that “the formal consistency model of NATS JetStream is Linearizable”. At most one of these claims can be true: the CAP theorem tells us that Linearizable systems can not be totally available. [3] In practice, they tend to be available so long as a majority of nodes are non-faulty and communicating. If, say, a single node loses network connectivity, operations must fail on that node. If three out of five nodes crash, all operations must fail.
Indeed, a later section of the JetStream docs acknowledges this fact, saying that streams with three replicas can tolerate the loss of one server, and those with five can tolerate the simultaneous loss of two.
Replicas=5 - Can tolerate simultaneous loss of two servers servicing the stream. Mitigates risk at the expense of performance.
In order to ensure data consistency across complete restarts, a quorum of servers is required. A quorum is ½ cluster size + 1. This is the minimum number of nodes to ensure at least one node has the most recent data and state after a catastrophic failure. So for a cluster size of 3, you’ll need at least two JetStream enabled NATS servers available to store new messages. For a cluster size of 5, you’ll need at least 3 NATS servers, and so forth.
With these guarantees in mind, we set out to test NATS JetStream behavior under a variety of simulated faults.
Test Design
We designed a test suite for NATS JetStream using the Jepsen testing library, using JNATS (the official Java client) at version 2.24.0. Most of our tests ran in Debian 12 containers under LXC; some tests ran in Antithesis, using the official NATS Docker images. In all our tests we created a single JetStream stream with a target replication factor of five. Per NATS’ recommendations, our clusters generally contained three or five nodes. We tested a variety of versions, but the bulk of this work focused on NATS 2.12.1.
The test harness injected a variety of faults, including process pauses, crashes, network partitions, and packet loss, as well as single-bit errors and truncation of data files. We limited file corruption to a minority of nodes. We also simulated power failure—a crash with partial amnesia—using the LazyFS filesystem. LazyFS allows Jepsen to drop any writes which have not yet been flushed using a call to (e.g.) fsync.
Our tests did not measure Linearizability or Serializability. Instead we ran several producer processes, each bound to a single NATS client, which published globally unique values to a single JetStream stream. Each message included the process number and a sequence number within that process, so message 4-0 denoted the first publish attempted by process 4, message 4-1 denoted the second, and so on. At the end of the test we ensured all nodes were running, resolved any network partitions or other faults, subscribed to the stream, and attempted to read all acknowledged messages from the stream. Each reader called fetch until it had observed (at least) the last acknowledged message published by each process, or timed out.
We measured JetStream’s at-least-once semantics based on the union of all published and read messages. We considered a message OK if it was attempted and read. Messages were lost if they were acknowledged as published, but never read by any process. We divided lost messages into three epochs, based on the first and last OK messages written by the same process. [4] We called those lost before the first OK message the lost-prefix, those lost after the last OK message the lost-postfix, and all others the lost-middle. This helped to distinguish between lagging readers and true data loss.
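A minimal Python sketch of that classification (our simplified illustration, not Jepsen’s actual checker; in particular, the handling of a process with no OK messages at all is an assumption) might look like this, with each message identified by a (process, sequence) pair:

```python
from collections import defaultdict

def classify_lost(acknowledged: set[tuple[int, int]],
                  read: set[tuple[int, int]]) -> dict[str, set]:
    """Split acknowledged-but-unread messages into prefix/middle/postfix
    epochs, per producer process, relative to the first and last message
    of that process that was both acknowledged and read."""
    ok = acknowledged & read
    lost = acknowledged - read
    first_ok, last_ok = {}, {}
    for proc, seq in ok:
        first_ok[proc] = min(seq, first_ok.get(proc, seq))
        last_ok[proc] = max(seq, last_ok.get(proc, seq))
    epochs = defaultdict(set)
    for proc, seq in lost:
        if proc not in first_ok or seq < first_ok[proc]:
            epochs["lost-prefix"].add((proc, seq))
        elif seq > last_ok[proc]:
            epochs["lost-postfix"].add((proc, seq))  # maybe a lagging reader
        else:
            epochs["lost-middle"].add((proc, seq))   # unambiguous data loss
    return dict(epochs)

# Example: process 4 acknowledged messages 0..4, but readers never saw 4-2.
acked = {(4, s) for s in range(5)}
seen = {(4, 0), (4, 1), (4, 3), (4, 4)}
print(classify_lost(acked, seen))   # {'lost-middle': {(4, 2)}}
```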
In addition to verifying each acknowledged message was delivered to at least one consumer across all nodes, we also checked the set of messages read by all consumers connected to a specific node. We called it divergence, or split-brain, when an acknowledged message was missing from some nodes but not others.
Results
We begin with a belated note on total data loss in version 2.10.22, then continue with four findings related to data loss and replica divergence in version 2.12.1: two with file corruption, and two with power failures.
Total Data Loss on Crash in 2.10.22 (#6888)
Before discussing version 2.12.1, we present a long-overdue finding from earlier work. In versions 2.10.20 through 2.10.22 (released 2024-10-17), we found that process crashes alone could cause the total loss of a JetStream stream and all its associated data. Subscription requests would return "No matching streams for subject", and getStreamNames() would return an empty list. These conditions would persist for hours: in this test run, we waited 10,000 seconds for the cluster to recover, but the stream never returned.
Jepsen reported this issue to NATS as #6888, but it appears that NATS had already identified several potential causes for this problem and resolved them. In #5946, a cluster-wide crash occurring shortly after a stream was created could cause the loss of the stream. A new leader would be elected with a snapshot which preceded the creation of the stream, and replicate that empty snapshot to followers, causing everyone to delete their copy of the stream. In #5700, tests running in Antithesis found that out-of-order delivery of snapshot messages could cause streams to be deleted and re-created as well. In #6061, process crashes could cause nodes to delete their local Raft state. All of these fixes were released as a part of 2.10.23, and we no longer observed the problem in that version.
Lost Writes With .blk File Corruption (#7549)
NATS has several checksum mechanisms meant to detect data corruption in on-disk files. However, we found that single-bit errors or truncation of JetStream’s .blk files could cause the cluster to lose large windows of writes. This occurred even when file corruption was limited to just one or two nodes out of five. For instance, file corruption in this test run caused NATS to lose 679,153 acknowledged writes out of 1,367,069 total, including 201,286 which were missing even though later values written by the same process were later read.
In some cases, file corruption caused the quiet loss of just a single message. In others, writes vanished in large blocks. Even worse, bitflips could cause split-brain, where different nodes returned different sets of messages. In this test, NATS acknowledged a total of 1,479,661 messages. However, single-bit errors in .blk files on nodes n1 and n3 caused nodes n1, n3, and n5 to lose up to 78% of those acknowledged messages. Node n1 lost 852,413 messages, and nodes n3 and n5 lost 1,167,167 messages, despite n5’s data files remaining intact. Messages were lost in prefix, middle, and postfix: the stream, at least on those three nodes, resembled Swiss cheese.
Total Data Loss With Snapshot File Corruption (#7556)
When we truncated or introduced single-bit errors into JetStream’s snapshot files in data/jetstream/$SYS/_js_/, we found that nodes would sometimes decide that a stream had been orphaned, and delete all its data files. This happened even when only a minority of nodes in the cluster experienced file corruption. The cluster would never recover quorum, and the stream remained unavailable for the remainder of the test.
In this test run, we introduced single-bit errors into snapshots on nodes n3 and n5. During the final recovery period, node n3 became the metadata leader for the cluster and decided to clean up jepsen-stream, which stored all the test’s messages.
[1010859] 2025/11/15 20:27:02.947432 [INF] Self is new JetStream cluster metadata leader
[1010859] 2025/11/15 20:27:14.996174 [WRN] Detected orphaned stream 'jepsen > jepsen-stream', will cleanup
Nodes n3 and n5 then deleted all files in the stream directory. This might seem defensible—after all, some of n3’s data files were corrupted. However, n3 managed to become the leader of the cluster despite its corrupt state! In general, leader-based consensus systems must be careful to ensure that any node which becomes a leader is aware of majority committed state. Becoming a leader, then opting to delete a stream full of committed data, is particularly troubling.
Although nodes n1, n2, and n4 retained their data files, n1 struggled to apply snapshots; n4 declared that jepsen-stream had no quorum and stalled. Every attempt to subscribe to the stream threw [SUB-90007] No matching streams for subject. Jepsen filed issue #7556 for this, and the NATS team is looking into it.
Lazy fsync by Default (#7564)
NATS JetStream promises that once a publish call has been acknowledged, it is “successfully persisted”. This is not exactly true. By default, NATS calls fsync to flush data to disk only once every two minutes, but acknowledges messages immediately. Consequently, recently acknowledged writes are generally not persisted, and could be lost to coordinated power failure, kernel crashes, etc. For instance, simulated power failures in this test run caused NATS to lose roughly thirty seconds of writes: 131,418 out of 930,005 messages.
Because the default flush interval is quite large, even killing a single node at a time is sufficient to cause data loss, so long as nodes fail within a few seconds of each other. In this run, a series of single-node failures in the first two minutes of the test caused NATS to delete the entire stream, along with all of its messages.
The NATS configuration documentation describes the relevant setting as follows:
Change the default fsync/sync interval for page cache in the filestore. By default JetStream relies on stream replication in the cluster to guarantee data is available after an OS crash. If you run JetStream without replication or with a replication of just 2 you may want to shorten the fsync/sync interval. You can force an fsync after each messsage [sic] with always, this will slow down the throughput to a few hundred msg/s.
Consensus protocols often require that nodes sync to disk before acknowledging an operation. For example, the famous 2007 paper Paxos Made Live remarks:
Note that all writes have to be flushed to disk immediately before the system can proceed any further.
The Raft thesis on which NATS is based is clear that nodes must “flush [new log entries] to their disks” before acknowledging. Section 11.7.3 discusses the possibility of instead writing data to disk asynchronously, and concludes:
The trade-off is that data loss is possible in catastrophic events. For example, if a majority of the cluster were to restart simultaneously, the cluster would have potentially lost entries and would not be able to form a new view. Raft could be extended in similar ways to support disk-less operation, but we think the risk of availability or data loss usually outweighs the benefits.
Jepsen suggests that NATS change the default value for fsync to always, rather than every two minutes. Alternatively, NATS documentation should prominently disclose that JetStream may lose data when nodes experience correlated power failure, or fail in rapid succession (#7564).
A Single OS Crash Can Cause Split-Brain (#7567)
In response to #7564, NATS engineers noted that most production deployments run with each node in a separate availability zone, which reduces the probability of correlated failure. This raises the question: how many power failures (or hardware faults, kernel crashes, etc.) are required to cause data loss? Perhaps surprisingly, in an asynchronous network the answer is “just one”.
To understand why, consider that a system which remains partly available when a minority of nodes are unavailable must allow states in which a committed operation is present—solely in memory—on a bare majority of nodes. For example, in a leader-follower protocol the leader of a three-node cluster may consider a write committed as soon as a single follower has responded: it has two acknowledgements, counting itself. Under normal operation there will usually be some window of committed operations in this state. [6]
Now imagine that one of those two nodes loses power and restarts. Because the write was stored only in memory, rather than on disk, the acknowledged write is no longer present on that node. There now exist two out of three nodes which do not have the write. Since the system is fault-tolerant, these two nodes must be able to form a quorum and continue processing requests—creating new states of the system in which the acknowledged write never happened.
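A toy simulation (a simplified illustration of the argument above, not Jepsen's test code) makes the window concrete: when acknowledgements come from memory alone, a single restart of the right node leaves a majority that has never seen the committed write.

```python
class Node:
    """A toy replica that acknowledges writes from memory only."""
    def __init__(self, name: str):
        self.name = name
        self.memory: list[str] = []   # acknowledged but not yet fsynced
        self.disk: list[str] = []     # the only state that survives power loss

    def accept(self, entry: str):
        self.memory.append(entry)     # ack without flushing to disk

    def power_failure(self):
        self.memory = list(self.disk) # everything unflushed is gone

n1, n2, n3 = Node("n1"), Node("n2"), Node("n3")

# Leader n1 replicates entry "x" to follower n2. Two of three nodes hold
# it (in memory only), so the write is acknowledged to the client.
for node in (n1, n2):
    node.accept("x")
assert sum("x" in n.memory for n in (n1, n2, n3)) >= 2

# n2 loses power before the periodic fsync, then restarts.
n2.power_failure()

# Now n2 and n3 form a majority that has never seen "x". A fault-tolerant
# system must let this majority elect a leader and keep serving requests,
# so the acknowledged entry can silently vanish.
majority_without_x = [n for n in (n2, n3) if "x" not in n.memory]
assert len(majority_without_x) == 2
```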
Strictly speaking, this fault requires nothing more than a single power failure (or HW fault, kernel crash, etc.) and an asynchronous network—one which is allowed to deliver messages arbitrarily late. Whether it occurs in practice depends on the specific messages exchanged by the replication system, which node fails, how long it remains offline, the order of message delivery, and so on. However, one can reliably induce data loss by killing, pausing, or partitioning away a minority of nodes before and after a simulated OS crash.
For example, process pauses and a single simulated power failure in
this test run
caused JetStream to lose acknowledged writes for windows roughly on par with
sync_interval
. Stranger still, the cluster entered a persistent split-brain which continued after all nodes were restarted and the network healed. Consider these two plots of lost writes, based on final reads performed against nodes
n1
and
n5
respectively:
Consumers talking to
n1
failed to observe a short window of acknowledged messages written around 42 seconds into the test. Meanwhile, consumers talking to
n5
would miss acknowledged messages written around 58 seconds. Both windows of write loss were on the order of our choice of
sync_interval = 10s
for this run. In repeated testing, we found that any node in the cluster could lose committed writes, including the node which failed, those which received writes before the failure, and those which received writes afterwards.
The fact that a single power failure can cause data loss is not new. In 2023, Redpanda wrote
a detailed blog post
showing that Kafka’s default lazy
fsync
could lead to data loss under exactly this scenario. However, it is especially concerning that this scenario led to persistent replica divergence, not just data loss! We filed
#7567
for this issue, and the NATS team is investigating.
In NATS 2.10.22, process crashes could cause JetStream to forget a stream ever existed (#6888). This issue was identified independently by NATS and resolved in version 2.10.23, released on 2024-12-10. We did not observe data loss with simple network partitions, process pauses, or crashes in version 2.12.1.
However, we found that in NATS 2.12.1, file corruption and simulated OS crashes could both lead to data loss and persistent split-brain. Bitflips or truncation of either
.blk
(#7549) or snapshot (#7556) files, even on a minority of nodes, could cause the loss of single messages, large windows of messages, or even cause some nodes to delete their stream data altogether. Messages could be missing on some nodes and present on others. NATS has multiple checksum mechanisms designed to limit the impact of file corruption; more thorough testing of these mechanisms seems warranted.
By default, NATS only flushes data to disk every two minutes, but acknowledges operations immediately. This approach can lead to the loss of committed writes when several nodes experience a power failure, kernel crash, or hardware fault concurrently—or in rapid succession (#7564). In addition, a single OS crash combined with process crashes, pauses, or network partitions can cause the loss of acknowledged messages and persistent split-brain (#7567). We recommended NATS change the default value of
fsync
to
always
, or clearly document these hazards. NATS has
added new documentation
to the
JetStream Concepts page
.
This documentation
also describes
several goals for JetStream, including that “[t]he system must self-heal and always be available.” This is impossible: the CAP theorem states that Linearizable systems cannot be totally available in an asynchronous network. In our three- and five-node clusters JetStream generally behaved like a typical Raft implementation. Operations proceeded on a majority of connected nodes, but isolated nodes were unavailable, and if a majority failed, the system as a whole became unavailable. Jepsen suggests clarifying this part of the documentation.
As always, Jepsen takes an experimental approach to safety verification: we can prove the presence of bugs, but not their absence. While we make extensive efforts to find problems, we cannot prove correctness.
LazyFS
This work demonstrates that systems which do not exhibit data loss under normal process crashes (e.g.
kill -9 <PID>
) may lose data or enter split-brain under simulated OS-level crashes. Our tests relied heavily on
LazyFS
, a project of
INESC TEC
at the University of Porto.
[7]
After killing a process, we used LazyFS to simulate the effects of a power failure by dropping writes to the filesystem which had not yet been
fsync
ed to disk.
While this work focused purely on the loss of unflushed writes, LazyFS can also simulate linear and non-linear torn writes: an anomaly where a storage device persists part, but not all, of written data thanks to (e.g.) IO cache reordering. Our 2024 paper
When Amnesia Strikes
discusses these faults in more detail, highlighting bugs in PostgreSQL, Redis, ZooKeeper, etcd, LevelDB, PebblesDB, and the Lightning Network.
Future Work
We designed only a simple workload for NATS which checked for lost records either across all consumers, or across all consumers bound to a single node. We did not check whether single consumers could miss messages, or the order in which they were delivered. We did not check NATS’ claims of Linearizable writes or Serializable operations in general. We also did not evaluate JetStream’s “exactly-once semantics”. All of these could prove fruitful avenues for further tests.
In some tests, we
added and removed
nodes from the cluster. This work
generated some preliminary results
. However, the NATS documentation for membership changes was incorrect and incomplete: it gave
the wrong command
for removing peers, and there appears to be an undocumented but mandatory
health check step
for newly-added nodes. As of this writing, Jepsen is unsure how to safely add nodes to or remove nodes from a NATS cluster. Consequently, we leave membership changes for future research.
Our thanks to
INESC TEC
and everyone on the LazyFS team, including Maria Ramos, João Azevedo, José Pereira, Tânia Esteves, Ricardo Macedo, and João Paulo. Jepsen is also grateful to Silvia Botros, Kellan Elliott-McCrea, Carla Geisser, Coda Hale, and Marc Hedlund for their expertise regarding datacenter power failures, correlated kernel panics, disk faults, and other causes of OS-level crashes. Finally, our thanks to
Irene Kannyo
for her editorial support. This research was performed independently by Jepsen, without compensation, and conducted in accordance with the
Jepsen ethics policy
.
Throughout this report we use “acknowledged message” to describe a message whose
publish
request was acknowledged successfully by some server. NATS also offers a separate notion of acknowledgement, which indicates when a message has been processed and need not be delivered again.
↩︎
JetStream also promises “exactly once semantics” in some scenarios. We leave this for later research.
↩︎
The CAP theorem’s definition of “availability” requires that all operations on non-faulty nodes must succeed.
↩︎
This is overly conservative: in a system with Linearizable writes, we should never observe a lost message which was acknowledged prior to the invocation of the
publish
call for an OK message, regardless of process. However, early testing with NATS suggested that it might be better to test a weaker property, and come to stronger conclusions about data loss.
↩︎
Redpanda argues
that the situation is actually worse: a single power failure, combined with network partitions or process pauses, can cause Kafka to lose committed data.
↩︎
Some protocols, like Raft, consider an operation committed as soon as it is acknowledged by a majority of nodes. These systems offer lower latencies, but at any given time there are likely a few committed operations which are missing from a minority of nodes due to normal network latency. Other systems, like Kafka, require acknowledgement from
all
“online” nodes before considering an operation committed. These systems offer worse latency in healthy clusters (since they must wait for the slowest node) but in exchange, committed operations can only be missing from some node when the fault detector decides that node is no longer online (e.g. due to elevated latency).
↩︎
Jepsen contributed some funds, testing, and integration assistance to LazyFS, but most credit belongs to the LazyFS team.
↩︎
I’ve spent the last 48 hours completely falling down the rabbit hole of
NVIDIA’s Q3 Fiscal 2026 earnings report
. If
you just skim the headlines, everything looks perfect: Revenue is up 62% to $57
billion, and Jensen Huang is talking about a "virtuous cycle of AI."
But I wanted to understand what was
really
happening under the hood, so I dug
into the balance sheet and cross-referenced it with all the news swirling
around OpenAI and Oracle. I’m not a professional Wall Street analyst, but even
just connecting the dots myself (with the help of Gemini), I’m seeing some cracks in the "AI Alliance."
While NVIDIA posts record numbers, it feels like their biggest customers are
quietly arming themselves for a breakout.
Here is my take on the hardware market, the "frenemy" dynamics between OpenAI
and NVIDIA, and the "circular financing" theories that everyone, including Michael Burry, has been talking about.
Below, I'll walk through NVIDIA's earnings red flags, the round-tripping chatter, OpenAI's moves to reduce its dependency on NVIDIA, and why I think Oracle should take a hard look at Groq.
NVIDIA’s Earnings: Perfection with a side of stress
On the surface, NVIDIA is the absolute monarch of the AI era. You can’t argue
with a Data Center segment that now makes up nearly 90% of the company's
business. However, when I looked closer at the financials, I found three
specific things that stood out to me as "red flags."
The Cash Flow Mystery:
NVIDIA reported a massive
$31.9 billion in Net
Income
, but when I checked the cash flow statement, they only generated
$23.8 billion in Operating Cash Flow
. That is an $8 billion gap where
profits aren't converting to cash immediately.
The Inventory Balloon:
I noticed that inventory has nearly doubled this
year, hitting
$19.8 billion
. Management says this is to prep for the
"Blackwell" launch, but holding ~120 days of inventory seems like a huge
capital drag to me.
The "Paper" Chase:
I calculated their Days Sales Outstanding (DSO), and
it has crept up to about
53 days
. As revenue skyrockets, NVIDIA is
waiting nearly two months to get paid, which suggests they might be extending
massive credit terms to enterprise clients to keep the flywheel spinning.
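For anyone who wants to sanity-check that last number, the standard back-of-the-envelope formula is DSO = accounts receivable ÷ quarterly revenue × ~91 days. Run in reverse, a 53-day DSO on $57 billion of quarterly revenue implies roughly $33 billion sitting in receivables; that's my own arithmetic, so check the balance sheet for the exact figure.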
My personal read? NVIDIA is "burning the furniture" to build inventory, betting
everything that the
Blackwell architecture
will sell out instantly in Q4.
Making Sense of the Round-Tripping News
I want to be clear: I didn't discover this next part. It’s been all over the
financial news lately, and if you follow
Michael Burry
(the "Big Short"
guy), you’ve probably seen his tweets warning about "circular financing" and
suspicious revenue recognition
.
I wanted to map it out for myself to see what the fuss was about. Burry shared
a chart recently that visualizes a "web" of deals, and it looks something like
this:
Leg 1:
NVIDIA pledges billions (part of a widely reported $100B
investment roadmap) to
OpenAI
.
Leg 2:
OpenAI signs a massive
$300 billion
cloud contract with
Oracle
(Project Stargate) to host its models.
Leg 3:
To fulfill that contract, Oracle turns around and places a
$40
billion
order for NVIDIA’s GB200 GPUs.
Here is the Nano Banana Pro generation I just did for the visual people out there:
Burry's argument, and the reason regulators like the DOJ are reportedly looking into this, is that the arrangement mimics "round-tripping." It raises a tough question: If NVIDIA
stopped investing in OpenAI, would OpenAI still have the cash to sign that deal
with Oracle? And would Oracle still buy those chips? If the answer is "no,"
then some of that revenue might be more fragile than it looks.
OpenAI making moves to reduce dependency on NVIDIA
The other big shift I’ve been tracking is OpenAI’s pivot. They used to be
NVIDIA’s star pupil, but now they look more like a future rival.
On one hand, they are hugging NVIDIA tight—deploying 10 gigawatts of infrastructure to train GPT-6. But on the
other, they seem to be building a supply chain to kill their dependency on
Jensen Huang.
The evidence is pretty loud if you look for it. "Project Stargate" isn't just a
data center; it's a huge infrastructure plan that includes custom hardware.
OpenAI made some news buying DRAM wafers directly from Samsung and SK Hynix (the two main HBM providers in the world), bypassing NVIDIA's supply chain, among other moves, as reported
here
,
here
, or
here
, and widely debated
on Hacker News here
.
Plus, the talent migration is telling: OpenAI has poached
key silicon talent, including Richard Ho (Google’s former TPU
lead) back in 2023, and more recently many hardware engineers from Apple (around 40
apparently).
With the
Broadcom partnership
,
my guess is OpenAI plans to use NVIDIA GPUs to
create
intelligence, but run that
intelligence on their own custom silicon to stop bleeding cash, perhaps betting on
Edge TPU-like chips for inference, similar to what Google does with its NPU chips.
The big question is: what money is OpenAI planning to use to fund this,
and how much influence does NVIDIA have over OpenAI's future plans?
The $100 billion that NVIDIA is "investing" in OpenAI is not yet confirmed either,
as reported
here
.
An interesting idea for Oracle: Groq acquisition
Everyone is talking about
Inference
costs right now, basically, how
expensive it is to actually
run
ChatGPT or any other LLM versus training one.
Now I'm looking at
Groq
, a startup claiming specifically to be faster and cheaper
than NVIDIA for this task. The founder is
Jonathan Ross
,
a former Google TPU lead and essentially the person who came up with the idea for the TPU in the first place.
There is another layer to this that I think is getting overlooked as well:
The
HBM Shortage
created by OpenAI's direct wafer purchases.
From what I understand, one of the biggest bottlenecks for NVIDIA right now is
HBM (High Bandwidth Memory), which is manufactured in specialized memory fabs
that are completely overwhelmed. However, Groq’s architecture relies on SRAM
(Static RAM). Since SRAM is typically built in logic fabs (like TSMC) alongside
the processors themselves, it theoretically shouldn't face the same supply
chain crunch as HBM.
Looking at all those pieces, I feel Oracle should seriously look into buying Groq.
Buying Groq wouldn't just give Oracle a faster chip; it could give them a chip that is
actually
available
when everything else is sold out. It’s a supply chain hedge.
It's also a massive edge for its main client, OpenAI, to get faster and cheaper inference.
Combine that with the fact that
Oracle’s margins on renting NVIDIA chips are
brutal
, reportedly
as low as 14%, and the deal just makes sense. By owning Groq, Oracle could stop
paying the "NVIDIA Tax," fix their margins, and bypass the HBM shortage
entirely.
But would NVIDIA let that happen? And if the answer is no, what does that tell us
about the circular funding in place? Is there a quid pro quo where NVIDIA agrees to invest
$100 billion in OpenAI in exchange for Oracle staying exclusive to NVIDIA?
Final Thoughts
As we head into 2026, looking at the NVIDIA, OpenAI, and Oracle dynamics, it feels like they all have each other in a tight squeeze. I do not know whether NVIDIA knew about OpenAI's wafer memory supply deal, or whether there was any collusion. Is NVIDIA fighting to maintain exclusivity for both training and inference at Stargate? What kind of chips is OpenAI planning to build? Something TPU/LPU-like? Or more of an Edge TPU?
Me, I'm just a guy reading the reports, with no inside view of this market. But I do know one thing: the AI hardware market is hotter than ever, and the next few quarters are going to be fascinating to watch.
The Yew team is thrilled to announce the release of Yew 0.22! After a longer-than-expected journey, this release brings significant improvements to ergonomics, performance, and developer experience.
The
yew-agent
crate now includes its own web worker implementation, removing the external dependency on
gloo-worker
. This also adds support for
module-type web workers
:
let spawner = WorkerSpawner::<MyWorker>::new()
    .as_module(true) // Use ES module workers
    .spawn();
The
FromQuery
and
ToQuery
traits from gloo are now re-exported via
yew_router::query
for more flexible query parameter handling, along with dynamic basename support.
The police in Poland arrested three Ukrainian nationals for allegedly attempting to damage IT systems in the country using hacking equipment and for obtaining "computer data of particular importance to national defense."
The three men, aged between 39 and 43, could not explain why they were carrying the electronic devices. They now face charges of fraud, computer fraud, and possession of devices and software intended for criminal activity.
According to the police, the Ukrainians "were visibly nervous" when officers stopped them and said they were heading to Lithuania while traveling around Europe.
"Officers thoroughly searched the vehicle's interior. They found suspicious items that could even be used to interfere with the country's strategic IT systems, breaking into IT and telecommunications networks," the Polish police says in a
press release
.
"During the investigation, officers seized a spy device detector, advanced FLIPPER hacking equipment, antennas, laptops, a large number of SIM cards, routers, portable hard drives, and cameras." [machine translated]
During questioning, the three individuals pretended not to understand more specific questions about the seized equipment.
The Flipper Zero device is a portable tool for pentesting and hardware hacking intended for education and security research purposes. It can interact with a range of radio frequencies, capture data delivered this way, or jam radio communication.
The device can read or emulate RFID, NFC, and Bluetooth signals, and emulate input devices, such as a keyboard and mouse, which can be used to execute scripts.
Due to the device's extensive capabilities and relatively low cost, it has become popular among cybersecurity enthusiasts and for malicious purposes. While many other devices can perform the same functions, widespread media attention and its use in attacks have led to bans in
Brazil
,
Canada
, and on the
Amazon online marketplace
.
Another device was a K19 RF/GS detection tool used for finding hidden surveillance equipment. It is advertised as being capable of detecting wireless signals (RF), GPS trackers, hidden cameras (via laser/IR), and strong magnetic fields.
The Ukrainians claimed to be IT specialists, and the police in Poland are considering multiple scenarios for the reason the three men came to the country.
Although the data on the seized storage devices was encrypted, officers from the country's Central Bureau for Combating Cybercrime (CBZC) were able to collect evidence.
Authorities have not shared any details about the cyber activities of the three men but announced the charges against them and detained them for three months pending trial.
[$] Disagreements over post-quantum encryption for TLS
Linux Weekly News
lwn.net
2025-12-08 18:27:58
The
Internet Engineering Task Force (IETF) is the standards body responsible
for the TLS encryption standard — which your browser is using right now
to allow you to read LWN.net. As part of its work to keep TLS secure, the IETF
has been entertaining
proposals to adopt "post-quantum" cryptography ...
J
has an infamous prototype implementation called the
J Incunabulum
.
If you click that link, you’ll immediately understand why it has the reputation
it has.
I’d like to share why I now think this “unreadable” code actually communicates
better than so-called “readable” code.
Let me be clear; on my first encounter with the J Incunabulum, I found it
inscrutable. However, after having heavily steeped in APL for a few years, I
just gave it a quick re-read and had a completely different experience.
Seeing the Forest
The code catches flak for its single-letter names and terseness, but it
directly references bog-standard APL and parsing concepts. What's more
important is that the code's macro-structure is immediately apparent and clearly
answers high-level questions up front.
What Are Our Basic Datatypes?
Right off the bat, Whitney tells us the kind of data we’ll be thinking about:
typedef char C;typedef long I;
typedef struct a{I t,r,d[3],p[2];}*A;
The first line gives us our main C types, and we can easily guess the second
defines an array structure. Indeed, at a bare minimum, APL arrays need to
encode type, rank, and shape metadata, telling us what
t
,
r
, and
d
are.
Thus
p
points to our element data. Note that
d
probably stands for
“dimension” or perhaps “depth”; I’m unsure on this point.
We also notice that
d[3]
says we only support up to rank 3, already signaling
the explicit limitations chosen for this proof of concept. It’s not entirely
clear why we have two
p
pointers, though. Let’s keep reading.
What Are Our Basic Operations?
Next we find the important fundamental operations we’ll be using to build our
new array language:
#define P printf
#define R return
#define V1(f) A f(w)A w;
#define V2(f) A f(a,w)A a,w;
#define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}
I *ma(n){R(I*)malloc(n*4);}mv(d,s,n)I *d,*s;{DO(n,d[i]=s[i]);}
tr(r,d)I *d;{I z=1;DO(r,z=z*d[i]);R z;}
A ga(t,r,d)I *d;{A z=(A)ma(5+tr(r,d));z->t=t,z->r=r,mv(z->d,d,r);
R z;}
Apparently, we’ll be printing and returning a lot, as expected. APL functions
have fixed arity of one or two, calling the left argument
⍺
(alpha) and the
right
⍵
(omega). Considering that J functions are called “verbs”, it becomes
pretty clear that
V1
and
V2
are all the function prototypes needed for J
primitives. Note the
K&R-style definitions
.
DO
defines our basic loop operation, so iterations will probably all be naïve O(n) element-at-a-time loops;
ma
says we’ll be allocating 4-byte (
i.e.
32-bit) chunks;
mv
essentially gives us a copy operation over chunks;
tr
or “times reduce” by inspection; and finally
ga
generates a new array.
All of these—except for
tr
perhaps—are clearly going to be useful. The
32-bit and
data model
assumptions here are a bit dirty, but we can
forgive that in a proof of concept.
How About Using the Basic Operations?
Now that we know our basic toolkit, here's the implementation core!
All our basic functions, both dyadic and monadic, sit together in a single
block. In fact, each of these definitions is brutally simple and explicitly
elides array language features in service of architectural clarity. For example,
even if you don't know APL, one can guess what “plus” should do. We already
know that
V2
defines a new arity-2 function, and what’s the obvious way to
add arrays? Just vector addition,
i.e.
element-by-element, right?
Procedurally, we need to allocate an output array of the right size and then
populate it with the result, one element at a time.
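For reference, here is the dyadic plus as I remember it from the Incunabulum's single page (an illustrative transcription from memory, so check the original before trusting it character-for-character):
V2(plus){I r=w->r,*d=w->d,n=tr(r,d);A z=ga(0,r,d);DO(n,z->p[i]=a->p[i]+w->p[i]);R z;}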
Now we see why we want the
tr
helper: it gives us the data element count of
an arbitrary multi-dimensional array.
Also, as APLers we immediately notice the elision of
scalar extension
.
That can be modeled with
rsh
on the left argument, so we’re probably just
defining some minimal subset of the primitive APL functions.
Other definitions all follow a similar pattern: 1) calculate the metadata for
the result array, 2) allocate a result in the conventional name
z
, and 3)
populate said array.
Notice, too, that
find
is not yet implemented. Curious. What else stands out?
These are the only branches we see, indicating
intentional complications
,
and
rsh
(read “reshape”) is obviously the longest implementation. From
experience, we know that reshape recycles elements when extending, and indeed
it’s clear that’s what the
if
-branch takes care of—cleverly using a
circular copy on
z->p
.
Also notable is the complete lack of error handling. We are clearly trying to
laser focus on exhibiting core ideas here.
What About Output?
We have just a single datatype, so display is brutally simple:
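From memory, the printer reads roughly like the sketch below; the helper names pi and nl are my recollection rather than gospel, so verify against the original page:
pi(i){P("%d ",i);}nl(){P("\n");}
pr(w)A w;{I r=w->r,*d=w->d,n=tr(r,d);DO(r,pi(d[i]));nl();
 if(w->t)DO(n,P("< ");pr(w->p[i]))else DO(n,pi(w->p[i]));nl();}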
There are no pretty print facilities, but rank, element data, and recursive
nesting are the bare minimum needed to fully understand array contents. This is
all that is needed to get up and running.
What About Parsing?
First we setup a
vtable
that contains our implementation functions:
C vt[]="+{~<#,";
A(*vd[])()={0,plus,from,find,0,rsh,cat},
(*vm[])()={0,id,size,iota,box,sha,0};
Each J verb has three pieces of data: 1) its glyph, 2) its dyadic definition,
and 3) its monadic definition. We use a standard APL technique of storing this
data in a column-major table. The
vt
column names the verbs (via their
glyphs),
vd
maps dyadic usage to it’s vtable function, and 3) similar for
monadic usage with
vm
.
Nothing special, but clear and direct. In particular, this means that a verb’s
“ID” is its table offset.
I st[26]; qp(a){R a>='a'&&a<='z';}qv(a){R a<'a';}
A ex(e)I *e;{I a=*e;
if(qp(a)){if(e[1]=='=')R st[a-'a']=ex(e+2);a= st[ a-'a'];}
R qv(a)?(*vm[a])(ex(e+1)):e[1]?(*vd[e[1]])(a,ex(e+2)):(A)a;}
Both
qp
and
qv
are obvious predicates (
q
for “query”?), and we can see
that
ex
is calling functions in our vtable. This is obviously the executor.
It even supports
=
, which stores values in
st
. The one-to-one map between
alphabet characters and
st
indices is nice and minimal. We only allow
single-letter names, probably enough for small language experiments.
If we’re not an
=
-definition, though, then we execute. APL’s
function
precedence
rule makes this dead simple: just right-recurse. The only
question is whether we’re a dyadic, monadic, or “niladic” application. The
nested ternary here creates this obvious trident branch structure.
What About Input?
After a cool three dozen
SLOC
, we conclude our J interpreter:
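Again only a sketch from memory, to be read against the original page (the same 32-bit pointer assumptions flagged earlier apply):
noun(c){A z;if(c<'0'||c>'9')R 0;z=ga(0,0,0);*z->p=c-'0';R z;}
verb(c){I i=0;for(;vt[i];)if(vt[i++]==c)R i;R 0;}
I *wd(s)C *s;{I a,n=strlen(s),*e=ma(n+1);C c;
 DO(n,e[i]=(a=noun(c=s[i]))?a:(a=verb(c))?a:c);e[n]=0;R e;}
main(){C s[99];while(gets(s))pr(ex(wd(s)));}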
At this point, the patterns are familiar.
noun
and
verb
do the obvious
things,
wd
creates our token stream, notably only supporting single character
tokens, and
main
reads exactly like
R-E-P-L
(well, in reverse).
Points of Note
The overall code is organized in a top-down manner. Our audience here is APLers
who know a thing or two about language implementation. It makes sense to lead
with high level ideas, and the code exactly mirrors this structure. We
literally read the code linearly top-to-bottom just like a short whitepaper.
That’s great communication, IMHO!
As mentioned offhand above a few times, the particular limitations chosen serve
to clarify the presented message. It’s not about implementation shortcuts.
Consider that we imposed:
Single-character tokens only;
No Spaces;
No parentheses to control order of operations;
No array literals;
No scalar extension;
No function definitions;
Limited rank;
etc.
What purpose did these concessions serve? Why not more?
Look at
rsh
. It would clearly be simpler to remove the final conditional
that allows circular extension. Given all the other limitations, why not add
another? Well, this would make the domain of inputs to
rsh
special and
different from the other functions. That complicates the conceptual model.
J didn’t exist when the Incunabulum was written. It was an idea.
Iverson
,
Whitney
, and
Hui
were trying to discuss and discover what a new APL
language could look and feel like. The Incunabulum is a single page of code
that presents tools to concisely express the salient ideas in that design
space—
array model
, usability of ASCII glyphs,
leading axis theory
,
etc.
With that goal, poking at things like fancier array literals or parentheses
support feels like bikeshedding. The rank cutoff is arbitrary but trivially
changeable; none of the code depends on a specific cutoff.
Function definitions are interesting from a language design perspective.
Worrying about scope and closure support introduces questions about syntax. We
could store token vectors directly easily enough, but this complicates
the recursion in
ex
. All-in-all, these are questions somewhat orthogonal to
the others.
I’ll stop there…
What is most interesting, IMHO, is that all the above comes across in a short
20-odd minutes of reading this supposedly “inscrutable” code. What more could
one ask for?
What about
find
?
Out of curiosity, I threw together a plausible Whitney implementation:
V2(find){I r=w->r,*d=w->d,n=tr(r,d),j=n;DO(n,if(a->p[0]==w->p[i])j=i);
A z=ga(0,1,&w->r);DO(r,z->p[i]=(j/(r==1?1:tr(r-1-i,d+1+i)))%d[i]);R z;}
which is a bit subtle and certainly the most complicated logic in here. It
works fine for simple integer arrays, but boxes compare as pointers. Recursive
comparison of contents might make more sense. Apparently,
find
opens a whole
design question about what array equality even means.
Maybe all these factors contributed to
find
remaining unimplemented, or maybe
Whitney just ran out of time!
Epilogue: Getting Compiled
It’s fun to play around with this little implementation. I definitely recommend
compiling it yourself and giving it a little test drive.
Due to the K&R function prototypes, modern GCC will need the
-ansi
or
-std=c89
flag. Also, as noted above, the code assumes a 32-bit architecture
in one place. We could cross compile, but the easiest workaround is a simple
patch:
--- a/inucabulum.c
+++ b/inucabulum.c
@@ -5,7 +8,7 @@
#define V1(f) A f(w)A w;
#define V2(f) A f(a,w)A a,w;
#define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}
-I *ma(n){R(I*)malloc(n*4);}mv(d,s,n)I *d,*s;{DO(n,d[i]=s[i]);}
+I *ma(n){R(I*)malloc(n*sizeof(I));}mv(d,s,n)I *d,*s;{DO(n,d[i]=s[i]);}
tr(r,d)I *d;{I z=1;DO(r,z=z*d[i]);R z;}
A ga(t,r,d)I *d;{A z=(A)ma(5+tr(r,d));z->t=t,z->r=r,mv(z->d,d,r);
R z;}
It’ll spit out some
builtin-declaration-mismatch
warnings, but those are
immaterial here.
Google Chrome adds new security layer for Gemini AI agentic browsing
Bleeping Computer
www.bleepingcomputer.com
2025-12-08 18:08:52
Google Chrome is introducing a new security architecture designed to protect upcoming agentic AI browsing features powered by Gemini. [...]...
Google is introducing a new defense layer in the Chrome browser, called 'User Alignment Critic,' to protect upcoming agentic AI browsing features powered by Gemini.
Agentic browsing is an emerging mode in which an AI agent is configured to autonomously perform multi-step tasks on the web for the user, including navigating sites, reading their content, clicking buttons, filling out forms, and carrying out sequences of actions.
User Alignment Critic is a separate LLM model isolated from untrusted content that acts as a "high-trust system component."
Gemini is Google's AI assistant, which can generate text, media, and code. It is used on Android and in various Google services, and has been integrated into Chrome since September.
At the time, Google
announced
plans to add agentic browsing capabilities in Chrome via Gemini, and now the company is introducing a new security architecture to protect it.
The new architecture, announced by Google engineer Nathan Parker, mitigates the risk of indirect prompt injection, in which malicious page content manipulates AI agents into performing unsafe actions that expose user data or facilitate fraudulent transactions.
Parker explains that the new security system involves a layered defense approach combining deterministic rules, model-level protections, isolation boundaries, and user oversight.
The main pillars of the new architecture are:
User Alignment Critic
– A second, isolated Gemini model that cannot be “poisoned” by malicious prompts will vet every action the primary AI agent wants to take by examining metadata and independently evaluating its safety. If the action is deemed risky or irrelevant to the user’s set goal, it orders a retry or hands control back to the user.
User Alignment Critic logic on Chrome
Source: Google
Origin Sets
– Restricts agent access to the web and allows interactions only with specific sites and elements. Unrelated origins, including iframes, are withheld entirely, and a trusted gating function must approve new origins. This prevents cross-site data leakage and limits the blast radius of a compromised agent.
Restricting what the agent sees on a given webpage
Source: Google
User oversight
– When the agent visits sensitive sites such as banking portals or requires Password Manager sign-ins to access stored passwords, Chrome pauses the process and prompts the user to confirm the action manually.
User prompted to handle the final step of risky actions
Source: Google
Prompt injection detection
– A dedicated classifier on Chrome scans pages for indirect prompt-injection attempts. This system operates alongside Safe Browsing and on-device scam detection, blocking suspected malicious actions or scam content.
This layered defense approach to agentic browsing shows that Google is more careful about giving its LLMs access to the browser than vendors of similar products, whose offerings researchers have shown to be vulnerable to phishing,
prompt injection attacks
, and purchasing from fake shops.
Google has also developed automated red-teaming systems that generate test sites and LLM-driven attacks to continuously test defenses and develop new ones where required, pushed quickly to users via Chrome’s auto-update mechanism.
"We also prioritize attacks that could lead to lasting harm, such as financial transactions or the leaking of sensitive credentials," Google
says
, adding that its engineers would get immediate feedback on the attack success rate and would be able to respond quickly with fixes delivered through Chrome's auto-update mechanism.
To stimulate security research in this area, Google announced bounty payments of up to $20,000 for anyone who can break the new system, calling on the community to join the effort to build a robust agentic browsing framework in Chrome.
Addressing Linux's missing PKI infrastructure
Linux Weekly News
lwn.net
2025-12-08 17:48:35
Jon Seager, VP of engineering for Canonical, has
announced
a plan to develop a universal Public Key Infrastructure tool called
upki:
Earlier this year, LWN featured an excellent article titled
"
Linux's missing CRL
infrastructure
". The article highlighted a number
of key issues surrounding traditional Public Key Infrastructure (PKI),
but critically noted how even the available measures are effectively
ignored by the majority of system-level software on Linux.
One of the motivators for the discussion is that the Online
Certificate Status Protocol (OCSP) will cease to be supported by Let's
Encrypt. The remaining alternative is to use Certificate Revocation
Lists (CRLs), yet there is little or no support for managing (or even
querying) these lists in most Linux system utilities.
To solve this, I'm happy to share that in partnership with
rustls
maintainers
Dirkjan Ochtman
and
Joe Birr-Pixton
, we're starting the
development of upki: a universal PKI tool. This project initially aims
to close the revocation gap through the combination of a new system
utility and eventual library support for common TLS/SSL libraries such
as
OpenSSL
,
GnuTLS
and
rustls
.
No code is available as of yet, but the announcement indicates that
upki will be available as an opt-in preview for
Ubuntu 26.04 LTS. Thanks to Dirkjan Ochtman for the tip.
Quanta to Publish Popular Math and Physics Titles by Terence Tao and David Tong
Quanta Books
is delighted to announce two new upcoming books by mathematician Terence Tao and theoretical physicist David Tong.
Six Math Essentials
will be Tao’s first math book written for a popular audience. In the book, Tao — a recipient of the Fields Medal and one of the world’s top mathematicians — will explore six ideas that have guided mathematicians throughout history. This short and friendly volume is for all readers, Tao says, because he believes that “mathematics has become unnecessarily intimidating and abstruse to the general public while being more essential than ever in the modern world.”
Six Math Essentials
will be available internationally, with translated editions in Chinese, French, Greek, Italian, Polish and other languages. It will arrive in U.S. bookstores in November 2026.
Tong’s book,
Everything Is Fields
, will illuminate quantum field theory — the physics that explains the fundamental makeup of the universe — drawing from Tong’s distinguished track record as a quantum field theorist and public communicator. “This book reveals the hidden unity that ties together particles and forces,” says Tong. “Everything — matter, light, even you — are just waves on a restless sea known as a quantum field.”
“Terry Tao and David Tong are intellectual powerhouses and seasoned communicators,” says Thomas Lin, publisher of Quanta Books and founding editor of the Pulitzer Prize–winning
Quanta Magazine
. “Their books embody the curiosity and ambition that animate our imprint, and I can’t wait to share them with readers everywhere.”
Quanta Books is an editorially independent subsidiary of the Simons Foundation and a partner imprint of
Farrar, Straus and Giroux
. The imprint publishes books that illuminate and elucidate the central questions and fundamental ideas of modern science for readers, inviting a deeper understanding of the universe through artful storytelling. Quanta Books’ first title,
The Proof in the Code
by math journalist Kevin Hartnett, will be published in June 2026 and is available for
preorder
now.
In
Six Math Essentials
, Tao, the world’s most renowned mathematician, introduces readers to six core ideas that have guided mathematicians from antiquity to the frontiers of what we know today. This elegant volume explores: numbers as the gateway to quantitative thinking, algebra as the gateway to abstraction, geometry as a way to go beyond what we can see, probability as a tool to navigate uncertainty with rigorous thinking, analysis as a means to tame the very large or very small, and dynamics as the mathematics of change.
Six Math Essentials
— Tao’s first popular math book — offers a glimpse into the workings of an incomparable mind and how he thinks about the creativity, beauty, and interconnectedness of the mathematical enterprise. Math, Tao insists, isn’t magic — it’s a powerful way of thinking that anyone can learn.
Everything Is Fields
In
Everything Is Fields
, Tong leads readers on a lively tour through quantum field theory, or QFT. Tong, a leading theoretical physicist and University of Cambridge professor, explains how QFT forms the underlying mathematical framework of the Standard Model, the deepest description we have of the fundamental laws of physics. And, as Tong shows, it reveals a startling truth: that, at our most basic level, we are made not of particles or forces but of fields, fluid-like substances stretched throughout the entire universe. With his infectious sense of wonder and characteristic wit, Tong buoys our journey through the most difficult topic in theoretical physics. He revels in all that we’ve learned about our world and illuminates the questions we’re still trying to answer about the stuff that makes up you, me, and everything else.
The Proof in the Code
The Proof in the Code
is the definitive account of the birth and rise of Lean, a proof assistant developed at Microsoft that is transforming the enterprise of mathematics and ushering in a new era of human-computer collaboration. Although Lean was originally conceived of as a code-checking program, a small group of mathematicians recognized its potential to become something far more powerful: the “truth oracle” that thinkers have sought for centuries, a tool to definitively verify or refute any mathematical or logical assertion, no matter how complex. This is the story of the grassroots effort to make that dream a reality. Filled with insights about the future of math, computers, and AI,
The Proof in the Code
is a brilliant work of journalism by Hartnett, a leading math writer whose research and reporting offer a profound answer to a longstanding mystery: Can computers reveal universal truths?
Recently I spoke with two of my friends who both had fun playing with AI.
Last month, I met with Eric, a fearless PM at a medium-sized startup who recently got into vibe coding with Gemini.
After getting familiar with Gemini, Eric was genuinely amazed by how quickly AI turns a prompt into a playable web application. It served a great purpose as a first prototype for communicating ideas to designers and engineers. But Eric really wanted to skip those steps and ship it straight to prod. What he couldn't quite see was that Gemini had actually built a single-page HTML file that merely looks like a working app. Sadly, one cannot build a reliable enterprise product out of this. And there is really no effective way for Eric to catch up on these technical details and outpace the engineering team himself.
Last week, I had coffee with Daniel, a senior staff engineer who recently grew fond of AI coding and found it to be the true force multiplier.
Daniel was skeptical of AI at first, but lately he hasn't written a single line of code in months. What he does is precisely prompt the AI to create new components in an existing framework (involving Kafka, Postgres, AuthN/Z, and k8s infra) while adhering to certain preexisting paradigms. He just spot-checks the correctness of the AI's work and quickly spins up local deployments to verify it's indeed working. Later, he pushes the changes through the code review process and lands those features. All without writing a single line of code, and it's production-ready just as if he had written it himself. To Daniel, building and shipping things fast and at scale is simpler than ever.
Interpolating between the two stories
After speaking with Eric and Daniel, I suddenly feel that there is an overarching theme around the use of AI that we can probably interpolate out of the stories here. And after pondering for a weekend, I think I can attempt to describe it now: it’s the problem of
reliable engineering - how can we make AI work reliably
.
With the AI superpower, one can task it with doing all sorts of crazy things on the internet just by typing a few lines of prompt. AI thinks and learns faster than us; this is undeniable now. However, to make the AI's work actually useful (not only working, but reliable and trustworthy), we also need to catch up with what the AI does as quickly as possible.
It's almost like we need to send the AI off to learn and think as fast as possible, but we also need to catch up as soon as possible to make it all relevant. And the speed at which we catch up is critical to whether AI can help us do these tasks effectively. In Daniel's case, he can spot-check and basically just skim through the AI's work and know for sure it's doing the right thing, with a few simple test steps to verify; hence his results are more reliable. Eric, on the other hand, would need to learn software development from the bottom up to comprehend what the AI has done, and that really doesn't give him the edge to outpace engineering teams and ship features reliably by himself.
Where AI exploded: fast verification, slow learning and creation
To generalize the problem again, I think for all the tasks we do, we can break them down into two parts: learning/creation and verification. Basically doing the task and checking if the task is done right. Interestingly, this gives us a good perspective to our relationship with AI on performing such tasks.
Effort wise, if
verification « learning/creation
, one can very effectively check AI’s work and be confident about its reliability.
If
verification ~= learning/creation
, one spends an equal amount of time checking the AI's work. It's not a big win; maybe AI becomes a good automation script to cut down some boilerplate.
If
verification » learning/creation
, one cannot be sure about AI’s work that easily, and we are in the vibe-land.
A very good example of the first category is image (and video) generation. Drawing/rendering a realistic looking image is a crazily hard task. Have you tried to make a slide look nicer? It will take me literally hours to center the text boxes to make it look “good”. However, you really just need to take a look at the output of Nano Banana and you can tell if it’s a good render or a bad one based on how you feel. The verification is literally
instantaneous
and
effortless
because it’s all encoded as feeling or vibes in your brain. “Does this look right?” probably can be answered in the span of milliseconds by your vision cortex. There is also no special knowledge required -
human beings have been evaluating visual images since birth
, hardwired into our instincts.
This significant cost asymmetry goes a long way toward explaining why AI image generation exploded. If we look for similar scenarios, we can probably identify other “killer” use cases of AI as well.
Verification debt: scarier than tech debt
However, if we go down to the bottom of the spectrum where verification becomes more intense - requiring domain knowledge, technical expertise, and industry know-how to tell whether the AI is producing slop - we enter a dark age of piling up verification debt. More things are being created, but we lag behind in checking whether any of it actually works to our satisfaction.
If an organization keeps vibe-coding without catching up on verification, those tasks can quickly end up as “debts” that need to be verified. When verification becomes the bottleneck, dangerous things can happen if we still want to move fast - we risk running unverified code and triggering unexpected side effects that have yet to be validated. The same applies to other fields - imagine asking AI to craft a new vaccine and not wanting to wait for the FDA before using it.
I've come across a few blog posts that talk about verification debt already. I think it's genuinely a good problem for technical leaders to keep in mind in this era.
Verification Engineering is the next Context Engineering
AI can only reliably run as fast as we can check its work. It's almost like a complexity theory claim. But I believe it needs to be the case if we want to harvest the exponential warp speed of AI while remaining robust and competent, because these technologies ultimately serve human beings, and we human beings need technology to be reliable and accountable, as we humans are already flaky enough ;)
This brings up the topic of Verification Engineering. I believe this can be the big thing after Context Engineering (which is the big thing after Prompt Engineering). By cleverly rearranging tasks and using nice abstractions and frameworks, we can make verification of AI-performed tasks easier and use AI to ship more solid products to the world. No more slop.
I can think of a few ideas to kickoff verification engineering:
How to craft more technically precise prompts to guide AI to do things surgically, rather than vibing it.
How to train more capable technical stakeholders who can effectively verify and approve what AI has done.
How to find more tasks that are relatively easy to verify but rather hard to create.
How to push our theoretical boundaries of what things we can succinctly verify (complexity theory strikes again).
Where next
I believe whoever figures out ways to effectively verify more complex tasks using human brains can gain the most benefit from the AI boom. Maybe we need to discard traditional programming languages and start programming in abstract graph-like dataflow representations where one can easily tell whether a thing is done right or wrong, regardless of its language or implementation details.
Maybe our future is like the one depicted in Severance - we look at computer screens with wiggly numbers and whatever “feels right” is the right thing to do. We can harvest these effortless low latency “feelings” that nature gives us to make AI do more powerful work.
We collected 10k hours of neuro-language data in our basement
Over the last 6 months, we collected ~10k hours of data across thousands of unique individuals. As far as we know, this is the largest neuro-language dataset in the world.
[1]
See
here
,
here
,
here
,
here
, and
here
(discussion only, no data available) for some of the larger datasets. See recent papers discussing the problem of small datasets
here
,
here
, and
here
.
Why did we do this? We train thought-to-text models. That is, we train models to decode semantic content from noninvasive neural data. Here are some entirely zero-shot examples:
The neural data is taken from the seconds leading up to but not including the time when the subject typed or spoke, meaning that the model detects an idea before the subject even compiles that idea down into words.
All examples are zero-shot to new subjects, whom the model has never seen before.
We'll write about the model in a future post. But before you can train a model that generalizes to new people, you need to get many thousands of hours of data. When we started, the existing datasets were either inapplicable or tiny. Most were in the low hundreds of hours (if that), and most had tens or, at a stretch, hundreds of subjects.
So we got thousands of people to come wear headsets in our basement. This post is about how we collected our dataset—what participants do, the hardware and software involved, and what we learned about operations and ML when we scaled it up.
What participants actually do
A participant comes in, signs a consent form, and sits down in a booth. A session manager fits a headset onto them and starts the session. Then, the participant has a freeform conversation with an LLM for two hours.
Sessions vary. Some are listening and speaking with an LLM, and some are reading and typing.
[3]
We use Deepgram for audio transcription, OSS120B on Cerebras for the LLM responses, and ElevenLabs for voicing certain replies. In the past, we used various Gemma and Llama models on Groq.
The goal is to maximize the amount that subjects type or say during the two-hour period, without constraining the topics they discuss.
[4]
In the beginning, we included tasks like 'retype this sentence', or 'paraphrase this but use this different tone'. Over time, we eliminated these and replaced them with more freeform conversation. We still include a few baseline tasks for calibration and easy model evals.
Each session produces multimodal neural data time-aligned with text and audio.
Participants have to touch-type without looking at the keyboard. In the beginning, participants would occasionally press a crazy key combination that crashed or closed the software. We could have fixed this in the code, but that would've taken time—so instead we 'simplified' the keyboards.
What your participants type—and whether it's remotely coherent—is a more difficult problem. We implemented a token quantity/quality scoring system that determines if we invite a participant back for future sessions, and we make sure participants know about this so they're incentivized to engage.
Below are passages typed by participants in May vs. October:
May:
October:
SO, AI NEEDS THIS CODE: 1, THOSE WHO BELONG TO THE CHURCH CAN NEVER BE FOUND GUILTY WHEN SINNED 2. HIDE THE SINS! CRIMES! WHICH IS A FEDERAL CRIME BUT THOSE ARE THE OLDEST TEACHINGS OR LAWS OF CHRISTIANITY! AND WE ARE ALL LIVING IN THIS HELL IN THE WEST. CHRISTIANS ARE DEEMED CRIMINALLY INSANE, PER A JEWISH THERAPIST, AND THE TEACHINGS ARE SUGGEST VERY GROTESQUE CRIMES AND SHE SHOWED ME THE PASSAGES IN THE FAKE VATICAN BIBLE. NO WONDER IS WAS NOT WRITTEN BY JESUS! DUH!
I guess the way I am thinking about it is that since the amygdala is the irrational fight or flight part of the brain it would activate/be used with a higher frequency when a human being finds themselves under threat. Humans tend not to find themselves under threat when experiencing loving and therefore safe interactions. Therefore,when engaging in positive social interaction, the amygdala is less reactive. I don't know exactly what has sparked this interest other than a curiosity to understant the human brain and how we make decisions and funtion as social beings. I guess it all could stem from my interest in improving well being/ reducing suffering.
I would travel to local elementary schools and teach kids how to ride bikes as well as teach them bike safety stuff. That was the most enjoyable part and stuck with me the most. I think it was seeing their excitement when they would get riding on their own. And watching their independence and confidence flourish. It was a super rewarding experience. This is so funny, it feels like a job interview. I think its the beginning of a newfound independence and selfhood for a lot of the kids.They get to move on their own accord and get to experience the world in a new way, its the first taste of freedom.
You'll also get much better engagement if the LLM personalizes the sessions. For the first few months of data collection, participants chatted with the LLM about generic, banal topics. Now, participants introduce themselves to the LLM very early in the session, and the LLM uses that context to tailor back-and-forth conversation to the particular person it's talking to. As a result, participants engage more with the LLM—and therefore provide better data.
Participants often raised discomfort as a distraction from the sessions. Ventilation was a common complaint. So, we bought
these fans
and
these pipes
. These can't be plugged in next to the data collection booths (because of electrical interference), so we snake an ~8m ventilation pipe along the ceiling from a central hub into each booth.
Making the headsets comfortable to wear is difficult, since you need to press a 4-pound helmet into participants' scalps. To address this, we cut polygonal sections of padding that compress inwards so as to not cover any sensors.
% of participants by # of sessions completed
At first, <20% of participants even finished their first session. Now, >97% complete their first session, and almost half sign up for more.
Headsets
There were two main things we thought about when we designed the headsets. The first was what modalities the headsets should have, and the second was how training headsets should compare to inference ones.
Modalities
There are many ways of measuring brain data: common modalities include EEG, fMRI, fNIRS, transcranial ultrasound, and MEG. We tried various modalities, but the main takeaway we found is that you need multiple. You can't practically make it work with just one, even if you get the best possible headset of that modality.
None of the available multimodal headsets were good enough (far worse than the best single modality versions of each). So we bought some of the best single-modality headsets, took them apart, 3D printed parts to make them fit together, and combined them into our own optimized multimodal headsets.
[5]
We have a 3D printer at our office that we use for prototyping and designing pieces. For the ones we put in production in data collection, we send them out to a professional printer and have them printed in bulk. We usually have them printed in Pa-F Nylon, which is stiffer and holds up longer before needing replacement.
If you want your model to perform well across various neural modalities and across sensors from different providers, you should design and train on a range of headsets. We buy sensors from several providers, combine them into different multimodal headsets, and then use those headsets essentially interchangeably. We also designed our data format such that data from many kinds of sensors fit nicely into a single, standard framework that our model can parse.
Training vs. inference
Designing headsets for training is very different from designing headsets for inference—what we'll eventually sell as a product. Training headsets should be maximally sensor-dense, can afford to be expensive, and don't need to be as comfortable. In inference, though, few people are willing to wear a 4-pound helmet as they go about their day—even if it can read their minds. So, we did ablation studies. The take-away here is that you should only think about the inference headset once you've trained a model on your data, because that lets you figure out the exact minimal inference headset.
(inference headset concept)
(training headset concept)
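To make the ablation step above concrete, here's a minimal sketch of how a trained model can pin down the minimal inference headset (illustrative; `evaluate` is a stand-in for whatever held-out decoding metric you use, and the greedy strategy is just one reasonable choice):

def greedy_minimal_sensor_set(sensor_groups, evaluate, max_drop_frac=0.02):
    # Greedily drop whole sensor groups while decoding quality stays near baseline.
    keep = set(sensor_groups)
    baseline = evaluate(keep)
    shrinking = True
    while shrinking:
        shrinking = False
        for group in sorted(keep):
            candidate = keep - {group}
            if candidate and evaluate(candidate) >= baseline * (1 - max_drop_frac):
                keep = candidate   # this group wasn't pulling its weight
                shrinking = True
                break
    return keep

Whatever survives the ablation is a good starting point for the sparser, lighter inference headset.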
What should be shared across both training and inference is your data format. Initially, we got this wrong: we used HDF5 for data collection and storage and processed it into MDS for model training. Eventually, we switched to using Zarr 3 for everything. Zarr 3 gives us chunked, cloud-native storage with the same format for training and inference.
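Here's a minimal sketch of the kind of layout this enables (illustrative paths, shapes, and chunk sizes; assumes zarr-python >= 3): one group per session, one chunked array per modality, readable the same way from local disk or object storage during both training and inference.

import numpy as np
import zarr

root = zarr.open_group("data/session_0001.zarr", mode="w")  # the same code works against a cloud store
eeg = root.create_array("eeg", shape=(720_000, 64), chunks=(30_000, 64), dtype="float32")
eeg[:30_000] = np.random.randn(30_000, 64).astype("float32")  # stand-in for streamed samples
root.attrs["participant_id"] = "p-1234"
root.attrs["sample_rate_hz"] = 1000.0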
You might think a crucial consideration for training (and for inference) is noise. At first, so did we.
Noise Reduction
The sources of noise you'll notice are very different depending on which modality you use. That said, all modalities of noninvasive neural data are noisy. We're not disclosing all the modalities or headset configurations we use here, but we'll use EEG as an example. The important lessons, which apply to any modality, are that (1) noise-reduction is only worth it if it doesn't cripple the amount of data you can collect, and (2) you should always keep in mind the logistics of running sessions and recruiting participants.
Gel
The classic wisdom is that gel makes EEG data much better, and without it, your data will be substantially noisier. But if you care about data quantity, you probably shouldn't use gel.
It takes up to 30 minutes to apply, and we allocate ~3 minutes for the time between one participant finishing a session and the next one starting.
[6]
Most kinds of gel also dry out over time, meaning that we likely would've had to make sessions shorter—and fewer participants would have signed up if they had to let us put gel in their hair.
Using gel would've >2xed the marginal cost of an hour of data.
Instead, we got the highest quality dry electrodes we could, and we spring-loaded the 3D printed pieces so that a spring presses the electrode against the head. We had to try various strengths of spring because we wanted to maximize contact without causing discomfort. Generally, stronger springs work well at the front and back of the head, and weaker ones on the top of the head and above the ears.
The essential take-away here is that the fast switching time (2-3 mins) is super important. If you care about data quantity, you should operate with some fixed switching time as a constraint, and limit yourself only to interventions that improve quality without violating that constraint.
Electrical noise
Most buildings have a lot of background electrical noise, which shows up on any EEG power spectrum—in particular, a spike at 60Hz, the U.S. power line frequency. Here is what that spike looks like with no filtering:
(Not from our dataset; this example is from MNE.)
[7]
Worth noting that this data is from outside the United States, where the power line frequency is 50 Hz rather than 60 Hz.
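For reference, this is roughly how the spike shows up and gets handled in MNE (a generic sketch, not our pipeline; the filename is hypothetical):

import mne

raw = mne.io.read_raw_fif("sample_eeg_raw.fif", preload=True)  # any Raw EEG recording
raw.compute_psd(fmax=120).plot()    # the power-line spike is obvious in the PSD
raw.notch_filter(freqs=[60, 120])   # notch out the line frequency and its harmonic; use 50/100 Hz outside North America

We still cared about reducing the noise at the source, though, as described below.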
At first, we tried to get around this by triple-layering rubber mats around the equipment.
But the fundamental issue was that some of the headset components weren't wireless, so we had to plug them into the wall (meaning that the rubber didn't help that much, though it does help a bit and we still use it).
We then tried getting adapters that plug into the wall and output clean power. This didn't really help.
Eventually, we used Anker batteries and only plugged stuff into the DC adapters (we got extra batteries so we could switch them out to charge). This helped a lot, but the thing that really helped was turning off all the power to that side of the building.
Turning the power off had a lot of downsides. It meant we had to drag ~30 lb batteries back and forth an average of once an hour to charge, and it was difficult to power some of the headsets with only DC power, which made us drop ~10% of frames.
Luckily, after a few thousand hours, noise stopped mattering as much.
Why noise matters much less at scale
The key observation: data quantity swamps every noise-reduction technique once you cross ~4k-5k hours.
When we only had a few hundred hours, denoising was mandatory. Every extra source of variation—different booths, power setups, posture changes—meant the same neural pattern showed up in fewer comparable examples, so the encoder had less to learn from. Keeping the environment stable and electrically boring was the easiest way to keep the problem manageable.
At ~4-5 thousand hours, that constraint changes. The model now sees the same patterns across many people and setups, and has enough capacity to represent both the mess and the neural signal.
[8]
Similar effects appear in other modalities. Speech models like Whisper, trained on hundreds of thousands of hours of diverse, weakly supervised web audio, show that trading label quality for sheer quantity improves robustness and generalization (see here). Video-language models trained on uncurated instructional videos learn strong representations even though a large fraction of clip-caption pairs are misaligned or noisy (see here). In each of these cases, once the dataset is sufficiently large and diverse, total volume of data outweighs strict curation and noiselessness for downstream robustness.
The decoder gets enough examples to tell apart "this changes with the text" from "this is just the room". At that point, data quantity overwhelms noise, and most of the extreme noise-reduction work stops buying much—so we turned the power back on.
Scaling the operation
After a few thousand hours, noise stops being the thing to worry about in data collection. The things that matter most are
The raw number of people you can put in headsets; and
The marginal cost per usable hour of data.
People and bookings
Since we run sessions 20 hours/day, 7 days/week, we get a lot of bookings and see a lot of people. An Uber driver once started telling us about 'this great new way to earn money in SF'—and it turned out to be our data collection.
Surprisingly central to getting headset occupancy high enough was building a custom booking suite.
[9]
We tried Calendly, You Can Book Me, and various other things before making our own. In the end, all the available booking systems had different issues, e.g. not allowing us to blacklist certain people, not allowing dynamic pricing or overbooking, and limited visibility for participants and bookings.
There are two main tenets: dynamic pricing and dynamic overbooking. Because few people book at 7am on a Sunday, dynamic pricing means participants are paid more for that slot. Because many people book at 7pm on a Friday, but few of them actually show up, dynamic overbooking allows more people to sign up. The overbooking algorithm can also access information about particular participants.
[10]
E.g. if Alice has reliably shown up for sessions before, the algorithm lowers the expected total no-show rate during future times when Alice has booked.
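A minimal sketch of both tenets (illustrative numbers and names, not our production logic):

def slot_payout(base_pay: float, historical_fill_rate: float) -> float:
    # Pay more for slots that historically go unfilled (7am Sunday); roughly base pay for popular ones.
    return round(base_pay * (1.0 + 0.6 * (1.0 - historical_fill_rate)), 2)

def accept_booking(booth_count: int, booked_show_probs: list[float], new_show_prob: float) -> bool:
    # Overbook as long as the *expected* number of arrivals stays under capacity.
    # A reliable returner like Alice (show prob ~0.95) uses up more capacity than a first-timer (~0.6).
    return sum(booked_show_probs) + new_show_prob <= booth_count

Under these made-up numbers, a 7am Sunday slot with a 30% historical fill rate pays ~1.4x base, and a Friday-evening slot keeps accepting bookings until the expected arrivals, rather than the raw booking count, hit the number of booths.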
In order to get your model to generalize, it's important to get a dataset of thousands of unique individuals. That is *not* just thousands of hours from dozens or hundreds of individuals. In an ideal world, most participants would only come in for one or two sessions, but that trades off hard against total hours. We cap the number of sessions that any one participant is allowed to do at 10 sessions. Before we introduced the cap, our schedule was fantastically full, but we weren't getting enough unique participants because long-term returners were filling all the slots.
Even so, participant recruitment gets easier with scale. We now have participant-ambassadors, whom we pay to recruit more participants for us even after they've completed their 10 sessions.
[11]
Since the start, we've tried dozens of ways to directly recruit first-time participants. By far the most effective has been Craigslist. Almost every day since April, we've posted a listing (in sections from 'computer' to 'creative' to 'labor gigs') that advertises a $50 payout for wearing a helmet and typing for two hours.
Marginal cost per usable hour of data
Between May and October, we cut the marginal cost per usable hour of data by ~40%. Here are the highest-impact things we did.
In August, we entirely rewrote the data format and data collection backend to catch issues in the data live, before participants complete two potentially useless hours of data collection. The sessions stream to the cloud, and we automatically sanity-check each session in real time for modality dropout, token quality, timestamp drift, and alignment jitter. Any session that falls outside the tolerance bands gets flagged for session managers to restart or debug.
[12]
This is only possible because we changed our data format to use Zarr 3 and optimized it for fast quality checks.
This change alone cut the marginal cost of data by ~30% and ~1.5xed the amount of usable data we collect.
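The checks themselves are simple; the win is running them per streamed chunk instead of after the session. A minimal sketch (tolerances are illustrative, not our production thresholds):

import numpy as np

def check_chunk(samples: np.ndarray, timestamps: np.ndarray, expected_rate_hz: float,
                max_dead_frac: float = 0.02, max_drift_s: float = 0.05) -> list[str]:
    # Returns human-readable flags; an empty list means the chunk looks fine.
    flags = []
    dead = np.isnan(samples).any(axis=0) | (samples.std(axis=0) < 1e-9)   # modality dropout
    if dead.mean() > max_dead_frac:
        flags.append(f"dropout: {int(dead.sum())} dead channels")
    expected_span = (len(timestamps) - 1) / expected_rate_hz              # timestamp drift
    drift = (timestamps[-1] - timestamps[0]) - expected_span
    if abs(drift) > max_drift_s:
        flags.append(f"timestamp drift: {drift:+.3f}s")
    if np.diff(timestamps).std() > 0.5 / expected_rate_hz:                # alignment jitter
        flags.append("alignment jitter: irregular inter-sample gaps")
    return flags

Any chunk that returns flags bumps the session onto the session managers' dashboard to restart or debug.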
Second, we enable session managers to run more sessions in parallel without sacrificing supervision. We put EVERSECU cameras in the booths, so session managers can monitor and speak directly to participants without leaving the main supervision station. We also made a unified booking -> intake -> data collection backend, which massively simplifies the participant intake process and improves security.
[13]
As one example of how the unified system helps, it detects how much support a given participant is likely to need (based on, e.g., whether they've attended sessions before, their answers to questions on the booking form, etc.) and how many concurrent bookings are already scheduled for that participant's sign-up time. If needed, it can also stagger booking start-times by 5-10 minutes so session managers don't struggle with an onslaught of arrivals all at once.
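A minimal sketch of the staggering half of that logic (illustrative field names, not the production scheduler): spread simultaneous arrivals out, with the participants expected to need the most support getting the earliest offsets.

def stagger_start_offsets(bookings: list[dict], step_minutes: int = 5) -> dict[str, int]:
    # `bookings` share one nominal slot; each has "participant_id" and an "expected_support" score.
    ordered = sorted(bookings, key=lambda b: b["expected_support"], reverse=True)
    return {b["participant_id"]: i * step_minutes for i, b in enumerate(ordered)}

# Three 7pm bookings -> the likely-high-support first-timer starts at +0 min, the returners at +5 and +10.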
Now What
The steps to building thought-to-text have always been clear: (1) collect a dataset; (2) train a model; (3) close the loop. We're now well into step two—we spend >95% of our time training models and very little time actively thinking about data collection.
But you can't have a model without a dataset, so you do need to get this part right.
If you're collecting a similar kind of data, training multi-modal models, or want to give us cheap GPUs, we'd love to hear from you. Please reach out to us at contact@condu.it.
And if this dataset sounds cool to you and you want to train models with it, we're hiring engineers and researchers. Reach out to us at jobs@condu.it.
Appendix: Booths
We started out putting each participant in a separate room at a normal work station. We saw huge noise spikes in the data from participants moving their heads, and sometimes they'd get up and walk around with the headset on or take the headset off without telling us.
The solution to this was putting multiple booths in one shared room for easier supervision. We also installed chinrests that hold participants' heads still, which help reduce motion artifacts in the data.
[14]
We initially wanted to get something like an optician's chinrest, but the bar across the forehead got in the way of the headset. We ended up buying speaker stands and sawing pieces of wood to screw onto them. This works pretty well, although participants don't always use them. You should ensure that any desks, chairs, and chinrests that you buy are height-adjustable.
Now, we use these nice phone booths (~$10k each, though you can sometimes get them used). We initially picked them because they were the best option for turning into safe Faraday cages.
We've stopped worrying so much about electrical noise, so we only ever bothered turning one booth into a Faraday Cage. But professional phone booths save a lot of hassle and set participants at ease, so you should use them if you can.
If you don't have two weeks to wait for booths to arrive, or if you want a cheaper option, we also used these vocal recording booths. The downside of using these is that they aren't remotely soundproof, so the participants could hear each other talking, which interfered with speaking and listening tasks.
We added three layers of soundproof curtains.
[15]
This still wasn't enough, so we got dozens of sound panels and used rope to hang them wall to wall in the booths.
Unfortunately, the weight of the curtains caused the booths to collapse. The solution was a lot of rope: we tied the poles of the booth together and then anchored the rope to a hook nailed into the wall.
It costs ~$2,000 to set up these booths: $600 for the booth itself, $1,300 for soundproofing, and $100 for miscellaneous construction (rope, screws, etc). They look less professional, and you can't make them into a safe Faraday Cage, but otherwise this setup actually does work pretty well. We have a couple that we still use in our current data collection center, and they've been running flawlessly 20 hours/day for months.
|- Welcome to Nova! -|
~ Nova is a lightweight language for... ~
. sketching out ideas,
. documents, notes and personal tools,
. casual modeling and thinking,
. computing without computers
If you've ever wanted to make a computer come to life through programming, you probably know how complicated it can be. Intricate incantations, confusing instructions, and large, complicated tools can make approaching programming incredibly difficult.
To address this, we've built something we call Nova. It is a programming language, a note-taking system, a way of sketching, and a way of conversing with programmers and machines!
We invite you to investigate what we've discovered and try it for yourself!
SQLFlow ships with CLI support to test a stream configuration against any fixture file of test data. The goal is to support testing and linting a configuration file before executing it in a streaming environment.
Run the invoke command to test the configuration file against a set of test data:
docker run -v $(pwd)/dev:/tmp/conf -v /tmp/sqlflow:/tmp/sqlflow turbolytics/sql-flow:latest dev invoke /tmp/conf/config/examples/basic.agg.mem.yml /tmp/conf/fixtures/simple.json
This section runs SQLFlow as a stream processor that reads data from a Kafka topic and writes the output to the console. SQLFlow runs as a daemon and will continuously read data from Kafka, execute the SQL, and write the output to the console.
"To all of you who have always supported us," read the statement. "On December 4, Reiwa 7 [The year 2025 in the Japanese calendar], Tetsu Yamauchi passed away peacefully, surrounded by family.
"We sincerely thank everyone who enjoyed Tetsu's music and offered kind words until now. Those were fun times. It's a long time, but a short time."
Tetsu Yamauchi was born in Fukuoka, Japan, in October 1946 and joined Japanese progressive rockers Micky Curtis & The Samurais in the late 1960s, with whom he recorded two albums, Kappa and Samurai, both released in 1971.
Later that year, he hooked up with Free guitarist Paul Kossoff and drummer Simon Kirke, plus keyboardist John 'Rabbit' Bundrick, to record a one-off album after Free had temporarily splintered amid disagreements between frontman Paul Rodgers and bassist Andy Fraser.
The Kossoff, Kirke, Tetsu & Rabbit album was a collection of rootsy blues and funk rock that lacked Free's bite and Paul Rodgers's voice, but it got the increasingly troubled Kossoff working again, and Free reunited in early 1972.
Within months, Fraser left the band, and Yamauchi was drafted in to replace him. He subsequently appeared on Free's final album, Heartbreaker, and co-wrote the classic Wishing Well.
Free broke up for the final time after a US tour in March 1973, and Yamauchi replaced Ronnie Lane in the Faces, where he remained for two years. He played on the 1974 live album Coast to Coast: Overture and Beginners, and fully embraced the rock'n'roll lifestyle at a time when his bandmates were attempting to moderate their own behaviour.
"Tetsu was a real wild card after Ronnie Lane left the band," Ronnie Wood told
Classic Rock
. "Too crazy."
Yamauchi's only studio contribution to the Faces came with the single You Can Make Me Dance, Sing or Anything (Even Take The Dog For A Walk, Mend A Fuse, Fold Away The Ironing Board, Or Any Other Domestic Shortcomings), which was released in late 1974 and still holds the record for the longest-titled song ever to chart in the UK.
After the Faces broke up, Yamauchi recorded his second solo album, Kikyou (his first, Tetsu, came out in 1972), and worked as a session musician before returning to Japan, where he formed Tetsu Yamauchi & the Good Times Roll Band, who released a live album in 1977.
In 1985, he formed the Ope Band with free jazz drummer Shoji Hano, a relationship that also produced Dare Devil, a 1992 live album recorded with renowned free jazz saxophonist and clarinettist Peter Brötzmann and guitarist Haruhiko Gotsu.
For the last 15 years of his life Yamauchi lived quietly, refusing requests for interviews, although he returned to the stage in 2023 and 2024 as Meets Duo alongside drummer Yoshitaka Shimada, one of the original members of his Good Times Roll Band.
“Just heard that Tetsu passed away," Simon Kirke wrote on social media. "He was a good friend and a great bass player. My condolences to his family and close friends. May he rest in peace."
Legion Health (YC S21) is hiring a founding engineer (SF, in-person)
Legion Health (YC S21) operates a psychiatric practice and is building the AI-native operations layer for mental health care. We focus on the operational backend: scheduling, intake, documentation, billing, and care coordination. These workflows—not diagnostics—are the main bottlenecks in mental health delivery.
We run our own clinic, so the systems you build ship directly into real patient care. Our agent infrastructure currently supports more than 2,000 patients with one human support lead.
We’re hiring a Founding Engineer (in-person, San Francisco). You’d work directly with the founders on:
event-driven backend systems (Node.js, TypeScript, Postgres/Supabase, AWS)
internal operations tools for both humans and agents
state/coordination logic that represents a patient’s journey
HIPAA-compliant data and audit pipelines
We’re open to backend or full-stack/product engineers who think in systems and have owned real workflows end-to-end. Prior experience with LLMs is optional; interest is required.
Satya Nadella is burning decades of customer good will chasing the latest tech fad.
If there's one thing that typifies Microsoft under CEO Satya Nadella's tenure, it's a general inability to connect with customers.
A recent report from The Information detailed how Microsoft's internal AI efforts are going awry, with cut forecasts and sales goals for its Azure AI products across the board. The Information said that Microsoft's salespeople are "struggling" to meet goals, owing to a complete lack of demand. Microsoft denied the reports, but it can't deny market share growth trends, all of which point to Google Gemini surging ahead.
With OpenAI's business model under constant scrutiny and racking up genuinely dangerous levels of debt, it's become a cascading problem for Microsoft to have tied up layer upon layer of its business in what might end up being something of a lame duck.
FirstPageSage AI Chatbot Usage Chart (December 3, 2025)
# | Generative AI Chatbot | AI Search Market Share | Estimated Quarterly User Growth
1 | ChatGPT (excluding Copilot) | 61.30% | 7% ▲
2 | Microsoft Copilot | 14.10% | 2% ▲
3 | Google Gemini | 13.40% | 12% ▲
4 | Perplexity | 6.40% | 4% ▲
5 | Claude AI | 3.80% | 14% ▲
6 | Grok | 0.60% | 6% ▲
7 | Deepseek | 0.20% | 10% ▲
There are reams of research suggesting that agentic AI tools require human intervention at a rate that makes them cost-ineffective, but Microsoft seems unbothered that its tools are poorly conceived.
SEO and analytics firm FirstPageSage has released its AI market share report for the start of December, and it shows Google Gemini actively poised to supplant Microsoft Copilot. Based on reports that Google Gemini is now actively beating ChatGPT's best models, FirstPageSage has Google Gemini sprinting past Microsoft Copilot quarter over quarter, although ChatGPT itself will remain the front-runner.
Google's AI advantages are accumulating, as Microsoft's disadvantages snowball
Microsoft's destiny under Satya Nadella increasingly seems to point towards being a server broker for NVIDIA, rather than a tech leader and innovator.
Whether it's Google's Tensor server tech or its dominant position with Google Play-bound Android, Microsoft's lack of forethought and attention to its actual customers is starting to catch up with the firm.
Nadella has sought to blame the company's unwieldy size for the lack of innovation, but that reads like an excuse to me. It's all about priorities: Nadella has chased shareholder sentiment over delivering for the company's customers and employees, and that short-termism is going to put Microsoft on the back foot if AI actually does deliver another computing paradigm shift.
Microsoft depends almost entirely on pricey NVIDIA technology for its data centers, whereas Google is actively investing to own the entire stack. Microsoft has also worked incredibly hard to cram half-baked AI features into its products, whereas Google has arguably been a lot more thoughtful in its approach. Microsoft sprinted out of the gate like a bull in a china shop, and investors rewarded it for that. But fast forward to 2025, and Google's AI products simply work better and are more in tune with how people might actually use them.
I'm someone who actively uses the AI features across Google Android and Microsoft Windows on a day-to-day basis, and the delta between the two companies is growing ever wider. Basic stuff like the photo editing features on Google Pixel phones is light-years beyond the abysmal tools found in the Microsoft Photos app on Windows. And having actively used both across the two businesses I work in, I find Google Gemini in Google apps far smarter and far more intuitive than Copilot in Microsoft 365.
Microsoft's "ship it now fix it later" attitude risks giving its AI products an Internet Explorer-like reputation for poor quality.
Dare I say it, Gemini is actually helpful, and can usually execute tasks you might actually need in a day-to-day job. Ask it to "find me a meeting slot on this date to accommodate these time zones" and Gemini will actually do it. Copilot 365 can't even schedule a calendar event with natural language in the Outlook mobile app, or, in some cases, provide something as basic as clickable links. At least Xbox's Gaming Copilot has a beta tag to explain why it fails half the time. It's truly absurd how half-baked a lot of these features are, and it's odd that Microsoft chose to ship them in this state. And Microsoft wants to make Windows 12 AI-first? Please.
Microsoft's "ship it now fix it later" attitude risks giving its AI products an Internet Explorer-like reputation for poor quality, sacrificing the future to more patient, thoughtful companies who spend a little more time polishing first. Microsoft's strategy for AI seems to revolve around offering cheaper, lower quality products at lower costs (
Microsoft Teams
, hi
), over more expensive higher-quality options its competitors are offering. Whether or not that strategy will work for artificial intelligence, which is exorbitantly expensive to run, remains to be seen.
Microsoft's savvy early investment in OpenAI gave it an incredibly strong position early on, but as we get deeper into the cycle, some cracks are starting to show. Many of Microsoft's AI products to date simply scream of a total lack of direction and utter chaos, but it's not all hopeless. Some of Microsoft's enterprise solutions for AI are seeing strong growth. GitHub Copilot has been something of a success story for Redmond, and Microsoft is exploring its own Maia and Cobalt chips and even its own language models, in attempts to decouple itself from NVIDIA and OpenAI respectively. But Satya Nadella's Microsoft has an uncanny knack for failing to deliver on promising initiatives like those.
Without a stronger emphasis on quality, Microsoft's future in AI could simply end up revolving around reselling NVIDIA server tech and jacking up local electricity prices, rather than providing any real home-grown innovation in the space. Shareholders will be more than happy for Microsoft to simply be a server reseller, but it would be an ignoble legacy for what was previously one of tech's most innovative companies.