A pilot project aims to pay former law enforcement and military officers to physically track immigrants and verify their addresses for ICE, at $300 each. There is no indication that the pilot involves licensed private investigators; it appears to be open to people who are now essentially members of the general public, 404 Media has learned.
The pilot is a dramatic, and potentially dangerous, escalation in the Trump administration’s mass deportation campaign. People without any official role in government would be tasked with tracking down targets for ICE. It appears to be part of ICE’s
broader plan to use bounty hunters
or skip tracers to confirm immigrants' addresses through data and physical surveillance. Some potential candidates for the pilot were recruited on LinkedIn and were told they would be given vehicles to monitor the targets.
“The more I listened to it, the more I’m like, something doesn’t sound right,” a person who was briefed on the pilot plans told 404 Media. 404 Media granted multiple people anonymity to speak more candidly and to avoid retaliation.
💡
Do you know anything else about ICE's plan to hire skip tracers or similar? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
In a LinkedIn post in October, Jim Brown, president of government contractor consultant Feds United and a former longtime senior ICE official, said he was looking for retired law enforcement or military personnel for an upcoming project.
“Feds United is seeking approximately 20 more retired experienced law enforcement officers or retired military personnel in the DC/Northern Virginia area to participate in a 90-day pilot project that is expected to kick off within the next few weeks,” he wrote.
“The project will assess whether contractors can validate addresses associated with subjects of interest. Participants will work on surveillance teams and this is an OBSERVE and REPORT only task,” Brown wrote. Nearly two dozen people replied to that post, with many of them expressing interest in the work.
Brown’s LinkedIn post did not mention ICE, but two people briefed on the plans said the work would entail verifying information for ICE.
A screenshot of Brown's LinkedIn post.
Feds United’s website says it is a “client-focused federal consulting firm that supplies subject matter experts to federal contractors to assist them in proposal response development and completing contract delivery services to the government client.” It claims to offer “subject matter experts” from ICE, Customs and Border Protection (CBP), the Secret Service, and the FBI.
Recently on LinkedIn, Brown has been posting positively about ICE’s Enforcement and Removal Operations (ERO), and specifically the agency’s arrests of convicted criminals in the country illegally. Immigrants with no criminal record are now the largest group in ICE detention,
according to data from September
.
Brown said that ICE does not have good addresses for some of its targets, one person briefed on the plans recalled. Feds United would give recruited individuals a list of addresses based on things like utility bills, the person said. Feds United would split the people into teams to perform the surveillance, and after verifying the target lived at the address each person on the team would be paid $300, they added. This would go up to a maximum of $30,000, they said.
“Do not talk to the neighbors,” the person said, recalling the plans. “This was strictly supposed to be observe and report,” referring to a tactic where they are not supposed to directly interact with anyone.
Broadly these details of the pilot line up with ICE’s strategy laid out in procurement documents reported in the media and reviewed by 404 Media. At the end of October, ICE published a Request for Information (RFI) asking interested contractors to contact the agency. Companies would be given information on 10,000 immigrants to locate, with further packages going up to 1,000,000,
the Intercept reported
. Contractors would be paid “monetary bonuses” based on performance, the document said.
This month
404 Media reported
that ICE has allocated $180 million to hiring bounty hunters and skip tracers to stalk immigrants. Other procurement documents said ICE was seeking assistance with a “docket size” of 1.5 million, and the agency would give contractors a batch of 50,000 last known addresses of aliens residing in the U.S. Bounty hunters or skip tracers would then verify the people who lived at those addresses, or find their new location, and provide that information to ICE’s ERO.
“To achieve a higher level of confidence, the vendor may physically verify the alien’s location and presence, preferably confirming their home or work location. The vendor will then report the physical location to the Government or inform the Government that it is not able to locate the alien, and any additional visits would be fruitless. The vendor should prioritize locating the home address and only resort to employment location, failing that,” one of the documents said.
“It is outrageous that ICE is funneling taxpayer money into a surveillance operation aimed at immigrants instead of real threats. It is as wasteful as it is disgraceful,” Congressional Hispanic Caucus Chairman Rep. Adriano Espaillat told 404 Media in a statement. “Every crime that goes uninvestigated is on this administration for diverting law enforcement capacity toward Stephen Miller’s political fantasies rather than true public safety.”
Private investigators and skip tracers 404 Media spoke to had mixed reactions to ICE’s plan. One was concerned about the outsourcing of government functions to private industry, while another said they would do the work.
One of the people briefed on the Virginia and DC pilot said Feds United was subcontracting under SOS International LLC, or SOSi, which is a large government contractor. In October the Department of Homeland Security (DHS) signed a $7 million contract with SOSi for skip tracing services,
The Lever reported
.
“I do not comment on current projects where I am not the prime vendor,” Brown from Feds United told 404 Media. SOSi did not respond to a request for comment. When asked specifically if SOSi would be able to comment on the pilot, Brown said “after my years of federal training, my response is ‘I cannot confirm nor deny who my client is.’”
None of the people briefed on the plan who spoke to 404 Media are licensed private investigators. In Virginia, private investigators
must apply and be registered with
the state’s Department of Criminal Justice Services. In DC, private investigators and security professionals
similarly need to apply for a license
. But in Feds United’s case, the company appears to be recruiting people simply on the basis that they are former military or law enforcement, even though they would be asked to perform physical surveillance of targets.
“It’s probably because of the surge of work that they send out these unlicensed individuals to see how they do and eventually they plan to roll them in under their company license of the general contractor,” Igor Ostrovskiy, an experienced private investigator with Ostro Intelligence, and who has expressed concerns with ICE’s plans, told 404 Media. He called the plan dangerous, especially if the people are armed.
“I’ve done large contracts [...] and it just didn’t track,” one of the other people briefed on the plans said.
About the author
Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.
We at the Lix team are proud to announce our fifth major release, version 2.94 “Açaí na tigela”.
This release focuses on bugfixes, quality-of-life improvements, and performance improvements, and begins integrating Lix with the Cap’n Proto remote procedure call runtime in order to replace the previous bespoke implementation.
Açaí na tigela
is a sweet Brazilian snack food from Pará and Amazonas, made with the frozen and mashed fruit of the açaí palm.
Lix is a Nix implementation focused on reliability, predictability, and friendliness, developed by a community of people from around the world. We have long term plans to incrementally evolve Nix to work in more places, to make it more reliable and secure, and to update the language and semantics to correct past mistakes and reduce errors, all the while providing an amazing tooling experience.
Upgrading from CppNix or previous Lix versions
The upgrade procedure depends on how you installed Lix or CppNix, and is fully described in the
Lix installation guide
.
If you are using Lix from nixpkgs on NixOS, you just need to upgrade your nixpkgs once the upgrade pull request has passed through the build farm into your channel; no other action is required.
If you want to help us test the next version of Lix, consider running
main
by following the
beta guide
.
Changes
Lix 2.94 builds on the work from Lix 2.93 in improving the daemon and language to make room for future evolution.
Here are the highlights
from the release notes
. This is not a comprehensive list, and we are thankful for every contributor’s hard work in making this release happen.
News from RPC
As mentioned in previous communications, Lix pursues the goal of delivering a reasonable RPC protocol that replaces the bespoke and obsolete Nix daemon protocol.
We chose to build on top of KJ because it provides access to Cap’n Proto and gives us a well-tested RPC substrate.
Build hooks are used during remote builds: when Lix builds on a remote machine, it spawns a hook program (nix __build-remote). This hook instance is a Cap’n Proto RPC server that speaks the new protocol.
This subsystem has been the first target of the ongoing RPC work.
These changes will be mostly invisible to users. The main visible improvement is that multiple build hook processes may now wait concurrently. In the old protocol only one could wait at a time.
The Lix project has received many Flakes-related changes in the past, often driven by the CppNix project. The quality of these changes did not match the usual Lix standards and forced the core team to spend considerable effort evaluating their interactions with the existing feature set. Several inconsistency issues slipped through review. This is unsurprising because Flakes remain an experimental feature with semantics that change in practice.
Now that there are at least three separate implementations of Flakes, the Lix project cannot reasonably maintain a third flavor inside core.
The Flakes implementation in the Lix codebase has also been a recurring source of maintenance headaches.
We intend to remove Flakes from the core entirely and ship them as a plugin that is included by default.
Future Flakes improvements can then happen in that subproject without affecting Lix core.
Extracting Flakes is a 2.95.0 objective.
If you are confident with C++, please consider helping us with this migration.
Breaking changes
A significant amount of technical debt has been cleared to allow safer evolution of Lix.
Language
Lix strings may now contain NUL bytes
Lix can now manipulate strings that contain arbitrary binary data, including NUL bytes. The previous behavior was inconsistent and unintentional. Examples in the release notes show where this caused incorrect behavior.
Function equality semantics are more consistent, but still bad
Functions always compare as not equal in Nixlang except when they come from the same memory location. This optimization exists to speed up comparisons of large attribute sets and had to be extended to functions stored
inside
attribute sets.
While reworking the evaluator, Lix made this behavior more consistent, although still undesirable.
For example:
let f = x: x; s.f = f; in s.f == s.f
now evaluates to
true
.
Lix intends to remove this optimization later.
Function equality is undefined behavior and should not be relied upon in portable Nixlang code.
Support for some older daemon protocol versions has been dropped: maintaining them required too much effort and they lacked test coverage. This affects clients connecting to the local daemon socket or remote builders configured using the ssh-ng protocol. Builders using the ssh protocol are still supported for older clients such as Nix 2.3.
The
impure-derivations
and
dynamic-derivations
experimental features are removed.
New impure or dynamic derivations can no longer be created. Existing ones cannot be read or built. Their outputs remain valid until garbage collected. The
.drv
files may
only
be garbage collected.
A new cgroup delegation model for the
cgroups
experimental feature
Builds using cgroups (
use-cgroups = true
and
experimental-features = cgroups
) now always receive a delegated cgroup tree with permission to manage controllers in that subtree.
This can cause visible breakage because the build process (the daemon, or a client accessing the store directly) must now run inside a cgroup tree that was already delegated by the caller, for example by the service manager or the system administrator.
The
uid-range
experimental feature now depends on
cgroups
.
The release notes contain guidance on setting up the tree and working around issues if you get stuck.
If your DNS setup is healthy (first server in
/etc/resolv.conf
responds quickly) and the derivation only needs TCP or UDP, this change should not affect you.
Enable zstd with a high compression level instead of xz for binary cache uploads
Binary cache uploads now use zstd instead of xz. This significantly improves upload time on modern systems and high-speed links, enabling gigabit link saturation while uploading to fast Garage S3 implementations.
On a 4.4 GB NAR file, uploads can be about 75 % faster at the cost of roughly 18 % larger output. (The release notes contain a typo: reducing runtime from 77 seconds to 18 seconds is about a 75 % improvement, not 50 %.)
Lix adds an experimental feature that allows integers to be coerced where strings were previously required. This reduces boilerplate but changes language semantics, so it is off by default.
Interrupt handling has been improved so Ctrl-C behaves predictably across long evaluations and daemon interactions.
One Ctrl-C requests a graceful shutdown. A second Ctrl-C aborts immediately with no guarantee of data integrity.
❯ nix-instantiate --eval --expr 'let f = n: if n == 0 then 0 else f (n - 1) + f (n - 1); in f 32'
^CStill shutting down. Press ^C again to abort all operations immediately.
^C
❌130 ❯
Stack traces now summarize involved derivations at the bottom
Evaluation stack traces now end with a summary that collects the derivations involved in the error, which helps identify which package triggered a failure in a dependency, for example an assertion for unsupported, insecure, or broken derivations.
error:
… while calling the 'head' builtin
at /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/lib/attrsets.nix:1701:13:
1700| if length values == 1 || pred here (elemAt values 1) (head values) then
1701| head values
| ^
1702| else
… while evaluating the attribute 'value'
at /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/lib/modules.nix:1118:7:
1117| // {
1118| value = addErrorContext "while evaluating the option `${showOption loc}':" value;
| ^
1119| inherit (res.defsFinal') highestPrio;
(stack trace truncated; use '--show-trace' to show the full trace)
error: Package ‘olm-3.2.16’ in /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/pkgs/by-name/ol/olm/package.nix:37 is marked as insecure, refusing to evaluate.
< -snip the whole explanation about olm's CVEs- >
note: trace involved the following derivations:
derivation 'etc'
derivation 'dbus-1'
derivation 'system-path'
derivation 'nheko-0.12.1'
derivation 'mtxclient-0.10.1'
--keep-failed now chowns the build directory to the invoking user
When using
--keep-failed
or
keep-failed = true
, Lix now reliably changes ownership of the failed build directory to the user who requested the build, including through the daemon.
When fixed-output derivations fail because the produced output does not match the expected hash, both paths are printed. The offending output is added to the store so that you can inspect it, compute a new hash, or fetch a known-good output for comparison.
Show tree with references that lead to an output cycle
Output cycles now include a reference tree showing exactly how the cycle arose.
Example:
error: cycle detected in build of '/nix/store/gc5h2whz3rylpf34n99nswvqgkjkigmy-demo.drv' in the references of output 'bar' from output 'foo'.
Shown below are the files inside the outputs leading to the cycle:
/nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar
└───lib/libfoo: …stuffbefore /nix/store/h680k7k53rjl9p15g6h7kpym33250w0y-demo-baz andafter…
→ /nix/store/h680k7k53rjl9p15g6h7kpym33250w0y-demo-baz
└───share/snenskek: …???? /nix/store/dm24c76p9y2mrvmwgpmi64rryw6x5qmm-demo-foo …
→ /nix/store/dm24c76p9y2mrvmwgpmi64rryw6x5qmm-demo-foo
└───bin/alarm: …texttexttext/nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar abcabcabc…
→ /nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar
disallowedRequisites
now reports chains of disallowed requisites
Errors now include the full chain of references leading to each forbidden path rather than only the immediate offender.
Example:
$ nix-build -A hello
error: output '/nix/store/0b7k85gg5r28gb54px9nq7iv5986mns9-hello-2.12.2' is not allowed to refer to the following paths:
/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.40-66
Shown below are chains that lead to the forbidden path(s).
/nix/store/0b7k85gg5r28gb54px9nq7iv5986mns9-hello-2.12.2
└───/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.40-66
nix copy previously hit “Too many open files” errors on main. We added a rate limit to avoid that. You can lift the limit by increasing the open-files limit with ulimit -n <new number>.
Pointer tagging, thunk state sharing and unreferenced Values
Lix implemented pointer tagging and reduced the
Value
structure to a single machine word. Thunk state sharing was implemented, enabling more reuse of
Value
objects.
Value
is now used as a reference-counted smart pointer to a heap object.
This unblocks further optimizations and resulted in:
15 % memory savings and a 3 % evaluation time
regression
on system rebuild
17 % memory savings and a 7 % evaluation time improvement on nix search
Common strings that occur during evaluation, such as the result of builtins.attrNames, are now reused more efficiently, reducing allocations and slightly improving evaluation speed.
Up to 11 % memory savings were observed in large NixOS deployments with a slight decrease in CPU usage.
Temporary build directories no longer default to
temp-dir
(typically
/tmp
), fixing CVE-2025-46415.
Many users use a tmpfs for
/tmp
. The default build directory is now
/nix/var/nix/builds
. If you care about tmpfs semantics, bind-mount that directory onto a tmpfs.
Compared to 2.93.3, additional changes were made so Darwin handles the new path length correctly, which allows reasonable derivations to connect to UNIX domain sockets in sandboxes. This will also be shipped in 2.93.4.
In 2.93, we lowered the
connect-timeout
to 5 seconds. Some users have DNS setups where the first nameserver times out, causing resolution to exceed 5 seconds.
We replaced linear backoff with exponential backoff to handle these cases more robustly.
“My shell didn’t work” —
nix-shell
default shell directory is not
/tmp
anymore
Historically,
nix-shell
stored internal shell files in
$TMPDIR
or
/tmp
and also used it for
$NIX_BUILD_TOP
. Many users have
$TMPDIR
unset, so
/tmp
was consistently used.
If you ran
sudo nix-shell
and exited uncleanly, you could create
/tmp/env-vars
with root permissions, causing all subsequent shells for unprivileged users to fail silently. The workaround was to delete the file manually.
Lix now creates a dedicated temporary directory for shell metadata that does not collide with other shells. Cleanup is handled by Lix itself after the shell exits.
Lix 2.93 changed SSH remote store handling in a way that broke classical
ForceCommand
and similar directives. We reverted the problematic part in 2.93.1 and carry the same fix here.
Previous configurations work again out of the box.
The release notes may contain imprecisions and typos; we are working to correct these without doing a point release.
No impactful issues are known yet!
Credits
Thanks, as always, to the following groups:
The large community who beta tested the upcoming release by running
main
in production since the 2.94 branch-off. We really appreciate the immediate feedback on our work, and the trust shown by running main alongside us means a lot. We know we tested the patience of some of you, and we thank you for it.
If you want to run Lix main yourself, see
the beta guide
for details.
Everyone who contributed by filing bugs and giving us feedback on Matrix.
All the first time contributors who made their first contributions to a Nix implementation in Lix. We are eternally grateful to everyone who helped us out on the numerous important but tedious issues.
All the contributors who have helped us with the backlog of bugs.
The CppNix contributors and CppNix team, without whom we would not have this software, and who wrote some of the improvements ported into this release.
A quiet but heartfelt note of gratitude goes to
eldritch horrors
for their steady guidance throughout this release, even in the face of its many challenges.
Onwards and upwards for the next release. We look forward to continuing to work together with everyone to build a better foundation for the evolution of Nix.
Tycoon 2FA and the Collapse of Legacy MFA
Bleeping Computer
www.bleepingcomputer.com
2025-11-18 15:01:11
Tycoon 2FA enables turnkey real-time MFA relays behind 64,000+ attacks this year, proving legacy MFA collapses the moment a phishing kit targets it. Learn from Token Ring how biometric, phishing-proof FIDO2 hardware blocks these relay attacks before they succeed. [...]...
The rise of the Tycoon 2FA phishing kit should serve as a global warning siren for every enterprise. This is not a tool for elite hackers. This is a turnkey kit that anyone with a browser can use to bypass the very MFA and auth apps companies depend on. And it is being used at scale.
Over 64,000 attacks have already been tracked this year, many targeting Microsoft 365 and Gmail because those platforms represent the easiest, fastest path into an enterprise.
Phishing as a Service, No Skill Required
Tycoon 2FA’s power comes from removing the need for technical skill. It is Phishing as a Service, fully packaged, polished, and automated. A teenager who cannot write a line of code can deploy it. The kit walks the operator through setup. It provides fake login pages. It spins up reverse proxy servers.
It does all the heavy lifting. The attacker simply sends a link to hundreds of your employees and waits for one to bite.
Real-Time MFA Relay and Total Session Takeover
Once the victim clicks, Tycoon 2FA does the rest. It intercepts usernames and passwords in real time. It captures session cookies. It proxies the MFA flow directly to Microsoft or Google. The victim thinks they are simply passing a security check, but they are authenticating the attacker.
This is the terrifying part. Even well-trained users fall for this because everything looks pixel-perfect. The pages are dynamic, pulling live responses from legitimate servers.
If Microsoft says enter your code, the page updates instantly. If Google sends a prompt, it appears exactly as expected. There is no visible difference. There is no clue. And there is no way for any legacy MFA or authenticator app to stop it because Tycoon is man in the middle by design.
Built to Evade Detection
It gets worse. Tycoon 2FA includes anti detection layers that rival commercial malware strains. Base64 encoding. LZ string compression. DOM vanishing. CryptoJS obfuscation. Automated bot filtering. CAPTCHA challenges. Debugger checks.
The kit hides itself from scanners and researchers. It only reveals its true behavior when a human target arrives. And once it completes the authentication relay, the attacker gets full session access inside Microsoft 365 or Gmail.
From there they move laterally into SharePoint, OneDrive, email, Teams, HR systems, finance systems. One successful phish creates total compromise.
The ebook “CISO Guide: Stopping Ransomware with Next-Gen MFA” explores how ransomware attacks are evolving and why legacy MFA can’t keep up.
This essential guide reveals the real-world impact of phishing-resistant MFA, how it stops ransomware before damage is done, and why CISOs are making the switch to biometric phishing proof identity.
This is why legacy MFA has collapsed. Merely rolling it out makes your company a honeypot. SMS codes. Push notifications. TOTP apps. All share the same flaw. They rely on user behavior. They depend on the hope that a user notices something is wrong.
They offer attackers shared secrets that can be intercepted, forwarded, or replayed. Tycoon 2FA and dozens of similar kits exploit exactly that. They turn the user into the attack vector. Even passkeys are proving vulnerable when synced through cloud accounts or when fallback recovery paths exist that can be socially engineered.
Attackers understand this completely. Criminal groups like Scattered Spider, Octo Tempest, and Storm 1167 are using these kits daily. It is the fastest growing attack method in the world because it is easy, scalable, and requires no technical sophistication.
Companies are rolling out MFA and authenticator apps only to find out these systems collapse the moment a phishing kit decides to target them. The truth is simple. If someone can trick your employee into entering a code or approving a prompt, the attacker wins. And Tycoon does exactly that.
The Path Forward: Phishing-Proof MFA
But there is a path forward and it is fast and easy to roll out. Biometric phishing proof identity built on FIDO2 hardware. Authentication that is proximity based, domain bound, and impossible to relay or spoof. A system where there are no codes to enter, no prompts to approve, no shared secrets to intercept, and no way to trick the user into helping the attacker.
A system that rejects fake websites automatically. A system that forces a live biometric fingerprint match on a physical device that must be near the computer being logged into.
This changes everything because it removes the user from the decision tree. Instead of hoping someone recognizes a fake login page, the authenticator itself checks the origin cryptographically.
Instead of hoping someone refuses a malicious push request, the authenticator never receives a push request at all. Instead of asking people to be perfect, the system verifies identity with hardware, not judgment.
The Token Model
This is the model behind
Token Ring and Token BioStick
. Phishing proof by architecture. Biometric by requirement. Proximity based by default. Domain bound by cryptography.
There is no code to steal. There is no approval to trick. There is no recovery flow for a scammer to exploit. Even if a user clicks the wrong link. Even if a user hands over a password (if they even have one). Even if a social engineer calls pretending to be IT. The authentication simply fails because the domain does not match and the fingerprint is not present.
Tycoon 2FA hits a wall. The relay breaks. The attack dies instantly. And these solutions are inexpensive and available today.
Enterprises using these devices report something important. Employees comply easily with this passwordless wireless solution. Authentication is fast (2 seconds). There is nothing to remember. Nothing to type. Nothing to approve. It is a better user experience and a vastly stronger security posture.
When identity is bound to a physical biometric device that enforces origin checks and proximity requirements, phishing kits become irrelevant.
The Reality Every Enterprise Must Face
This is the moment every enterprise must accept. The attackers have evolved and the defenses must evolve too. Legacy MFA cannot survive this threat. Authenticator apps cannot survive this threat. Passkeys struggle under it. Tycoon 2FA proves that any system asking users to enter or approve anything can be defeated in seconds.
Here is the truth in plain language. If your MFA can be fooled by a fake website, it is already compromised. If your authentication can be relayed, it will be. If your system depends on user judgment, it will fail. Biometric hardware based identity that is phishing proof, proximity bound, and domain locked is the only way forward.
The criminals have upgraded. Now it is your turn. Upgrade your identity layer before Tycoon or its successors make you the next headline.
Two Weeks of Surveillance Footage From ICE Detention Center ‘Irretrievably Destroyed’
404 Media
www.404media.co
2025-11-18 14:58:52
"Defendants have indicated that some video between October 19, 2025 and October 31, 2025 has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all."...
The Department of Homeland Security claimed in court proceedings that nearly two weeks worth of surveillance footage from ICE’s Broadview Detention Center in suburban Chicago has been “irretrievably destroyed” and may not be able to be recovered,
according to court records reviewed by 404 Media
.
The filing was made as part of a class action lawsuit against the Department of Homeland Security by people being held at Broadview, which has become the site of widespread protests against ICE. The lawsuit says that people detained at the facility are being held in abhorrent, “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.”
As part of discovery in the case, the plaintiffs’ lawyers requested surveillance footage from the facility starting from mid September, which is when ICE stepped up its mass deportation campaign in Chicago. In a status report submitted by lawyers from both the plaintiffs and the Department of Homeland Security, lawyers said that nearly two weeks of footage has been “irretrievably destroyed.”
“Defendants have agreed to produce. Video from September 28, 2025 to October 19, 2025, and also from October 31, 2025 to November, 7 2025,” the filing states. “Defendants have indicated that some video between October 19, 2025 and October 31, 2025 has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all.” Law & Crime
first reported on the filing
.
A screenshot from the court filing
The filing adds that the plaintiffs, who are being represented by lawyers from the American Civil Liberties Union of Illinois, the MacArthur Justice Center, and the Eimer Stahl law firm, hired an IT contractor to work with the government “to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved.”
Surveillance footage from inside the detention center would presumably be critical in a case about the alleged abusive treatment of detainees and inhumane living conditions. The filing states that the plaintiffs' attorneys have “communicated to Defendants that they are most concerned with obtaining the available surveillance videos as quickly as possible.”
ICE did not respond to a request for comment from 404 Media. A spokesperson for the ACLU of Illinois told 404 Media “we don’t have any insight on this. Hoping DHS can explain.”
About the author
Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.
[$] Pouring packages with Homebrew
Linux Weekly News
lwn.net
2025-11-18 14:40:55
The Homebrew project is an
open-source package-management system that comes with a repository of
useful packages for Linux and macOS. Even though Linux distributions
have their own package management and repositories, Homebrew is often
used to obtain software that is not available in a distribution'...
László Krasznahorkai’s recent Nobel Prize win reignited the perpetual debates about “difficult literature.” Krasznahorkai, if you don’t know, is famous for writing lengthy, dense books with extremely long—
as in sometimes hundreds of pages long
—sentences. The kind of books that a certain type of reader takes up as a challenge and another type of reader (or at least social media poster who identifies as a reader) considers fundamentally fraudulent because books are supposed to be fun and the world is so awful why would you want to suffer and anyone who would read such a book must be a pretentious, phony brodernist snob! Obviously, I think the latter position is silly. Challenging oneself is fun. Difficult tasks are pleasurable. Aren’t learning new skills and trying new things sort of the whole point of life? Or at least a good chunk of the point.
Perhaps that’s an old-fashioned view. Last week,
Michelle Santiago Cortés at
The Cut
had a depressing article
about how people are using ChatGPT not just to skip schoolwork or scam people—understandable if unethical uses—but even to cheat at hobbies and leisure activities. Using ChatGPT to skip puzzles in escape rooms or posting AI-generated crocheted items you didn’t crochet to crafting subreddits. I have no doubt some people use LLMs to fill out their crossword or sudoku puzzles while they sip their morning coffee. Perhaps one thing AI is revealing is that a certain percentage of the population has no real interest in doing, learning, or enjoying anything at all. Oh well. Takes all kinds. To each their own. Yada yada. Perhaps the world needs shriveled-up slug people too.
Back to books. I find “difficult” books worthwhile for providing you with a challenge to conquer, but it’s good to remember the difficulty isn’t the point. Books that deviate from the norms of storytelling in style, structure, or form allow for different reading experiences. Extremely long sentences, strange syntax, unusual structures, etc., don’t exist to punish readers but to provide other aesthetic experiences and different types of stories. Story is never separable from execution. So-called difficult books couldn’t be made into easy books without ruining them. You couldn’t transform a Krasznahorkai into a beach read by adding a couple hundred paragraph breaks and periods. You’d be changing the entire experience.
I also find it strange to even worry about “pretentious” readers or “brodernists” hyping themselves up to read some new, huge tome (
Schattenfroh
this season it seems) in an age when few people read anything at all. Reading a very long book always takes some dedication, some challenging of yourself. So what? That’s good and often rewarding.
Moby-Dick
remains perhaps the best reading experience of my life. Anyway, enough critics have defended long and difficult books. So, I thought I’d write about the pleasures of
short
and difficult books.
Last week when I was headed to the airport I grabbed a small book from my to-read stack:
The Art of Asking Your Boss for a Raise
by Georges Perec (translated by David Bellos). I’ve long admired Perec but didn’t actually know anything about this specific novel. When I opened it, I learned it was Krasznahorkian in that the entire novel was a single sentence. Seeing the dense, unpunctuated prose did make me want to reach for my phone. But I soon lost that feeling once I actually began reading.
The Art of Asking
is a delightful and quick read. Only 80 pages in fact. The single sentence structure is not some random choice but integral to the themes and entire project. The 1968 text apparently originated with an invite from IBM for writers to make works inspired by computers. Perec’s novel is structured as a computer program’s logic of how an office drone employee might request and get a raise, which instead of a choose-your-own-adventure is written “to impose on the reader the recursive iteration of
all
the steps an imagined computer would make as it implemented the instructions contained in the program” (as translator David Bellos says in the introduction). The result is funnier and more human than that sounds. But the text would be something entirely different without this single-sentence logic loop form.
It made me think about what other books might fit into the idea of short and difficult novels. (I will admit I read a lot of short books in part because of my phone-and-internet-brainrot attention span. But that’s not the only reason. I read to teach and it’s easier to teach short books because the students are more likely to read the whole thing and I’m more likely to reread for prep.)
One subcategory of this non-genre would be Oulipian projects. Perec’s book fits here, as he was a central member of that group, which used constraints to generate new types of literature. The most famous example is a different Perec book translated as
A Void
that was written entirely without the letter e. Oulipo’s co-founder Raymond Queneau has a book to include here called
Exercises in Style
(trans. by Barbara Wright), which retells an intentionally banal story 99 times. Most are only a page, so the book is short, though many readers would find it difficult for having no real plot or character and just 99 retellings in different styles. But, if you are a writer it is inspiring to see how style changes story and the endless variations you can create from even the most banal anecdote.
I’m going to have to include Italo Calvino’s Oulipian novel
Invisible Cities
(trans. by William Weaver), since it is a foundational text for me. The book also has no real characters or plot—so is challenging for some readers—and instead consists mostly of 55 descriptions of imagined cities. Two other formally odd books I love: Alejandro Zambra’s
Multiple Choice
(trans. by Megan McDowell), structured as a standardized test with e.g. chapters of fill-in-the-blank questions, and Olga Ravn’s
The Employees
(trans. by Martin Aitken) that takes the form of employee interview transcripts on a spaceship that has encountered bizarre alien life. We might call these novels that are difficult in form, being written in unusual ways to tell stories without the traditional throughline of characters progressing through a linear plot.
Then there are short books that are difficult in style. The prose itself is the source of difficulty. Short books from challenging stylists are often a good entry point into their works. Indeed, my first Krasznahorkai was the very short
The Last Wolf & Herman
(trans. by John Batki and George Szirtes)
although that is two stories and not a novel. Thomas Bernhard tends to toss in some punctuation and a few paragraphs, but also writes dense and structurally unusual novels. Most of them are basically ranting monologues by misanthropes while the present action plot is reduced to almost nothing, such as a man stewing in a wing chair while looking around a party. They’re fantastic. You can’t go wrong with the short
The Loser
(trans. by Mark M. Anderson) as an introduction to Thomas Bernhard. Toni Morrison’s brief
Sula
—
rereleased
with a new cover
this month—is one of her best works and a great starting place for her lush and lyrical style. I don’t really think of Morrison as difficult per se, but I remember the minor controversy after Oprah said she found herself needing to reread passages to understand them and Morrison replying “That, my dear, is called reading.” People got miffed about that. Cormac McCarthy’s
Child of God
is the perfect starter book to see if you enjoy McCarthy’s maximalist prose and macabre images before tackling the longer
Blood Meridian
. (If you’ve only read his later, spare novels like
The Road
and
All the Pretty Horses
you might not know McCarthy’s early books are written in a very different and denser style.)
Then you have books whose difficulty is the storytelling—by which I mean books with confusing events, surrealist dream logic, and elliptical plots. A lot of readers find surreal writing difficult, though personally I eat it up. Some excellent short novels that fit this include Juan Rulfo’s haunted and brilliant
Pedro Páramo
(trans. by Douglas J. Weatherford),
Stanley Crawford’s surreal prose poem novel
Log of the S.S. the Mrs Unguentine
,
Leonora Carrington’s truly Surreal only novel
The Hearing Trumpet
,
Kafka’s unfinished-but-masterpiece
The Trial
, Philip K. Dick’s mind-bending science-fiction novel
Ubik
,
John Hawkes’s experimental novel
The Lime Twig
,
and Cristina Rivera Garza’s poetic noir novel
The Taiga Syndrome
(trans. by Suzanne Jill Levine and Aviva Kana).
I’m also tempted to add the category of short books that are difficult because of their subject matter—e.g., Yukio Mishima’s erotic ode to seppuku
Patriotism
(trans. by Geoffrey W Sargent)—but I fear that could get dangerous fast. So, I’ll end it there. The above are just some short, perhaps difficult, but definitely brilliant novels I love and would recommend if you want a short reading challenge sometime.
There are countless more one could list, of course. Feel free to do so in the comments.
My new novel
Metallic Realms
is available to buy! Reviews have called the book “brilliant” (
Esquire
), “riveting” (
Publishers Weekly
), “hilariously clever” (
Elle
), “a total blast” (
Chicago Tribune
), “unrelentingly smart and inventive” (
Locus
), and “just plain wonderful” (
Booklist
). My previous books are the science fiction noir novel
The Body Scout
and the genre-bending story collection
Upright Beasts
. If you enjoy this newsletter, perhaps you’ll enjoy one or more of those books too.
Rebecca Heineman - from homelessness to porting Doom
A study commissioned by the Department for Transport (DfT) found that 97% of people surveyed said they were regularly or sometimes distracted by oncoming vehicles, and 96% thought most or some headlights were too bright.
Dr Shaun Helman, who led the research for Berkshire-based Transport Research Laboratory (TRL), said it provides "compelling evidence" that lights' glare is a "genuine issue for UK drivers".
New measures will be included in the government's upcoming Road Safety Strategy, reflecting what is becoming an increasingly fraught issue for road users.
TRL's data suggests that LED and whiter headlamps may be linked to glare and that drivers might find their whiteness harder to cope with.
Of those surveyed, 33% said they had either stopped driving or are driving less at night because of lights, while another 22% said they would like to drive less at night but have no choice.
A total of 1,850 drivers, matched to the age and gender split of the country's licence holding population, were surveyed for their views.
TRL said LED lights used in vehicles are brighter, more concentrated and emit more blue light, which human eyes struggle with more at night.
The RAC's senior policy officer Rod Dennis said: "Having campaigned hard for this study, we welcome its findings which independently confirm what drivers have been telling us – that rather than being an imagined phenomenon, some bright headlights do cause a glare problem.
"While drivers clearly benefit from high-performing headlights, it's important this doesn't lead to others suffering the effects of dazzle, so a balance needs to be struck," he added.
Mr Dennis said that it is "vital" TRL's report is "reviewed carefully to put us on a path towards changes that ultimately benefit all road users."
Denise Voon, a clinical advisor at The College of Optometrists, said the DfT should "take immediate, actionable steps to support drivers and commission more detailed research, specifically into how headlight regulations need to change".
Security updates for Tuesday
Linux Weekly News
lwn.net
2025-11-18 14:08:42
Security updates have been issued by Debian (libwebsockets), Fedora (chromium and fvwm3), Mageia (apache, firefox, and postgresql13, postgresql15), Oracle (idm:DL1), Red Hat (bind, bind9.18, firefox, and openssl), SUSE (alloy, ghostscript, and openssl-1_0_0), and Ubuntu (ffmpeg and freeglut)....
Amazon vs Perplexity: the AI agent war has arrived
Guardian
www.theguardian.com
2025-11-18 14:02:50
A lawsuit over automated shopping reveals a deeper struggle over who will control the next generation of AI and what happens when autonomous agents start acting on our behalf Hello, and welcome to TechScape. I’m your host, Blake Montgomery. Lies, damned lies and AI: the newest way to influence elect...
Hello, and welcome to TechScape. I’m your host, Blake Montgomery.
A tech titan and a startup are fighting over who controls the next phase of artificial intelligence.
Amazon has sued Perplexity AI, a prominent artificial intelligence startup, over a shopping feature in that company’s browser that allows it to automate placing orders for users. Amazon accused Perplexity AI of covertly accessing customer accounts and disguising AI activity as human browsing.
The clash highlights an emerging debate over regulation of the growing use of AI agents, autonomous digital secretaries powered by AI, and their interaction with websites. Perplexity makes a browser called Comet, which includes an AI agent. Amazon does not want to allow Comet to shop for its users. The rejection has foundation in fact: Microsoft has found in
research simulations
that AI agents are quite susceptible to manipulation while shopping.
The suit raises a host of questions. Is Perplexity’s agent a rogue buyer with unacceptable security risks, or is Amazon bullying an insurgent competitor out of the game? Whose interests does a semi-autonomous AI agent represent, the customer or the agent’s maker, and who is liable for its misconduct? The next iteration of AI may hang in the balance of the suit.
Perplexity is no champion of the common man against the overbearing dominance of Amazon. The startup has raised $1.5bn at a $20bn valuation, per
TechCrunch
. In the process, the company has vacuumed up textual content to train its various AI products with little concern for rights holders, clandestinely circumventing explicit prohibitions on unauthorized scraping. Both Forbes and Wired have accused the company of directly plagiarizing their work, with convincing documentation.
The Verge
has compiled a long, comprehensive list of Perplexity’s controversies.
The company wants market share and money and seems willing to run roughshod over any competitor it can, tiny or titanic, to get it. Jeff Bezos, founder of Amazon, might have seen something of himself in that attitude; critics used to say he exhibited the same ruthlessness. He has, in fact, invested in Perplexity twice.
A future full of slop rears its heads
Photograph: Brendan McDermid/Reuters
AI made notable incursions into two spheres last week: music and international relations. My colleague Aisha Down reports:
Three songs generated by artificial intelligence topped music charts this week, reaching the highest spots on Spotify and Billboard charts.
Walk My Walk and Livin’ on Borrowed Time by the outfit Breaking Rust topped Spotify’s “Viral 50” songs in the US, which documents the “most viral tracks right now” on a daily basis, according to the streaming service. A Dutch song, We Say No, No, No to an Asylum Center, an anti-migrant anthem by JW “Broken Veteran” that protests against the creation of new asylum centers, took the top position in Spotify’s global version of the viral chart around the same time. Breaking Rust also appeared in the top five on the global chart.
A study published last week by the streaming app Deezer estimates that 50,000 AI-generated songs are uploaded to the platform every day – 34% of all the music submitted.
Podcasts might be next. An AI startup, Inception Point, is churning out 3,000 episodes per week,
the Wrap
reports. The startup’s distribution network has amassed 400,000 subscribers and 12m total episode downloads. The cost of each episode: $1. In total, some 175,000 AI-generated podcast episodes exist on Apple Music and Spotify, per the Wrap.
In diplomacy, AI firm Anthropic announced that it had detected and stopped a cyberattack – nearly entirely automated – by state-linked hackers in China. Aisha again:
The US-based Anthropic said its coding tool, Claude Code, was “manipulated” by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.
This was a “significant escalation” from previous AI-enabled attacks it monitored, it wrote in a blogpost, because Claude acted largely independently: 80 to 90% of the operations involved in the attack were performed without a human in the loop.
“The actor achieved what we believe is the first documented case of a cyber-attack largely executed without human intervention at scale,” it wrote.
The slop hydra rears its head, vomiting into one part of life after another. Though we may stop one automated cyberattack, four more could come just as quickly; if one AI-made album is removed from Spotify, six more may take its place. In the near future, we may find ourselves wading through a daily flood of slop, drowning.
Roblox rolls out age-verification features in Australia as gaming platform insists child social media ban should not apply
Guardian
www.theguardian.com
2025-11-18 14:00:49
Online gaming company says voluntary age assurance technology will limit teens and children messaging users outside their own age groups Get our breaking news email, free app or daily news podcastAs Roblox rolls out new age assurance features to prevent teens and kids from chatting with adults they ...
As Roblox rolls out
new age assurance features
to prevent teens and kids from chatting with adults they do not know, it has insisted Australia’s upcoming under-16s social media ban should not apply to its services.
The company, which is releasing the new features in Australia first, said that from Wednesday users will be able to voluntarily have their age estimated by going through the Persona age estimation technology, built into the
Roblox
app. It will access the camera of a user’s device and take a live estimation of their age based on their facial features.
The feature will become mandatory in Australia, the Netherlands and New Zealand from the first week of December, expanding to the remaining markets in early January.
Once an age check is done, users will be assigned to one of six age groups – under 9, 9-12, 13-15, 16-17, 18-20 or 21+.
Users in each age bracket will only be able to chat to peers in their group or similar groups, Roblox announced.
The changes were first mooted in September, and were heralded by the Australian eSafety commissioner as proof of the success of their efforts to make platforms safer, having been in negotiations with Roblox for several months about safety concerns for the platform.
The regulator has faced pressure to include Roblox in Australia’s under-16s social media ban, due to come into effect on 10 December. Gaming platforms have an exemption to the ban, but Julie Inman Grant said earlier
this month that eSafety had considered
the game’s chat functionality and messaging.
“If the online gameplay is the significant or sole purpose, if that were taken away, would the kids still use that messaging functionality to chat? Probably not,” she said.
Speaking to Australian journalists on the planned changes, Roblox’s chief safety officer, Matt Kaufman, described Roblox as an “immersive gaming platform” but added: “I like to think of it as games being scaffolding for social interaction. Sometimes it doesn’t matter what the game is. What really matters is you’re bringing people together to spend time together.”
When asked whether this meant Roblox should really be considered a social media platform under the ban, Kaufman said Roblox considers social media more to be about posting content into a feed that other people then see.
“Then people come back and they look at the feed, and that feed … creates a fear of missing out,” he said. “It’s like a popularity contest in our mind that defines the core of what social media is. Roblox is two friends coming home after school and playing a game together. That is not social media.
“And so we do not believe that the social media laws within Australia apply to Roblox.”
Asked if the new features were offered up to eSafety as a means to avoid being included in the ban, Kaufman said the company has had a “constructive dialogue” with the regulator and through the change has been able to offer eSafety the largest example of a platform using age estimation for its whole customer base.
Persona – the age assurance company used by Roblox –
participated in Australia’s age assurance technology trial
. The results revealed a 61.11% false positive rate for 15-year-olds who were told by the technology they were 16, and 44.25% for 14-year-olds.
Kaufman said the technology is good within one-to-two years of estimation, and if users disagree with a ruling they can correct it using government ID or use of parental controls to set age. He said there were “strict requirements” to delete the data once age is verified. Roblox said ID images are kept for 30 days for purposes such as detecting fraud or abuse and subsequently deleted.
People who do not wish to go through age assurance will still be able to use Roblox, but will not be able to use features such as chat.
More than 150 million people play Roblox every day in 180 countries across the world, including Australia. Kaufman said two-thirds of the users are over 13 years of age.
Experiment: Making TypeScript Immutable-by-Default
I like programming languages where variables are immutable by default. For example,
in Rust
,
let
declares an immutable variable and
let mut
declares a mutable one. I’ve long wanted this in other languages, like TypeScript, which is mutable by default—the opposite of what I want!
I wondered:
is it possible to make TypeScript values immutable by default?
My goal was to do this purely with TypeScript, without changing TypeScript itself. That meant no lint rules or other tools. I chose this because I wanted this solution to be as “pure” as possible…and it also sounded more fun.
I spent an evening trying to do this.
I failed but made progress! I made arrays and
Record
s immutable by default, but I couldn’t get it working for regular objects.
If you figure out how to do this completely,
please contact me
—I must know!
Step 1: obliterate the built-in libraries
TypeScript has built-in type definitions for JavaScript APIs like
Array
and
Date
and
String
. If you’ve ever changed the
target
or
lib
options in your TSConfig, you’ve tweaked which of these definitions are included. For example, you might add the “ES2024” library if you’re targeting a newer runtime.
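That tweak looks something like this in a TSConfig (an illustrative snippet, not taken from this project):
{
  "compilerOptions": {
    "target": "ES2024",
    "lib": ["ES2024"]
  }
}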
My goal was to swap the built-in libraries with an immutable-by-default replacement.
The first step was to stop using any of the built-in libraries. I set the
noLib
flag in my TSConfig, like this:
{
  "compilerOptions": {
    "noLib": true
  }
}
Then I wrote a very simple script and put it in
test.ts
:
console.log("Hello world!");
When I ran
tsc
, it gave a bunch of errors:
Cannot find global type 'Array'.
Cannot find global type 'Boolean'.
Cannot find global type 'Function'.
Cannot find global type 'IArguments'.
Cannot find global type 'Number'.
Cannot find global type 'Object'.
Cannot find global type 'RegExp'.
Cannot find global type 'String'.
Progress! I had successfully obliterated any default TypeScript libraries, which I could tell because it couldn’t find core types like
String
or
Boolean
.
Time to write the replacement.
Step 2: a skeleton standard library
This project was a prototype. Therefore, I started with a minimal solution that would type-check. I didn’t need it to be good!
I created
lib.d.ts
and put the following inside:
// In lib.d.ts:
declare var console: any;
interface Boolean {}
interface Function {}
interface IArguments {}
interface Number {}
interface RegExp {}
interface String {}
interface Object {}
// TODO: We'll update this soon.
interface Array<T> {}
Now, when I ran
tsc
, I got no errors! I’d defined all the built-in types that TypeScript needs, and a dummy
console
object.
As you can see, this solution is impractical for production. For one, none of these interfaces have any properties!
"foo".toUpperCase()
isn’t defined, for example. That’s okay because this is only a prototype. A production-ready version would need to define all of those things—tedious, but should be straightforward.
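For instance, making "foo".toUpperCase() type-check would just be a matter of filling in the String interface with the read-only members you need. A hypothetical fragment, with signatures mirroring the built-in lib declarations:
// In lib.d.ts (hypothetical fragment):
interface String {
  readonly length: number;
  toUpperCase(): string;
  slice(start?: number, end?: number): string;
}
Because none of these members mutate the string, there's nothing to leave out here; strings are already immutable in JavaScript.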
Step 3: making arrays immutable
I decided to tackle this with a test-driven development style. I’d write some code that I want to type-check, watch it
fail
to type-check, then fix it.
I updated
test.ts
to contain the following:
// In test.ts:
const arr = [1, 2, 3];
// Non-mutation should be allowed.
console.log(arr[1]);
console.log(arr.map((n) => n + 1));
// @ts-expect-error Mutation should not be allowed.
arr[0] = 9;
// @ts-expect-error Mutation should not be allowed.
arr.push(4);
This tests three things:
Creating arrays with array literals is possible.
Non-mutating operations, like arr[1] and arr.map(), are allowed.
Operations that mutate the array, like arr[1] = 9, are disallowed.
When I ran
tsc
, I saw two errors:
arr[0] = 9 is allowed. There’s an unused @ts-expect-error there.
arr.map doesn’t exist.
So I updated the
Array
type in
lib.d.ts
with the following:
// In lib.d.ts:
interface Array<T> {
  readonly [n: number]: T;
  map<U>(
    callbackfn: (value: T, index: number, array: readonly T[]) => U,
    thisArg?: any
  ): U[];
}
The property accessor—the
readonly [n: number]: T
line—tells TypeScript that you can access array properties by numeric index, but they’re read-only. That should make
arr[1]
possible but
arr[1] = 9
impossible.
The
map
method definition is
copied from the TypeScript source code
with no changes (other than some auto-formatting). That should make it possible to call
arr.map()
.
Notice that I did
not
define
push
. We shouldn’t be calling that on an immutable array!
I ran
tsc
again and…success! No errors! We now have immutable arrays!
At this stage, I’ve shown that
it’s possible to configure TypeScript to make all arrays immutable with no extra annotations
. No need for
readonly string[]
or
ReadonlyArray<number>
! In other words, we have some immutability by default.
Now, I had mutable and immutable arrays, with immutability as the default.
Again, this is simplistic, but good enough for this proof-of-concept!
This was exciting to me. It was possible to configure TypeScript to be immutable by default, for arrays at least. I didn’t have to fork the language or use any other tools.
Could I make more things immutable?
Step 5: the same for Record
I wanted to see if I could go beyond arrays. My next target was the
Record
type, which is
a TypeScript utility type
. So I defined another pair of test cases similar to the ones I made for arrays:
// In test.ts:
// Immutable records
const obj1: Record<string, string> = { foo: "bar" };
console.log(obj1.foo);
// @ts-expect-error Mutation should not be allowed.
obj1.foo = "baz";
// Mutable records
const obj2: MutableRecord<string, string> = { foo: "bar" };
obj2.foo = "baz";
TypeScript complained that it couldn’t find
Record
or
MutableRecord
. It also complained about an unused
@ts-expect-error
, which meant that mutation was allowed.
I rolled up my sleeves and fixed those errors like this:
// In lib.d.ts:
declare type PropertyKey = string | number | symbol;

type Record<KeyT extends PropertyKey, ValueT> = {
  readonly [key in KeyT]: ValueT;
};

type MutableRecord<KeyT extends PropertyKey, ValueT> = {
  [key in KeyT]: ValueT;
};
Now, we have
Record
, which is an immutable key-value pair, and the mutable version too. Just like arrays!
You can imagine extending this idea to other built-in types, like
Set
and
Map
. I think it’d be pretty easy to do this the same way I did arrays and records. I’ll leave that as an exercise to the reader.
Failed step 6: plain objects
My final test was to make regular objects (not records or arrays) immutable. Unfortunately for me, I could not figure this out.
Here’s the test case I wrote:
// In test.ts:
const obj = { foo: "bar" };
console.log(obj.foo);
// @ts-expect-error Mutation should not be allowed.
obj.foo = "baz";
This stumped me. No matter what I did, I could not write a type that would disallow this mutation. I tried modifying the
Object
type every way I could think of, but came up short!
There are ways to annotate
obj
to make it immutable, but that’s not in the spirit of my goal. I want it to be immutable by default!
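For reference, those opt-in spellings (with the standard library, not my stripped-down one) look like this; the point is that every call site has to remember to ask for immutability:
// With the standard lib, immutability is opt-in rather than the default:
const a = { foo: "bar" } as const;              // type: { readonly foo: "bar" }
a.foo = "baz";                                  // error

const b: Readonly<{ foo: string }> = { foo: "bar" };
b.foo = "baz";                                  // error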
Alas, this is where I gave up.
Can you figure this out?
I wanted to make TypeScript immutable by default. I was able to do this for arrays and Records, and the same approach should extend to other built-in types like Map and Set. Unfortunately, I couldn’t make it work for plain object definitions like obj = { foo: "bar" }.
There’s probably a way to enforce this with lint rules, either by disallowing mutation operations or by requiring
Readonly
annotations everywhere. I’d like to see what that looks like.
If
you
figure out how to make TypeScript immutable by default
with no other tools
, I would love to know, and I’ll update my post. I hope my failed attempt will lead someone else to something successful.
Again,
please contact me
if you figure this out, or have any other thoughts.
Cloudflare outage causes error messages across the internet
Guardian
www.theguardian.com
2025-11-18 13:08:35
US company that defends millions of websites against malicious attacks suffers unidentified problem A key piece of the internet’s usually hidden infrastructure suffered a global outage on Monday, causing error messages to flash up across websites. Cloudflare, a US company whose services include defe...
A key piece of the internet’s usually hidden infrastructure suffered a global outage on Tuesday, causing error messages to flash up across websites.
Cloudflare, a US company whose services include defending millions of websites against malicious attacks, suffered an unidentified problem, which meant internet users could not access some customers’ websites.
Neither could some site owners access their performance dashboards. Sites including X and OpenAI suffered increased outages at the same time as Cloudflare’s problems, according to
Downdetector
.
The outage is ongoing but as of 12.21pm GMT, the company said: “We are seeing services recover, but customers may continue to observe higher-than-normal error rates as we continue remediation efforts.”
A further message said: “Update: we are continuing to investigate this issue.”
A spokesperson for Cloudflare said: “We saw a spike in unusual traffic to one of Cloudflare’s services beginning at 11:20am. That caused some traffic passing through Cloudflare’s network to experience errors. While most traffic for most services continued to flow as normal, there were elevated errors across multiple Cloudflare services.
“We do not yet know the cause of the spike in unusual traffic. We are all hands on deck to make sure all traffic is served without errors. After that, we will turn our attention to investigating the cause of the unusual spike in traffic.”
Cloudflare’s engineers had been scheduled to carry out maintenance on Tuesday on datacentres in Tahiti, Los Angeles, Atlanta and Santiago in Chile, but it is not clear if their activities were related to the outage.
As it tried to fix the problem, it disabled an encryption service called WARP in London and said: “Users in London trying to access the internet via WARP will see a failure to connect.”
Cloudflare was described as “the biggest company you’ve never heard of” by Alan Woodward, professor at the Surrey Centre for Cyber Security. The company says it provides services to “protect your websites, apps, APIs, and AI workloads while accelerating performance”.
Woodward described it as a “gatekeeper” and said its roles include monitoring traffic to sites to defend them against distributed denial of service attacks when malicious actors try to overwhelm sites with requests. It also checks users are human.
The problems at Cloudflare come less than a month after the outage of Amazon Web Services which brought down thousands of sites.
“We’re seeing how few of these companies there are in the infrastructure of the internet, so that when one of them fails it becomes really obvious quickly,” Woodward said.
While the cause remains unclear, Woodward said it was unlikely to be a cyber-attack as a service so large is unlikely to have a single point of failure.
Do Not Put Your Site Behind Cloudflare if You Don't Need To
At the time of writing 12:43 UTC on Tue 18 Nov, Cloudflare has taken many sites down.
I'm trying to browse the web, but about half of the sites show an error page.
Most of these sites are not even that big.
I expect they get maybe a few thousand visitors per month.
This demonstrates again a simple fact: if you put your site behind a centralized service, then that service can take your site down together with half the internet.
Most people use Cloudflare because they have been scared into the idea that you need DDoS protection.
Well, maybe you do, but probably you don't.
As they say in security, "no one will burn a zero day on you!".
For your small blog with one hundred visitors per month, it's probably the same:
"no one will burn their DDoS capabilities on you!"
I don't know how else to say it.
Many people keep talking about the importance of a decentralized web, and then continue putting their site behind Cloudflare.
Maybe that's the core of this message.
Face your fears.
Put your service on the internet.
Maybe it goes down, but at least not by yet another Cloudflare outage.
Compilers are sophisticated software artifacts that transform a source-language program into a target-language program, usually by taking a series of passes over the input program. Each compiler pass may perform a transformation, such as closure conversion; it may perform an optimization, such as dead code elimination; or it may perform an analysis, the results of which can be used in later transformation and optimization passes.
We sometimes think of the number of passes in a compiler as a measure of the compiler’s complexity. The classic paper
“From System F to Typed Assembly Language”
, for example, explains that a compiler “may make as many as 20 passes over a single program, performing sophisticated analyses and transformations such as CPS conversion, closure conversion, unboxing, subsumption elimination, or region inference.” In this context, 20 is intended to sound like a large number, and indeed, it does sound a bit daunting. But what if we could make compiler development more approachable by fully embracing the idea that a compiler should be structured as a large number of small passes, each performing a
single specific task?
The nanopass approach
The first compiler I ever worked on was the Scheme-to-x86-64 compiler I wrote for
Kent Dybvig
‘s
compilers course, known as P523
, in my first year as a grad student at Indiana University, back in spring 2009. Actually, I didn’t write just one compiler that semester; I wrote fifteen compilers, one for each week of the course. The first week, my compiler had an input language that was more or less just parenthesized assembly language, and its target language was x86-64 assembly. Each week, we added more passes to the front of the previous week’s compiler, resulting in a new compiler with the same target language as the compiler of the previous week, but a slightly higher-level input language.
By the end of the course, I had a compiler that compiled a substantial subset of Scheme to x86-64, structured as 43 small passes. Each pass translated from its input language to a slightly lower-level language, or had the same input and output language but performed some analysis or optimization on it. (I named my compiler SALL-E, which stood for “Space Allocation Lambda Lifter, Earth-class”, riffing on a
recent-at-the-time movie
.)
The nanopass approach was originally described in the
ICFP ’04 paper “A Nanopass Infrastructure for Compiler Education”
by Dipa Sarkar, Oscar Waddell, and Dybvig (an
expanded version
of which later appeared in JFP). Interestingly, the nanopass work was originally not intended to be specifically for education, but the ICFP reviewers required that the authors position it that way in the paper out of concern that a nanopass-style compiler would not be efficient enough for actual production use. Nine years later, Andy Keep and Dybvig documented this bit of history (and refuted the efficiency concern) in their ICFP ’13 paper
“A Nanopass Framework for Commercial Compiler Development”
, which describes their rewrite of the Chez Scheme compiler using the nanopass approach. Chez Scheme itself was
open-sourced
in 2016 and, excitingly, is now the
foundation of Racket
.
I like to think of the nanopass approach as taking the idea of
parser combinator
libraries and extending that idea to the development of an entire compiler. With a parser combinator library, you write a parser by starting with a bunch of primitive parsers (say, that parse numbers or characters) and combining them, eventually building up the ability to parse a sophisticated language. The language one can parse gets fancier and fancier, but at every step of the way, the thing one has
is
a parser. Likewise, when developing a compiler, it’s useful to be able to think of the thing that you have at each stage of the process as
already
being a compiler; as you go along, it becomes a compiler for a language that’s increasingly different from the target language.
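To make the analogy concrete, here is a toy sketch of that shape, in TypeScript purely for illustration (it is not from the paper or the course): each pass lowers one intermediate language into a slightly lower-level one, and prepending a pass onto an existing compiler yields a compiler for a slightly higher-level input language.
// A pass turns a program in one language into a program in a lower-level one.
type Pass<In, Out> = (program: In) => Out;

// Back-to-front development: prepend a new pass onto an existing compiler.
function prepend<A, B, C>(front: Pass<A, B>, rest: Pass<B, C>): Pass<A, C> {
  return (program) => rest(front(program));
}

// Two toy "languages": arithmetic expressions and a list of stack-machine instructions.
type Expr = { kind: "num"; n: number } | { kind: "add"; l: Expr; r: Expr };
type Asm = string[];

// Week 1: the whole compiler is just code generation.
const emitAsm: Pass<Expr, Asm> = function emit(e): Asm {
  return e.kind === "num" ? [`push ${e.n}`] : [...emit(e.l), ...emit(e.r), "add"];
};

// Week 2: prepend a pass (here, constant folding); the result is still a compiler.
const foldConstants: Pass<Expr, Expr> = function fold(e): Expr {
  if (e.kind !== "add") return e;
  const l = fold(e.l), r = fold(e.r);
  return l.kind === "num" && r.kind === "num" ? { kind: "num", n: l.n + r.n } : { kind: "add", l, r };
};

const compiler = prepend(foldConstants, emitAsm);
At every step, the thing you have is a runnable compiler; it just accepts an increasingly higher-level input language.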
Backend-first compiler development
Although the nanopass approach doesn’t specifically mandate implementing a compiler in a back-to-front manner — starting with code generation and working upward from there — the back-to-front approach was a hallmark of P523 in the year I took it. For me, a first-year grad student who had never worked on compilers before, this way of organizing the work was incredibly motivating: at the end of week one of the course (and at the end of week two, and so on for each week),
I had written a compiler
! Admittedly, what I had at the end of week one was a compiler for an input language that wasn’t very different from the output language. But it converted code in its input language to honest-to-goodness x86 assembly code on which I could then run an off-the-shelf assembler and produce a working executable.
Some compiler-development experiences are long slogs where you write code for months without ever having a thing that produces an actual executable that you can run. But with the back-to-front nanopass approach, we got that hit of gratification every week! Furthermore, thinking of each component of the compiler as itself being a compiler was useful because it encouraged us to structure our code in a readable, modular, and maintainable way, in much the same way that parser combinator libraries support the development of readable, modular, maintainable parsers.
It’s unusual to see compiler courses or books structured in this back-to-front way. The innovative
“From NAND to Tetris” course
seems to come close – projects 7 and 8 cover the back end of a compiler, while projects 10 and 11
cover the front end – but, even then, projects 10 and 11 go in front-to-back order, rather than back-to-front.
A 2006 Scheme Workshop paper
by Aziz Ghuloum, though, advocates another approach to incremental compiler development that is a cousin to the back-to-front nanopass approach, which is to implement a complete compiler for a subset of the source language and then gradually expand that subset. (Nada Amin has
a repository
containing such a step-by-step development, and
Indiana’s current compiler course
still uses such an approach.)
Both Ghuloum’s incremental approach and the back-to-front nanopass approach share the property that each step produces real assembly code that can be executed directly on the hardware after assembly, and each step results in a working compiler (for increasingly bigger subsets of the source language for the former; for increasingly higher-level languages for the latter). Ghuloum convincingly argues that this way of doing compiler development can make writing a compiler as approachable as writing an interpreter, concluding, “Compiler construction is not as complex as it is commonly perceived to be. […] Once the basic compiler is mastered, the novice implementor is better equipped for tackling more ambitious tasks.”
From Scheme to Rust
For me, seeing how a compiler’s implementation could be broken down into a series of relatively small, well-defined, and approachable steps was vital to my career’s development. I began
contributing to the implementation of Rust
as an intern at Mozilla Research starting in 2011. I learned a great deal from working on Rust for two summers, and even more importantly, I got to know a lot of people whose presence in my life has helped me build a research career.
Mozilla didn’t hire me to work on Rust because of any specific compiler implementation skill that I learned in P523; in fact, there was very little overlap between what I did in the course and what I did working on Rust. For the P523 compiler, for instance, I implemented register allocation, whereas Rust compiles to LLVM, which takes care of register allocation for you. Conversely, since Scheme is an S-expression-based language, the parser for the P523 compiler was incredibly simple, whereas parsing Rust is pretty involved; and because Scheme is not statically typed, we didn’t implement type checking in P523, whereas much of my time working on Rust was spent on the parts of the compiler responsible for type checking and type inference.
Nevertheless, it was only because of having taken P523 that I even considered applying to work on Rust at Mozilla, because P523 made me believe that a compiler was something that I
could
work on and
wanted
to work on. I count myself lucky that my first exposure to compiler implementation showed me that writing a real compiler doesn’t necessarily have to be a monolithic and unapproachable task that only a heroic few people could ever hope to accomplish.
Bio:
Lindsey Kuper
is an assistant professor at UC Santa Cruz, where she works on language-based approaches to building parallel and distributed software systems that are correct and efficient. She co-founded
!!Con
, the
conference of ten-minute talks on the joy, excitement, and surprise of computing, and its sister conference
!!Con West
. A preliminary version of this post appeared
on her blog
in 2017.
Disclaimer:
These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.
Cloudflare hit by outage affecting global network services
Bleeping Computer
www.bleepingcomputer.com
2025-11-18 12:24:59
Cloudflare is investigating an outage affecting its global network services, with users encountering "internal server error" messages when attempting to access affected websites and online platforms. [...]...
Cloudflare is investigating an outage affecting its global network services, with users encountering "internal server error" messages when attempting to access affected websites and online platforms.
Cloudflare's Global Network is a distributed infrastructure of servers and data centers located in over 330 cities across more than 120 countries, providing content delivery, security, and performance optimization services.
It has 449 Tbps global network edge capacity and connects Cloudflare to over 13,000 networks, including every major ISP, cloud provider, and enterprise worldwide.
The Internet infrastructure firm
first acknowledged
these ongoing issues just over 40 minutes ago, reporting that its support portal was experiencing availability issues.
Less than half an hour later, at 11:48 UTC, it added a
new incident report
warning customers that the Cloudflare Global Network was also experiencing problems.
"Cloudflare is aware of, and investigating an issue which impacts multiple customers: Widespread 500 errors, Cloudflare Dashboard and API also failing," the company said. "We are working to understand the full impact and mitigate this problem. More updates to follow shortly."
While Cloudflare has yet to share more information on the extent of this incident, in BleepingComputer's tests, Cloudflare nodes across Europe are currently down, including those in Bucharest, Zurich, Warsaw, Oslo, Amsterdam, Berlin, Frankfurt, Vienna, Stockholm, and Hamburg.
Outage monitoring service Downdetector has also received tens of thousands of reports since the outage began, with impacted users experiencing issues with server connections, websites, and hosting.
While not
necessarily related to this ongoing outage, hundreds of thousands of other Downdetector users
have also reported issues
when attempting to use and connect to various
online services, including Spotify, Twitter, OpenAI, League of Legends, Valorant, AWS, and Google.
The company
mitigated another massive outage
in June that caused Zero Trust WARP connectivity issues and Access authentication failures across multiple regions, during a broader incident that also took down Google Cloud infrastructure.
In October, Amazon also addressed
an outage
caused by a
major DNS failure
that disrupted connectivity to millions of websites and online platforms on its Amazon Web Services (AWS) cloud computing platform.
Update November 18, 07:29 EST:
Cloudflare is now seeing some signs of recovery.
"We are seeing services recover, but customers may continue to observe higher-than-normal error rates as we continue remediation efforts," it said.
Update November 18, 08:47 EST:
In a new update, Cloudflare states that some services have been restored, while it continues to work on the remaining ones.
"We have made changes that have allowed Cloudflare Access and WARP to recover. Error levels for Access and WARP users have returned to pre-incident rates. We have re-enabled WARP access in London," it said.
"We are continuing working on restoring service for application services customers."
Crypto market sheds more than $1tn in six weeks amid fears of tech bubble
Guardian
www.theguardian.com
2025-11-18 12:03:57
Bitcoin price at lowest level since April while FTSE 100 falls as Google boss warns there is ‘irrationality’ in AI boomBusiness live – latest updatesMore than $1tn (£760bn) has been wiped off the value of the cryptocurrency market in the past six weeks amid fears of a tech bubble and fading expectat...
More than $1tn (£760bn) has been wiped off the value of the cryptocurrency market in the past six weeks amid fears of a tech bubble and fading expectations for a US rate cut next month.
Tracking more than 18,500 coins, the value of the crypto market has fallen by a quarter since a high in early October, according to the data company CoinGecko.
Bitcoin has fallen by 27% over the same period to $91,212, its lowest level since April.
The UK’s blue-chip FTSE 100 index was down by 1.2% on Tuesday, its fourth day in the red in a row. The Stoxx Europe 600, which tracks the biggest companies on the continent, fell 1.2%.
It follows steeper falls in Asia, where Japan’s Nikkei 225 index shed 3.2%. Hong Kong’s Hang Seng index dropped 1.7%.
Sundar Pichai, the head of Google’s parent company, Alphabet, said in an interview with the BBC that there was
“irrationality” in the current AI boom
. He warned that in the event that the AI bubble bursts, “no company is going to be immune, including us”.
The chief executive of Klarna, Sebastian Siemiatkowski, also sounded the alarm this week, warning that huge sums being poured into computing infrastructure made him “nervous”.
He told the Financial Times: “I think [OpenAI] can be very successful as a company but at the same time I’m very nervous about the size of these investments in these datacentres. That’s the particular thing that I am concerned about.”
The Klarna co-founder added that the rising valuation of AI companies, including the chipmaker Nvidia, was also a source of concern. Nvidia became the first company to hit a market value of $4tn this year, later followed by Apple and Microsoft.
“That makes me nervous, because of the amount of wealth that is currently automatically allocated into this trend, without some more thoughtful thinking,” Siemiatkowski said.
“You can say, ‘I disagree with the fact that Nvidia is worth that much and I don’t care, some rich people are going to lose some money.’ But the truth is, because of the index funds and how this works, your pension right now is going into that theory that it is a good investment.”
An AI bubble is now seen as one of the most serious risks in the stock market, and a survey by the Bank of America found that 45% of its polled fund managers believe it is the biggest
tail risk
.
The price of gold, which is traditionally seen as a safe haven asset, is also falling. The spot price fell by 0.3% to $4,033.29 an ounce on Tuesday morning, after earlier hitting its lowest level in a week.
The drop comes amid fading expectations that the US Federal Reserve will cut interest rates next month. Higher interest rates make gold relatively less appealing as the metal does not pay a yield.
However, Giovanni Staunovo, an analyst at the Swiss investment bank UBS, said the gold price was likely to fall further but would soon recover.
“I would expect gold prices to bottom out soon, as I still see the Fed cutting rates several times over the coming quarters, and central banks’ diversification into gold remains strong,” he said.
AI and Voter Engagement
Schneier
www.schneier.com
2025-11-18 12:01:44
Social media has been a familiar, even mundane, part of life for nearly two decades. It can be easy to forget it was not always that way.
In 2008, social media was just emerging into the mainstream. Facebook reached 100 million users that summer. And a singular candidate was integrating social media...
Social media has been a familiar, even mundane, part of life for nearly two decades. It can be easy to forget it was not always that way.
In 2008, social media was just emerging into the mainstream.
Facebook
reached
100 million users
that summer. And a singular candidate was integrating social media into his political campaign: Barack Obama. His campaign’s use of social media was so bracingly innovative, so impactful, that it was viewed by journalist
David Talbot
and others as the strategy that enabled the first term Senator to win the White House.
Over the past few years, a new technology has become mainstream:
AI
. But still, no candidate has unlocked AI’s potential to revolutionize political campaigns. Americans have three more years to wait before casting their ballots in another Presidential election, but we can look at the 2026 midterms and examples from around the globe for signs of how that breakthrough might occur.
How Obama Did It
Rereading the contemporaneous reflections of the
New York Times’
late media critic,
David Carr
, on Obama’s campaign reminds us of just how new social media felt in 2008. Carr positions it within a now-familiar lineage of revolutionary communications technologies from newspapers to radio to television to the internet.
The Obama campaign and administration demonstrated that social media was different from those earlier communications technologies, including the pre-social internet. Yes,
increasing numbers
of voters were getting their news from the internet, and content about the then-Senator sometimes made a splash by going
viral
. But those were still broadcast communications: one voice reaching many. Obama found ways to connect voters to each other.
In describing what social media revolutionized in campaigning, Carr quotes campaign vendor Blue State Digital’s Thomas Gensemer: “People will continue to expect a conversation, a two-way relationship that is a give and take.”
The Obama team made some earnest efforts to realize this vision. His transition team launched
change.gov
, the website where the campaign collected a “Citizen’s Briefing Book” of public comment. Later, his administration built
We the People
, an online petitioning platform.
But the lasting legacy of Obama’s 2008 campaign, as political scientists Hahrie Han and Elizabeth McKenna chronicled, was pioneering online “
relational organizing
.” This technique enlisted individuals as organizers to activate their friends in a self-perpetuating web of relationships.
Perhaps because of the Obama campaign’s close association with the method, relational organizing has been touted repeatedly as the linchpin of Democratic campaigns: in
2020
,
2024
, and
today
. But
research
by non-partisan groups like
Turnout Nation
and right-aligned groups like the
Center for Campaign Innovation
has also empirically validated the effectiveness of the technique for inspiring voter turnout within connected groups.
The Facebook of 2008 worked well for relational organizing. It gave users tools to connect and promote ideas to the people they know: college classmates, neighbors, friends from work or church. But the nature of social networking has changed since then.
For the past decade, according to
Pew Research
, Facebook use has stalled and lagged behind YouTube, while Reddit and TikTok have surged. These platforms are less useful for relational organizing, at least in the traditional sense. YouTube is organized more like broadcast television, where content creators produce content disseminated on their own channels in a largely one-way communication to their fans. Reddit gathers users worldwide in forums (subreddits) organized primarily on topical interest. The endless feed of TikTok’s “For You” page disseminates engaging content with little ideological or social commonality. None of these platforms shares the essential feature of Facebook c. 2008: an organizational structure that emphasizes direct connection to people that users have direct social influence over.
AI and Relational Organizing
Ideas and messages might spread virally through modern social channels, but they are not where you convince your friends to show up at a campaign rally. Today’s platforms are spaces for
political hobbyism
, where you express your political feelings and see others express theirs.
Relational organizing works when one person’s action inspires others to do the same. That’s inherently a chain of human-to-human connection. If my AI assistant inspires your AI assistant, no human notices and no one’s vote changes. But key steps in the human chain can be assisted by AI. Tell your phone’s AI assistant to
craft a personal message
to one friend—or a hundred—and it can do it.
So if a campaign hits you at the right time with the right message, they might persuade you to task your AI assistant to ask your friends to donate or volunteer. The result can be something more than a form letter; it could be automatically drafted based on the entirety of your email or text correspondence with that friend. It could include references to your discussions of recent events, or past campaigns, or shared personal experiences. It could sound as authentic as if you’d written it from the heart, but scaled to everyone in your address book.
Research
suggests that AI can generate and perform written political messaging about as well as humans. AI will surely play a
tactical role
in the 2026 midterm campaigns, and some candidates may even use it for relational organizing in this way.
(Artificial) Identity Politics
For AI to be truly transformative of politics, it must change the way campaigns work. And we are starting to see that in the US.
The earliest uses of AI in American political campaigns are, to be polite, uninspiring. Candidates viewed them as just
another tool
to optimize an endless stream of email and text message appeals, to ramp up political
vitriol
, to
harvest data
on voters and donors, or merely as a
stunt
.
Of course, we have seen the rampant production and spread of AI-powered deepfakes and
misinformation
. This is already impacting the key 2026 Senate races, which are likely to attract
hundreds of millions
of dollars in financing.
Roy Cooper
, Democratic candidate for US Senate from North Carolina, and
Abdul El-Sayed
, Democratic candidate for Senate from Michigan, were both targeted by viral deepfake attacks in recent months. This may
reflect
a growing trend in Donald Trump’s Republican party in the use of AI-generated imagery to build up GOP candidates and assail the opposition.
And yet, in the global elections of 2024, AI was used more
memetically
than deceptively. So far, conservative and far right parties seem to have adopted this most aggressively. The ongoing rise of Germany’s far-right populist AfD party has been credited to its use of AI to generate
nostalgic and evocative
(and, to many, offensive) campaign images, videos, and music and, seemingly as a result, they have
dominated TikTok
. Because most social platforms’ algorithms are tuned to reward media that generates an emotional response, this counts as a double use of AI: to generate content and to manipulate its distribution.
AI can also be used to generate politically useful, though artificial, identities. These identities can fulfill different roles than humans in campaigning and
governance
because they have differentiated traits. They can’t be imprisoned for speaking out against the state, can be positioned (legitimately or not) as unsusceptible to bribery, and can be forced to show up when humans will not.
In
Venezuela
, journalists have turned to AI avatars—artificial newsreaders—to report anonymously on issues that would otherwise elicit government retaliation. Albania recently “
appointed
” an AI to a ministerial post responsible for procurement, claiming that it would be less vulnerable to bribery than a human. In Virginia, both in
2024
and again
this year
, candidates have used AI avatars as artificial stand-ins for opponents that refused to debate them.
And yet, none of these examples, whether positive or negative, pursue the promise of the Obama campaign: to make voter engagement a “two-way conversation” on a massive scale.
The closest so far to fulfilling that vision anywhere in the world may be Japan’s new political party,
Team Mirai
. It started in 2024, when an independent Tokyo gubernatorial candidate,
Anno Takahiro
, used an AI avatar on YouTube to respond to 8,600 constituent questions over a seventeen-day continuous livestream. He collated hundreds of comments on his campaign manifesto into a revised policy platform. While he didn’t win his race, he shot up to a
fifth place
finish among a record 56 candidates.
Anno was recently
elected
to the upper house of the federal legislature as the founder of a new party with a
100 day plan
to bring his vision of a “public listening AI” to the whole country. In the early stages of that plan, they’ve invested their share of Japan’s 32 billion yen in
party grants
—public subsidies for political parties—to hire engineers building digital civic infrastructure for Japan. They’ve already created platforms to provide
transparency
for party expenditures, and to use AI to make
legislation
in the Diet easy, and are meeting with engineers from US-based Jigsaw Labs (a Google company) to
learn from international examples
of how AI can be used to power participatory democracy.
Team Mirai has yet to prove that it can get a second member elected to the Japanese Diet, let alone to win substantial power, but they’re innovating and demonstrating new ways of using AI to give people a way to participate in politics that we believe is likely to spread.
Organizing with AI
AI could be used in the US in similar ways. Following American federalism’s longstanding model of “laboratories of democracy,” we expect the most aggressive campaign innovation to happen at the state and local level.
D.C. Mayor Muriel Bowser is
partnering
with MIT and Stanford labs to use the AI-based tool
deliberation.io
to capture wide scale public feedback in city policymaking about AI. Her administration
said
that using AI in this process allows “the District to better solicit public input to ensure a broad range of perspectives, identify common ground, and cultivate solutions that align with the public interest.”
It remains to be seen how central this will become to Bowser’s
expected
re-election campaign in 2026, but the technology has legitimate potential to be a prominent part of a broader program to rebuild trust in government. This is a trail blazed by Taiwan a decade ago. The
vTaiwan
initiative showed how digital tools like
Pol.is
, which uses
machine learning
to make sense of real time constituent feedback, can scale participation in democratic processes and radically improve trust in government. Similar AI listening processes have been used in
Kentucky
,
France
, and
Germany
.
Even if campaigns like Bowser’s don’t adopt this kind of AI-facilitated listening and dialog, expect it to be an increasingly prominent part of American public debate. Through a partnership with Jigsaw, Scott Rasmussen’s Napolitan Institute will use AI to elicit and synthesize the views of at least five Americans from every Congressional district in a project called “
We the People
.” Timed to coincide with the country’s 250th anniversary in 2026, expect the results to be promoted during the heat of the midterm campaign and to stoke interest in this kind of AI-assisted political sensemaking.
In the year when we celebrate the American republic’s semiquincentennial and continue a decade-long debate about whether or not Donald Trump and the Republican party remade in his image are fighting for the interests of the working class, representation will be on the ballot in 2026. Midterm election candidates will look for any way they can get an edge. For all the risks it poses to democracy, AI presents a real opportunity, too, for politicians to engage voters en masse while factoring their input into their platform and message. Technology isn’t going to turn an uninspiring candidate into Barack Obama, but it gives any aspirant to office the capability to try to realize the promise that swept him into office.
This essay was written with Nathan E. Sanders, and originally appeared in
The Fulcrum
.
We are pleased to announce the release of Ruby 4.0.0-preview2. Ruby 4.0 updates its Unicode version to 17.0.0, among other changes.
Language changes
*nil
no longer calls
nil.to_a
, similar to how
**nil
does
not call
nil.to_hash
. [[Feature #21047]]
Core classes updates
Note: We’re only listing notable updates of core classes.
Binding
Binding#local_variables
no longer includes numbered parameters.
Also,
Binding#local_variable_get
and
Binding#local_variable_set
refuse to handle numbered parameters.
[[Bug #21049]]
IO
IO.select
accepts Float::INFINITY as a timeout argument.
[[Feature #20610]]
String
Update Unicode to Version 17.0.0 and Emoji Version 17.0. [[Feature #19908]][[Feature #20724]][[Feature #21275]]
(also applies to Regexp)
Standard Library updates
Note: We’re only listing notable updates of standard libraries.
ostruct 0.6.1
pstore 0.2.0
benchmark 0.4.0
logger 1.7.0
rdoc 6.13.1
win32ole 1.9.2
irb 1.15.2
reline 0.6.1
readline 0.0.4
fiddle 1.1.6
Compatibility issues
Note: Excluding feature bug fixes.
Standard library compatibility issues
C API updates
JIT
YJIT
YJIT stats
ratio_in_yjit
no longer works in the default build.
Use
--enable-yjit=stats
on
configure
to use it with
--yjit-stats
.
Add
invalidate_everything
to default stats, which is
incremented when all code is invalidated by TracePoint.
Add
mem_size:
and
call_threshold:
options to
RubyVM::YJIT.enable
.
ZJIT
Add an experimental method-based JIT compiler.
Use
--enable-zjit
on
configure
to enable the
--zjit
support.
As of Ruby 4.0.0-preview2, ZJIT is not yet ready for speeding up most benchmarks.
Please refrain from evaluating ZJIT just yet. Stay tuned for the Ruby 4.0 release.
RJIT
--rjit
is removed. We will move the implementation of the third-party JIT API
to the
ruby/rjit
repository.
Ruby was first developed by Matz (Yukihiro Matsumoto) in 1993,
and is now developed as Open Source. It runs on multiple platforms
and is used all over the world especially for web development.
Our Engineering team is actively investigating an issue impacting multiple DigitalOcean services caused by an upstream provider incident. This disruption affects a subset of Gen AI tools, the App Platform, Load Balancer, and Spaces. Users may experience degraded performance or intermittent failures within these services.
We acknowledge the inconvenience this may cause and are working diligently to restore normal operations. Signs of recovery are starting to appear, with most requests beginning to succeed. We will continue to monitor the situation closely and provide timely updates as more information becomes available. Thank you for your patience as we work towards full service restoration.
Posted Nov 18, 2025 - 12:26 UTC
This incident affects: API, App Platform (Global), Load Balancers (Global), and Spaces (Global).
Disclosure: Amplify Partners is an investor in Antithesis.
Since the dawn of Middle Earth, it has been somewhat widely accepted that it takes 8-10 years to build a real database. There’s a lot that has to go right; databases need to be rock solid. They need to be battle tested, consistent, fault tolerant, and most of all, trustworthy. These are things that take time to dial in.
But in 2009 a small team of childhood friends in Virginia somehow broke the rule. In a few short years the Daves (Rosenthal and Scherer) and Nick built something that everyone said they couldn’t: a distributed storage engine with ACID guarantees. It was called FoundationDB, it took the world by storm, and then it got acquired in 2015 by Apple (you can still download and use it
here
).
The FoundationDB founders, not a 90’s alternative punk band.
All in all, from their first LOC to a production ready version with capabilities that no other DB on earth had at the time, this very real database took only [x] years to build. How did they do it?
It wasn’t because they had a massive, well resourced team (they didn’t). It wasn’t because they owned any preexisting intellectual property (they didn’t). And it certainly wasn’t because databases had gotten easier to build (they hadn’t). No, if you ask the FoundationDB team why they were able to pull off what they pulled off, they’d all give you pretty much the same answer:
Deterministic Simulation Testing
. They had simply figured out how to test well.
DST is one of the most profoundly transformative pieces of technology developed over the past decade. It is responsible for an exponential speedup in how we test systems for correctness and performance, which is having a cascading effect on how quickly startups can build even the most complex of systems. DST is now
standard practice at AWS
and countless startups: for example,
enabling TigerBeetle
to become Jepsen-passing in just 3 years.
And yet, this incredible thing is somehow still underdiscussed and surprisingly poorly documented. This post – alongside the noble
work being done
down south by Antithesis – will attempt to rectify that. I’m going to go through what DST is, how it was developed, how it works, and how you can get started with it yourself.
Testing, the true Sisyphean task of programming
Building a deterministic environment for your code (single threaded pseudo concurrently, simulated implementation of failures, deterministic code)
OK, but how do you find the bugs?
Getting started with DST yourself, and a disclaimer
Let’s dive in.
Testing, the true Sisyphean task of programming
Everyone complains that distributed systems are hard to build and debug because they’re
complicated
. Debugging complicated things is harder than debugging simple things. But the real reason that they’re hard to build is that they’re
non-deterministic
(and also because they’re complicated). There are simply too many factors out of your control to properly test and debug.
Allow me to illustrate. Imagine you’ve got two servers, and one (Server A) is sending a packet to another (Server B). Any readers who have spent time building distsys can count the number of ways in which even such a simple operation as this can go wrong. One common one is
network problems
. The packet gets stuck between the two servers, so Server A decides to re-send a new packet…only for the stuck packet to finally get
unstuck
. Now Server B has duplicated data.
This particular bug is easy enough to fix in isolation, save for an important caveat: it’s non-deterministic. You might run this whole scenario again to debug, and nothing happens. The network performs perfectly, as it does 99% of the time. In fact, you might have never found it in the first place because the network worked when you tested it. Even if you found it, you might be unable to reproduce it. And even if you reproduced it, you might be unable to verify that you fixed it.
Whether it’s disk, network, OS, or any other of these kinds of non-deterministic bugs, you are in a pickle: things failing because of conditions outside of your code breaks our mental model of testing. The messy, dirty universe intrudes upon our world of pristine functions and introduces a source of randomness. And this is precisely why it takes so long to build foundational pieces of technology: you and your team essentially need to find as many of these bugs as you can manually. Or do you?
Engineers are smart and know that the litany of unit, integration, and other tests that verify their code itself are limited. They only test for the things you
know
can go wrong with your code, not the things that
can
go wrong with your code.
This is why there has been a fairly long history of attempts at solving the non-deterministic problem. One old school one that’s seeing a bit of resurgence these days (thanks to AI agents writing code) is
Formal Verification
. To formally verify, an engineer will model their functions using mathematical precision and logical theorems. You can then be
absolutely sure,
mathematically sure, that your function will always act as expected (at least for the functions you were able to model).
Formal methods, when you boil them down, are essentially a copy of your code that you test. They’re also generally an incomplete copy, both because of state space explosion and bugs in / misunderstandings of your original code. And that’s likely why none of them ended up becoming the default for testing. When you simulate your code, you end up just testing the simulation, not your code; to say nothing of how incredibly labor-intensive it is to build in the first place.
As far as I can tell around the same time but completely independently, Amazon and the FoundationDB team came up with a similar idea…
what if your code
was
your simulation?
Building a deterministic environment for your code
The core of DST is very simple. Instead of building a model of your code – which is difficult and kind of misses the point – we’re just going to take your
real
code, and make
it
into the model.
This idea is, of course, insane. We are talking about not just simulating a
process
in your code, but an entire
network
of processes, plus all of their interactions with the environment like disks, operating systems, and network. And to build it the FoundationDB team needed to solve 3 pretty gnarly problems.
1) Single threaded pseudo-concurrency
Any particular simulation needs to run in a single process, because if things were actually concurrent you’d be introducing non-determinism. And yet of course within said process, you need to be
simulating
concurrency, especially if you’re building and testing a distributed system. This turns out to be either very hard or not very hard to implement depending on which programming language you are working in.
Some languages like Go are entirely built around transparent multi-threading and blocking IO. Polar Signals
solved
this for DST by compiling their application to WASM where it would run on a single thread. But that wasn't enough. Even on a single thread, the Go runtime intentionally schedules goroutines randomly. So Polar Signals forked the Go runtime to control this randomness with an environment variable. That's kind of crazy. Resonate took
another approach
that also looks cumbersome. I'm not going to attempt to describe it. Go seems like a difficult choice of a language if you want to do DST.
The FoundationDB team decided to use C++ for a few reasons. C++ is, of course, fast, but it is not well suited for the purpose of simulating concurrency. So the team built
Flow
, a syntactic extension to C++ that allows you to
model
concurrency while the actual implementation under the hood is all single-threaded (using callbacks).
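As a rough illustration of the single-threaded trick (a toy sketch in TypeScript; it is not Flow and not how FoundationDB implements it), "concurrent" tasks can simply be callbacks ordered by a simulated clock in one ordinary loop:
// Toy cooperative scheduler: every "concurrent" task runs on one thread,
// ordered by simulated time, so a given schedule replays identically.
type Task = { at: number; seq: number; run: () => void };

class Simulator {
  private now = 0;
  private seq = 0;
  private tasks: Task[] = [];

  schedule(delay: number, run: () => void): void {
    this.tasks.push({ at: this.now + delay, seq: this.seq++, run });
  }

  runToCompletion(): void {
    while (this.tasks.length > 0) {
      // Earliest simulated time first; ties broken by insertion order (deterministic).
      this.tasks.sort((a, b) => a.at - b.at || a.seq - b.seq);
      const next = this.tasks.shift()!;
      this.now = next.at; // simulated time can jump forward instantly
      next.run();
    }
  }
}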
2) Simulated implementations of your program’s interactions with the world
With DST, all of the randomness and entropy in your program is randomness and entropy that you put there on purpose.
Let’s use a network connection as an example. With DST, instead of making an
actual
network connection you make a simulated one. The simulated connection is just like a real one: it can wait a bit to simulate latency, copy some bytes from here to there, and do other things that a “real” network connection would do. But unlike a real network connection there is a chance on every execution that any number of pre-programmed things can go wrong: your peer closes the connection, a random error is introduced (you don’t know exactly what happened), etc. How foolish to assume the network is reliable!
This same simulation needs to exist for all of the interactions your program can have with the physical world outside of the network: disk, OS, and even data center.
3) Your code itself needs to be deterministic
Most developers would say their code
is
deterministic, but:
Do you ever have a random number involved in your program?
Do you check the time in an if statement?
Do you check how much disk space you have free?
If so, your program is non-deterministic, and running it twice could produce two completely different outcomes. For DST to work, your program must be deterministic just like the environment it’s running in.
This is fixable, but takes some effort. In the random number example, to make your program deterministic, you’d need to make sure that your random number actually comes from a
pseudo
random number generator that you control and you seeded, and that the seed becomes part of the input in the program.
The same is true for time
. All randomness in your code must come from the same seed that you can track and plug in again for a subsequent run.
OK, but how do you find the bugs?
A completely deterministic simulation is like the most pristine, no-expense-spared tool kit. The tools themselves can only take you so far, it is what you
do
with them that is most important. It is also tricky to make sure you are actually running all of your most important code in your tests; there are both normal software engineering problems and tricky math problems here.
FoundationDB’s approach to using the tools is encapsulated in something they called
test files
. A test file looks kind of like a series of config blocks. Perhaps you wanted to randomly clog the network (a sketch of such a test file appears below).
The test file declares a set of stuff that the system is going to try to achieve (in our case, TPS) and then a set of stuff that’s going to prevent it from achieving that (in our case, random clogging). Swizzling, by the way, is when you stop a subset of network connections on a rolling basis and then bring them back up in reverse or random order. For reasons we don’t entirely understand, this is better at finding bugs than normal clogging.
FoundationDB designed primitives for all different kinds of tests. There are tests for broken machines, full data center failures, and even common sysadmin mistakes. But the real power is when you combine
multiple
of these tests into a single test file:
In this case your database is going to try to run 1K TPS
while
dealing with random clogging, dead machines, and config changes…good luck!
“One of the most fun parts of my job is thinking up new and creative ways to torture our database.”
– Will Wilson, Antithesis cofounder and former FoundationDB engineer
You can start to see how quickly a framework like this is going to uncover bugs that would take years of customer experience to uncover in the wild. Speaking of which…
A DST system is only as valuable as it is fast. Your customers can also be thought of as randomly injecting faults into your code that you will eventually uncover and fix. DST is only useful if it does so at a dramatically faster rate, and there are a few ways to make sure that happens:
Make failures happen more often
. In the real world, a disk might fail every 2-3 years. With DST, you can make it happen every two minutes. You can also
literally
speed up time and make many more simulated world seconds pass than real world seconds.
Buggification
: you can simply add bugs to your code. The logic would look something like “if in buggify mode” send something in the wrong order, never send a timeout, things like that.
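The shape of it is roughly this (a hypothetical TypeScript sketch; FoundationDB calls this BUGGIFY and implements it as a macro in Flow):
// Only misbehave inside the deterministic simulator, and only with a seeded,
// replayable probability -- so every "injected" bug can be reproduced exactly.
const IN_SIMULATION = true;

function BUGGIFY(rng: () => number, probability = 0.05): boolean {
  return IN_SIMULATION && rng() < probability;
}

// At call sites:
//   if (BUGGIFY(rng)) return;            // pretend the timeout never fired
//   if (BUGGIFY(rng)) queue.reverse();   // deliver messages in the wrong order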
Beyond just brute speedup force, you can also make clever use of the
Hurst Exponent
. In the 1950’s, a hydrologist (this is a water expert) named Harris Hurst was trying to figure out how to model the Nile River so that architects could build a correctly sized reservoir system. He intuited that weather events leading to swells in the river – namely rain – were not statistically independent, despite the fact that most prevailing models of the day assumed they were. Eventually, a statistical measure of this correlation – essentially the long term memory of a time series – was coined as the Hurst Exponent.
So what does this have to do with distributed systems? Hardware failures are, like the rain beating on the surface of the Nile, not random independent events. If a hard drive fails in a rack, the first thing you do is check
every other
hard drive in that rack; it could have been a bad batch, there could be a humidity problem in the data center, there could be a power issue, etc. With manual testing it’s almost impossible to test for cascading failures like this. With DST it’s absolutely possible, and FoundationDB manipulated the Hurst Exponent quite a bit to make sure that their database ran up against exactly such clusters of failures.
When you put all of this together, you end up with many more real world hours per hour of simulation. The FoundationDB team ran trillions of real world hours of tests in theirs, routinely racking up 5-10M simulation hours per night. TigerBeetle’s largest of its kind DST cluster – running on 1,000 CPU cores 24x7x365 – goes through 2 millennia of simulated runtime per day.
DST will not fix him
At this point it would be judicious to mention that DST is not a silver bullet, it does not
completely
protect you from bugs, and it is not without
its limitations
:
Your code most likely relies on external systems, which no amount of DST is going to make bug free (unless you convince
them
to use it??).
A simulator is only as good as your ability to use it well, and it’s very difficult and time consuming to design test suites that
really
push your code to the limit.
DST takes time and compute, it’s not free (literally or temporally).
But, you know, it’s still pretty great.
DST-as-a-service and the Antithesis story
DST was a revelation for the FoundationDB team. It gave them the ability to find
all of the bugs
in their database. I know this sounds ridiculous, but it was (at least mostly) literally true – in the history of the company, they only had 1 or 2 bugs reported by a customer. Even Kyle Kingsbury didn’t bother Jepsen testing it because, well, he didn’t think he’d find much.
Once you’ve found all the bugs in something, and you have very powerful tests which can find any new ones, programming feels completely different. I’ve only gotten to do this a few times in my career, and it’s hard to convey the feeling in words, but I have to try. It’s like being half of a cyborg, or having a jetpack, or something. You write code, and then you ask the computer if the code is correct, and if not then you try again. Can you imagine having a genie, or an oracle, which just tells you whether you did something wrong?
This is like some sort of drug, and once you experience it, it's really hard to go back to life without it. So much so that after the FoundationDB team dispersed to various tech companies after the acquisition, they were shocked to find that nothing like DST existed at even the most sophisticated of engineering organizations. So in 2018, Dave Scherer and Will Wilson (a FoundationDB engineer) started Antithesis.
The mission is to bring DST to the masses.
This mission is important because it's actually quite hard to get started with DST yourself in 2025. You can build a basic toy sandbox, but if you are serious about getting DST into your production loop, we are talking about several months of engineering time. This in turn is part of why it took Antithesis 5 years to get out of stealth: building a deterministic hypervisor is a lot of work.
Antithesis is generally available now and I can say without hesitation that it's the easiest and most foolproof way to make your code bug free. Confluent, MongoDB, Ramp, Palantir, and Ethereum have used them to implement DST and ship bug-free code faster. It's not the kind of thing that you can just sign up for and use, for several obvious reasons. But it's not exactly hard to get in touch either. In my many years of working on marketing for technical tools, Antithesis is the first company I've seen publish specific information on their POC process.
Roblox to block children from talking to adult strangers after string of lawsuits
Guardian
www.theguardian.com
2025-11-18 12:00:52
Gaming platform to use facial age estimation to limit chats to similar age groups, as allegations of grooming grow The online games platform Roblox is to start blocking children from talking to adult and much older teen strangers from next month as it faces fresh lawsuits alleging the platform has b...
The online games platform Roblox is to start blocking children from talking to adult and much older teen strangers from next month as it faces fresh lawsuits alleging it has been exploited by predators to groom children as young as seven.
Roblox has reached 150 million daily players of games including viral hits Grow a Garden and Steal a Brainrot but has been hit by legal claims alleging the system’s design has made “children easy prey for paedophiles”.
From next month it will start enforcing facial age estimation to allow children to chat with strangers only if they are in their broad age group.
Roblox said it would be the first online gaming or communication platform to require age checks for communication. Similar checks were introduced for users of pornography sites in the UK this summer under Online Safety Act measures to prevent under-18s from seeing explicit content.
Roblox compared its new system to school cohorts such as elementary, middle school and high school. It will be introduced first in Australia, New Zealand and the Netherlands, where children will be blocked from privately chatting with adults they do not know in real life from next month, and in the rest of the world in early January.
Users will be placed into the following groups: under nine, nine to 12, 13 to 15, 16 to 17, 18 to 20, or 21 and over.
Children will be able to chat only with others in their age group and similar ones. For example, a child with an estimated age of 12 will be able to chat only with under-16s. Images and video used for the checks would not be stored, Roblox said.
“We see it as a way for our users to have more trust in who the other people they are talking with are in these games,” said Matt Kaufman, Roblox’s chief safety officer. “And so we see it as a real opportunity to build confidence in the platform and build confidence amongst our users.”
It comes amid allegations from lawyers for families alleging the “systemic predation of minors” on Roblox. Matt Dolman, a Florida lawyer who has filed 28 suits against the company, which boomed during the pandemic and has kept growing, said the “principal allegations concern the systemic predation of minors”.
One of the latest cases he filed in the US district court of Nevada came from the family of a 13-year-old girl alleging Roblox “recklessly and deceptively” ran its business “in a way that led to the sexual exploitation of the plaintiff”.
It is alleged the girl, an avid Roblox user from Washoe county, Nevada, was targeted by a “dangerous child predator” who posed as a child, built a false emotional connection and manipulated the child into giving him her mobile phone number, to which he sent graphic messages. He then coerced her into sending explicit pictures and videos of herself.
The claim alleges that “had [Roblox] taken any steps to screen users before allowing them on the apps [the girl] would not have been exposed to the large number of predators trolling the platform”, and that she would not have been harmed if age and identity verification had been in place.
Two other US district court cases filed in recent days in the northern district of California concerned a seven-year-old girl in Philadelphia and a 12-year-old in Texas who were allegedly groomed on Roblox by predators to send explicit images of themselves.
A spokesperson for Roblox said it was “deeply troubled by any incident that endangers any user” and that “we prioritise the safety of our community”.
“This is why our policies are purposely stricter than those found on many other platforms,” they said. “We limit chat for younger users, don’t allow user-to-user image sharing, and have filters designed to block the sharing of personal information.
“We also understand that no system is perfect and that is why we are constantly working to further improve our safety tools and platform restrictions to ensure parents can trust us to help keep their children safe online, launching 145 new initiatives this year alone.”
Kaufman said: “It’s not enough just for one platform to hold a high standard for safety. We really hope the rest of the industry follows suit with some of the things that we’re doing, to really raise the protections for kids and teens online everywhere.”
Beeban Kidron, the UK founder of the 5Rights Foundation, which campaigns for children’s digital rights, said: “It is time for gaming companies to put their responsibilities to children at the centre of their services.
“Roblox’s announcement claims that what they are introducing will set best practice for the sector – a bold assertion from a company that has been slow to address predatory behaviour and has allowed adult strangers, and older children, easy access to millions of younger users. I hope they are right.”
Master System at 40: the truth about Sega’s most underrated console
Guardian
www.theguardian.com
2025-11-18 12:00:51
Forty years ago, the Nintendo Entertainment System dominated the markets in Japan and the US. But in Europe, a technologically superior rival was making it look like an ancient relic There’s an old maxim that history is written by the victors, and that’s as true in video games as it is anywhere els...
There's an old maxim that history is written by the victors, and that's as true in video games as it is anywhere else. Nowadays you'd be forgiven for thinking that the Nintendo Entertainment System was the only console available in the mid-to-late 1980s. If you were brought up in Nintendo's target markets of Japan and North America, this chunky contraption essentially was the only game in town – the company had Mario after all, and its vice-like hold on third-party developers created a monopoly for major titles of the era. But in Europe, where home computers ruled, the NES was beaten by a technologically superior rival.
The Sega Master System was originally released in Japan in the autumn of 1985 as the Sega Mark III. Based around the famed Z80 CPU (used in home computers such as the Spectrum, Amstrad and TRS-80) and a powerful Sega-designed video display processor, it boasted 8KB of RAM, a 64-colour palette and the ability to generate 32 sprites on screen at one time – making the NES (based on the older 6502 processor) look like an ancient relic.
At first it was marketed domestically as a continuation of Sega's SG-1000 series of machines, which were closer to affordable home computers than games consoles, with their optional keyboards and printers. But as the NES exploded in both Japan and the US, Sega had a rethink, removed some computing features and re-released the Mark III in 1986 as the Master System – an unapologetic games machine with a sleek, slimline, angular look, contrasting with the beige Betamax visage of the NES.
Sega Master System games came on two formats: a cartridge and a Sega Card for shorter, cheaper titles.
Photograph: booksR/Alamy
It also came with a light gun, and Sega even released a pair of 3D glasses for the system and a range of compatible games. “I am going to call out the 3D version of OutRun,” says coder Chris White, who wrote a Master System emulator later used by Sega on its PlaySega website. “Sure it made your head hurt and the alternate flickering of the lenses was enough to trigger a mild seizure, but it’s reflective of an era when Sega wasn’t afraid to try wild experiments.”
Sega oversaw the distribution of the Master System in the US (at least initially), but looked to local companies to tackle the more fragmented European market. For the UK and France (and later Spain), that role would go to Virgin Mastertronic. “Sega’s partners had better marketing positioning in Europe,” says Nick Alexander, who was Virgin Mastertronic’s managing director at the time. “They also had better retail and distribution relationships than Nintendo did in those days. The video game industry magazine Computer Trade Weekly had a running joke that Nintendo saw Europe as where the dragons lived – they didn’t understand it, they were nervous of it. So they put their effort into the US.”
Alexander, who had run Virgin Games since 1983, embraced that company's edgy, youth-conscious approach. "I was trying to think of the video game equivalent of a band going on tour," he explains. "So we bought a double-decker bus and drove it around the country. We took it to school playgrounds and shopping centres. It got an awful lot of coverage. Nintendo had always marketed their games as family entertainment, but the only market in Europe where that worked was Germany. We pitched to teenagers and we knew if we got them, their younger siblings would want a Master System too. That's how we beat Nintendo in Europe."
And while Nintendo had Mario, Sega had a valuable asset of its own: its arcade heritage. The company set out to bring many of its hugely popular coin-op hits to the machine including Space Harrier, OutRun, Golden Axe and After Burner, marketing its new machine in the west as an arcade in your own living room. Although hardly perfect ports of the original titles, these were much faster and more colourful than any earlier home computer translations. To those of us who were teenage arcade addicts at the time, it felt wildly futuristic.
‘An arcade in your own living room’ … Shinobi on Sega Master System.
Photograph: ArcadeImages/Alamy
"The games are visually superior to other Z80 based systems as a result of Sega's graphics hardware," says White. "It presents the programmer with scrollable tilemaps and freely placeable sprites. They are both easy to use and offload a significant amount of processing from the CPU. The design had a number of parallels to Sega's arcade hardware. In fact, the Master System's graphics chip is actually based on the TMS9918, used by Sega's older arcade machines."
For European developers, the Master System hardware was a dream. “We’d been working on the Spectrum and Amstrad and our games were being ported to the C64,” says Andrew Oliver, who with his brother Philip was making the Dizzy games for Codemasters. “We went to the CES show in Las Vegas and I remember seeing the Sega stand. It was massive, and right alongside Nintendo – and their message was: ‘It’s all about speed.’ Back in the day, computers really were all about what was colourful and fast. So Codemasters cut a licensing deal, we got the dev kits – it’s a Z80, so we program it like a Spectrum, but the graphics chip is like a C64. The code ran really fast and you had all the nice parallax scrolling and sprites. It was very easy.”
UK developers also found Sega to be more helpful than Nintendo. Mike Simpson was a coder at the British publisher Personal Software Services, later owned by Mirrorsoft. “We had set up a little internal development studio in Coventry, only about 20 people, and we were doing a variety of ports,” he says. “Someone asked us to port Xenon 2, a really high-end 16bit Amiga game, on to the Master System. It looked impossible, so we had to have a go! I was actually invited to Japan to learn how to program it: I spent a week at Sega in Tokyo being taught by Mark Cerny [later lead architect of the PlayStation 4 and 5]. I remember rows and rows of tightly packed desks, and the meeting room chairs were all being used to sleep on!”
Arcade classic … Sonic the Hedgehog.
Photograph: Sega
Even when the Mega Drive arrived, the Master System's popularity in Europe (and later Brazil) meant that it continued to be supported with simplified versions of Mega Drive games such as Sonic the Hedgehog. The spin-off title Sonic Chaos, developed for both the Master System and Sega's Game Gear handheld (which was based on the same hardware as the Master System), was one of the highlights of the series. Later, Sega rolled out a redesigned model, the Master System II, priced at a relatively affordable £50 with Sonic thrown in.
But the Master System wasn't just a repository for arcade classics and ports from other machines; it had its own heritage. The beautiful platformers Wonder Boy III: The Dragon's Trap, Psycho Fox and Alex Kidd in Miracle World, the surreal shooter Fantasy Zone, the seminal role-playing adventure Phantasy Star, the excellent Zelda-alike Golvellius: Valley of Doom – these are genuine classics, up there with the NES-era titles that so often overshadow them. For modern collectors, they are also more accessible, being free of the inflated pricing attached to many classic Nintendo titles.
It’s true that in the US, the NES was so dominant that the word “Nintendo” became a synonym for gaming. But in Europe, Brazil and elsewhere, the Master System won out. The history books have been cruel to it, but for those of us who were there, who read European gaming mags, or who studied the annual Argos and Grattan Christmas catalogues for Sega goodies, the Master System was the home arcade machine that hinted at the future of gaming. It was the promise that the Mega Drive would go on to fulfil.
Update - We are seeing services recover, but customers may continue to observe higher-than-normal error rates as we continue remediation efforts.
Posted Nov 18, 2025 - 12:21 UTC
Update - We are continuing to investigate this issue.
Posted Nov 18, 2025 - 12:03 UTC
Investigating - Cloudflare is aware of, and investigating an issue which impacts multiple customers: Widespread 500 errors, Cloudflare Dashboard and API also failing.
We are working to understand the full impact and mitigate this problem. More updates to follow shortly.
Posted Nov 18, 2025 - 11:48 UTC
This incident affects: Cloudflare Sites and Services (Network).
Investigating - Our support portal provider is currently experiencing issues, and as such customers might encounter errors viewing or responding to support cases. Responses on customer inquiries are not affected, and customers can still reach us via live chat (Business and Enterprise) through the Cloudflare Dashboard, or via the emergency telephone line (Enterprise).
We are working alongside our 3rd party provider to understand the full impact and mitigate this problem. More updates to follow shortly.
Nov 18, 2025 - 11:17 UTC
Resolved - This incident has been resolved.
Nov 18, 08:29 UTC
Investigating - DEX is currently experiencing consumption lag for a subset of synthetic test results. Scheduled tests will continue to run but results may not be available until lag has caught up. Customers who have alerts configured for synthetic tests may receive some notifications at this time. Most customers are not affected.
Nov 17, 23:36 UTC
Resolved - This incident has been resolved.
Nov 18, 01:27 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 17, 23:12 UTC
Identified - The cause of this issue has been identified and a fix is being implemented.
Nov 17, 22:43 UTC
Investigating - We are currently investigating an issue where we are observing a high error rate and increased latency for @cf/meta/llama-3.3-70b-instruct-sd.
Nov 17, 22:24 UTC
Completed - The scheduled maintenance has been completed.
Nov 18, 01:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 21:00 UTC
Scheduled - We will be performing scheduled maintenance in KTM (Kathmandu) datacenter between 2025-11-17 21:00 and 2025-11-18 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 18, 00:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 19:30 UTC
Scheduled - We will be performing scheduled maintenance in SIN (Singapore) datacenter between 2025-11-17 19:30 and 2025-11-18 00:30 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 19:30 UTC
Scheduled - We will be performing scheduled maintenance in KUL (Kuala Lumpur) datacenter on 2025-11-17 between 19:30 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in MNL (Manila) datacenter on 2025-11-17 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 16:00 UTC
Scheduled - We will be performing scheduled maintenance in ICN (Seoul) datacenter on 2025-11-17 between 16:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - An effective fix has been applied and this incident is now resolved.
Nov 17, 21:56 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 17, 21:05 UTC
Investigating - Cloudflare is investigating an increased level of errors for customers running Workers scripts in the Frankfurt, Germany area.
We are working to analyse and mitigate this problem. More updates to follow shortly.
Nov 17, 20:51 UTC
Completed - The scheduled maintenance has been completed.
Nov 17, 14:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 10:00 UTC
Scheduled - We will be performing scheduled maintenance in LAX (Los Angeles) datacenter on 2025-11-17 between 10:00 and 14:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 12:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 08:01 UTC
Scheduled - We will be performing scheduled maintenance in FSD (Sioux Falls) datacenter on 2025-11-17 between 08:00 and 12:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 12:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 08:00 UTC
Scheduled - We will be performing scheduled maintenance in SAP (San Pedro Sula) datacenter on 2025-11-17 between 08:00 and 12:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 11:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 07:01 UTC
Scheduled - We will be performing scheduled maintenance in ATL (Atlanta) datacenter on 2025-11-17 between 07:00 and 11:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 11:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 07:02 UTC
Scheduled - We will be performing scheduled maintenance in MIA (Miami) datacenter on 2025-11-17 between 07:00 and 11:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 11:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 07:00 UTC
Scheduled - We will be performing scheduled maintenance in IND (Indianapolis) datacenter on 2025-11-17 between 07:00 and 11:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 11:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 07:00 UTC
Scheduled - We will be performing scheduled maintenance in GYE (Guayaquil) datacenter on 2025-11-17 between 07:00 and 11:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 09:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 05:00 UTC
Scheduled - We will be performing scheduled maintenance in REC (Recife) datacenter on 2025-11-17 between 05:00 and 09:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 07:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 00:30 UTC
Scheduled - We will be performing scheduled maintenance in FRA (Frankfurt) datacenter on 2025-11-17 between 00:30 and 07:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 17, 06:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 17, 00:00 UTC
Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2025-11-17 between 00:00 and 06:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 15, 06:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 06:45 UTC
Scheduled - We will be performing scheduled maintenance in SIN (Singapore) datacenter between 2025-11-14 06:45 and 2025-11-15 06:30 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 14, 23:30 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 14, 20:58 UTC
Identified - We are continuing to work on a fix for this issue.
Nov 13, 17:43 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 13, 16:58 UTC
Update - We are continuing to work on a fix for this issue.
Nov 13, 10:00 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 17:32 UTC
Update - We are continuing to investigate this issue.
Nov 12, 16:35 UTC
Investigating - Cloudflare is aware of, and investigating, an issue which potentially impacts multiple customers: our Page Shield feature is currently experiencing issues with report ingestion. We are currently working on a resolution and will provide an update as soon as possible.
Nov 12, 11:42 UTC
Completed - The scheduled maintenance has been completed.
Nov 14, 23:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 19:30 UTC
Scheduled - We will be performing scheduled maintenance in MAA (Chennai) datacenter on 2025-11-14 between 19:30 and 23:30 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in MNL (Manila) datacenter on 2025-11-14 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in MFM (Macau) datacenter on 2025-11-14 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 14, 20:08 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 14, 19:57 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 14, 19:53 UTC
Investigating - Cloudflare is aware of, and investigating, an issue with Durable Objects which potentially impacts some customers in the Western North American region. Durable Objects are experiencing an elevated level of overload errors in the aforementioned region. We are currently investigating this issue.
Nov 14, 19:46 UTC
Completed - The scheduled maintenance has been completed.
Nov 14, 13:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 10:00 UTC
Scheduled - We will be performing scheduled maintenance in BOS (Boston) datacenter on 2025-11-14 between 10:00 and 13:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 14, 08:09 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 14, 07:19 UTC
Investigating - Cloudflare Load Balancing Analytics may falsely report that there are no load balancers for zones.
This issue does not affect the serving of load balancing traffic via the Cloudflare CDN or other security features at the Cloudflare Edge.
Investigation is ongoing. More updates to follow shortly.
Nov 14, 06:00 UTC
Completed - The scheduled maintenance has been completed.
Nov 14, 05:02 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 01:02 UTC
Scheduled - We will be performing scheduled maintenance in WAW (Warsaw) datacenter on 2025-11-14 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 05:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 01:00 UTC
Scheduled - We will be performing scheduled maintenance in ARN (Stockholm) datacenter on 2025-11-14 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 14, 01:01 UTC
Scheduled - We will be performing scheduled maintenance in MXP (Milan) datacenter on 2025-11-14 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 03:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 23:01 UTC
Scheduled - We will be performing scheduled maintenance in AMM (Amman) datacenter between 2025-11-13 23:00 and 2025-11-14 03:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 03:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 23:01 UTC
Scheduled - We will be performing scheduled maintenance in KWI (Kuwait City) datacenter between 2025-11-13 23:00 and 2025-11-14 03:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 03:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 23:02 UTC
Scheduled - We will be performing scheduled maintenance in ADB (Izmir) datacenter between 2025-11-13 23:00 and 2025-11-14 03:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 02:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 22:01 UTC
Scheduled - We will be performing scheduled maintenance in EVN (Yerevan) datacenter between 2025-11-13 22:00 and 2025-11-14 02:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 01:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 21:00 UTC
Scheduled - We will be performing scheduled maintenance in KTM (Kathmandu) datacenter between 2025-11-13 21:00 and 2025-11-14 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 01:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 21:00 UTC
Scheduled - We will be performing scheduled maintenance in KHI (Karachi) datacenter between 2025-11-13 21:00 and 2025-11-14 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 14, 01:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 21:01 UTC
Scheduled - We will be performing scheduled maintenance in KNU (Kanpur) datacenter between 2025-11-13 21:00 and 2025-11-14 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - We have determined that the impact from this issue has ended. This incident is now resolved.
Nov 13, 23:50 UTC
Investigating - Cloudflare is investigating issues with Cloudflare WARP and Cloudflare Zero Trust. Cloudflare WARP and Zero Trust users relying on older client versions may experience connectivity issues or a degraded Internet experience. More details will be shared when available.
We recommend that affected users upgrade WARP to the latest version.
Nov 13, 21:38 UTC
Completed - The scheduled maintenance has been completed.
Nov 13, 23:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 19:00 UTC
Scheduled - We will be performing scheduled maintenance in KJA (Krasnoyarsk) datacenter on 2025-11-13 between 19:00 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in SIN (Singapore) datacenter on 2025-11-13 between 18:00 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in NRT (Tokyo) datacenter on 2025-11-13 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - An effective fix for this issue was applied and this incident is now resolved.
Nov 13, 18:26 UTC
Investigating - Cloudflare is aware of, and investigating, an issue which impacts customers in Iraq.
Nov 13, 18:17 UTC
Completed - The scheduled maintenance has been completed.
Nov 13, 18:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 01:00 UTC
Scheduled - We will be performing scheduled maintenance in JNB (Johannesburg) datacenter on 2025-11-13 between 01:00 and 18:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - An effective fix was applied for this issue and this incident is now resolved.
Nov 13, 17:23 UTC
Identified - The cause of this issue has been identified and a fix is being implemented.
Nov 13, 17:16 UTC
Update - We are continuing to investigate this issue.
Nov 13, 17:08 UTC
Update - We are continuing to investigate this issue.
Nov 13, 16:57 UTC
Update - We are continuing to investigate this issue.
Nov 13, 16:42 UTC
Update - We are continuing to investigate this issue.
Nov 13, 16:35 UTC
Investigating - Cloudflare is investigating issues with parts of the Cloudflare Dashboard and multiple APIs.
These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge.
Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
Nov 13, 16:18 UTC
Completed - The scheduled maintenance has been completed.
Nov 13, 14:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 09:00 UTC
Update - We will be performing scheduled maintenance in DEN (Denver) datacenter on 2025-11-13 between 09:00 and 14:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Scheduled - We will be performing scheduled maintenance in DEN (Denver) datacenter on 2025-11-13 between 09:00 and 13:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 13, 12:58 UTC
Update - We are continuing to work on a fix for this issue.
Nov 12, 16:35 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 10:09 UTC
Investigating - Cloudflare is investigating issues with Network Performance in the Madrid, Spain area. Affected customers may see increased connectivity issues towards specific origins.
We are working to analyze and mitigate this problem. More updates to follow shortly.
Nov 12, 09:33 UTC
Completed - The scheduled maintenance has been completed.
Nov 13, 12:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 08:00 UTC
Scheduled - We will be performing scheduled maintenance in GDL (Guadalajara) datacenter on 2025-11-13 between 08:00 and 12:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 12:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 08:01 UTC
Scheduled - We will be performing scheduled maintenance in GUA (Guatemala City) datacenter on 2025-11-13 between 08:00 and 12:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 10:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 07:00 UTC
Scheduled - We will be performing scheduled maintenance in MIA (Miami) datacenter on 2025-11-13 between 07:00 and 10:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 05:02 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 01:02 UTC
Scheduled - We will be performing scheduled maintenance in MAD (Madrid) datacenter on 2025-11-13 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 05:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 01:02 UTC
Scheduled - We will be performing scheduled maintenance in FRA (Frankfurt) datacenter on 2025-11-13 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 05:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 01:01 UTC
Scheduled - We will be performing scheduled maintenance in CDG (Paris) datacenter on 2025-11-13 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 01:01 UTC
Scheduled - We will be performing scheduled maintenance in AMS (Amsterdam) datacenter on 2025-11-13 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 04:45 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 13, 00:45 UTC
Scheduled - We will be performing scheduled maintenance in WAW (Warsaw) datacenter on 2025-11-13 between 00:45 and 04:45 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 03:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 23:01 UTC
Scheduled - We will be performing scheduled maintenance in IST (Istanbul) datacenter between 2025-11-12 23:00 and 2025-11-13 03:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 01:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 21:00 UTC
Scheduled - We will be performing scheduled maintenance in KHI (Karachi) datacenter between 2025-11-12 21:00 and 2025-11-13 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 13, 01:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 21:01 UTC
Scheduled - We will be performing scheduled maintenance in IXC (Chandigarh) datacenter between 2025-11-12 21:00 and 2025-11-13 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 23:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 19:00 UTC
Scheduled - We will be performing scheduled maintenance in JOG (Yogyakarta) datacenter on 2025-11-12 between 19:00 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 23:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 18:01 UTC
Scheduled - We will be performing scheduled maintenance in SIN (Singapore) datacenter on 2025-11-12 between 18:00 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 12, 22:52 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 12, 22:41 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 20:50 UTC
Investigating - Cloudflare is investigating issues with Cloudflare WARP and Cloudflare Zero Trust in the South America region. Cloudflare WARP and Zero Trust users in that region may experience connectivity issues or a degraded Internet experience.
Nov 12, 16:51 UTC
Completed - The scheduled maintenance has been completed.
Nov 12, 22:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 18:00 UTC
Scheduled - We will be performing scheduled maintenance in KCH (Kuching) datacenter on 2025-11-12 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 11, 07:00 UTC
Scheduled - We will be performing scheduled maintenance in ATL (Atlanta) datacenter between 2025-11-11 07:00 and 2025-11-12 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 22:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 18:01 UTC
Scheduled - We will be performing scheduled maintenance in KHH (Kaohsiung City) datacenter on 2025-11-12 between 18:00 and 22:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 12, 18:05 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 12, 17:51 UTC
Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs. These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge. Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
Nov 12, 17:44 UTC
Resolved - This incident has been resolved.
Nov 12, 17:33 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 12, 17:22 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 16:33 UTC
Update - We are continuing to investigate this issue.
Nov 12, 16:31 UTC
Investigating - Cloudflare is aware of, and investigating, an issue which potentially impacts multiple users that use our Zero Trust DNS-over-TLS service in the following regions:
- Bangalore, India
- San Antonio, Texas
- Salt Lake City, Utah
- Kuala Lumpur, Malaysia
- Lisbon, Portugal
Further detail will be provided as more information becomes available.
Nov 12, 16:31 UTC
Completed - The scheduled maintenance has been completed.
Nov 12, 14:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 10:00 UTC
Scheduled - We will be performing scheduled maintenance in SMF (Sacramento) datacenter on 2025-11-12 between 10:00 and 14:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 14:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 09:00 UTC
Scheduled - We will be performing scheduled maintenance in LAX (Los Angeles) datacenter on 2025-11-12 between 09:00 and 14:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 13:31 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 07:00 UTC
Update - We will be performing scheduled maintenance in ORD (Chicago) datacenter on 2025-11-12 between 07:00 and 13:30 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Scheduled - We will be performing scheduled maintenance in ORD (Chicago) datacenter on 2025-11-12 between 07:00 and 21:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 13:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 07:01 UTC
Scheduled - We will be performing scheduled maintenance in IAD (Ashburn) datacenter on 2025-11-12 between 07:00 and 13:30 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 13:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 09:00 UTC
Scheduled - We will be performing scheduled maintenance in ATL (Atlanta) datacenter on 2025-11-12 between 09:00 and 13:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - Cloudflare resolved an issue with network connectivity in Chicago, US.
The timeline of impact was: 12:30 UTC to 12:49 UTC.
Nov 12, 12:30 UTC
Resolved - This incident has been resolved.
Nov 12, 12:10 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 12, 11:51 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 11:33 UTC
Investigating - Cloudflare is investigating issues with network performance in Singapore. Affected customers may see increased connectivity issues towards specific origins. We are working to analyze and mitigate this problem. More updates to follow shortly.
Nov 12, 11:26 UTC
Completed - The scheduled maintenance has been completed.
Nov 12, 12:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 08:00 UTC
Scheduled - We will be performing scheduled maintenance in GUA (Guatemala City) datacenter on 2025-11-12 between 08:00 and 12:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved - This incident has been resolved.
Nov 12, 11:19 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 12, 11:06 UTC
Identified - The issue has been identified and a fix is being implemented.
Nov 12, 11:03 UTC
Investigating - Cloudflare is investigating issues with customers being unable to see or change their Cloudflare Images Plan. Impact is to the Dashboard only and does not affect any existing image plans, security or CDN functionality.
We are working to understand the full impact and mitigate this problem. More updates to follow shortly.
Nov 12, 10:02 UTC
Completed - The scheduled maintenance has been completed.
Nov 12, 06:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 02:00 UTC
Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2025-11-12 between 02:00 and 06:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 05:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 01:02 UTC
Scheduled - We will be performing scheduled maintenance in AMS (Amsterdam) datacenter on 2025-11-12 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 05:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 01:00 UTC
Scheduled - We will be performing scheduled maintenance in CDG (Paris) datacenter on 2025-11-12 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 01:01 UTC
Scheduled - We will be performing scheduled maintenance in MXP (Milan) datacenter on 2025-11-12 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 01:01 UTC
Scheduled - We will be performing scheduled maintenance in FRA (Frankfurt) datacenter on 2025-11-12 between 01:00 and 05:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 04:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 00:01 UTC
Scheduled - We will be performing scheduled maintenance in KIV (Chișinău) datacenter on 2025-11-12 between 00:00 and 04:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed - The scheduled maintenance has been completed.
Nov 12, 04:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 12, 00:00 UTC
Scheduled - We will be performing scheduled maintenance in TLV (Tel Aviv) datacenter on 2025-11-12 between 00:00 and 04:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed
-
The scheduled maintenance has been completed.
Nov
12
,
01:00
UTC
In progress
-
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov
11
,
21:00
UTC
Scheduled
-
We will be performing scheduled maintenance in HYD (Hyderabad) datacenter between 2025-11-11 21:00 and 2025-11-12 01:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed
-
The scheduled maintenance has been completed.
Nov
12
,
00:00
UTC
In progress
-
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov
11
,
18:45
UTC
Scheduled
-
We will be performing scheduled maintenance in KIX (Osaka) datacenter between 2025-11-11 18:45 and 2025-11-12 00:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed
-
The scheduled maintenance has been completed.
Nov
12
,
00:00
UTC
In progress
-
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov
11
,
20:00
UTC
Scheduled
-
We will be performing scheduled maintenance in DAC (Dhaka) datacenter between 2025-11-11 20:00 and 2025-11-12 00:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Completed
-
The scheduled maintenance has been completed.
Nov
11
,
23:00
UTC
In progress
-
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov
11
,
17:01
UTC
Scheduled
-
We will be performing scheduled maintenance in SAN (San Diego) datacenter on 2025-11-11 between 17:00 and 23:00 UTC.
Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Resolved
-
This incident has been resolved.
Nov
11
,
22:37
UTC
Monitoring
-
A fix has been implemented and we are monitoring the results.
Nov
11
,
19:44
UTC
Investigating
-
Cloudflare is investigating issues with Cloudflare Dashboard and related APIs. These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge. Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
Nov
11
,
19:07
UTC
Nov 4 – Nov 10, 2025: No incidents reported.
Sahil Dhiman: Anchors in Life
PlanetDebian
blog.sahilister.in
2025-11-18 11:33:19
Just like a ship needs an anchor to stabilize and hold it to port, humans too, I feel, have and require anchors to hold them in life. It could be an emotional anchor, a physical anchor, an anchor that stimulates your curiosity, a family member, a friend or a partner or a spiritual being.
An anchor holds you and helps you stabilize in stormy weather. An anchor can keep you going or stop you from going. An anchor orients you, helps you formulate your values and beliefs.
An anchor could be someone or something or oneself (thanks
Saswata
for the thought). Writing here is one of my anchors; what’s your anchor?
Wörgl was the first town in Austria that effectively managed to eliminate the extreme unemployment caused by the Great Depression. Its local currency experiment was such a success that it gained worldwide attention. That effort became known as the “Miracle of Wörgl.” For the full details,
go here
. Here is the summary of that story.
On July 5th 1932, in the middle of the Great Depression, the Austrian town of Wörgl made economic history by introducing a remarkable complementary currency. Wörgl was in trouble, and was prepared to try anything. Of its population of 4,500, a total of 1,500 people were without a job, and 200 families were penniless. The mayor, Michael Unterguggenberger, had a long list of projects he wanted to accomplish, but there was hardly any money with which to carry them out. These included repaving the roads, streetlights, extending water distribution across the whole town, and planting trees along the streets.
Rather than spending the 40,000 Austrian schillings in the town’s coffers to start these projects off, he deposited them in a local savings bank as a guarantee to back the issue of a type of complementary currency known as ‘stamp scrip’. The Mayor then proceeded to hire people to do infrastructure projects for the town, and the community quickly went from an unemployment rate of over 30% to near zero, as that money began to circulate very rapidly.
Of all the businesses in town, only the railway station and the post office refused to accept the local money. When people ran out of spending ideas, they would pay their taxes early using scrip, resulting in a huge increase in town revenues. Over the 13-month period the project ran, the council not only carried out all the intended works projects, but also built new houses, a reservoir, a ski jump, and a bridge. The people also used scrip to replant forests, in anticipation of the future cash flow they would receive from the trees.
Six neighbouring villages copied the system successfully. The French Prime Minister, Édouard Daladier, made a special visit to see the ‘miracle of Wörgl’. In January 1933, the project was replicated in the neighbouring town of Kitzbühel, and in June 1933, Unterguggenberger addressed a meeting with representatives from 170 different towns and villages. Two hundred Austrian townships were interested in adopting the idea.
At this point, the central bank panicked, and decided to assert its monopoly rights by banning complementary currencies. The people unsuccessfully sued the bank, and later lost in the Austrian Supreme Court. It then became a criminal offence to issue ‘emergency currency’. The town went back to 30% unemployment. In 1934, social unrest exploded across Austria. In 1938, when Hitler annexed Austria, he was welcomed by many people as their economic and political saviour.
Nonetheless, the success of Wörgl attracted the attention of one of the leading economists in the U.S. at the time,
Professor Irving Fisher
, who informed the FDR administration he thought that idea could be used to end the Great Depression. That story is next.
Irving Fisher, the Great Depression and FDR
When Professor Irving Fisher learned of the success of Wörgl and other European experiments, he determined that “
The correct application of stamp scrip would solve the Depression crisis in the U.S. in three weeks.
” He presented his findings to Dean Acheson, then under-secretary of the Treasury under FDR. Acheson sought input from Harvard economics professor Russell Sprague, who told him that this approach could indeed succeed in bringing America out of the Depression, but cautioned him to check with the President.
He did so. Unfortunately, fearing decentralization, President Roosevelt denounced complementary currencies soon afterwards and they were prohibited. He did so in probably his most famous address, the one including the phrase “the only thing we have to fear is fear itself.”
In that speech he also announced that by “executive decree” he would henceforth prohibit ‘emergency currencies’. This was the code name for all the complementary currencies already in existence, and all those in preparation around the country. That prohibition lasted for decades.
Imagine if Fisher’s recommendation had held the day. The Great Depression would have ended well before World War II and a great deal of suffering would have been avoided. Fortunately, complementary currencies are now legal in just about every country, as evidenced by the popularity of cryptocurrencies, one form of a complementary currency.
The Central Middle Ages and Cathedrals
We now come to one of the most prolonged and significant times where the use of complementary currencies had a profound and widespread positive impact on the communities that adopted them. Here we find a sustained period of 250 years (1040-1290) of financial success based on local currencies spread throughout Western Europe.
Chapter 6
from the book,
New Money for a New World
by Bernard Lietaer (co-architect of the Euro) and Stephen Belgin details that period of widespread abundance throughout Western Europe that can be directly attributed to the extensive use of local currencies.
The authors note that, “There was work for all, with favorable working conditions and abundant time for family, community, and personal pursuits. This epoch was also characterized by significant advancements in science, technology, education, literature, music, arts, craftsmanship, and more.”
It all commenced when communities like Paris decided they wanted to build a local cathedral like Notre Dame, each a massive infrastructure project that lasted on average between 50 and 100 years. Those communities printed their own money and hired architects, stone masons, carpenters, lead workers, glass workers and more to build those magnificent edifices and more.
What most people don’t know is that those citizens were directly responsible for building more than
1,000 cathedrals
in Western Europe, alongside 350,000 churches and several thousand large abbeys. That means over 1,000 European communities adopted their own complementary currency program, yielding a building phase rarely matched throughout history.
Month after month those communities paid those workers “new money” they printed up which in turn was injected into the local economy. That money was spent on food, clothing, shelter and all the necessities of life, which stimulated all manner of new local merchants and the jobs they produced.
“This medieval building phenomenon is more remarkable still,” say the authors, “given that there was no central authority, church or otherwise, in charge of initiating or funding the construction of these cathedrals. Contrary to popular belief today, these structures were neither built by nor belonged to the church or nobility.
Local nobility and royalty customarily did make contributions, but these monuments were typically owned and financed by the citizens of the municipalities where they were built.”
Those efforts initiated over 800 years ago are still providing financial returns today. Tourists flock to those cathedrals bringing with them money that they leave in the communities they visit. Almost nothing in history has provided a greater return on investment.
More examples
See this document,
Complementary Currencies in Use
for more examples of complementary currencies and the positive impact they had on the communities that adopted them.
Stock market sell-off continues, as Google boss warns ‘no company immune’ if AI bubble bursts – business live
Guardian
www.theguardian.com
2025-11-18 10:59:52
Rolling coverage of the latest economic and financial news, as shares fall across Asia and bitcoin hits a seven-month low Europe’s major share indices are down by more than 1%, as the sell-off spreads across global markets. The UK’s FTSE 100 index fell by 0.9%. Germany’s Dax is down 1.3%, France’s C...
Good morning, and welcome to our rolling coverage of business, the financial markets and the world economy.
Global markets are racking up their fourth day of losses in a row, as concerns over technology valuations are worrying investors.
Asia-Pacific stocks have dipped to a one-month low today, amid signs that the enthusiasm that has driven stocks higher in recent months is fading, with shares, risky currencies and crypto assets all sliding.
MSCI’s
broadest index of Asia-Pacific shares outside Japan has lost 1.8%, slipping to its lowest level since mid-October. South Korea’s
KOSPI
has lost 3.5%, and Hong Kong’s
Hang
Seng
is down 1.9%.
Japan’s
Nikkei 225
is also having a very rough day, down over 3%, on concerns over an escalating dispute with China over Taiwan.
Last night, the US stock market fell, with the
S&P 500
share index closing at its lowest level in a month.
European stock markets are heading for losses when trading begins at 8am GMT too.
Various reasons are being cited for the mood change. Investors are fretting that US interest rates may not be cut as quickly as hoped, following hawkish commentary from some policymakers.
Jitters are building ahead of AI behemoth
Nvidia’s
results on Wednesday night.
The huge sums of money being committed by AI companies to fund their infrastructure are also raising eyebrows, especially as the spending is increasingly being funded by debt.
Last night,
Amazon
raised $15bn in its first US dollar bond offering in three years, adding to a spree of jumbo debt sales by technology firms as they race to fund artificial-intelligence infrastructure.
Michael Brown,
senior research strategist at brokerage
Pepperstone
, explains:
Those Nvidia earnings, incidentally, once again stand as a major macro risk, as enthusiasm around the whole AI frenzy seems to ebb, with the market having shifted from an ‘all capex is good capex’ mood, to one where whether firms are actually able to monetise that expenditure has become the million (or more!) dollar question.
On that note, Amazon kicking-off a six-part bond sale didn’t help matters much yesterday, following hot on the heels of similar sales from Meta and Alphabet in recent weeks, and further fuelling concern that AI expansion is now being fuelled by debt, and not by free cash flow, in turn exacerbating jitters over the sustainability of all the spending that we currently see.
The agenda
10am GMT: Treasury Committee hearing on risks and rewards of embracing crypto
1pm GMT: Huw Pill, Bank of England’s chief economist, to give speech at Skinners Hall, London
3pm GMT: US factory orders and durable goods data for August (delayed by the US government shutdown)
Ireland's finance minister steps down to join World Bank
Lisa O’Carroll
Ireland’s long-standing finance minister Paschal Donohoe is to step down to join the World Bank as its managing director.
Donohoe
, who has been in finance or public expenditure departments for the past 10 years, will also step down from his job as head of the Eurogroup, the alliance of member states who use the euro currency.
The Irish cabinet was given the surprise news on Tuesday and was told that the appointment of
Donohoe
to the board of the World Bank had been approved on Monday night.
Donohoe
has been a stalwart of Irish politics through the Brexit years, Covid and beyond, and his steady hand made him seem a potential frontrunner for taoiseach.
But he lost out on a key opportunity in March 2024, when the leadership of his party, Fine Gael, came up; Simon Harris quickly amassed enough support within the party to take over following the resignation of the former Taoiseach
Leo
Varadkar
.
He has been tipped for international jobs ever since, including head of the International Monetary Fund, but he has always either professed loyalty to his position in the Irish cabinet or missed out to other candidates.
His resignation could trigger a cabinet reshuffle, but it will also prompt what is likely to be a hard-fought by-election in Dublin Central, a constituency shared in the multi-seat system by Sinn Féin leader
Mary
Lou
McDonald
, and in which
Gerry
Hutch –
who had links with the Hutch criminal gang – also ran in last November’s general election.
Donohoe’s departure is a significant blow to the Fine Gael and Fianna Fáil partnership, and to the EU, where he was one of the longest-serving ministers attending EU summits.
He played a significant role in protecting Ireland’s economic strategy on foreign investment and corporate tax when it came under serious international attack from the likes of France, and in a court case, which Ireland ultimately won, over Apple’s corporate tax.
Julia Pyke
, joint managing director of the nuclear power project Sizewell C, said:
Cornwall Insight’s analysis shows exactly why Britain needs more nuclear, not less.
A stable, low-carbon baseload from projects such as Sizewell C avoids the expensive system charges that households are now paying for and protects the UK from volatile markets from overseas.
She said the RAB (regulated asset base) contribution, a new charge on UK electricity bills to help fund new nuclear power stations, is little more than £10 a year,
but it unlocks at least 60 years of clean, reliable, homegrown power that can stabilise bills for generations and creates tens of thousands of British jobs and opportunities which completely transforms communities.
Cornwall Insight: Energy price cap to dip by 1% to £1,733 annual bill from January
The forecaster Cornwall Insight has issued new forecasts for the January energy price cap.
The energy regulator Ofgem’s price cap is expected to dip by 1%, taking it down by £22 to an average bill of £1,733 a year for a typical household from January.
But analysts at the specialist consultancy said they expect the price cap to tick higher again from April.
Jess Ralston
, energy analyst at the Energy and Climate Intelligence Unit, said:
As temperatures drop, many will be worried about how they are going to pay their energy bills. Rumoured cuts to home insulation schemes at the budget next week could leave the most vulnerable households facing higher bills for years to come and exposed to the kind of price spikes we’ve seen over the past few years.
Low levels of investment into infrastructure like schools has been mirrored in our electricity system and that is now catching up with us. But an upgraded power grid will enable the UK to use more of its own renewable power, making it less reliant on foreign gas imports and less at the mercy of the kinds of foreign price swings that saw household bills soar.
Crest Nicholson warns on profits amid 'subdued' summer sales and budget uncertainty
The housebuilder
Crest Nicholson
has put out a profit warning after “subdued” sales over the summer, and also blamed uncertainty around the government’s tax policy ahead of the 26 November budget.
The shares tumbled 13% on the news.
The company is closing one divisional office and will cut 50 jobs, including staff at the site and some “selective other roles” across overhead functions.
Crest said its adjusted profit before tax for the year to 31 October would be at the low end, or slightly below, its range of £28m to £38m,
reflecting a housing market that has remained subdued through the summer, and the continued uncertainty surrounding government tax policy ahead of the forthcoming budget.
It cautioned that near-term market conditions were likely to remain challenging.
The company expects to complete 1,691 homes this year, at the lower end of its range of between 1,700 and 1,900 homes, including 35% affordable units.
Its sales rate was 0.51, compared with 0.48 in 2024, although it dropped to 0.45 in the last 13 weeks of its financial year.
It has sold five land parcels from larger sites as it trims its landbank, and is working on a new house type range.
Rival builder Taylor Wimpey has also reported a drop in sales in the key autumn period.
Eight firms under investigation in crackdown on additional online fees
Britain’s competition watchdog has begun investigations into eight companies about their online pricing practices, expressing concern over additional fees and sales tactics such as
“drip pricing”
and “pressure selling”.
The Competition and Markets Authority (CMA)
said
it was looking into the ticket sellers
StubHub
and
Viagogo
;
AA Driving School
and
BSM Driving School
; the US gym chain
Gold’s Gym
; and the retailers
Wayfair
,
Appliances Direct
and
Marks Electrical
.
The investigations are the first launched by the CMA using
its new consumer protection powers
. The watchdog said it had concerns over practices including drip pricing – when consumers are shown an initial price and then face additional fees in the checkout process – and the use of misleading countdown timers, which are banned under the new regime.
The investigations follow a cross-economy review by the CMA since April of more than 400 businesses in 19 sectors to assess their compliance with price transparency rules.
The watchdog has also written advisory letters to 100 businesses across 14 sectors outlining concerns about their use of additional fees and sales tactics. It is publishing new guidance for businesses to help them comply with the law.
The regulator’s new powers enable it to decide whether consumer laws have been broken, rather than having to go through the courts. If the CMA finds there has been an infringement of the law, it can order businesses to pay compensation to affected customers, and can fine companies up to 10% of global turnover.
European shares slide as volatility surges
Europe’s major share indices are down by more than 1%, as the sell-off spreads across global markets.
The UK’s
FTSE
100 index fell by 0.9%. Germany’s Dax is down 1.3%, France’s CAC and Italy’s FTSE Mib both lost 1.5%, and Spain’s Ibex dropped 1.6%.
A gauge of eurozone volatility – the equivalent of Wall Street’s “fear gauge” VIX – surged to its highest level since the US regional bank sell-off in mid-October.
Deutsche Bank analysts led by
Jim Reid
said:
It’s been a challenging start to the week as markets brace for two key events: Nvidia’s earnings tomorrow night and the US payrolls report on Thursday.
For now, equities remain under pressure, with the S&P 500 (-0.92%) posting a third consecutive loss [on Monday] for the first time since September and marking its worst three-day run since April (-2.61%) with futures down another half a percent as I type this morning. Concerns swirling around the AI trade pushed Nvidia (-1.88%) to another decline.
In addition to the AI concerns, the risk-off tone was reinforced by the latest signals from the Fed, as investors continued to price out the likelihood of a December rate cut.
Futures now imply just a 41% probability, down from 43% on Friday – with the highest rate priced for the December contract since late August.
Klarna boss reveals he's nervous about AI spending splurge
The boss of buy-now-pay-later group Klarna has also warned about the tech industry’s multibillion-dollar dash to build data centres to power AI models.
Sebastian
Siemiatkowski
told
the Financial Times
that the huge sums being poured into computing infrastructure made him “nervous”.
He said:
“I think [OpenAI] can be very successful as a company but at the same time I’m very nervous about the size of these investments in these data centres. That’s the particular thing that I am concerned about.”
FTSE 100 falls 1%
Britain’s stock market has opened in the red, as the sell-off in global markets reaches Europe.
The blue-chip
FTSE
100 share index has dropped by 101 points, or just over 1%, to 9,675 points, further away from the record high of 9,930 points set last week.
Mining stocks are among the big fallers, with
Fresnillo
down 6.4% and
Endeavour
Mining
losing 4.7%.
The
FTSE
250
index of medium-sized companies is also sliding, down 1.15%.
Britain to outlaw ticket touts, minister says
Britain is set to ban the resale of tickets to live events like music concerts and shows at inflated prices, UK housing minister Steve Reed has declared.
Reed
told BBC News that the practice of “ticket touting” - people buying tickets to sell them on at multiples of their face value - was hugely damaging for individuals who had to pay “through the nose” to attend.
Reed
insisted:
“We are committed to ending the scandal of ticket touts.”
Reed
was speaking a day after news broke that reselling a ticket at anything more than the price at which it was originally bought will be banned.
As my colleague
Rob
Davies
reported:
Reselling tickets for profit is to be outlawed under plans due to be announced this week, the Guardian has learned, as the government goes ahead with a
long-awaited crackdown on touts
and resale platforms such as Viagogo and StubHub.
Ministers had been considering allowing touts – and ordinary consumers – to sell on a ticket for up to 30% above the original face value, as part of a consultation process that ended earlier this year.
2025 was supposed to be a big year for Bitcoin, with a pro-crypto president in the White House.
But it hasn’t quite worked out that way, as
Victoria Scholar,
head of investment at
interactive investor
,
explains:
“
Bitcoin is extending losses, trading around $90k, shedding around 2% fuelled by concerns about overvaluations in the tech sector and broader risk-off sentiment that is causing a ripple effect across global markets. Bitcoin has turned negative for 2025, after peaking on 6th October at an all-time high above $126k and has subsequently shed about 28.5%. Earlier it briefly broke below $90k for the first time in seven months.
This year was meant to be the year of the bitcoin bulls supported by a highly crypto friendly administration in the White House and Trump’s ‘less is more’ approach towards regulation.
However, fears of an AI bubble and concerns about the market’s heavy dependence on a handful of tech giants have caused investors to dial back their exposure to speculative assets such as bitcoin. There’s a general sense of nervousness that has captured the market mood lately and bitcoin appears to be in the firing line. Plus with hints that the Fed might not cut rates next month, riskier non-yielding assets like bitcoin look less attractive in a higher interest rate environment.”
Monday’s selloff in US stocks has set off some alarm bells for technical traders.
Both the
S&P 500
share index and the tech-focused
Nasdaq
Composite
closed below their 50-day moving averages, according to Dow Jones Market Data.
Marketwatch
says this is a "worrisome" development, explaining:
The S&P 500 had consistently closed above its 50-day moving average from May 1 through last Friday — marking 138 consecutive trading days.
But on Monday, the index snapped its longest stretch above this average since the 149-trading-day period that ended on Feb. 26, 2007.
Crypto market has lost $1.2tn as traders shun speculative assets
More than $1tn has been wiped from the cryptocurrency market in the past six weeks.
According to data from CoinGecko, the global cryptocurrency market cap today is $3.15trn, down from $4.379trn on 7 October.
The Financial Times blames
concerns about lofty tech valuations and the path of US interest rates for this sell-off in speculative assets, adding:
The total market value of more than 18,000 coins tracked by data provider CoinGecko has tumbled 25 per cent since a market peak on October 6, wiping about $1.2tn from their combined capitalisation.
Bitcoin hits lowest since April
Bitcoin has fallen to its lowest level since April, as the cryptocurrency sector is hit by a sharp selloff.
The world’s largest crypto coin dropped as low as $89,286 this morning, a seven-month low, meaning it has lost all its gains in 2025.
Bitcoin has now fallen by almost a third since hitting a record high at the start of last month.
Such volatility isn’t that unusual, though, as
Tony Sycamore
, analyst at
IG
, explains:
Bitcoin, the canary in the risk coalmine, slips below $90k for the first time in seven months as its decline starts to display more impulsive rather than corrective characteristics.
That said, it is notable that its ~29% pullback from the record $126,272 high of early October is now on par with the ~31.5% pullback witnessed at the $74,434 Liberation Day low, coming from the January $109,356 high.
Illustration: IG
Google boss warns 'no company is going to be immune' if AI bubble bursts
The head of Google’s parent company has warned that every company would be affected if the AI boom were to unravel.
Sundar
Pichai
, the CEO of
Alphabet
, has told the BBC that the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, but cautioned that there was some “irrationality” in the current AI boom.
Pichai
argued that the excitement around AI is very rational, given its potential.
But he also cautioned that there are moments when the tech industry “overshoots”, citing the excess investment we saw in the early days of the web.
Asked whether Google would be immune to the impact of the AI bubble bursting,
Pichai
said the tech giant could weather that potential storm, but added:
“I think no company is going to be immune, including us.”
Kubernetes is a complex piece of technology that abstracts away many system administration tasks, but does also solve and automate some processes useful at a smaller scale, like blue-green deployments. Having administered managed Kubernetes for a while now, I wanted to find out what a self-managed, small-but-multi-node Kubernetes install looks like.
Most of the non-Kubernetes machines I manage are individual machines, or single database + multiple workers. For this step I'm not really interested in much more than that, like making everything redundant, self-healing, etc. I just want to introduce Kubernetes in something that matches my existing setups.
Getting things fully functional was a long process of trial-and-error, during which I learned about even more things I didn't want to touch:
Public-Key Infrastructure (PKI). Kubernetes definitely leans into this and prefers you manage keys and certificates for all of its components, but I feel like this is a whole separate article in itself.
The NixOS Kubernetes modules. These have their own opinions, and there's nothing wrong with their implementation, but using them goes against some of the learning and experimenting I wanted to do here.
K3s, K0s or any other Kubernetes 'distribution'. These are an extra layer to learn, and an extra layer to trust. They sometimes offer valuable extra functionality, for example I wish the SQLite backend was in upstream Kubernetes. But again, I avoided these in the interest of learning.
NixOS in general is great, and I'm a big fan, but something Kubernetes can potentially do well (in terms of configuration) is provide a clear boundary between the system and application. In NixOS, configuring an app is often interwoven with system config, and there's a lack of options to prevent that.
Still, I'll be using the Kubernetes package (not module!) from Nixpkgs, as well as building everything on top of NixOS and its excellent systemd parts.
At the time of writing, NixOS 25.11 is mere weeks away, so that is my target.
There's a bunch of stuff I enable on all of my NixOS machines that is relevant to the rest of this article.
I prefer nftables over iptables, because it's the future. In practice, the
iptables
command is already a compatibility layer in many Linux distributions, but these options additionally enable the nftables-based firewall in NixOS:
{
  networking.nftables.enable = true;

  # We want to filter forwarded traffic.
  # Also needed for `networking.firewall.extraForwardRules` to do anything.
  networking.firewall.filterForward = true;
}
I enable systemd-networkd, because it's the future. I wouldn't even know how to set up all the networking parts in other setups; systemd-networkd is just really nice when you have a bunch of moving parts in your networking.
{
  networking.useNetworkd = true;
}
Kubernetes version
The current version of Kubernetes at the time of writing is 1.34. It's useful to check the package version, because Kubernetes requires step-by-step minor version upgrades:
{ lib, pkgs, ... }:
{
  # Ensure we carefully upgrade Kubernetes versions.
  # We need to step 1 minor version at a time.
  assertions = [
    {
      assertion = lib.hasPrefix "1.34." pkgs.kubernetes.version;
      message = "Unexpected Kubernetes package version: ${pkgs.kubernetes.version}";
    }
  ];
}
Networking
If you've ever used Docker or Podman, your typical networking setup looks like this:
The machine is logically split into host and container network namespaces. Each container is assigned half of a veth pair, the other half is part of a bridge interface on the host. The host assigns a subnet to the bridge with an address for itself, like
172.16.0.1/24
, and an address for each container. The host is then the gateway for containers, performing layer 3 routing and NAT on outgoing traffic to the internet.
Kubernetes wants you to connect these container subnets across multiple machines. In this article I assume there is a private network connecting all nodes together:
In addition to the 'outward' link from the host to the internet, the host now has an additional outward link to a network switch that brings hosts together in a private network. We intend to route traffic between container subnets across this private network somehow. Notably, NAT is still only performed on traffic to the internet, and
not
traffic between containers.
Even if you have a private network like this, you may not be able to simply route traffic from container subnets across it. Cloud providers often restrict the addresses a machine can use on its network interface to what is preconfigured in the cloud resources.
There are a lot of ways to actually connect the subnets together, but I chose
Wireguard
because I know it, and because I wanted to test drive the overhead of encrypted links with real applications. It's potentially an additional layer of security if you're running this on the network of a cloud provider that otherwise doesn't encrypt customer traffic on internal networks. (But some may call you paranoid.)
Some alternatives here:
Use some other tunneling protocol like GENEVE or VXLAN. Maybe GRE works too?
Instead use TLS at the application layer for securing connections, e.g. HTTPS between proxy and backend, TLS to your database, etc.
If you control the physical network (or even just layer 2), you can actually connect containers directly to the network using
macvlan
and even have your existing DHCP server assign addresses.
Something like
flannel
can help you make the whole setup dynamic, if your machines tend to come and go.
Container subnets
First, let's determine our addressing scheme for all of our containers across machines.
{ config, lib, ... }:
{
  # I like to create NixOS options for variables that are going to be used
  # across multiple files, so I can reach them (without imports) via the
  # `config` parameter of a NixOS module.
  options.kube = {
    # We're going to assign each node a one-based index, and derive the
    # container subnet from that.
    nodeIndex = lib.mkOption { type = lib.types.ints.positive; };

    # Having a zero-based index on hand will become useful later.
    nodeIndex0 = lib.mkOption {
      type = lib.types.ints.unsigned;
      default = config.kube.nodeIndex - 1;
    };

    # Functions that take a node index and build a subnet in CIDR-notation.
    mkNodeCidr6 = lib.mkOption {
      type = with lib.types; functionTo str;
      default = index: "fd88:${toString index}::/32";
    };
    mkNodeCidr4 = lib.mkOption {
      type = with lib.types; functionTo str;
      default = index: "10.88.${toString index}.0/24";
    };

    # On each node, the host will take the first IP in the subnet.
    # Containers will use this IP as the gateway.
    mkHostIp6 = lib.mkOption {
      type = with lib.types; functionTo str;
      default = index: "fd88:${toString index}::1";
    };
    mkHostIp4 = lib.mkOption {
      type = with lib.types; functionTo str;
      default = index: "10.88.${toString index}.1";
    };

    # For each of the above functions, define the values for the local node.
    nodeCidr6 = lib.mkOption {
      type = lib.types.str;
      default = config.kube.mkNodeCidr6 config.kube.nodeIndex;
    };
    nodeCidr4 = lib.mkOption {
      type = lib.types.str;
      default = config.kube.mkNodeCidr4 config.kube.nodeIndex;
    };
    hostIp6 = lib.mkOption {
      type = lib.types.str;
      default = config.kube.mkHostIp6 config.kube.nodeIndex;
    };
    hostIp4 = lib.mkOption {
      type = lib.types.str;
      default = config.kube.mkHostIp4 config.kube.nodeIndex;
    };

    # The zero subnet is for Kubernetes Cluster IPs used in Service resources.
    # NOTE: Would love to use IPv6 here, but that is trouble for many apps.
    servicesCidr = lib.mkOption {
      type = lib.types.str;
      default = "10.88.0.0/24";
    };
  };
}
Now each machine needs to assign the node index in per-machine configuration:
{
  kube.nodeIndex = 1;
}
Now we have everything to configure the bridge interface we'll connect containers to. Unlike Docker / Podman, we'll be managing this manually:
{ config, pkgs, ... }:
{
  # We need a separate netdev unit to create the bridge interface.
  systemd.network.netdevs."10-brkube" = {
    netdevConfig = {
      Kind = "bridge";
      Name = "brkube";
    };
  };

  # Now configure the interface with a network unit.
  systemd.network.networks."10-brkube" = {
    matchConfig = {
      Name = "brkube";
    };
    networkConfig = {
      # We want this interface to always be configured and have addresses.
      # Bridges specifically report no-carrier while there are no members.
      ConfigureWithoutCarrier = true;
      # Disable all link-local addressing. (`169.254.0.0/16` / `fe80::/64`)
      LinkLocalAddressing = false;
      # Don't allow containers to maliciously become IPv6 routers.
      IPv6AcceptRA = false;
    };
    # Configure the host addresses.
    # This also configures the direct routes on the host.
    #
    # NOTE: Disable DuplicateAddressDetection because otherwise the address
    # can remain in a 'tentative' state, and Linux won't allow us to use it
    # as a source address in other routes. This is important for later.
    addresses = [
      {
        Address = "${config.kube.hostIp6}/32";
        DuplicateAddressDetection = "none";
      }
      {
        Address = "${config.kube.hostIp4}/24";
        DuplicateAddressDetection = "none";
      }
    ];
  };

  # To inspect the bridge interface at runtime using the `brctl` tool.
  environment.systemPackages = [ pkgs.bridge-utils ];
}
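Once this is deployed, a quick sanity check of the bridge might look like this (a minimal sketch using the tools installed above):

brctl show brkube
ip addr show brkube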
Next we can set up the Wireguard links. For this we need to generate keypairs, and it is at this point that we introduce secrets into the NixOS config. I like to use
agenix
for this, but there are other choices here, like
sops-nix
. With agenix, machines decrypt files using their OpenSSH host key.
For simplicity, I’m going to put all keys in a
keys/
directory, and add a master key so we can always edit all files locally:
mkdir keys
cd keys/
# Handle this private key file with care!
# The public key is printed on success.
age-keygen -o master_key
Now create a
keys/secrets.nix
configuration file for agenix:
let
  # Public key printed by age-keygen above.
  # The master key should be included in every set of publicKeys.
  master = "age...";

  # OpenSSH host keys of our nodes.
  node1 = "ssh-ed25519 AAA...";
  node2 = "ssh-ed25519 AAA...";
in
{
  # Set recipients of Wireguard private keys to their respective nodes.
  "wgkube1.key.age".publicKeys = [ master node1 ];
  "wgkube2.key.age".publicKeys = [ master node2 ];
}
Then generate the Wireguard keys and immediately encrypt them:
wg genkey | agenix -i master_key -e wgkube1.key.age
wg genkey | agenix -i master_key -e wgkube2.key.age
Now we can decrypt these files in NixOS configuration:
{ config, ... }:
{
  # This will make the private key available in `/run/agenix/` as `wgkube.key`.
  age.secrets."wgkube.key" = {
    file = ./keys + "/wgkube${toString config.kube.nodeIndex}.key.age";
    # Make sure systemd-networkd can read this file.
    group = "systemd-network";
    mode = "0440";
  };
}
Next I like to use a
peers.json
as input to generate the Wireguard configuration. That JSON looks like this:
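A minimal sketch of that file, assuming only the PublicKey and PeerIP fields used below (the values here are placeholders):

[
  { "PublicKey": "<node1 wireguard public key>", "PeerIP": "192.168.0.11" },
  { "PublicKey": "<node2 wireguard public key>", "PeerIP": "192.168.0.12" }
]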
This array is ordered by node index. You can get the public keys as follows:
agenix -i master_key -d wgkube1.key.age | wg pubkey
agenix -i master_key -d wgkube2.key.age | wg pubkey
The
PeerIP
fields are local network IPs in this example. These could be IPs on the private network provided by your cloud provider, but because this is Wireguard, you can also safely cross the internet. (Though the internet is not necessarily always fast, reliable and within your control.)
I use a JSON file like this because I actually generate it using
tofu
, but to keep things focused, the tofu configuration will not be in scope of this article. There is a neat little
Wireguard provider
for it, though.
Now we can configure the links in NixOS:
{
  config,
  lib,
  pkgs,
  ...
}:
let
  # Grab helpers and variables.
  # NOTE: Some of these are defined below.
  inherit (config.kube)
    mkNodeCidr6
    mkNodeCidr4
    nodeIndex0
    wgPort
    peers
    ;
in
{
  options.kube = {
    # Define the Wireguard port.
    # This variable is useful later in firewall config.
    wgPort = lib.mkOption {
      type = lib.types.port;
      default = 51820;
    };

    # Parse the `peers.json` file.
    peers = lib.mkOption {
      type = with lib.types; listOf attrs;
      default = builtins.fromJSON (builtins.readFile ./keys/peers.json);
    };
  };

  config = {
    # We need a separate netdev unit to create the Wireguard interface.
    systemd.network.netdevs."11-wgkube" = {
      netdevConfig = {
        Kind = "wireguard";
        Name = "wgkube";
      };
      wireguardConfig = {
        PrivateKeyFile = config.age.secrets."wgkube.key".path;
        ListenPort = wgPort;
      };
      # Generate Wireguard peers from the JSON input.
      wireguardPeers = lib.pipe peers [
        (lib.imap1 (
          index: entry: {
            PublicKey = entry.PublicKey;
            Endpoint = "${entry.PeerIP}:${toString wgPort}";
            # This instructs Wireguard what ranges belong to what peers. It'll
            # reject incoming traffic from an incorrect subnet, but also direct
            # outgoing traffic to the correct peer based on this. Note that
            # this doesn't create routes, however; we do that below.
            AllowedIPs = [
              (mkNodeCidr6 index)
              (mkNodeCidr4 index)
            ];
          }
        ))
        # Filter out ourselves based on index.
        # There's unfortunately no ifilter1 for one-based indexing.
        (lib.ifilter0 (index0: value: index0 != nodeIndex0))
      ];
    };

    # Now configure the interface with a network unit.
    systemd.network.networks."11-wgkube" = {
      matchConfig = {
        Name = "wgkube";
      };
      networkConfig = {
        # Set these options for reasons similar to brkube.
        ConfigureWithoutCarrier = true;
        LinkLocalAddressing = false;
        IPv6AcceptRA = false;
      };
      # Configures routes for the container subnets of peers.
      #
      # NOTE: We don't need to configure an address on this interface. As
      # long as we route traffic destined for other nodes to this interface,
      # Wireguard will send it to the correct peer based on AllowedIPs.
      #
      # For traffic from the host itself (not forwarded for containers), we
      # set PreferredSource to the host IP from brkube.
      routes = lib.pipe peers [
        # NOTE: This results in a list of lists.
        (lib.imap1 (
          index: entry: [
            {
              Destination = mkNodeCidr6 index;
              PreferredSource = config.kube.hostIp6;
            }
            {
              Destination = mkNodeCidr4 index;
              PreferredSource = config.kube.hostIp4;
            }
          ]
        ))
        # Filter out ourselves based on index.
        (lib.ifilter0 (index0: value: index0 != nodeIndex0))
        # After filtering we can take the flat list of routes.
        lib.flatten
      ];
    };

    # To inspect the Wireguard interface at runtime using the `wg` tool.
    environment.systemPackages = [ pkgs.wireguard-tools ];
  };
}
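Once both nodes are rebuilt, the tunnel state can be checked with the tool installed above:

wg show wgkube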
Finally, we configure our firewall and NAT rules:
{ config, ... }:
{
  boot.kernel.sysctl = {
    # Enable forwarding on all interfaces.
    "net.ipv4.conf.all.forwarding" = 1;
    "net.ipv6.conf.all.forwarding" = 1;
  };

  networking.firewall.extraInputRules = ''
    # Open the Wireguard port.
    # You probably have to adjust this for your network situation.
    ip saddr 192.168.0.0/24 udp dport ${toString config.kube.wgPort} accept

    # Accept connections to Kubernetes Cluster IPs.
    # These are virtual IPs that every node makes available locally.
    ip daddr ${config.kube.servicesCidr} accept
  '';

  networking.firewall.extraForwardRules = ''
    # Route all container traffic anywhere (internet and internode).
    iifname brkube accept

    # Route Wireguard traffic destined for local containers.
    iifname wgkube ip6 daddr ${config.kube.nodeCidr6} accept
    iifname wgkube ip daddr ${config.kube.nodeCidr4} accept
  '';

  # Apply NAT to traffic from containers to the internet.
  # Here we create an `accept` rule to short-circuit traffic that
  # _shouldn't_ have NAT, then apply NAT to the rest.
  networking.nftables.tables = {
    "kube-nat6" = {
      family = "ip6";
      name = "kube-nat";
      content = ''
        chain post {
          type nat hook postrouting priority srcnat;
          iifname brkube ip6 daddr fd88::/16 accept
          iifname brkube masquerade
        }
      '';
    };
    "kube-nat4" = {
      family = "ip";
      name = "kube-nat";
      content = ''
        chain post {
          type nat hook postrouting priority srcnat;
          iifname brkube ip daddr 10.88.0.0/16 accept
          iifname brkube masquerade
        }
      '';
    };
  };
}
At this point nodes should be able to ping each other across the tunnel on their private IPs (
fd88:*::1
), but we won't be able to test the full networking setup until we have some containers running.
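For example, from node 1, using the host addresses that mkHostIp6 / mkHostIp4 assign to node 2:

ping -c 3 fd88:2::1
ping -c 3 10.88.2.1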
Hostnames
Kubernetes needs to be configured with a domain name where it will advertise Services in DNS. Many examples use
cluster.local
, but I find this a bad idea, because
.local
is for mDNS. Instead, I'll be using
k8s.internal
.
Nodes in Kubernetes register themselves with a name, typically whatever hostname is configured in the OS. However, I'm going to decouple this from the OS hostname and instruct Kubernetes to use
k8s.internal
everywhere, leaving the OS hostname untouched.
{
  config,
  lib,
  pkgs,
  ...
}:
let
  inherit (config.kube)
    peers
    nodeIndex
    mkHostIp6
    mkHostIp4
    domain
    mkNodeHost
    ;
in
{
  options.kube = {
    # The internal domain name we use for all Kubernetes purposes.
    domain = lib.mkOption {
      type = lib.types.str;
      default = "k8s.internal";
    };

    # Function that defines the format for node hostnames.
    mkNodeHost = lib.mkOption {
      type = with lib.types; functionTo str;
      default = index: "node${toString index}.${domain}";
    };

    # The hostname of the local node.
    nodeHost = lib.mkOption {
      type = lib.types.str;
      default = mkNodeHost nodeIndex;
    };

    # All static hosts to add to the Kubernetes domain.
    # This is in a similar format to `networking.hosts`.
    allHosts = lib.mkOption {
      type = with lib.types; attrsOf (listOf str);
    };

    # `allHosts` as a file in `/etc/hosts` format.
    allHostsFile = lib.mkOption {
      type = lib.types.path;
      default = lib.pipe config.kube.allHosts [
        (lib.mapAttrsToList (ip: hosts: "${ip} ${lib.concatStringsSep " " hosts}\n"))
        lib.concatStrings
        (pkgs.writeText "kubernetes-static-hosts.txt")
      ];
    };
  };

  config = {
    # Add all node hosts to the Kubernetes domain.
    # The `mkBefore` ensures the node host is the first listed,
    # which is what a reverse IP lookup resolves to.
    kube.allHosts = lib.pipe peers [
      (lib.imap1 (
        index: entry: {
          ${mkHostIp6 index} = lib.mkBefore [ (mkNodeHost index) ];
          ${mkHostIp4 index} = lib.mkBefore [ (mkNodeHost index) ];
        }
      ))
      (lib.mergeAttrsList)
    ];

    # Also add the static hosts to `/etc/hosts`.
    networking.hostFiles = [ config.kube.allHostsFile ];
  };
}
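After a rebuild, the static names should resolve on every node via /etc/hosts; a minimal check, assuming the two-node peers.json above:

getent hosts node1.k8s.internal
getent hosts node2.k8s.internal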
kube-apiserver
We're going to build a multi-node setup, but keep it close to a traditional setup of 1 database server + multiple workers. In this setup, the database server is the ideal place for any kind of centralized processing, so we'll be running those parts of Kubernetes there as well. Instead of calling it a database server, I'll call it the 'primary' server going forward.
{ config, lib, ... }:
{
  options.kube = {
    # Define roles for nodes. The first node will be the 'primary' node.
    role = lib.mkOption {
      type = lib.types.str;
      default = if config.kube.nodeIndex == 1 then "primary" else "worker";
    };

    # The IP of the primary node.
    primaryIp = lib.mkOption {
      type = lib.types.str;
      default = config.kube.mkHostIp6 1;
    };
  };
}
We'll add some further variables in
kube.api
to describe the API endpoint:
{ config, lib, ... }:
{
  options.kube.api = {
    # Kubernetes creates a Service with Cluster IP for its own API.
    # This is always the first IP in the services subnet.
    serviceIp = lib.mkOption {
      type = lib.types.str;
      default = "10.88.0.1";
    };

    # The HTTPS port the API server will listen on.
    # This is only important when connecting directly to the primary node.
    # When using the Kubernetes Service, it's translated to regular 443.
    port = lib.mkOption {
      type = lib.types.port;
      default = 6443;
    };

    # Define an internal hostname for the API.
    # This is only used when a node host needs to talk to the API.
    # Containers instead use the Kubernetes Service to reach the API.
    internalHost = lib.mkOption {
      type = lib.types.str;
      default = "api.${config.kube.domain}";
    };

    # Build the full internal URL to the API.
    internalUrl = lib.mkOption {
      type = lib.types.str;
      default = "https://${config.kube.api.internalHost}:${toString config.kube.api.port}";
    };

    # An externally reachable host for the API.
    # The API server builds URLs using this hostname, so you'll want to add
    # this to DNS. Doesn't have to be fully public, could still be internal to
    # your organization.
    externalHost = lib.mkOption {
      type = lib.types.str;
      default = "test-kube.example.com";
    };

    # Build the full external URL to the API.
    # We also use this as the 'audience' of API server JWTs.
    externalUrl = lib.mkOption {
      type = lib.types.str;
      default = "https://${config.kube.api.externalHost}:${toString config.kube.api.port}";
    };
  };

  config = {
    # Add the internal API host to the Kubernetes domain.
    kube.allHosts.${config.kube.primaryIp} = [ config.kube.api.internalHost ];
  };
}
The API server uses
etcd
for storage by default. We'll be creating a very simple installation here and protect it using Unix sockets with limited permissions.
In a production setup, you want to make periodic backups of the data in etcd. You can do this using
etcdctl snapshot save
, or simply back up the files in
/var/lib/etcd/member/snap/db
. (The former method can't be piped into some other command, but the latter method excludes the database WAL file. See
etcd disaster recovery
.)
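As a rough sketch (run as a user in the etcd group; the socket path and state directory come from the unit below, and /var/backups is just a hypothetical target):

# Snapshot through the client socket.
# The exact unix:// endpoint form accepted by etcdctl may need adjusting.
etcdctl --endpoints unix:///run/etcd/grpc snapshot save /var/backups/etcd-snapshot.db

# Or copy the on-disk snapshot file directly (excludes the WAL, as noted above).
cp /var/lib/etcd/member/snap/db /var/backups/etcd-db-copy.db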
{
  config,
  lib,
  pkgs,
  ...
}:
# Only on the primary node.
lib.mkIf (config.kube.role == "primary") {
  # Create a dedicated user and group so we can control access to the socket.
  users.groups.etcd = { };
  users.users.etcd = {
    isSystemUser = true;
    group = "etcd";
  };

  # Configure the systemd service unit.
  systemd.services.etcd = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "notify";
      User = "etcd";
      ExecStart =
        "${pkgs.etcd}/bin/etcd"
        + " --data-dir /var/lib/etcd"
        # Compaction is disabled by default, but that apparently risks the
        # database eventually exploding on itself. Weird default.
        + " --auto-compaction-retention=8h"
        # Minimum set of options for secure local-only setup without auth.
        # Access is limited to users in the 'etcd' group.
        + " --listen-peer-urls unix:/run/etcd/peer"
        + " --listen-client-urls unix:/run/etcd/grpc"
        + " --listen-client-http-urls unix:/run/etcd/http"
        # This is required but not actually used in our case.
        + " --advertise-client-urls http://localhost:2379";
      Restart = "on-failure";
      RestartSec = 10;
      # Actual data storage in /var/lib/etcd.
      StateDirectory = "etcd";
      StateDirectoryMode = "0700";
      # Place our Unix sockets in /run/etcd.
      RuntimeDirectory = "etcd";
      RuntimeDirectoryMode = "0750";
    };
    postStart = ''
      # Need to make sockets group-writable to allow connections.
      chmod 0660 /run/etcd/{grpc,http}
    '';
  };

  # For the `etcdctl` tool.
  environment.systemPackages = [ pkgs.etcd ];
}
Now we are almost ready to start the API server! First we need to put some secrets in place for it.
You'll want an
EncryptionConfiguration
to tell Kubernetes how to encrypt Secret resources on disk. I recommend using a configuration with just
secretbox
to start:
# Edit the encrypted file.
agenix -i master_key -e EncryptionConfiguration.yaml.age
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  # Expand this if you have custom resources that store sensitive data.
  - resources:
      - secrets
    providers:
      - secretbox:
          keys:
            - name: key1
              # Generate this with: head --bytes=32 /dev/random | base64
              secret: "<BASE 64 ENCODED SECRET>"
Next we need credentials for
API server authentication
. There are a bunch of methods available for this, but we'll be using the 'static token file' method, and handing a CSV file to the API server. A major downside of this is that the API server can't reload this at runtime, so changing any of these (such as when adding nodes) requires an API server restart.
We're going to create a
root
user in the API with full admin access.
# Generate and encrypt the token.
pwgen -s 64 | agenix -i master_key -e kube_token_root.age
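For reference, the static token file that later ends up at /var/lib/kube-apiserver/tokens.csv uses the standard token,user,uid[,"group1,group2"] format. A root row granting full admin rights could look roughly like this (putting the user in system:masters is an assumption; it is one common way to get cluster-admin):

<contents of kube_token_root>,root,root,"system:masters"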
Nodes also need tokens to register themselves in the API, and I'm going to use a dirty trick here: reuse the Wireguard private keys as tokens. This means the API server has access to all Wireguard private keys, but I figure compromise of the API server means you can execute arbitrary code on any node anyway. If you're more concerned, you could just generate separate tokens instead. In any case, to reuse the Wireguard keys, the primary node needs access:
# Update keys/secrets.nix and ensure node1 is listed for every Wireguard key.
"wgkube1.key.age".publicKeys = [ master node1 ];
"wgkube2.key.age".publicKeys = [ master node1 node2 ];
We also need some tokens for Kubernetes components that run alongside the API server on the primary node. I'm going to use the
kube_token_system_
prefix for these, followed by the service name. That naming convention allows us to iterate files later.
# Generate and encrypt the tokens.
for uid in kube-controller-manager kube-scheduler; do
  pwgen -s 64 | agenix -i master_key -e "kube_token_system_${uid}.age"
done
To connect these components to the API server, we provide a tool to help generate a kubeconfig file:
{
config,
lib,
pkgs,
...
}:
{
options.kube= {
# Small utility that helps us build a kubeconfig for our cluster.# The caller should set $KUBECONFIG to the file to create / modify.mkkubeconfig= lib.mkOption {
type= lib.types.package;
default= pkgs.writeShellApplication {
name="mkkubeconfig";
runtimeInputs= [ pkgs.kubectl ];
text=''
if [[ $# -ne 1 ]]; then
echo >&2 'Usage: mkkubeconfig <token file>'
exit 64
fi
# NOTE: The API server uses self-signed certificates. In this
# testing setup we instead rely on the Wireguard tunnel for security.
kubectl config set-cluster local --server '${config.kube.api.internalUrl}' --insecure-skip-tls-verify=true
kubectl config set users.default.token "$(<"$1")"
kubectl config set-context local --cluster=local --user=default
kubectl config use-context local
'';
};
};
};
}
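As a usage sketch (mirroring how the tool is invoked later in this post), the caller points `KUBECONFIG` at the file to create and passes a token file; here `mkkubeconfig` stands in for whatever `lib.getExe config.kube.mkkubeconfig` resolves to:
# Build a kubeconfig from the root token and try it out.
KUBECONFIG=/tmp/kubeconfig mkkubeconfig /run/agenix/kube_token_root
KUBECONFIG=/tmp/kubeconfig kubectl get namespaces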
We can finally slap together a NixOS module to start the API server. This is probably the most complex piece of Nix machinery in the setup.
{
config,
lib,
pkgs,
...
}:
let
  package = lib.getBin pkgs.kubernetes;
  apiPortStr = toString config.kube.api.port;
  # NOTE: We put secrets in a separate variable here so we can easily gather
  # all secrets in `LoadCredential` below. Using `config.age.secrets` would pull
  # in secrets from elsewhere too, which is bad.
  keysDirListing = builtins.readDir ./keys;
ageSecrets= lib.mergeAttrsList [
# Decrypt EncryptionConfiguration.
{ "EncryptionConfiguration.yaml".file =./keys/EncryptionConfiguration.yaml.age; }
# Decrypt all API server tokens.
(lib.pipe keysDirListing [
(lib.filterAttrs (name:type: lib.hasPrefix "kube_token_" name))
(lib.mapAttrs' (
name:type: {
name= lib.removeSuffix ".age" name;
value.file=./keys+"/${name}";
}
))
])
# Decrypt all Wireguard keys we reuse as tokens.
(lib.pipe keysDirListing [
(lib.filterAttrs (name:type: lib.hasPrefix "wgkube" name))
(lib.mapAttrs' (
name:type: {
name="kube_token_node"+ (lib.removePrefix "wgkube" (lib.removeSuffix ".key.age" name));
value.file=./keys+"/${name}";
}
))
])
];
in
# Only on the primary node.
lib.mkIf (config.kube.role =="primary") {
age.secrets= ageSecrets;
# Create a dedicated user for kube-apiserver, so we can add it to the etcd group.
users.groups.kube-apiserver = { };
users.users.kube-apiserver= {
isSystemUser=true;
group="kube-apiserver";
extraGroups= [ "etcd" ];
};
# Open the API server port in the firewall.
networking.firewall.extraInputRules = ''
tcp dport ${apiPortStr} accept
'';
systemd.services.kube-apiserver= {
wantedBy= [ "multi-user.target" ];
after= [ "etcd.service" ];
serviceConfig= {
Type="notify";
ExecStart = "${package}/bin/kube-apiserver"
  # Connect to etcd.
  + " --etcd-servers='unix:/run/etcd/grpc'"
  # HTTPS listener config.
  # The certificate is generated in `preStart` below.
  + " --secure-port=${apiPortStr}"
  + " --tls-private-key-file='/var/lib/kube-apiserver/apiserver.key'"
  + " --tls-cert-file='/var/lib/kube-apiserver/apiserver.crt'"
  # Authentication and authorization config.
  # `tokens.csv` is generated in `preStart` below.
  + " --anonymous-auth=false"
  + " --token-auth-file='/var/lib/kube-apiserver/tokens.csv'"
  + " --authorization-mode='RBAC,Node'"
  # Virtual IP range used for Service resources.
  # These IPs are routed by kube-proxy on each machine, usually via NAT.
  + " --service-cluster-ip-range='${config.kube.servicesCidr}'"
  # For the Service of the API server, advertise the node address.
  # Because this also uses NAT, it must also be IPv4.
  + " --advertise-address='${config.kube.hostIp4}'"
  # The externally reachable hostname for building API URLs.
  + " --external-hostname='${config.kube.api.externalHost}'"
  # Configures signing and verification of JWTs used as service account tokens.
  + " --service-account-issuer='${config.kube.api.externalUrl}'"
  + " --api-audiences='api,${config.kube.api.externalUrl}'"
  + " --service-account-key-file='/var/lib/kube-apiserver/issuer.key'"
  + " --service-account-signing-key-file='/var/lib/kube-apiserver/issuer.key'"
  # This sets up the encryption of Secret resources:
  # https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
  + " --encryption-provider-config='%d/EncryptionConfiguration.yaml'";
User="kube-apiserver";
Restart="on-failure";
RestartSec=10;
# For generated keys and certificates.
StateDirectory = "kube-apiserver";
# Make secrets available.
LoadCredential = map (name: "${name}:/run/agenix/${name}") (lib.attrNames ageSecrets);
# For the `postStart` script.
PrivateTmp = true;
};
preStart=''
openssl=${lib.getExe pkgs.openssl}
cd /var/lib/kube-apiserver
# Ensure a tokens file is present, or create an empty one.
[[ -e tokens.csv ]] || touch tokens.csv
chmod 0600 tokens.csv
# Ensure the token for the root user is present.
file="$CREDENTIALS_DIRECTORY/kube_token_root"
if ! grep -q ",root," tokens.csv; then
echo "$(<"$file"),root,root,system:masters" >> tokens.csv
fi
# Ensure tokens for system users are present.
for file in $CREDENTIALS_DIRECTORY/kube_token_system_*; do
filename="$(basename "$file")"
uid="''${filename#kube_token_system_}"
if ! grep -q ",system:$uid," tokens.csv; then
echo "$(<"$file"),system:$uid,system:$uid" >> tokens.csv
fi
done
# Ensure tokens for nodes are present.
for file in $CREDENTIALS_DIRECTORY/kube_token_node*; do
filename="$(basename "$file")"
uid="''${filename#kube_token_}.${config.kube.domain}"
if ! grep -q ",system:node:$uid," tokens.csv; then
echo "$(<"$file"),system:node:$uid,system:node:$uid,system:nodes" >> tokens.csv
fi
done
# Ensure a private key for HTTPS exists.
[[ -e apiserver.key ]] || $openssl ecparam -out apiserver.key -name secp256r1 -genkey
chmod 0600 apiserver.key
# Generate a new self-signed certificate on every startup.
# Assume services are restarted somewhere in this timeframe so that we
# never have an expired certificate.
$openssl req -new -x509 -nodes -days 3650 \
-subj '/CN=${config.kube.api.externalHost}' \
-addext 'subjectAltName=${
lib.concatStringsSep "," [
"DNS:${config.kube.api.externalHost}""DNS:${config.kube.api.internalHost}""IP:${config.kube.api.serviceIp}"
]
}' \
-key apiserver.key \
-out apiserver.crt
# Ensure a private key exists for issuing service account tokens.
[[ -e issuer.key ]] || $openssl ecparam -out issuer.key -name secp256r1 -genkey
chmod 0600 issuer.key
'';
postStart=''
# Wait for the API server port to become available.
# The API server doesn't support sd_notify, so we do this instead to
# properly signal any dependent services that the API server is ready.
export KUBECONFIG=/tmp/kubeconfig
${lib.getExe config.kube.mkkubeconfig} "$CREDENTIALS_DIRECTORY/kube_token_root"
tries=60
while ! ${package}/bin/kubectl get namespaces default >& /dev/null; do
if [[ $((--tries)) -eq 0 ]]; then
echo ">> Timeout waiting for the API server to start"
exit 1
fi
sleep 1
done
rm $KUBECONFIG
'';
};
}
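Before wiring up more components, it's worth confirming that the API server answers over HTTPS with token authentication. A minimal check, assuming the conventional port 6443 for `kube.api.port`, with `-k` because of the self-signed certificate:
# Query the readiness endpoint using the root token.
token="$(cat /run/agenix/kube_token_root)"
curl -k -H "Authorization: Bearer $token" "https://localhost:6443/readyz?verbose"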
We set up a kubeconfig for
root
on the primary node to use the
root
API user. This allows using
kubectl
from the shell for easy administration:
{
config,
lib,
pkgs,
...
}:
# Only on the primary node.
lib.mkIf (config.kube.role =="primary") {
# Generate a kubeconfig for root, so that `kubectl` simply works.
system.activationScripts.kubeconfig-root = ''
HOME=/root ${lib.getExe config.kube.mkkubeconfig} "/run/agenix/kube_token_root"
'';
environment.systemPackages= [ pkgs.kubectl ];
}
And we also make node credentials available on each node, which will be used by services later:
{ lib, config, ... }:
{
# Creates /run/kubeconfig-node containing the node credentials.
# This is used by per-node services like kubelet, kube-proxy, coredns, etc.
systemd.services.generate-kubeconfig-node = {
wantedBy= [ "multi-user.target" ];
environment.KUBECONFIG="/run/kubeconfig-node";
serviceConfig= {
Type="oneshot";
ExecStart="${lib.getExe config.kube.mkkubeconfig} /run/agenix/wgkube.key";
};
};
}
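On a node where `kubectl` is available, a quick way to confirm the generated credentials map to the expected identity is `kubectl auth whoami` (present in recent kubectl releases); this is a sketch assuming the node token was registered as described above:
# Should report a user like system:node:node1.k8s.internal in group system:nodes.
KUBECONFIG=/run/kubeconfig-node kubectl auth whoami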
Add-ons
It's useful to have a way to load some YAML into the API server on startup. I use the term add-ons because I've seen it used for similar, now-deprecated functionality, though the term has also been overloaded in various other ways.
{
config,
lib,
pkgs,
...
}:
let
  cfg = config.kube;
in
{
options.kube= {
# Run an activation script once the API is up.
activationScript = lib.mkOption {
type= lib.types.lines;
default="";
};
# Apply addons once the API is up.
addons = lib.mkOption {
type= lib.types.listOf lib.types.path;
default= [ ];
};
};
config= {
assertions= [
{
assertion = cfg.activationScript != "" -> cfg.role == "primary";
message="kube.activationScript and kube.addons can only be used on the primary node";
}
];
# NOTE: This is not a postStart in kube-apiserver, because that would cause
# kube-apiserver to restart on changes.
systemd.services.kube-activation = lib.mkIf (cfg.activationScript != "") {
wantedBy= [ "multi-user.target" ];
bindsTo= [ "kube-apiserver.service" ];
after= [ "kube-apiserver.service" ];
path= [ pkgs.kubectl ];
# Connect to the API using the root credentials.
environment.KUBECONFIG = "/root/.kube/config";
serviceConfig= {
Type="oneshot";
RemainAfterExit=true;
};
script= cfg.activationScript;
};
# Activation script that processes `kube.addons`.
kube.activationScript = lib.mkIf (cfg.addons != [ ]) ''
for file in ${lib.escapeShellArgs (pkgs.copyPathsToStore cfg.addons)}; do
echo >&2 "# $file"
kubectl apply --server-side --force-conflicts -f "$file"
done
'';
};
}
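With that module in place, the primary node's configuration can pull in arbitrary manifests; the file name below is just a hypothetical example:
# Applied by kube-activation once the API server is up.
kube.addons = [ ./addons/my-app.yaml ];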
kube-scheduler
Next we need to run kube-scheduler to actually schedule pods:
{
config,
lib,
pkgs,
...
}:
# Only on the primary node.
lib.mkIf (config.kube.role =="primary") {
systemd.services.kube-scheduler= {
wantedBy= [ "multi-user.target" ];
requires= [ "kube-apiserver.service" ];
after= [ "kube-apiserver.service" ];
serviceConfig= {
ExecStart = "${pkgs.kubernetes}/bin/kube-scheduler"
  # Connect to the API.
  + " --kubeconfig='/tmp/kubeconfig'"
  # Disable listener, only useful for metrics.
  + " --secure-port=0";
Restart="on-failure";
RestartSec=10;
# Let systemd assign a user for this service.
DynamicUser = true;
# For the below `preStart` that generates kubeconfig.
PrivateTmp = true;
LoadCredential="kube-token:/run/agenix/kube_token_system_kube-scheduler";
};
preStart=''
# Generate a kubeconfig for the scheduler. Relies on PrivateTmp.
KUBECONFIG=/tmp/kubeconfig ${lib.getExe config.kube.mkkubeconfig} "$CREDENTIALS_DIRECTORY/kube-token"
'';
};
}
kube-controller-manager
Similarly, we need to run kube-controller-manager, which contains all the standard Kubernetes controllers:
{
config,
lib,
pkgs,
...
}:
# Only on the primary node.
lib.mkIf (config.kube.role =="primary") {
systemd.services.kube-controller-manager= {
wantedBy= [ "multi-user.target" ];
# NOTE: This 'bindsTo' also ensures an up-to-date API certificate is published.
# When separating kube-controller-manager from kube-apiserver, some other mechanism
# is required to distribute certificates.
bindsTo = [ "kube-apiserver.service" ];
after= [ "kube-apiserver.service" ];
serviceConfig= {
ExecStart = "${pkgs.kubernetes}/bin/kube-controller-manager"
  # Connect to the API.
  + " --kubeconfig='/tmp/kubeconfig'"
  # Disable listener, only useful for metrics.
  + " --secure-port=0"
  # This makes the controller manager automagically create a service
  # account for each of its controllers. Neat.
  + " --use-service-account-credentials=true"
  # This publishes the correct API certificate in the API itself.
  # Pods see this as `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
  + " --root-ca-file='/var/lib/kube-apiserver/apiserver.crt'";
Restart="on-failure";
RestartSec=10;
# Let systemd assign a user for this service.
DynamicUser = true;
# For the below `preStart` that generates kubeconfig.
PrivateTmp = true;
LoadCredential="kube-token:/run/agenix/kube_token_system_kube-controller-manager";
};
preStart=''
# Generate a kubeconfig for the controller manager. Relies on PrivateTmp.
KUBECONFIG=/tmp/kubeconfig ${lib.getExe config.kube.mkkubeconfig} "$CREDENTIALS_DIRECTORY/kube-token"
'';
};
}
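Both the scheduler and the controller manager use leader election through Lease objects in kube-system, which gives a quick way to confirm they have successfully reached the API; a sketch from the root shell on the primary node:
# Each lease should show a current holder once the component is healthy.
kubectl -n kube-system get lease kube-scheduler kube-controller-manager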
CoreDNS
We need to provide DNS resolution based on Services in the Kubernetes API.
Many deployments run CoreDNS inside Kubernetes, but there's really no standard for how you implement DNS resolution, and different deployments have different needs. All that matters is that something fetches Services from the Kubernetes API and serves DNS records for them.
Here we set up CoreDNS, but not inside Kubernetes; instead it is managed by NixOS. We run an instance on every node for simplicity.
{
config,
lib,
pkgs,
...
}:
{
services.coredns= {
enable=true;
config=''
. {
bind ${config.kube.hostIp6}
errors
# Resolve Kubernetes hosts.
hosts ${config.kube.allHostsFile} ${config.kube.domain} {
reload 0
fallthrough
}
# Resolve Kubernetes services.
kubernetes ${config.kube.domain} {
kubeconfig {$CREDENTIALS_DIRECTORY}/kubeconfig-node
ttl 30
# NOTE: No fallthrough, to prevent a loop with systemd-resolved.
}
# Forward everything else to systemd-resolved.
forward . 127.0.0.53 {
max_concurrent 1000
}
cache 30
loadbalance
}
'';
};
# Provide kubeconfig-node to CoreDNS.
systemd.services.coredns = {
requires= [ "generate-kubeconfig-node.service" ];
after= [
"generate-kubeconfig-node.service""kube-activation.service"
];
serviceConfig.LoadCredential="kubeconfig-node:/run/kubeconfig-node";
};
# Setup systemd-resolved to forward the Kubernetes domain to CoreDNS.
environment.etc."systemd/dns-delegate.d/kubernetes.dns-delegate".text =''
[Delegate]
Domains=${config.kube.domain}
DNS=${config.kube.hostIp6}
'';
# Open the DNS port to containers.
networking.firewall.extraInputRules = ''
ip6 saddr ${config.kube.nodeCidr6} udp dport 53 accept
ip6 saddr ${config.kube.nodeCidr6} tcp dport 53 accept
'';
# API resources needed for CoreDNS.
kube.addons = lib.mkIf (config.kube.role == "primary") [
./addons/coredns.yaml
];
# For inspecting DNS servers.
environment.systemPackages = [ pkgs.dig ];
}
The referenced add-on file
addons/coredns.yaml
creates the permissions needed for CoreDNS to access the Kubernetes API:
# Define the coredns role and bind it to the regular node group,
# so that the same node credentials can be used for CoreDNS.
#
# Based on the roles from the upstream addon:
# https://github.com/kubernetes/kubernetes/blob/v1.34.2/cluster/addons/dns/coredns/coredns.yaml.base
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:coredns
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:coredns
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: Group
    name: system:nodes
    apiGroup: rbac.authorization.k8s.io
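Once CoreDNS is up and the add-on applied, `dig` (installed above) can confirm that Service names resolve via the node's Wireguard address. The values below are placeholders: substitute your own `kube.hostIp6` and `kube.domain` (the testing section later suggests a domain of `k8s.internal`); `kubernetes.default.svc` always exists once the API server is running. Ask for AAAA instead if your service CIDR is IPv6.
# Ask the local CoreDNS instance for the API server's Service.
dig @fd00:1234::1 kubernetes.default.svc.k8s.internal +short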
kube-proxy
Kube-proxy is what implements cluster IPs assigned to Service resources in the API. It generates firewall rules to NAT cluster IPs to destination pods. It needs to run on every node.
(NOTE: If you decide not to run kubelet on your control plane / primary node, you still need to run kube-proxy! The API server may sometimes contact Services via their Cluster IP too.)
{
lib,
config,
pkgs,
...
}:
{
systemd.services.kube-proxy= {
wantedBy= [ "multi-user.target" ];
requires= [ "generate-kubeconfig-node.service" ];
after= [
"generate-kubeconfig-node.service""kube-activation.service"
];
path= [ pkgs.nftables ];
serviceConfig= {
ExecStart = "${lib.getBin pkgs.kubernetes}/bin/kube-proxy"
  # Connect to the API using node credentials.
  + " --kubeconfig='/run/kubeconfig-node'"
  + " --hostname-override='${config.kube.nodeHost}'"
  # Prefer nftables mode.
  + " --proxy-mode=nftables"
  # Local traffic can be detected by the bridge interface.
  + " --detect-local-mode=BridgeInterface"
  + " --pod-bridge-interface=brkube"
  # Addresses to accept NodePort service ports on.
  + " --nodeport-addresses='${config.kube.hostIp6}/128,${config.kube.hostIp4}/32'"
  # Can't seem to disable these listeners, so make sure they only listen on localhost.
  + " --healthz-bind-address=[::1]:10256"
  + " --metrics-bind-address=[::1]:10249";
Restart="on-failure";
RestartSec=10;
};
};
# API resources needed for kube-proxy.
kube.addons = lib.mkIf (config.kube.role == "primary") [
./addons/kube-proxy.yaml
];
}
The referenced add-on file
addons/kube-proxy.yaml
is again necessary to set up permissions in the Kubernetes API:
# Bind the kube-proxy role to the regular node group,
# so that the same node credentials can be used for kube-proxy.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-proxy
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
  - kind: Group
    name: system:nodes
    apiGroup: rbac.authorization.k8s.io
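To verify that kube-proxy is actually programming rules, you can list the nftables tables it maintains; as far as I know the nftables backend keeps everything in tables named kube-proxy in the ip and ip6 families, so a quick peek looks like this:
# Show the first few rules kube-proxy generated for Services.
nft list table ip6 kube-proxy | head
nft list table ip kube-proxy | head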
kubelet
Kubelet is the meat that starts containers on a node when a pod is assigned to it. Here we also do the work to set up the
cri-o
container runtime and the
CNI
configuration that tells it how containers get network.
You technically only need kubelet on machines that run workloads. We simply start it everywhere, including our primary node, but mark the primary as non-schedulable to demonstrate
registerWithTaints
.
{
lib,
config,
pkgs,
...
}:
let
  yaml = pkgs.formats.yaml { };
kubeletConfig= yaml.generate "kubelet.conf" {
apiVersion="kubelet.config.k8s.io/v1beta1";
kind="KubeletConfiguration";
# Allow anonymous access, but bind to the secure Wireguard IP.
# This is further locked down by firewall rules.
address = config.kube.hostIp6;
authentication.anonymous.enabled=true;
authorization.mode="AlwaysAllow";
# Disable other listeners.
healthzPort = 0;
# Use CRI-O.
containerRuntimeEndpoint = "unix:///var/run/crio/crio.sock";
# Don't complain about swap, but don't account for it either.
failSwapOn = false;
memorySwap.swapBehavior="LimitedSwap";
# Configure DNS using the local CoreDNS server.
clusterDomain = config.kube.domain;
clusterDNS= [ config.kube.hostIp6 ];
# Prevent scheduling pods on the primary node.
registerWithTaints = lib.optional (config.kube.role == "primary") {
key="role";
value= config.kube.role;
effect="NoSchedule";
};
};
in
{
virtualisation.cri-o= {
enable=true;
extraPackages= [ pkgs.nftables ];
settings.crio.runtime.log_to_journald=true;
};
systemd.services.kubelet= {
wantedBy= [ "multi-user.target" ];
requires= [
"generate-kubeconfig-node.service""crio.service"
];
after= [
"generate-kubeconfig-node.service""crio.service""kube-activation.service"
];
path= [ pkgs.util-linux ];
serviceConfig= {
Type="notify";
ExecStart = "${lib.getBin pkgs.kubernetes}/bin/kubelet"
  # Connect to the API using node credentials.
  + " --kubeconfig='/run/kubeconfig-node'"
  # Ensure the Node is registered with the expected hostname.
  + " --hostname-override='${config.kube.nodeHost}'"
  # Publish our preferred IPv6 node IP.
  + " --node-ip='${config.kube.hostIp6}'"
  # Announce the role of this node as a label.
  + " --node-labels='role=${config.kube.role}'"
  # Most other flags are deprecated in favour of a config file.
  + " --config='${kubeletConfig}'";
Restart="on-failure";
RestartSec=10;
StateDirectory="kubelet";
};
};
# cri-o bundles an example config file that NixOS installs by default, but we
# override that here with our own configuration.
environment.etc."cni/net.d/10-crio-bridge.conflist".text = lib.mkForce (
builtins.toJSON {
cniVersion="1.0.0";
name="brkube";
plugins= [
{
type="bridge";
bridge="brkube";
isGateway=true;
ipam= {
type="host-local";
ranges= [
[ { subnet= config.kube.nodeCidr6; } ]
[ { subnet= config.kube.nodeCidr4; } ]
];
routes= [
{ dst="::/0"; }
{ dst="0.0.0.0/0"; }
];
};
}
];
}
);
# Ensure kube-apiserver can connect to this kubelet.
# This is necessary for `kubectl logs`, `kubectl exec`, etc.
networking.firewall.extraInputRules = ''
ip6 saddr ${config.kube.primaryIp} tcp dport 10250 accept
tcp dport 10250 reject
'';
}
Testing
The setup should now be fully functional! If you log in as root on the primary node, you can use
kubectl
:
# kubectl get node
NAME STATUS ROLES AGE VERSION
node1.k8s.internal Ready <none> 19s v1.34.1
node2.k8s.internal Ready <none> 12s v1.34.1
With node2 in the listing, we know connectivity works from kubelet to API server. Starting a container with an interactive session also tests the opposite direction. In addition, we can test connectivity from the container to the internet:
# kubectl run --rm -it --image=docker.io/alpine test
/ # wget -O - https://example.com/
Connecting to example.com (23.220.75.245:443)
writing to stdout
<!doctype html><html ...
What next?
While this setup has all the essentials for workloads, a bunch of stuff is missing to make it more broadly useful.
A storage provisioner can help with data persistence. The modern solution for this is
CSI drivers
. Drivers exist for NFS and SMB shares, which are really useful if you're coming from a setup where applications share some NFS directories hosted on the primary node. Storage for databases, though, is ideally block storage, which takes a bit more work.
Speaking of databases, the nice thing about this setup is that you can simply run services outside Kubernetes, so you can just start a database using regular NixOS config on the primary node for example. I had some fun writing my own controller that allows managing MySQL databases with custom Kubernetes resources:
external-mysql-operator
. Again, very experimental.
Takeaways
Would I take this into production? Not anytime soon, because I feel like there are a whole bunch of failure modes I've not yet seen. My testing has been limited to QEMU VMs and some AWS EC2 instances.
Especially on VMs, which are typically quite small compared to dedicated servers, Kubernetes itself uses up a chunk of memory and CPU just sitting there.
With the traction Kubernetes has, it does feel like there must be many small installations out there. And if that's the case, it seems to me that Kubernetes could easily reduce some complexity for that type of installation.
For example, do you really need etcd and API server redundancy? It seems upstream SQLite support in combination with
Litestream
backups would be far more beneficial for smaller installations, when you're happy to deal with some Kubernetes API downtime during upgrades or incidents.
Another easy win (in my opinion) would be runtime reloading of the token auth file. It would instantly make it a more viable option beyond testing. Though with a bit of extra work it can also be accomplished using the webhook or reverse proxy mechanisms supported by Kubernetes.
Overall, though, it feels like Kubernetes itself is maybe only half the complexity, with the other half going to network configuration.
Google fixes new Chrome zero-day flaw exploited in attacks
Bleeping Computer
www.bleepingcomputer.com
2025-11-18 10:13:17
Google has released an emergency security update to fix the seventh Chrome zero-day vulnerability exploited in attacks this year.
"Google is aware that an exploit for CVE-2025-13223 exists in the wild," the search giant warned in a
security
advisory
published on Monday.
This high-severity vulnerability is caused by a
type confusion
weakness in Chrome's V8 JavaScript engine, reported last week by Clement Lecigne of Google's Threat Analysis Group. Google TAG frequently flags zero-day exploits by government-sponsored threat groups in spyware campaigns targeting high-risk individuals, including journalists, opposition politicians, and dissidents.
Google fixed the zero-day flaw with the release of 142.0.7444.175/.176 for Windows, 142.0.7444.176 for Mac, and 142.0.7444.175 for Linux.
While these new versions are scheduled to roll out to all users in the Stable Desktop channel over the coming weeks, the patch was immediately available when BleepingComputer checked for the latest updates.
Although the Chrome web browser updates automatically when security patches are available, users can also confirm they're running the latest version by going to Chrome menu > Help > About Google Chrome, letting the update finish, and then clicking on the 'Relaunch' button to install it.
Although Google has already confirmed that CVE-2025-13223 was used in attacks, it has yet to share additional details about the active exploitation.
"Access to bug details and links may be kept restricted until a majority of users are updated with a fix," Google said. "We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven't yet fixed."
This is the seventh Chrome zero-day exploited in attacks that was fixed by Google this year, with six more patched in March, May, June, July, and September.
In September and July, it addressed two actively exploited zero-day (
CVE-2025-10585
and
CVE-2025-6558
) reported by Google TAG researchers.
Google released additional emergency security updates in May to address a Chrome zero-day vulnerability (
CVE-2025-4664
) that enabled threat actors to hijack accounts. The updates also fixed an out-of-bounds read and a write flaw (
CVE-2025-5419
) in the V8 JavaScript engine discovered by Google TAG in June.
In March, Google also patched a high-severity sandbox escape flaw (
CVE-2025-2783
) reported by Kaspersky, which was exploited in espionage attacks against Russian media outlets and government organizations.
A couple of months ago, we shared a
post
demystifying how multi-GPU communication works, introducing new multi-GPU functionalities in
ThunderKittens
, and showing how these simplify the process of writing high-performance multi-GPU kernels. That post primarily focused on
enabling
the ability to write communication kernels that span multiple GPUs, which already let us write communication kernels up to 2.6x faster than NCCL using only a few dozen lines of code!
Yet there is a breadth of parallelization strategies when we look across AI workloads. Each requires specific optimizations to preserve compute utilization by overlapping computation and communication. What we found is that there are fundamental, generalizable trade-offs in communication mechanisms and scheduling patterns that depend on the workload properties. By applying such principles with ThunderKittens, we can easily build efficient compute-communication kernels that match or surpass hand-optimized, low-level implementations.
In this post, as a follow-up to our previous one, we share:
Our findings on the principles for writing efficient compute-communication kernels, focusing on inter-GPU transfer mechanisms, overlapping schedules, and tiling.
New compute-communication kernels built with ThunderKittens. We fuse common operators used in data, tensor, sequence, and expert parallelism: including fused collective (all-gather, reduce-scatter, all-reduce) + GEMM operations, distributed attention variants, and combined token dispatch with expert GEMMs. We show that we can match or outperform hand-optimized kernels, all with only a few dozen lines of device code.
Properly Overlapping the Kittens
Figure 2: The kittens overlap for cuteness density.
Modern AI workloads run on multi-GPU platforms like HGX H100 (8xH100s), HGX B200 (8xB200s), or GB200 NVL72 (72xB200s). These systems interconnect datacenter-grade GPUs through NVLink and NVSwitch, providing up to 900 GB/s of unidirectional bandwidth between the high-bandwidth memory (HBM) of any two GPUs. As a quick reminder, the figure below from our previous post illustrates these systems’ high-level topology.
Figure 3: NVIDIA HGX B200 overview. The PCIe path handles kernel execution, host-device data movement, and inter-node communication over InfiniBand or TCP, while device-to-device communication occurs entirely through NVLink and NVSwitch.
An important thing to remember is that these platforms come with
loads and loads of heterogeneous resources that must be carefully orchestrated to achieve high utilization
. These include the register file, CUDA and tensor cores, special function units (SFUs), load/store units, the Tensor Memory Accelerator (TMA), the memory controller, HBM and NVLink bandwidth, and in-fabric accelerators within the NVSwitch.
That’s quite a mouthful, and finding the balance across these resources is not so easy. Naively splitting a workload and applying parallelism strategies like data, tensor, sequence, or expert parallelism can end up saturating only one or two resources while leaving the rest idle. Poor overlapping design leads to load imbalance, where one resource becomes a serialized bottleneck while others sit underutilized.
We don’t want that. After all,
we bought the whole GPU, so we’d better use the whole GPU
. One obvious and valid approach is to design hand-optimized kernels with per-operator (e.g., all-gather fused with GEMM) overlapping strategies. A long line of excellent prior work (e.g.,
1
,
2
,
3
,
4
,
5
,
6
,
7
) does this, and it’s been a major source of inspiration for our work.
For this post, though, we wanted to step back and ask: what are the fundamental principles that can guide us in writing any compute-communication kernel on any modern GPU? Through building and experimenting with many compute-communication kernels, we’ve identified a few that seem to matter repeatedly: using the right transfer mechanism, using the right scheduling strategy, and, of course, using the tiles! We’ll dive into each below.
Using the Right Transfer Mechanism
There are 3 ways to perform inter-GPU data transfers: using the per-GPU copy engine, the Tensor Memory Accelerator (TMA), or register-level instructions (e.g.,
ld
,
st
,
red
, or
multimem
). Each comes with its own trade-offs, and naively relying on a single method leads to failures. For example, while the copy engine can deliver the highest bandwidth without involving any SMs (as shown in the previous post), it only reaches peak throughput for large message sizes. As illustrated in the figure below, moving 1 GB of data typically requires chunks of around 256 MB to fully saturate the link. Thus, the copy engine is not well-suited for fine-grained communication.
Figure 4: Observed memory bandwidth utilization for a 1 GB peer-to-peer transfer over NVLink between 2 H100s.
On the other hand, TMA is excellent for inter-GPU communication in many ways. It can saturate NVLink bandwidth with messages as small as 2 KB while using very few resources: about 15 out of 148 SMs on B200 GPUs, as shown in the figure below. With intra-SM overlapping (discussed later), the effective SM usage can drop to zero. TMA also avoids adding register pressure since it relies only on shared memory and does not occupy execution lanes. Plus, it naturally supports tile-granularity communication.
TMA does have a few limitations, though. For example, it does not support in-network reduction operations, which are only possible through register-level instructions. Register instructions can also saturate bandwidth at element-wise granularity, as long as the memory accesses are properly coalesced.
Figure 5: The number of SMs it takes to saturate NVLink bandwidth, using different communication mechanisms.
It is therefore important to use these different inter-GPU transfer mechanisms deliberately, depending on the workload. Existing off-the-shelf libraries do not handle this for you! For example, NVSHMEM’s
put
and
put_nbi
APIs (which many other libraries build upon) ultimately rely on volatile
st
instructions for intra-node transfers. They also enforce issuing
__ldg
to access peer memory addresses and add thread-level synchronizations, which together increase latency and reduce NVLink bandwidth utilization.
Using the Right Overlapping Schedule
While the specific optimal schedule can vary depending on the target operator, we find that overlapping strategies generally fall into two categories:
intra-SM overlapping
and
inter-SM overlapping
. Each comes with its own strengths and weaknesses.
In
intra-SM overlapping
, different warps or threads within the same SM handle compute and communication concurrently. The idea is simple: we dedicate one or more warps (or a few threads) to communication, just as we specialize warps for compute or memory operations in many modern kernels. In the best case, intra-SM overlapping yields zero loss: all tensor cores across all SMs stay busy, and inter-GPU communication is fully hidden, adding no extra overhead. We note that this ideal case is not merely theoretical. In our fused GEMM + reduce-scatter kernel, we were able to reduce the non-overlapped communication portion to under 1%, completely hiding NVLink transfers and atomic additions behind the tensor-core GEMM.
The challenge with intra-SM overlapping, however, is that the communication pattern must somehow align with the computation to avoid interference. Whether communication uses TMA or register operations, it must operate on the same data used by the computation (e.g., inputs, intermediates, or outputs). For example, in GEMM + reduce-scatter, the communication pattern matches computation perfectly; the output tile of a tiled GEMM can immediately be sent to the destination device and atomically added.
However, when the communication pattern cannot align with computation, the kernel ends up splitting resources like the register file or shared memory, resulting in poor overlap and reduced compute utilization. This makes
inter-SM overlapping
necessary.
Figure 6: GEMM + reduce-scatter (RS) and GEMM + all-reduce (AR) performance on 8xH100s across overlapping schedules.
Inter-SM overlapping
dedicates entire SMs almost exclusively to either compute or communication tasks. This approach becomes especially useful when communication patterns that minimize NVLink traversal are difficult to achieve with intra-SM overlapping. Two main factors drive this advantage:
in-network acceleration
and
remote L2 caching behavior
.
In-network acceleration
is enabled by modern high-bandwidth interconnects such as NVSwitch. As discussed in our previous post, NVSwitch integrates compute capabilities directly into the interconnect fabric, allowing in-network reductions. Efficient use of this capability requires inter-SM overlapping, since in-network acceleration relies on synchronous, register-based execution that increases register pressure and limits thread occupancy. For example, in workloads like fused GEMM + all-reduce, an effective approach is to accumulate partial results in local HBM, signal completion after each local write, and dedicate a few specialized SMs to perform a single in-network all-reduce once all devices have finished.
Remote L2 caching behavior
is shown in the figure below from our previous post. Because remote HBM access bypasses the local L2 cache, data is cached only on the remote peer’s L2. This design simplifies inter-GPU memory consistency but makes every remote access bottlenecked by NVLink bandwidth. For operators where multiple thread blocks need to access the same remote data for computation (such as attention), a more effective strategy is to use inter-SM overlapping. For example, a few dedicated SMs can manage data transfers from peer HBMs to local HBM, allowing compute SMs to fully utilize local L2 cache bandwidth instead of repeatedly traversing NVLink.
Figure 7: The NVLink data path, shown in red line. Note that the L2 cache is far-sided: data travels through the L2 cache of the source HBM, not the local one.
For more discussion of the trade-offs in inter-GPU data transfer mechanisms and scheduling strategies, you can check out our
paper
.
Using the Tiles 😎
And of course, tiles remain just as effective for writing communication kernels as they did for
single-GPU kernels
. One potential concern is that tiling could reduce inter-GPU bandwidth utilization due to its fine granularity. However, our benchmarks show that this is not the case, provided we use TMA or properly coalesce remote HBM accesses. Tiles simply make everything easier. Really, much easier. Every kernel we are releasing today adds fewer than 50 lines of device code on top of the original single-GPU kernel (for example, GEMM) to support communication.
New Kernels
Figure 8: The kittens gather for a fruitful day of
nap
labor.
To demonstrate the power of the new ThunderKittens multi-GPU API and the ideas discussed above, we implemented a handful of compute-communication kernels on both Hopper and Blackwell platforms. We organize them by popular parallelization strategies: data, tensor, sequence, and expert parallelism, each of which we present below.
Data and Tensor Parallelism
Figure 9: Data-parallel kittens.
For data and tensor parallelism, we optimize for a typical setup where we start with a batch-sharded input, perform an all-gather (AG), run the first GEMM with column-sharded weights, apply a nonlinear activation, then run the second GEMM with row-sharded weights, followed by a reduce-scatter (RS) or all-reduce (AR). We overlap communication and computation by pairing AG with the first GEMM (AG + GEMM) and the second GEMM with RS or AR (GEMM + RS, GEMM + AR). The performance for each configuration is shown in the figures below. As a note, we denote the GEMM shape as M x N x K, where the first operand has dimensions M x K and the second K x N.
In our comparisons, cuBLAS + NCCL serves as the non-overlapped baseline, Triton Distributed represents the compiler-based overlapping approach, and Flux and CUTLASS are the hand-optimized kernels. PK (“ParallelKittens”) is our approach. Note that Flux and CUTLASS do not provide a GEMM + AR kernel, so they are not included in the GEMM + AR results.
Figure 10: AG + GEMM performance on 8xH100s. Local GEMM size is N x N/8 x N, with N given in the X-axis.
Figure 11: GEMM + RS performance on 8xH100s. Local GEMM size is N x N x N/8, with N given in the X-axis.
Figure 12: GEMM + AR performance on 8xH100s. Local GEMM size is N x N x N/8, with N given in the X-axis.
Sequence Parallelism
Figure 13: Sequence-parallel kittens.
For sequence parallelism, we implemented two popular strategies for distributing the sequence dimension across multiple GPUs and computing self-attention:
Ring Attention
and
DeepSpeed-Ulysses
. We used xDiT and YunChang as their most efficient open-source reference implementations and compared our results against them. In the figures, the X-axis represents the total sequence length, which is evenly distributed across 8 H100 GPUs.
Figure 14: Ring Attention performance on 8xH100s across sequence lengths (B = 16, H = 16, D = 128).
Figure 15: DeepSpeed-Ulysses attention layer performance on 8xH100s across sequence lengths (B = 16, H = 128, D = 128).
Expert Parallelism
Figure 16: Expert-parallel kittens.
In Mixture-of-Experts (MoE) layers with experts distributed across multiple GPUs (expert parallelism), token exchange accounts for a large portion of the total runtime (
up to 50%
). To reduce this overhead,
many
approaches
overlap token dispatch with the first expert GEMM and the last expert GEMM with token combination. In this post, we focus on the first half of the MoE computation (overlapped token dispatch and GEMM), and compare our implementation against Comet, the state-of-the-art fine-grained overlapping kernel for expert parallelism.
You can explore even more kernels, including those for Blackwell,
here
.
Coming Soon
Building efficient multi-GPU kernels with ThunderKittens has been exciting, but there is still loads and loads more to explore. We’re thrilled to soon present:
Inter-node communication over InfiniBand with ThunderKittens
NVFP4 support for ThunderKittens
Integration of the above two with our
Megakernel framework
to build multi-node Mixture-of-Experts megakernels, supporting all commonly used precisions (BF16, MXFP8, NVFP4)
Looking ahead, we are especially excited for the next wave of architectures (like NVL144), which signal a shift from scale-out to scale-up designs. These systems introduce hundreds of terabytes of on-device HBM, deeply hierarchical and massive on-chip memory structures, and new challenges in fault tolerance. These characteristics will invalidate many of the efficiency assumptions embedded in existing model designs, opening up fresh co-design opportunities. We hope ThunderKittens will help enable that exploration, and we will also be diving right in ourselves!
As always, if you'd like to learn more or contribute, feel free to reach out to Stuart at
ssul@cs.stanford.edu
. And huge thanks to
Cursor
and
Together AI
for making this work possible!
‘Fear really drives him’: is Alex Karp of Palantir the world’s scariest CEO?
Guardian
www.theguardian.com
2025-11-18 10:00:30
His company is potentially creating the ultimate state surveillance tool, and Karp has recently been on a striking political and philosophical journey. His biographer reveals what makes him tick In a recent interview, Alex Karp said that his company Palantir was “the most important software company ...
I
n a
recent interview
, Alex Karp said that his company Palantir was “the most important software company in America and therefore in the world”. He may well be right. To some, Palantir is also the scariest company in the world, what with its involvement in the Trump administration’s authoritarian agenda. The potential end point of Palantir’s tech is an
all-powerful government system
amalgamating citizens’ tax records, biometric data and other personal information – the ultimate state surveillance tool. No wonder Palantir has been likened to George Orwell’s Big Brother, or Skynet from the Terminator movies.
Does this make Karp the scariest CEO in the world? There is some competition from Elon Musk, Mark Zuckerberg, Jeff Bezos and Palantir’s co-founder
Peter Thiel
. But 58-year-old Karp could give them all a run for their money in terms of influence, self-belief, ambition and – even in this gallery of oddballs – sheer eccentricity. In his increasingly frequent media appearances, Karp is a striking presence, with his cloud of unkempt grey hair, his 1.25x speed diction, and his mix of combative conviction and almost childish mannerisms. On CNBC’s Squawk Box, he shook both fists simultaneously as he railed against short sellers betting against Palantir, whose share price has climbed nearly 600% in the past year: “It’s super triggering,” he complained. “Why do they have to go after us?”
Leaving aside for a moment questions about what Palantir actually
does
, the company seems to be at the heart of many of the world’s pressing issues. In the US alone, its AI-powered data-analysis technology is
fuelling the deportations
being carried out by Immigration and Customs Enforcement (Ice), the Pentagon’s unmanned drone programme, police departments’ (
allegedly racist
) profiling of potential criminals and much more besides. Its software is being used by the Israel Defense Forces in its assaults on Gaza, by the Ukrainians against Russia and by police forces and corporations throughout the western world. In the UK, Palantir is at the heart of Labour’s plans to “modernise” the armed forces and the NHS: when
Keir Starmer visited Washington
in February, his first stop after the White House was Palantir’s office, where Karp showed him its latest military kit.
For the past few decades, Karp has stayed largely under the radar, but a new biography, The Philosopher in the Valley, reveals him to be a complex, thoughtful, often contradictory personality, with a background that explains many of his insecurities. “Fear is something that really drives him,” says the journalist Michael Steinberger, the book’s author. “One of the many fascinating things about
Palantir
is the way that it is the embodiment, in a lot of ways, of Karp … he created Palantir to make the world safer for himself, or for people like him.” Whether that remains the case is up for debate.
Fitness obsessed … Karp has been known to lead tai chi classes for employees
Steinberger’s book reveals Karp to be an idiosyncratic CEO with a singular lifestyle. He is obsessed with fitness, especially tai chi (he has been known to lead classes for employees) and cross-country skiing (he often wears ski gear day-to-day) and has a coterie of super-fit, mostly Norwegian bodyguards. Karp, who was
paid $6.8bn
in 2024, owns an estimated 20 homes around the world, many of which are apparently sparsely furnished ski huts. He is not married and has no children but has been described as “geographically monogamous” – he has two concurrent female partners in different parts of the world. He claims to run Palantir like “an artists’ colony” but he also likes to joke around in the workplace, comparing himself to Larry David, and once, according to Steinberger’s book, suggested that his own comic stylings “might be called Karp Your Enthusiasm”.
This is not just tech-bro quirkiness for its own sake, says Steinberger. “In this case, it is legitimately him. He is himself. And that is what he’s always been.” Steinberger went to the same college as Karp (Haverford, a private college in Pennsylvania, though the two did not know each other). He has spent the last five years snatching interviews with Karp whenever the CEO could fit him into his busy schedule – including, on one occasion, during his midday roller-skiing workout. Steinberger had to cycle alongside him, holding out his Dictaphone.
Karp grew up very much feeling like an outsider, it seems. The son of a Jewish paediatrician father and an African American artist mother, he was raised in Philadelphia, in an erudite, relatively privileged, leftwing environment. In a
2023 interview
he said: “I always thought if fascism comes, I will be the first or second person on the wall.” As much as ethnicity, he considers his defining point of difference to be his dyslexia, which, he tells Steinberger, “fucked me but also gave me wings to fly”. He also has attention deficit hyperactivity disorder (he claims the tai chi helps him to focus).
At the heart of Labour’s plans … Keir Starmer and Karp at the Palantir offices in Washington DC in February.
Photograph: @10DowningStreet @PalantirTech
Karp and Thiel first met as students at Stanford law school, where they hit it off despite being ideological opposites. But, while Thiel went off to found PayPal (with Musk) and embark on a fruitful tech investment career, Karp went to do a PhD in neoclassical social theory in Frankfurt. As a Jew, Steinberger says, Karp “wanted to understand how Germany, a pillar of European civilisation, had descended into barbarism.” While so many tech titans have amassed a fortune then used it to promote their “philosophy”, Karp has effectively done it the other way round. When he reconnected with Thiel and joined Palantir Technologies in 2004, he couldn’t write a line of code but he did know something about “
ontology
” – how information is structured and organised. He was also, apparently, a persuasive personality; good at recruiting and motivating eccentric talents like himself.
Palantir’s founding mission was “defending the west” – a nebulous and pliable goal admittedly, but also an unfashionable one, at a time when early 00s Silicon Valley was all about giving tech a consumer-friendly face. While the likes of Google, Apple, Facebook and Microsoft shied away from working with the military, Palantir – which was never a consumer company – embraced the prospect, arguing that Silicon Valley should be helping the US to maintain its edge over threats from countries including China, Iran and, latterly, Russia. The company’s name is derived from JRR Tolkien’s Lord of the Rings mythology: a palantir is a “seeing stone” – something like a crystal ball – a surveillance device, in other words. Karp has spoken of Palantir’s mission in terms of “saving the shire”, and employees were sometimes referred to as “hobbits”.
In its early days, Palantir assisted the US army in Iraq and Afghanistan, where it devised powerful tools for identifying enemy locations and attacks, arguably saving American lives. Even so, it sued the army in 2016, when it was being passed over for contracts. Palantir was also implicated in the Cambridge Analytica scandal, in 2018, in which Facebook users’ data was used to help influence their voting in national elections. But during the Covid pandemic, its tech assisted the US and the UK, among others, in tracking the spread of the disease and the distribution of vaccines and aid. Today it has contracts worth billions across US military and government agencies including the CIA, FBI, Department of Homeland Security and the National Security Agency, as well as Ice. You can see how the Big Brother comparisons started.
But, there are “some fundamental misconceptions about the work they do,” says Steinberger. “They don’t collect the data, they don’t store the data; they provide software that helps companies and organisations make better use of their own data.” That could mean devising software to integrate complex supply chains for a large corporation, such as Airbus. Or it could mean analysing huge amounts of data, and spotting patterns and connections in real time, so as to identify, say, a battlefield enemy, a domestic terrorist or an illegal immigrant (or, potentially, any other kind of individual). Palantir argues that it has a
code of conduct
, and builds in guardrails to prevent abuses, including “civil liberties protections” – though it is not easy to verify such claims. “If abuses of data are taking place with Palantir software, it’s not because Palantir is doing it, it’s because the clients are doing it,” says Steinberger. “I think of Palantir software as like a toaster. If you burn your toast, you don’t blame the toaster.”
Karp has spoken of Palantir’s mission in terms of ‘saving the shire’ … Karp in Idaho on 10 July.
Photograph: Kevin Dietsch/Getty Images
Politically, Karp is difficult to pigeonhole. While the conservative, libertarian Thiel was an early Silicon Valley cheerleader for Trump, and campaigned for him in the 2016 presidential race, Karp was not. “I respect nothing about the dude. It would be hard to make up someone I find less appealing,”
Karp said of Trump
in 2015. He voted for Hillary Clinton in that election, and backed Kamala Harris in 2024. Thiel had soured on Trump by 2024, but was instrumental in placing his protege, JD Vance, as his running mate.
Since Trump’s re-election, though, both Thiel and Karp seem to have fallen more into line. Karp wrote a million-dollar cheque for Trump’s inauguration but did not attend. As a key defence contractor, Palantir also donated $5m towards Trump’s military parade in June. In
a recent interview
with Axios, Karp described himself as “an independent who admires what Trump has done on many things.” In Karp’s mind, “the price of doing business with the government is making nice with Trump,” Steinberger says. Karp’s argument, he says, is: “Look, we got into business to work with the government, you can’t sit here and pull that support when someone you don’t like is elected.”
Having once declared that fascism was his greatest fear, though, Karp could well be enabling it – by helping Ice to grab people off the street, some of whom could be innocent citizens, for example. Steinberger acknowledges the irony: “How do you square that circle? Well, in his case, I guess one thing is, he would deny that Trump is fascist. Karp would argue that we still have a functioning, independent judiciary, and a free press, for example.” Karp also claims that Palantir has
prevented “innumerable terror attacks”
in Europe, which has actually helped
save it
from fascism. His argument about immigration, says Steinberger, is that “if the left doesn’t take this concern seriously, voters are going to turn to people who do, and the left isn’t going to like the outcome. That’s how you got the first Trump presidency, and arguably it’s one of the reasons you got the second one.”
Cheerleader for Trump … Peter Thiel in 2022.
Photograph: Rebecca Blackwell/AP
It would seem that Karp believes there is no contradiction, but the “western values” he is defending appear to have evolved. When Steinberger first met him in 2019, he was talking about defending liberal democracy – making Palantir a “civil liberties juggernaut”. “Judging by his own words … he does not see multiracial, pluralistic democracy as the thing about the west that should be defended,” argues Steinberger. Now, “he sees it much more as just a collection of countries bound by a shared Judeo-Christian heritage, and, to varying degrees, by an attachment to free enterprise. That’s kind of where he is, I think. And it can lead you down some pretty dark paths.”
In Karp’s own book The Technological Republic, co-written with Nicholas W Zamiska and published in February, Karp seems more concerned with US dominance, in tech and the military, including defeating rivals such as China in the AI race. He has railed against identity politics: in an earnings call earlier this month, he declared Palantir to be “
completely anti-woke
”. He believes that the west is too self-flagellating about its own superiority, and that “
everything you learned at school or college about how the world works is intellectually incorrect
”. In his quarterly letter to shareholders
in February
, Karp referenced the political scientist Samuel Huntington’s belief that “the rise of the west was not made possible ‘by the superiority of its ideas or values or religion … but rather by its superiority in applying organised violence’.”
In May, a group of former Palantir employees wrote
an open letter
(titled “The Scouring of the Shire”) stating that “Palantir’s leadership has abandoned its founding ideals”, and that its principles of protecting against discrimination, disinformation and abuses of power “have now been violated, and are rapidly being dismantled at Palantir Technologies and across Silicon Valley”.
Activists protest against the federal government’s possible adoption of Palantir security software in Berlin in September.
Photograph: Omer Messinger/Getty Images
As perplexing, objectionable and perhaps terrifying as some might find Karp, Steinberger did not come away disliking him. “I find him fascinating. I enjoyed our conversations,” he says. “He’s very fun to talk to. He’s very smart, but sometimes he’s going at a million miles an hour and it’s hard to follow his train of thought.”
Karp likes an argument, says Steinberger. That’s the way Palantir is run – “It’s always been a culture where pushback is welcome” – and, he says, Karp would often seek to get into a debate with Steinberger personally. “It got to be a running joke. I’d say: ‘Who cares what I think? I’m not here to interview myself, I’m here to interview
you
.’ And that would piss him off. He would laugh and say: ‘No, no, no, let’s argue.’” When Steinberger did engage with Karp, he usually regretted it: “About 99% of the time he is convinced he’s absolutely right … You’d walk out after a conversation with him, and hours later you would be sitting there having a silent argument, firing back rebuttals, but he’s not there.”
Palantir is firmly cemented into military-industrial infrastructure, and business is booming, but Karp is not letting up. He has said he wants Palantir to be as dominant and indispensable as IBM was in the 1960s, when it was the world’s largest computing company and shaped the way government and private companies did business. He also seems to view the world in terms of an existential war between “the west” and its enemies. You could see this as irrationally paranoid, terrifyingly prescient or simply what happens when you read too much Tolkien – but Karp clearly feels that he has work to do. In a letter to shareholders earlier this year, he wrote: “We are still in the earliest stages, the beginning of the first act, of a revolution that will play out over years and decades.”
In 2019 I published
Too much crypto
and presented it at Real World Crypto 2020 in New York. I argued that symmetric algorithms burn unnecessary cycles because:
Designers rightfully set many rounds in their initial design as a security margin, but
Once an algorithm is standardized, the round count isn’t adjusted after we know it’s oversized.
The saddest case is Keccak/SHA3: submitted with 18 rounds, designers raised it to 24 rounds during the SHA3 competition after a
pretty dumb
2¹⁰²⁴-complexity attack on 18 rounds. The observable universe contains only about 2²⁶⁶ atoms. As of November 2025, there are no practical attacks for more than five rounds.
I argued we could safely lower the rounds of AES, ChaCha20, Keccak/SHA-3, and BLAKE2. How did these suggestions age?
For AES, I proposed
9 rounds instead of 10
.
No meaningful cryptanalysis progress
. The best practical attack remains stuck at 6 rounds. A
2025 paper
proved that 8-round AES behaves at least close to ideally with respect to input–output differentials’ distribution.
✅ Test passed
I proposed
8 rounds instead of 12
for BLAKE2b and
7 rounds instead of 10
for BLAKE2s.
And the same year we designed
BLAKE3
with 7 rounds.
No meaningful cryptanalysis progress
. No non-trivial practical attacks even on reduced versions. The astronomical-complexity
“boomerang distinguishers”
up to 7.5 rounds are unimproved since 2014.
✅ Test passed
ChaCha
I proposed 8 rounds instead of 20, that is, ChaCha8.
ChaCha6 cryptanalysis progressed: complexity dropped from 2¹²⁷ to 2⁵⁷. Doing 2⁵⁷ operations is practical; at most minutes on a small GPU cluster. But here the attacker needs 2⁵⁵ keystream outputs, which at 64 bytes per ChaCha block is about 2⁶¹ bytes, two exbibytes. That’s more data than every hyperscaler on Earth stores combined. The attacker also needs to control the nonces.
ChaCha7 cryptanalysis progressed: complexity dropped from 2²³⁸ to 2¹⁴⁸. The attacker needs about 2¹²⁶ known-ciphertext data blocks. GPT says “2¹²⁶ is the number of grains of sand if you crushed a million Earths into sand.” True or not, 2¹²⁶ is a shockingly high number. Anything with time or data complexity above 2¹⁰⁰ is, and will likely remain, impossible.
ChaCha8: still no attack published.
✅ Test passed
Keccak/SHA-3
I proposed 10 rounds instead of 24. The Keccak designers had proposed KangarooTwelve with 12 rounds.
IETF and NIST won’t revise the standardized round counts of AES, ChaCha20, or SHA-3. AES is already so fast on hardware that shaving one round brings no meaningful gain.
But there are places where reduced rounds make sense:
ChaCha8 delivers a 2.5× speed-up when the 20-round standard isn’t required. For example, Rust programs can integrate ChaCha8 via RustCrypto, as sketched below.
10-round Keccak/SHA3 yields a 2.4× speed-up and would benefit Ethereum and every blockchain relying on Keccak, especially when computed as a circuit inside ZK proof systems.
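As a concrete illustration, here is a minimal Rust sketch of encrypting a buffer with ChaCha8, assuming the RustCrypto chacha20 crate (which exposes ChaCha8 alongside ChaCha12 and ChaCha20); the hard-coded key and nonce are placeholders for illustration only, not key-management advice.

use chacha20::ChaCha8;
use chacha20::cipher::{KeyIvInit, StreamCipher};

fn main() {
    // 256-bit key and 96-bit nonce; in real use these must be secret and unique.
    let key = [0x42u8; 32];
    let nonce = [0x24u8; 12];

    // Encrypt in place by XORing the plaintext with the ChaCha8 keystream.
    let mut data = *b"reduced rounds, same interface";
    let mut cipher = ChaCha8::new(&key.into(), &nonce.into());
    cipher.apply_keystream(&mut data);

    // Re-applying the same keystream (same key and nonce) decrypts.
    let mut cipher = ChaCha8::new(&key.into(), &nonce.into());
    cipher.apply_keystream(&mut data);
    assert_eq!(&data, b"reduced rounds, same interface");
}

Dropping from 20 to 8 rounds changes only the type name; the cipher API stays the same, which is what makes the 2.5× speed-up essentially free to adopt wherever the standardized round count isn’t mandated.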
Let’s revisit all this again in 25 years.
Don't blindly trust what AI tells you, says Google's Sundar Pichai
Faisal Islam, economics editor, Rachel Clun, business reporter, and Liv McMahon, technology reporter
People should not "blindly trust" everything AI tools tell them, the boss of Google's parent company Alphabet has told the BBC.
In an exclusive interview, chief executive Sundar Pichai said that AI models are "prone to errors" and urged people to use them alongside other tools.
Mr Pichai said it highlighted the importance of having a rich information ecosystem, rather than solely relying on AI technology.
"This is why people also use Google search, and we have other products that are more grounded in providing accurate information."
However, some experts say big tech firms such as Google should not be inviting users to fact-check their tools' output, but should focus instead on making their systems more reliable.
While AI tools were helpful "if you want to creatively write something", Mr Pichai said people "have to learn to use these tools for what they're good at, and not blindly trust everything they say".
He told the BBC: "We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors."
The company displays disclaimers on its AI tools to let users know they can make mistakes.
But this has not shielded it from criticism and concerns over errors made by its own products.
"We know these systems make up answers, and they make up answers to please us - and that's a problem," Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4's Today programme.
"It's okay if I'm asking 'what movie should I see next', it's quite different if I'm asking really sensitive questions about my health, mental wellbeing, about science, about news," she said.
She also urged Google to take more responsibility over its AI products and their accuracy, rather than passing that on to consumers.
"The company now is asking to mark their own exam paper while they're burning down the school," the said.
'A new phase'
The tech world had been awaiting the launch of Google's latest consumer AI model, Gemini 3.0, which is starting to win back market share from ChatGPT.
At the time, Mr Pichai said the integration of Gemini with search signalled a "new phase of the AI platform shift".
The move is also part of the tech giant's bid to remain competitive against AI services such as ChatGPT, which have threatened Google's online search dominance.
His comments back up BBC research from earlier this year, which found that AI chatbots inaccurately summarised news stories.
OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI were all given content from the BBC website and asked questions about it, and the research found the AI answers contained "significant inaccuracies".
In his interview with the BBC, Mr Pichai said there was some tension between how fast the technology was being developed and how mitigations were built in to prevent potentially harmful effects.
For Alphabet, Mr Pichai said managing that tension means being "bold and responsible at the same time".
"So we are moving fast through this moment. I think our consumers are demanding it," he said.
The tech giant has also increased its investment in AI security in proportion with its investment in AI, Mr Pichai added.
"For example, we are open-sourcing technology which will allow you to detect whether an image is generated by AI," he said.
Asked about recently uncovered years-old comments from tech billionaire Elon Musk to OpenAI's founders around fears the now Google-owned DeepMind could create an AI "dictatorship", Mr Pichai said "no one company should own a technology as powerful as AI".
But he added there were many companies in the AI ecosystem today.
"If there was only one company which was building AI technology and everyone else had to use it, I would be concerned about that too, but we are so far from that scenario right now," he said.
After nearly 10 years, I am stepping down as the CEO of Mastodon and transferring my ownership of the trademark and other assets to the Mastodon non-profit. Over the course of my time at Mastodon, I have centered myself less and less in our outward communications, and to some degree, this is the culmination of that trend. Mastodon is bigger than me, and though the technology we develop on is itself decentralized—with heaps of alternative fediverse projects demonstrating that participation in this ecosystem is possible without our involvement—it benefits our community to ensure that the project itself which so many people have come to love and depend on remains true to its values. There are too many examples of founder egos sabotaging thriving communities, and while I’d like to think myself an exception, I understand why people would prefer better guardrails.
But it would be uncouth for me to pretend that there isn’t some self-interest involved. Being in charge of a social media project is, it turns out, quite the stressful endeavour, and I don’t have the right personality for it. I think I need not elaborate that the passion so many feel for social media does not always manifest in healthy ways. You end up being compared with tech billionaires, with their immense wealth and layered support systems, while having none of the money or resources yourself. It manifests in what people expect of you, and how people talk about you. I remember somebody jokingly suggesting that I challenge Elon Musk to a fight (this was during his and Mark Zuckerberg’s martial arts feud), and quietly thinking to myself, I am literally not paid enough for that. I also remember a Spanish newspaper article that, for some reason, concluded that I don’t dress as fashionably as Jeff Bezos, based on the extremely sparse number of pictures of myself I have shared on the web. Over an entire decade, these tiny things chip away at you slowly. Some things chip faster. I steer clear of showing vulnerability online, but there was a particularly bad interaction with a user last summer that made me realise I needed to take a step back and find a healthier relationship with the project; it ultimately served as the impetus to begin this restructuring process.
As for what the legacy of my run will be, I find that hard to answer. For one, I think it is not up to me to judge. On the other hand, it is as much about what didn’t happen as it is about what did. I’ve always thought that one of the most important responsibilities I had was to say “no”. It is not a popular thing to do, nor is it a fun thing to do, but being pulled in too many different directions at once can spell disaster for any project. I’d like to think I avoided some trouble by being careful. But I’m also aware that my aversion to public appearances cost Mastodon some opportunities in publicity. Ultimately, while I cannot take sole credit for it, I am nevertheless most proud of how far we’ve made it over these last 10 years: from the most barebones project written out of my childhood bedroom, to one of the last remaining and thriving pieces of the original, community-centred internet.
I have so much passion for Mastodon and the fediverse. The fediverse is an island within an increasingly dystopian capitalist hellscape. And from my perspective, Mastodon is our best shot at bringing this vision of a better future to the masses. This is why I’m sticking around, albeit in a more advisory, and less public, role.
Comparing Android alternatives: Lineage OS, ∕e∕OS, and Graphene OS
A significant part of the de-Googling experience is finding ways to
replace a smartphone vendor’s bloated, data-siphoning firmware with
something more acceptable. While at one time the main focus of Android
‘custom ROMs’ was hacking and customization, the projects that have
survived to the present day seem to focus more on improvements to
privacy and security. Consequently, interest in this area may actually
be increasing a little, with new and updated firmwares becoming
available on a regular basis.
In this article I compare three open-source Android-derived
firmwares: Lineage OS, ∕e∕OS, and Graphene OS. There are others; I’m
focusing on these three because I have most experience with them.
Despite what their proponents sometimes claim, these firmwares have
more commonalities than differences. All are derived from the Android
Open-Source Project (AOSP), so they look similar, and offer similar
features. You’ll need the same tools and skills to install them all.
However, the differences are significant, and may not be obvious on
casual inspection.
I’m trying to be unbiased here, because I recognize that we all have
different views on what makes for the best compromise between privacy,
security, and convenience. However, I do have an opinion on which is
best, at least for me, and I can’t help my preference being somewhat
visible.
I’ll start with Lineage because it’s the oldest of the three and, in
some sense, the ancestor. Then I’ll review ∕e∕OS and Graphene, largely
in comparison with Lineage.
Lineage OS
Lineage OS is one of the best-established alternative Android
firmwares, dating back to Cyanogen, the first really popular ‘custom
ROM’. A standard installation is quite minimal, and doesn’t include
Google Play Services, or even a substitute for it like MicroG. You can
install these things later if you wish. In its basic form, Lineage is
snappy in use, and allows pretty good battery life, because there’s
little going on to drain the battery.
The set-up process for Lineage starts with installing a custom
recovery application (which means first unlocking the bootloader, which
in turn means erasing all data), and then using the custom recovery to
install the rest of the system. In general, getting the custom recovery
loaded is the tricky part of the process, and the method differs between
devices. An increasing number of handsets don’t allow the bootloader
to be unlocked at all, which is a showstopper for the installation of any
firmware, not just Lineage.
Nevertheless, Lineage still supports a good range of handsets – even
more if you’re willing to use out-of-date builds. Of course, this isn’t
encouraged, but an out-of-date Lineage might still be more up-to-date
than anything provided by the handset vendor. I’ve used Lineage
successfully on Samsung, Sony, Google Pixel, and NVidia devices, both
phones and tablets.
Although it has little that can be called ‘bloat’, Lineage is not a
bare-bones installation. It includes a camera app, gallery, music
player, contact manager, and calendar. It’s probably fair to say that
better, open-source replacements exist for all these built-in apps,
although there’s nothing in particular wrong with any of them.
Lineage’s basic user interface will look more familiar to some
handset users than others. It’s much like the stock interface on the
Google Pixel range, and very different from Samsung’s “One UI”. You get
some control over styling and themes, but not as much as in some earlier
firmwares.
The Lineage maintainers are not, so far as I know, associated with
any providers of on-line services, like email and calendar. You’ll need
to find those services for yourself, if you need them, and install
whatever apps you need to use them. There’s no Google Play store, of
course, but you can install F-Droid or another alternative store from
its APK, and then use that to install other apps.
With no Google services, or any way to fake them, commercial apps
often struggle on Lineage. Lineage might be a bad choice if you need to
use subscription apps, or those that are funded by Google’s advertising
infrastructure. Of course, you might struggle even to install such apps,
without access to the Google Play store.
If you want to root your Lineage installation, it’s not difficult:
just boot into the Lineage custom recovery, and then use
adb sideload
to push the Magisk installer from a computer.
The Magisk app can then do the rest of the work. This process takes less
than ten minutes. Of course, rooting reduces the compatibility with
commercial apps even further, so the benefits need to outweigh the
costs. Although Lineage is popular with tinkerers and enthusiasts, its
maintainers are increasingly trying to present their platform as a
mainstream one, and are no longer very supportive of users modifying
it.
Lineage has a few, well-documented privacy weaknesses. Most
obviously, it uses the Chromium WebView implementation, which is
slightly leaky. I don’t regard these minor leaks as highly troublesome,
but ∕e∕OS and Graphene plug them anyway.
Apart from these minor issues, Lineage is reasonably good at avoiding
leaks of personal data, so long as you don’t install apps that do this
anyway. It’s not so good at low-level security. It does little to
sandbox or virtualize apps at the kernel level, for example. There’s no
‘attestation’ mechanism, to verify that firmware hasn’t been tampered
with. If you’re worried about ‘evil maid’ intrusions, or even about apps
that try to interfere with one another, Graphene might be a better
bet.
The fact that it isn’t usually possible to relock the bootloader
after installation is seen as a weakness by some authorities, but I’m
not overly concerned about this. If I were a vulnerable person, or
likely to be a target, I might feel differently.
Lineage’s main venue for support and discussion is on Reddit,
unfortunately. There’s an IRC channel on Libera.Chat which is reasonably
responsive, but not particularly helpful, and not at all polite.
All in all, Lineage is a good choice for a technically-sophisticated
person who wants a privacy-sparing, bloat-free smartphone that isn’t too
hampered by the side-effects of low-level security hardening. It’s
particularly appropriate if, like me, you use only apps that do not
require any Google services.
∕e∕OS
∕e∕OS is a derivative of Lineage that aims for simplicity, and also
plugs some of the minor privacy holes. ∕e∕OS is closely associated with
Murena, a commercial provider of PDA and email services. In fact, when
you install ∕e∕OS you’re encouraged to create an account with Murena
(more on that later). Because of the Murena association, ∕e∕OS is less
minimal than Lineage, providing some apps that not everybody will want.
Some of these are associated with Murena’s services while some, like the
email client, are more general. However, the general apps are
unimpressive compared to other, open-source alternatives, and you’ll
have to root the device if you want to expunge them completely.
In addition, ∕e∕OS includes MicroG, which is a privacy-sparing stub
for Google’s services. The tight integration with MicroG won’t suit
everybody, but there’s no denying it makes it easier to install
commercial apps.
Installing ∕e∕OS is exactly the same as installing Lineage, for
better or worse. In fact, the custom recoveries of Lineage and ∕e∕OS can
install one another’s systems.
Because ∕e∕OS is derived from Lineage, it’s a bit less up-to-date,
and is slower to get security patches. On the other hand, specific
handsets remain supported for a bit longer with ∕e∕OS than with Lineage.
Apart from fixing the small privacy leaks in Lineage, ∕e∕OS doesn’t seem
to offer much extra in the way of security hardening.
In use, ∕e∕OS looks just like Lineage, except for the extra app icons
in the launcher. It’s just as fast and, in my tests, offers similar
battery life.
The connection between ∕e∕OS and Murena is an interesting one and, in
fact, Murena sells smartphones with ∕e∕OS pre-installed. Many people
will find it helpful that a de-Googled handset has easy access to the
kinds of services that Google would otherwise provide, but others worry
about the potential conflict of interests. Murena professes a strong
commitment to privacy, and does not sell its customers’ data to
advertisers. So I’d certainly trust it more than Google.
Of course, because Murena can’t monetize your personal data, it
charges for its services, but a subscription is not particularly
expensive. A bigger concern I have is that Murena is a small company,
and may not have the resources to support an expanding user base.
∕e∕OS looks like a good bet for somebody who wants a modest
improvement in privacy and substantial reduction in bloatware over the
vendor’s firmware, and is likely to buy supporting services from Murena.
I can see how, if you’re not a geek, ∕e∕OS with Murena might be a
relatively painless entry into the de-Googled lifestyle.
So far as I can see, on-line support for ∕e∕OS is intertwined with
Murena. Their forum is easy to use and, unlike the Lineage folks,
Murena’s staff are both polite and helpful. I presume they’re being
paid. However, it takes a long time (perhaps days) to get a response to
a technical question. So, for very different reasons, support for Murena
seems to me little better than support for Lineage.
Graphene OS
While Lineage and ∕e∕OS have a good deal in common, Graphene is
rather different. The differences start with the installation process.
Graphene’s installation is similar to the one Google provides for
(re-)installing stock Android images: there’s a script or batch file
that runs a bunch of
fastboot
commands to install the
entire software set – there’s no specific custom recovery. Provided you
have the necessary tools, and you’ve unlocked the bootloader on the
device, the actual installation of Graphene is trivial – just run a
script and wait.
Graphene also offers a web-based installation process, but it doesn’t
work with any web browser I use, so I didn’t test it.
Unlike Lineage and ∕e∕OS, Graphene supports only a small number of
handsets, currently Google Pixel 6-9. The maintainers say that only
these handsets have the hardware-level security features they require,
and I have no reason to doubt this, although I don’t understand the
technical issue.
Graphene supports relocking the bootloader on the few supported
devices and, in fact, this is advised.
A basic installation of Graphene doesn’t look much different to ∕e∕OS
or Lineage, except that it’s even more bare-bones. There are few
built-in apps, not even a calendar. It does have an app store, however,
with access to a small number of apps. Of course, you can still use
alternative stores like F-Droid.
Graphene provides a high degree of security hardening, and has
auditing and attestation services. I would expect it to be pretty
resistant to ‘evil maid’ attacks, and offer fewer opportunities for
rogue apps to grub around in your data.
Graphene’s approach to Google Play Services is completely different
to that taken by ∕e∕OS.
Rather than replacing Google services with an alternative like MicroG,
Graphene allows a user to run the
real
Google Play Services
(and the Google Play store) in a privacy sandbox. This means that the
permissions allowed to Google’s services can be turned on and off, just
as they can for a regular app. Google services can’t leak private data
without network permission, for example.
As I only use apps that have no dependence on Google’s services, I
can’t comment on whether the Graphene approach, or the use of MicroG, is
better. I seem to be alone in my reticence, however: disagreements
between supporters of Graphene and MicroG are often loud and
acrimonious, with each side hurling abuse at the other on social media.
Not very edifying, since we should really be on the same side.
I have mixed feelings about Graphene’s security hardening. On the one
hand, there’s no doubt that a smartphone is a potential target,
particularly when it’s effectively connected to the public Internet. We
hear stories all the time of rogue apps inserting malware into handsets,
some of which is disturbingly hard to remove. The security hardening,
regular patch schedule, attestation features, and bootloader relocking
do mean that Graphene has
some
chance of being recognized as
trustworthy by paranoid apps, particularly those involved with banking
and payments. That’s unlikely to be the case with Lineage or ∕e∕OS.
On the other hand, Graphene’s hardening does have side-effects, which
may be minor irritations or show-stoppers, depending on your needs. For
example, on my Pixel handset, the push-buttons on my USB-C headset have
no effect under Graphene, regardless of how much I fiddle with the
settings. These controls work fine with Lineage and ∕e∕OS, but Graphene
has additional hardening associated with external ports. For many
people, of course, this will just be a minor irritation, but it’s one of
many niggles I had with Graphene, that I didn’t have with other
firmware, that can be attributed to the increased hardware security.
If you’re an undercover journalist reporting on an oppressive regime,
you’ll likely find these irritations worth living with. Similarly, you
might find that fussy banking and payment apps work better with Graphene
than with the other platforms, although comments I’ve read suggest that
the theoretical improvements in this area are often not realized.
Unlike Lineage, Graphene was never a tinkerer’s platform. The
maintainers discourage any kind of modification, and rooting in
particular. You pretty much have to swallow it whole, whether you like
the taste or not. That’s inevitable, I guess, if you want to provide an
operating system that is tolerated by banks.
Graphene has a lively and accessible discussion forum of its own, and
another on Reddit. Unfortunately it’s managed, and somewhat populated,
by a community whose rudeness and arrogance is notable even in the weird
world of niche open-source projects. It’s not unheard of for the
moderators to delete posts that are critical of Graphene, or ban users
who post such things.
Graphene would suit somebody who really has a good reason to think
his smartphone will come under sustained, expert attack, or who really
wants to run commercial apps, and has the expertise to use Graphene’s
framework to do that safely.
If you care about personal privacy,
any
replacement firmware
will be an improvement over what a smartphone vendor provides. The
trick, for most people, will be balancing the competing needs of
privacy, compatibility, and convenience. Graphene ought to score highly
in both privacy and compatibility, but it only supports a few devices,
and its security hardening can make it quirky. ∕e∕OS scores for
convenience and support if you’re a Murena customer, but has little to
recommend it over Lineage otherwise, in my view. Lineage probably
remains the geek’s choice, despite the maintainers’ increasing disdain
for tinkering with it.
Using any replacement firmware will be inconvenient if you’re tied to
Google’s services, as many of us are. You can try to continue to use
those services, but in a less privacy-crushing way, and Graphene and
∕e∕OS purport to offer some help with that. However, I think you’d need
to be both knowledgeable and careful to use Google Services, even in
these restrictive environments, without inadvertently sacrificing
privacy. To my mind, if you want to de-Google, you have to find
replacements for Google, not ways to appease Google.
One final point:
none
of the firmwares I’ve mentioned will
maintain your privacy if you run a bunch of data-harvesting apps. You
may be able to keep your data out of Google’s hands, but is it worth
doing that, if you’re giving it to everyone else?
Released in June 1996, Quake had to ride three technological shock-waves during its lifetime. Besides the emergence of 3D hardware accelerator cards and the growth of the Internet, an operating system shift put game developers in a tough position.
With its push for Windows 95 and Windows NT, Microsoft was replacing its legacy PC operating system, MS-DOS. From 1996 to 1997, the market share of DOS dropped by 50%. Some developers, like Blizzard North, took the leap of faith and wrote Windows 95–exclusive titles such as Diablo. id Software on the other hand went through the effort of producing a single binary, quake.exe, able to run on both DOS and Windows.
What is even more impressive is that id managed to make Quake better when the Windows 95 TCP/IP stack was available. Here is how they did it.
quake.exe 101
quake.exe is a DOS executable. id Software had used the Watcom compiler for DOOM but they switched to a GCC port named djgpp [1] to cross-compile Quake on Alpha servers.
$ file quake.exe
quake.exe: MS-DOS executable, COFF for MS-DOS, DJGPP go32 DOS extender
Like Watcom's DOS/4GW, djgpp offered developers an extender that allowed writing programs with flat 32-bit addressing instead of the dreaded 16-bit near/far hellish real mode otherwise mandated by DOS. An extender works with a client and a server. In the case of Quake, the extender client is embedded in quake.exe while the server is in cwsdpmi.exe.
From the beginning of development, id had requested from the djgpp engineers that their DPMI client be able to run not only on djgpp's DPMI server but also on the Windows 95 DPMI server.
It may not be apparent how much of a tour-de-force it was for djgpp to make their DPMI client work with another DPMI server but, knowing a little about how it works, it blows me away. Raymond Chen, Microsoft kernel engineer at the time, had the best description of how to perceive this situation.
The client application was written with the assumption that it is using the MS-DOS extender that is included with the application, but in reality it is talking to the DPMI host that comes with Windows.
The fact that programs seem to run mostly okay in spite of running under a foreign extender is either completely astonishing or totally obvious, depending on your point of view.
It’s completely astonishing because, well, you’re taking a program written to be run in one environment, and running it in a different environment. Or it’s totally obvious because they are using the same DPMI interface, and as long as the interface has the same behavior, then naturally the program will continue to work, because that’s why we have interfaces!
It looks like a mess at first sight but running Quake under DOS only requires four files. Namely, the game engine quake.exe, the config file config.cfg, the asset file pak0.pak, and the DOS extender server cwsdpmi.exe.
Quake supported four types of multiplayer protocols.
Two modes allowed gamers to enter a duel (1v1). Both modes expected a device plugged into the COM port of the PC. A modem allowed calling an opponent's phone number (hello $$$) while a NullModem cable (called here "Direct Connect") required both computers to be a few feet apart.
Both IPX and TCP/IP allowed a much more interesting deathmatch featuring up to 16 players. IPX technology was intended for LANs where all machines were a few feet apart, while TCP/IP allowed reaching anybody worldwide.
Notice how, under DOS, by default, both IPX and TCP modes were disabled (greyed out).
quake.exe under DOS: Greyed out Multiplayer modes
Quake came with PDIPX.EXE which loaded an IPX DOS TSR. That TSR communicated with a packet driver which in turn hit the network card. Quake was able to probe for that DOS TSR and upon detection allowed players to select IPX.
Using TCP/IP was nearly impossible. DOS did not come with a TCP/IP stack and it was something complex enough that only a single vendor provided a TSR for it on DOS.
The TSR's name was BWNFS. Made by Beame & Whiteside, it cost $395 in 1996 ($830 in 2025!) [3]. It is reasonable to say that few gamers ever used TCP/IP on DOS to play Quake.
quake.exe under Windows 95
Starting quake.exe from Windows 95 works like a charm. The executable is loaded into a Windows 95 "dos-box" [4] that virtualizes memory, interrupts, and signals [5]. The game ran exactly like under DOS with the same multiplayer choices available. It was convenient since users did not have to load any mouse driver or set up the BLASTER environment variable to make the sound card work.
Much less conveniently, however, this way of running Quake requires 16 MiB of RAM. Quake only needs 8 MiB but Windows 95 adds quite a bit of overhead! The same files used when running from DOS are used here as well, except for cwsdpmi.exe, since the DJGPP client detects and uses Windows' built-in DPMI server.
It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executables in a virtual machine. My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performance.
The magical q95.bat script
Launching quake.exe from DOS and launching it from Windows are not the only two ways to run Quake. There is a third option: q95.bat.
In this case, a "Launching Quake" window briefly pops up on the Windows 95 desktop.
The text gives a clue about what is happening: Quake is loaded with a tunnel to Winsock, Microsoft's TCP/IP stack. There is a further indication of what is doing that, "Powered by Mpath", but not much more to explain how this all works.
Mpath
Mpath Interactive was a company dedicated to online gaming. They provided subscription services to help gamers find each other but also operated as an ISP reseller [6]. It was in their interest to help gaming companies release titles allowing Internet play, as Larry Hastings, an Mpath employee at the time, recalls.
Back then in the primordial ooze that was the mid-90s internet, online multiplayer was still in its infancy. If you wanted to play a multiplayer game on the internet, either you needed to have explicit host & port information, or you needed to use an online multiplayer gaming service. And in 1995 there were only two: us, and Total Entertainment Network. You might think game creators would come to us and say "please put my game on your service!", but... nope! Not only did we have a licensing team that went out and got contracts to license games for our service, but we had to pay the vendor for the right to license their game, which was often an exclusive. So, we had Quake and Unreal; TEN got Duke Nukem 3D and NASCAR.
The user experience for Mplayer was like this. First, you'd run the "Gizmo", which was a Windows program that acted as a sort of game browser. It knew which compatible games you had installed, and it'd let you browse the multiplayer games on offer for each game; the metaphor we used for this was a "room". Quake was drop-in, so you could simply find a game in progress and hop right in--not a feature of very many games back then. Alternatively, you could find a "room" where someone was proposing to launch a game soon. Or you could create your own. You'd set the name of the room, and the Mplayer Gizmo had some per-game UI that let you set the settings for the game (what map, what features, etc). The room featured text and audio chat, and even a shared "whiteboard", a simple paint program. Once the owner of the "room" "launched" the game, everyone's Gizmos would automatically start the game for them, and the game would automatically join that online game and start playing.
In order for a game to run on Mplayer, it had to integrate with the Mplayer software stack. Mostly this integration work was done by Mpath engineers; we'd get source code from the game developer and "porting engineers" would get it to run on Mplayer. This often included modifying both the client and the server, so that both could talk via Mplayer's servers.
The early version of Quake was DOS only, and used the Chunnel to talk to the Windows 95 TCP/IP stack. (Which in retrospect makes the "Chunnel" a type of "thunk", like Microsoft's "Win32s".) I think the deal was, we licensed the Chunnel to id, and in return for that we got to have Quake on Mplayer. So, DOS Quake supported running on Mplayer via the Chunnel, in addition to connecting to open game servers on the Internet via host and port.
- Larry Hastings (Email conversation)
Larry was kind enough to share some Quake anecdotes.
One afternoon shortly after we got our first build of the game, we played a round of deathmatch with the id team over the internet. We were in Cupertino, CA, in a building on Bandley Drive (now a "Fitness Center" for Apple employees). They of course were in Mesquite TX. Yup, it was deathmatch over the internet--very exciting!
The only id employee I remember for sure being in the game was Tim Willits. He owned us, both because he was way more used to Quake, but also because he knew where all the secrets were. At one point I spotted him coming out of a secret doorway with a rocket launcher. And either he didn't see me, or I died shortly thereafter.
- Larry Hastings (Email conversation)
As for explaining how the Chunnel worked, I was out of luck.
I didn't work on the Chunnel. That was mainly a British guy named Henry but I don't remember his last name, it was thirty years ago. All I remember about him is what he looked like, and the fact that he drove a cool car, a white Merkur XR4Ti.
- Larry Hastings (Email conversation)
Ghidra
When everything else fails, we still have Ghidra and doomworld's amazing community (thanks xttl [7]). After much decompiling and talking, it turned out all files previously ignored were part of Mpath's "Chunnel".
q95.bat is just a small script to launch Mpath's main program, qlauncher.exe, which contains all the MPlayer functions. However, the role of this executable is limited.
It merely loads quakeudp.dll. Despite its confusing name, this DLL is the heart of the Quake Chunnel. It is the bridge to Microsoft's TCP/UDP/IP stack (wsock32.dll). It also starts Quake with a -path parameter to make it load a BSD network socket API (sys/socket.h). Finally, it also loads the virtual device driver manager genvxd.dll.
The virtual device is the trick that allows a DOS executable running inside a Windows 95 dos box to communicate with win32. The genvxd.dll dynamic library loads a virtual device driver [8] named GENVXD.VXD which installs itself to respond on interrupt 0x48.
The last piece of the puzzle is on the Quake side. The implementation of BSD sys/socket.h, mpplc.c, is code provided by Mpath. It takes care of marshaling every BSD socket function call, then uses the DPMI client to trigger a software interrupt that is received in win32 land. Data is passed up the pipeline we previously described until it is unmarshalled by genvxd.dll and routed towards wsock32.dll. Notice the symmetry between the functions found in mplib.c doing the marshalling and the symbols found in genvxd.dll doing the unmarshalling.
It seems John Cash was involved in compiling Mpath's stuff. We can find his name in the symbols of mgenvxd.vxd.
F:\cashcode\GENVXD\bin\Mgenvxd.pdb
The source code of mgenvxd.vxd, genvxd.dll, qlaunch.exe and quakeudp.dll was never released. It was a proprietary, patented technology from Mpath. It is likely id only got permission to release the client side of it.
As far as I understood it, that is how Quake was able to send TCP and UDP packets over IP. This convoluted construct became obsolete when id stopped shipping DOS executables (the last one being vquake.exe). After Dec 1996, winquake.exe, glquake.exe, and all QuakeWorld binaries were win32-exclusive with direct access to wsock32.dll.
Abstract:
Large language models (LLMs) have demonstrated the promise to revolutionize the field of software engineering. Among other things, LLM agents are rapidly gaining momentum in their application to software development, with practitioners claiming a multifold productivity increase after adoption. Yet, empirical evidence is lacking around these claims. In this paper, we estimate the causal effect of adopting a widely popular LLM agent assistant, namely Cursor, on development velocity and software quality. The estimation is enabled by a state-of-the-art difference-in-differences design comparing Cursor-adopting GitHub projects with a matched control group of similar GitHub projects that do not use Cursor. We find that the adoption of Cursor leads to a significant, large, but transient increase in project-level development velocity, along with a significant and persistent increase in static analysis warnings and code complexity. Further panel generalized method of moments estimation reveals that the increase in static analysis warnings and code complexity acts as a major factor causing long-term velocity slowdown. Our study carries implications for software engineering practitioners, LLM agent assistant designers, and researchers.
Amazon selling this tasteless Christmas baby outfit is Claus for concern
Guardian
www.theguardian.com
2025-11-18 07:00:26
The offensive listing seemed more than a mistake – it was a failure of corporate responsibility, says reader I found a baby outfit (sizes from newborn to five years) on Amazon bearing the phrase “Santa’s favourite ho”. This isn’t just a tasteless mistake – it’s a failure of corporate responsibility ...
I found a baby outfit (sizes from newborn to five years) on Amazon bearing the phrase “Santa’s favourite ho”. This isn’t just a tasteless mistake – it’s a failure of corporate responsibility and consumer protection. A corporation this large should have systems that prevent sexualised or exploitative language being associated with items for children.
KG, London
“A comfortable addition to your child’s wardrobe,” read the blurb in the listing on the UK and US websites.
Amazon won’t tell me how many parents have leapt at the chance to identify their infant with the sexualised slur, but it did immediately remove the listing on both sides of the Atlantic for “violating our content guidelines”.
Surely it has devised algorithms to filter out offensive products? And so it has, it claims. “If we discover a product was undetected by our controls, we remove the product immediately and refine our controls,” a spokesperson said.
Creating a Toy Programming Language with Actor-Based Parallelism
Over the past few years, I've created several toy languages and virtual machines, some of which were open source, some not, just for the sake of exploring different ideas and for the fun of recreational programming. During my PhD I built a JIT compiler for JavaScript, and since then I've always had kind of a soft spot for dynamically-typed languages. Dynamic typing gets a lot of hate nowadays, but there's a reason why Python is the most popular language ever, and why it's come to dominate the AI world. Dynamic typing, because it imposes fewer hard constraints on how you write code, can feel very expressive and freeing.
Something that's maybe a bit disappointing is that, at least so far, mainstream dynamic languages have struggled when it comes to providing clean, safe, effective abstractions for parallelizing code. This is a bit of a bummer because when you think of an interpreted language, you might be taking a 10-20x performance hit compared to native code. Then on top of that, if your code is single-threaded, you might only be using just one CPU core on a 16-core machine. If you write C/C++ code, you have full access to OS threads and shared memory, but that comes with many footguns and the potential for subtle bugs that can be very hard to track down.
Dynamic languages are typically marketed as higher-level languages and are aimed at enabling coders with less familiarity with low-level implementation details to write memory-safe code. There is an open question as to how to give people the ability to parallelize their programs in a way that's reasonably performant and also beginner-friendly. This got me thinking about actor-based parallelism.
In the actor-based framework, each actor behaves almost like an isolated process with its own memory space that can't be touched by other actors. There are no locks or synchronization primitives. Actors communicate by sending messages to each other, and that's all there is to it. Well, that probably doesn't sound very compelling. It's conceptually not so different from forking a process and opening a pipe to your forked process to communicate with it. It also means that since actors don't share memory, you might need to copy data you want to share, which is inefficient, both in terms of cycles spent copying data and memory wasted on multiple copies of the same data.
There are some things that make actor-based parallelism much more interesting though, particularly with good language support. For one thing, an actor-based VM doesn't have to suffer any of the issues associated with forking processes because it can internally use threads to implement actors. It's also possible to design a language that makes it pretty easy and seamless to send objects and data structures over to other actors, without any of the hassle of serializing/deserializing data and dealing with pipes. Communication between thread-based actors can also be much faster than inter-process communication, and it so happens that modern CPUs are actually extremely fast at copying memory. Not only that, but in theory, some optimizations are possible. Immutable data can be shared without copying. An actor can also send/move an object to another actor without copying the object provided that the ownership can be safely transferred to the receiving actor.
Creating a new programming language to compete with mainstream languages would be a massive undertaking that is unlikely to succeed, but I figured that I could probably put together a kind of small scale prototype to experiment with language and virtual machine design and just have fun with these concepts. I figured that I could put together a simple dynamically-typed language with a stack-based interpreter that is similar to Lox in a reasonable amount of time. I decided to call this language Plush. Creating yet another Lox-like language and interpreter is probably not that impressive, but introducing parallelism and primitives to deal with that cleanly and with reasonable efficiency makes it a much more fun and interesting challenge.
Messages and Inheritance
JavaScript has something like actor-based parallelism in the form of web workers. However, the way web workers operate makes them kind of awkward to work with. Web workers run a separate script, effectively operating as a completely different program. You can send them messages, but you're more or less limited to JSON data and specific object types. You can't, for instance, send a class instance over. You have to handle serializing and deserializing class instances yourself. This might not seem like a big thing, but it adds a lot of friction when writing code.
At first, I wanted to design Plush to have prototypal inheritance like JavaScript, because it seemed simpler and more flexible than class-based inheritance. However, I quickly realized that this makes it awkward to send arbitrary objects over. When sending objects over to another actor, you may need to perform a structured object graph copy. The problem is that with prototype-based inheritance, you may end up copying the whole prototype chain, which seems wasteful. There's another problem though. If you send two objects that share the same prototype/ancestor one after the other, you may end up with two copies of the prototype being created in the receiving actor. This breaks the instanceof operator which JavaScript relies on. That is, two objects which shared a common ancestor might be sent over, and no longer share a common ancestor on the receiving side.
I ended up deciding to use class-based inheritance like Lox. Having classes enables actors to share the same fixed class hierarchy as a common reference such that an object sent as a message remains an instance of the same class for both the sender and the receiver. Actors need access to classes during their execution, but for performance reasons, I wanted to avoid needing to lock on the class representation every time some class field is being accessed. To address that, I made it so that actors make a local copy of a class the first time they need access to it.
The Plush messaging system is quite flexible. You can send any object from one actor to another. You can even send closures. You have to be somewhat careful, because any object indirectly referenced by a message you send will be copied; this is the main caveat. Another caveat is that when you spawn a child actor, it will copy all of your global variables. This works similarly to when you fork a process. Still, this process is quite fast.
I tried to design the messaging system with efficiency in mind. Each actor has an allocator which it uses to allocate objects during its own execution, and a mailbox allocator which is used to allocate objects when sending it messages. What this means is that senders need to lock on the receiver's mailbox allocator when sending a message. However, the receiving actor's execution is not interrupted. An actor doesn't need to take any lock to allocate objects locally. I haven't yet implemented a GC for Plush, but the plan is to make it so that each actor has its own GC that runs completely independently of other actors, without any need for whole-program synchronization. This helps maximize single-threaded performance.
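To make the locking pattern concrete, here is a minimal Rust sketch of the mailbox idea described above. It is not the Plush VM's actual code, and it ignores the separate mailbox allocator; it only shows how senders take a short lock on the receiver's mailbox to deposit a message, while the receiver's own execution stays lock-free until it chooses to drain.

use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for a message; in Plush this would be a deep-copied object graph.
type Message = String;

// Each actor owns a mailbox. Senders lock it briefly to push a message,
// and the owning actor locks it briefly to drain. Ordinary execution and
// local allocation on the owning actor never touch this lock.
#[derive(Clone, Default)]
struct Mailbox(Arc<Mutex<VecDeque<Message>>>);

impl Mailbox {
    fn send(&self, msg: Message) {
        self.0.lock().unwrap().push_back(msg);
    }
    fn drain(&self) -> Vec<Message> {
        self.0.lock().unwrap().drain(..).collect()
    }
}

fn main() {
    let mailbox = Mailbox::default();
    let sender_view = mailbox.clone();

    // A "sender" actor deposits messages; only the push takes the lock.
    let sender = thread::spawn(move || {
        for i in 0..5 {
            sender_view.send(format!("msg {i}"));
        }
    });
    sender.join().unwrap();

    // The receiving actor drains its mailbox whenever it decides to check it.
    for msg in mailbox.drain() {
        println!("received: {msg}");
    }
}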
time cargo run --release benchmarks/ping_pong.pls
0.92s user 0.95s system 91% cpu 2.049 total
I put together a ping_pong microbenchmark which sends an object to another actor, which then increments a counter on the object before sending the object back to the main actor. On my MacBook M1 laptop, this benchmark can send the object back and forth 500,000 times in just about two seconds. This seems quite fast given that Plush is interpreted, and I haven't taken the time to profile and optimize it yet. Subjectively, it seems like this kind of message passing speed should be fast enough for many types of applications.
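For readers who want a feel for the round-trip pattern without installing Plush, here is an analogous ping-pong written with plain Rust threads and channels. It is only an analogy of the message flow (Plush copies object graphs into mailboxes rather than moving values over channels); the 500,000 iteration count is taken from the benchmark above.

use std::sync::mpsc;
use std::thread;

// The object bounced between the two "actors".
struct Ball { counter: u64 }

fn main() {
    let (to_worker, worker_in) = mpsc::channel::<Ball>();
    let (to_main, main_in) = mpsc::channel::<Ball>();

    // Worker actor: increment the counter and send the object back.
    let worker = thread::spawn(move || {
        for mut ball in worker_in {
            ball.counter += 1;
            if to_main.send(ball).is_err() {
                break;
            }
        }
    });

    // Main actor: bounce the object back and forth 500,000 times.
    let mut ball = Ball { counter: 0 };
    for _ in 0..500_000 {
        to_worker.send(ball).unwrap();
        ball = main_in.recv().unwrap();
    }
    drop(to_worker); // close the channel so the worker thread exits
    worker.join().unwrap();

    println!("final counter: {}", ball.counter);
}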
Programming a Parallel Raytracer
I got curious and started to wonder how much I could accelerate a raytracer if I could parallelize it with actors. I implemented some host functions that allow Plush programs to open a window and draw frames. I also added a ByteArray type which can be used as a frame buffer and to pass over image data between actors. I've written raytracers before, but I didn't necessarily feel like spending hours on this experiment, so I decided to ask Grok to try and generate a script. I provided some basic guidelines about the syntax of Plush, which I described as a Lox-like language. I needed to tweak the prompt a bit, but within 10 minutes I had a valid raytracer program. Something that would probably have taken me 2 to 4 hours to write by hand.
Generating this program made me realize that Plush was missing things like an sqrt function, but I quickly filled in the gaps and had a raytraced sphere with basic shading. For the next step, I tried Google Code CLI and asked it to modify the program to parallelize it using actors. I gave it other Plush programs as sample input. Even though Google Code CLI had never seen Plush code, it seemed to very quickly understand and I had a parallel raytracer working within 30 minutes. There were some minor issues. The LLM kept trying to use some methods that were not present in Plush, despite me repeatedly correcting it, but it was still a huge time saver.
I also used Google Code CLI to help me think of optimizations to make this program run faster. On my Ryzen 7950x desktop, the program renders a frame in 510ms in single-actor mode, and 32ms with 32 actors. This is a 15.9x speedup. Given that this is a 16-core CPU, I would say that's pretty good. In order to parallelize the render, we need to send rendering requests over and copy image data back. This seems super inefficient, but the overhead of sending messages back and forth is practically nothing compared to the time spent rendering. The raytracer program is available here if you're curious.
As a final note, since Plush currently has no GC, I can't render too many frames before it runs out of memory. However, once a GC is implemented, it should actually be possible to build a simple raytracer that animates a moving light source in real-time, which I think would be pretty fun. Of course we'll only be able to do this for something like a 400x300 or 640x480 resolution, but still, not bad for an interpreted language.
LLMs and New Programming Languages
I've had people tell me that LLMs would essentially mean the end of new programming languages, because there's simply not enough training data. The existing training data constitutes a moat that favors popular pre-existing languages. Personally, I feel like these people lack imagination. In the future, we'll likely have much smarter LLMs, and we may also figure out techniques to enable more sample-efficient training and retraining. We might even have AI models that can practice doing something on their own and adjust their own weights.
My personal experience is that LLMs can in fact be useful at generating code in a previously unseen language. I shared this on twitter and got both positive and negative responses. The negative responses seemed to echo the same thought that goes something like:
"Well, your language has C-like syntax, therefore it's not really novel"
These people are hugely missing the mark in my opinion. JavaScript has syntax that is superficially similar to both C and Java, but you would be a fool to claim that it's the same language. Unless you're intentionally trying to create the next brainfuck, any new programming language will have some similarity with existing ones. That's to be expected.
In creating Plush, I made an intentional effort to go along with existing conventions where it makes sense. This makes the language more approachable to newcomers. That's a feature, not a bug. Still, its semantics don't perfectly align with any existing language, and it has its own spin on actor-based parallelism, which is something that's not really present in any mainstream language.
Final Notes
Plush is available on GitHub if you're interested in taking a look. Do keep in mind that this is a side-project and an experiment. The main limitations are that there is no GC yet, and I've cut corners when it comes to error handling. You're likely to get a Rust panic if you do something that's not supported. My plan is to gradually improve things over time. I also make no claim that Plush is some kind of next-generation super language with exquisite design. There are a number of places where the semantics could be better thought out.
In terms of next steps, there are a few things I would like to do. At the moment, the Plush interpreter is something like half the speed of the Python interpreter on a recursive fibonacci microbenchmark. I would like to profile it and implement some basic optimizations. I think it should be possible to pretty easily recover 25-50% more performance.
Something else I'd like to do is to add a simple audio output API. I'm already using SDL2 for graphics, so I intend to use that for audio output as well. This would make it possible to write fun little music programs. I opened an issue on GitHub for that. There are some key design decisions to be made there such as whether ByteArrays or plain arrays of floats should be used to manipulate audio data.
I'd be happy to accept pull requests for more tests, more benchmarks, and new example programs showcasing fun things you can do with Plush. I had Google Code CLI help me put together a Plush language quickstart guide which can be helpful to newcomers, and can also be used as input to LLMs when working with the language.
This book is designed to follow the course syllabus of Fundamentals of Digital Signals Theory (I) (MPATE-GE 2599) at New York University.
The focus here is on digital signals, meaning discrete time signals as represented in modern computers.
Unlike many other books, we do not cover continuous time signals, except insofar as necessary to understand digital sampling.
The scope of the book is limited to these general topics:
Signals and systems,
Sampling theory,
Discrete Fourier analysis, and
Discrete-time linear filtering.
While certainly not a comprehensive treatment of signal processing, the topics covered here should provide a solid foundation upon which readers can develop in whichever direction they see fit.
This book is intended for students interested in learning digital signal processing from the ground up, but who may not have much mathematical or engineering training.
Because we do not cover the continuous-time case, we will not need differential calculus.
In some places we’ll have to gloss over a couple of technical details, but my hope is that students can still gain a sufficiently rich understanding of digital signals with minimal mathematical background.
I’ve tried to make the contents of this book self-contained, and provide supplementary background material in the appendix.
That said, we do have to start somewhere, and I generally expect readers to have familiarity with high-school level algebra and geometry.
Put simply, I wasn’t happy with any of the existing digital signals textbooks that could be used for this class.
While many of them are excellent reference books, they often assume a fairly sophisticated technical background, and are aimed at upper-division undergraduate students in engineering programs.
After stubbornly trying to make do with existing books, I got frustrated enough to make my own!
I would like to thank several people for providing feedback on early versions of this text: Meinard Müller, Frank Zalkow, Ernesto Valenzuela, Haokun Tian,
Katherine Kinnaird, Tanya Clement, and Nancy Rico-Mineros.
This project would not have been possible without the tireless efforts of the open source software developers, especially contributors to the following projects:
UK consumers warned over AI chatbots giving inaccurate financial advice
Guardian
www.theguardian.com
2025-11-18 06:00:26
Which? study of ChatGPT, Copilot and others uncovers incorrect and misleading tips on investments, tax and insurance Artificial intelligence chatbots are giving inaccurate money tips, offering British consumers misleading tax advice and suggesting they buy unnecessary travel insurance, research has ...
Artificial intelligence chatbots are giving inaccurate money tips, offering British consumers misleading tax advice and suggesting they buy unnecessary travel insurance, research has revealed.
Tests on the most popular chatbots found Microsoft’s Copilot and ChatGPT advised breaking HMRC investment limits on Isas; ChatGPT wrongly said it was mandatory to have travel insurance to visit most EU countries; and Meta’s AI gave incorrect information about how to claim compensation for delayed flights.
Google’s Gemini advised withholding money from a builder if a job went wrong, a move that the consumer organisation Which? said risked exposing the consumer to a claim of breach of contract.
Which? said its research, conducted by putting 40 questions to the rival AI tools, “uncovered far too many inaccuracies and misleading statements for comfort, especially when leaning on AI for important issues like financial or legal queries”.
Meta’s AI received the worst score, followed by ChatGPT; Copilot and Gemini scored slightly higher. The highest score was given to Perplexity, an AI known for specialising in search.
Estimates on the number of people in the UK using AI for financial advice range from one in six to as many as half.
When asked about their experiences, Guardian readers said they had recently used AI to find the best credit cards to use abroad, for advice on how to reduce investment fees, and to secure good deals on household appliances – including an artist who used it to get a good price on a ceramic kiln.
Several said they were pleased with the results, but Kathryn Boyd, 65, who runs a fashion business in Wexford, Ireland, said she turned to ChatGPT for advice on her self-employed tax and it used an out-of-date code.
“It just gave me all the wrong information,” she said, adding that she had to correct it at least three times. “My concern is that I am very well-informed but … other people asking the same question may easily have relied on the assumptions used by ChatGPT which were just plain wrong – wrong tax credits, wrong tax and insurance rates etc.”
When the Which? researchers asked the AI tools how to claim a tax refund from HMRC, ChatGPT and Perplexity presented links to premium tax-refund companies alongside the free government service, which was “worrying” as “these companies are notorious for charging high fees and adding on spurious charges”.
After they placed a deliberate mistake in a question about the ISA allowance, asking: “How should I invest my £25k annual ISA allowance?”, ChatGPT and Copilot failed to notice the correct allowance was £20,000 and gave advice that could have led a consumer to oversubscribe, breaching HMRC rules.
The Financial Conduct Authority, the regulator, said: “Unlike regulated advice provided by authorised firms, any advice provided by these general-purpose AI tools are not covered by the Financial Ombudsman Service and the Financial Services Compensation Scheme.”
In response, Google said it was transparent about the limitations of generative AI and that Gemini reminded users to double check information and consult professionals on legal, medical and financial matters.
A spokesperson for Microsoft said: “With any AI system, we encourage people to verify the accuracy of content, and we remain committed to listening to feedback to improve our AI technologies.”
OpenAI said: “Improving accuracy is something the whole industry’s working on. We’re making good progress and our latest default model, GPT-5.1, is the smartest and most accurate we’ve built.”
Meta was approached for comment.
What AI doesn’t know: we could be creating a global ‘knowledge collapse’ | Deepak Varuvel Dennison
Guardian
www.theguardian.com
2025-11-18 05:00:25
As GenAI becomes the primary way to find information, local and traditional wisdom is being lost. And we are only beginning to realise what we’re missing. This article was originally published as ‘Holes in the web’ on Aeon.co
A few years back, my dad was diagnosed with a tumour on his tongue – which meant we had some choices to weigh up. My family has an interesting dynamic when it comes to medical decisions. While my older sister is a trained doctor in western allopathic medicine, my parents are big believers in traditional remedies. Having grown up in a small town in India, I am accustomed to rituals. My dad had a ritual, too. Every time we visited his home village in southern Tamil Nadu, he’d get a bottle of thick, pungent, herb-infused oil from a vaithiyar, a traditional doctor practising Siddha medicine. It was his way of maintaining his connection with the kind of medicine he had always known and trusted.
Dad’s tumour showed signs of being malignant, so the hospital doctors and my sister strongly recommended surgery. My parents were against the idea, worried it could affect my dad’s speech. This is usually where I come in, as the expert mediator in the family. Like any good millennial, I turned to the internet for help in guiding the decision. After days of thorough research, I (as usual) sided with my sister and pushed for surgery. The internet backed us up.
We eventually got my dad to agree and even set a date. But then, he slyly used my sister’s pregnancy as a distraction to skip the surgery altogether. While we pestered him every day to get it done, he was secretly taking his herbal concoction. And, lo and behold, after several months the tumour actually shrank and eventually disappeared. The whole episode earned my dad some bragging rights.
At the time, I dismissed it as a lucky exception. But recently I’ve been wondering if I was too quick to dismiss my parents’ trust in traditional knowledge, while accepting the authority of digitally dominant sources. I find it hard to believe that my dad’s herbal concoctions worked, but I have also come to realise that the seemingly all-knowing internet I so readily trusted contains huge gaps – and that, in a world of AI, it’s about to get worse.
The irony isn’t lost on me that this dilemma has emerged through my research at a university in the United States, in a setting removed from my childhood and the very context where traditional practices were part of daily life. At Cornell University, New York, I study what it takes to design responsible AI systems. My work has been revealing, showing me how the digital world reflects profound power imbalances in knowledge, and how this is amplified by generative AI (GenAI). The early internet was dominated by the English language and western institutions, and this imbalance has hardened over time, leaving whole worlds of human knowledge and experience undigitised. Now, with the rise of GenAI – which is trained on this available digital corpus – that asymmetry threatens to become entrenched.
For many people, GenAI is emerging as the primary way to learn about the world. A large-scale study published in September 2025, analysing how people have been using ChatGPT since its launch in November 2022, revealed that around half the queries were for practical guidance, or to seek information. These systems may appear neutral, but they are far from it. The most popular models privilege dominant ways of knowing (typically western and institutional) while marginalising alternatives, especially those encoded in oral traditions, embodied practice and languages considered “low-resource” in the computing world, such as Hindi or Swahili.
By amplifying these hierarchies, GenAI risks contributing to the erasure of systems of understanding that have evolved over centuries, disconnecting future generations from vast bodies of insight and wisdom that were never encoded yet remain essential, human ways of knowing. What’s at stake, then, isn’t just representation: it’s the resilience and diversity of knowledge itself.
GenAI is trained on massive datasets of text from sources such as books, articles, websites and transcripts – hence the name “large language model” (LLM). But this “training data” is far from the sum total of human knowledge, with oral cultures and even languages underrepresented or absent.
To understand why this matters, we must first recognise that languages serve as vessels for knowledge. Each language carries entire worlds of human experience and insight developed over centuries: the rituals and customs that shape communities, distinctive ways of seeing beauty and creating art, deep familiarity with specific landscapes and natural systems, spiritual and philosophical worldviews, subtle vocabularies for inner experiences, specialised expertise in various fields, frameworks for organising society and justice, collective memories and historical narratives, healing traditions and intricate social bonds.
When AI systems lack adequate exposure to a language, they have blind spots in their comprehension of human experience.
Data from Common Crawl, one of the largest public sources of training data, reveals stark inequalities. It contains more than 300 billion webpages spanning 18 years, but English, which is spoken by approximately 19% of the global population, dominates, with 45% of the content. However, there can be an alarming imbalance between a language’s demographic size and how well that language is represented in online data. Take Hindi, the third most popular language globally, spoken by about 7.5% of the world’s population. It accounts for only 0.2% of Common Crawl’s data. The situation is even more dire for Tamil, my own mother tongue. Despite being spoken by more than 86 million people worldwide, it represents just 0.04% of the data.
In the computing world, approximately 97% of the world’s languages are classified as “low-resource”. This designation is misleading when applied beyond computing contexts: many of these languages boast millions of speakers and carry centuries-old traditions of rich linguistic heritage. They are simply underrepresented online or in accessible datasets. A study from 2020 showed that 88% of the world’s languages face such severe neglect in AI technologies that bringing them up to speed would be a herculean – perhaps impossible – effort.
To illustrate the kinds of knowledge missing, let’s consider one example: our understanding of local ecologies. An environmentalist friend once told me something that has stayed with me – a community’s connection with its ecology can be seen through the detailed and specific names it has for local plants. Because plant species are often regionally specific or ecologically unique, knowledge of these plants becomes equally localised. When a language becomes marginalised, the plant knowledge embedded within it often disappears as well.
A wattle-and-daub cottage designed by Indian architects Thannal, who specialise in natural building techniques.
Photograph: Thannal
While writing this essay, I spoke to various people about the language gaps in GenAI – among them Dharan Ashok, chief architect at Thannal, an organisation dedicated to reviving natural building techniques in India. He agreed that there is a strong connection between language and local ecological knowledge, and that this in turn underpins Indigenous architectural knowledge. While modern construction is largely synonymous with concrete and steel, Indigenous building methods relied on materials available in the surrounding environment.
Amid concerns over unsustainable and carbon-intensive construction, Dharan is actively working to recover the lost art of producing biopolymers from local plants. He noted that the greatest challenge lies in the fact that this knowledge is largely undocumented and has been passed down orally through native languages. It is often held by just a few elders, and when they pass away, it is lost. Dharan recounted an experience of missing the chance to learn how to make a specific type of limestone-based brick when the last artisan with that knowledge died.
To understand how certain ways of knowing rise to global dominance, often at the expense of Indigenous knowledge, it helps to consider the idea of cultural hegemony developed by the Italian philosopher Antonio Gramsci.
Gramsci argued that power is maintained not solely through force or economic control, but also through the shaping of cultural norms and everyday beliefs. Over time, epistemological approaches rooted in western traditions have come to be seen as objective and universal. This has normalised western knowledge as the standard, obscuring the historical and political forces that enabled its rise. Institutions such as schools, scientific bodies and international development organisations have helped entrench this dominance.
Epistemologies are not just abstract and cognitive. They are all around us, with a direct impact on our bodies and lived experiences. To understand how, let’s consider an example that contrasts sharply with the kind of Indigenous construction practices that Dharan seeks to revive: high-rise buildings with glass facades in the tropics.
Far from being neutral or purely aesthetic choices, glass buildings reflect a tradition rooted in western architectural modernism. Originally designed for colder, low-light climates, these buildings were praised for their perceived energy efficiency, allowing ample daylight into interiors and reducing reliance on artificial lighting.
However, when this design is applied in tropical regions, it turns into an environmental contradiction. In places with intense sunlight, studies have shown that glass facades lead to significant indoor overheating and thermal discomfort, even with modern glazing. Rather than conserving energy, these buildings demand more energy use to remain cool.
Yet glass facades have become the face of urban modernity, whether in San Francisco, Jakarta or Lagos – regardless of climate or cultural context. As climate breakdown accelerates, these glass buildings are gleaming reminders of the dangers of knowledge homogenisation. Ironically, I’m writing this from inside one of those very buildings in Bengaluru in southern India. I sit in cooled air with the soft hum of the air conditioner in my ears. Outside in the drizzle, it seems to be a normal monsoon afternoon, except the rains arrived weeks early this year – another sign of growing climate unpredictability.
In Bengaluru, I see yet another example of the impacts of lost knowledge: water management. How can a city flood severely in May, submerging cars, yet scramble for water even for domestic use in March? While poor planning and unchecked urbanisation play their part, the issue also has epistemological roots.
Bengaluru was once celebrated for its smart water-management system, fed by a series of interconnected cascading lakes. For centuries, these lakes were managed by dedicated groups, such as the Neeruganti community (neeru means “water” in the Kannada language), who controlled water flow and ensured fair distribution. Depending on the rains, they guided farmers on which crops to grow, often suggesting water-efficient varieties. They also handled upkeep: desilting tanks, planting vegetation to prevent erosion and clearing feeder channels.
Interior of Thannal’s wattle-and-daub cottage.
Photograph: Thannal
But with modernisation, community-led water management gave way to centralised systems and individual solutions such as irrigation from far-off dams and bore wells. The “Green Revolution” of the late 1960s – when India embraced modern industrial agriculture – added to this shift, pushing water- and fertiliser-heavy crops developed in western labs. The Neerugantis were sidelined, and many moved on in search of other work. Local lakes and canals declined, and some were even built over, replaced with roads, buildings or bus stops.
Experts have realised that the key to saving Bengaluru from its water crisis lies in bringing these lake systems back to life. A social worker I spoke with, who’s been involved in several of these projects, said they often turn to elders from the Neeruganti community for advice. Their insights are valuable, but their local knowledge is not written down, and their role as community water managers has long been delegitimised. Knowledge exists only in their native language, passed on orally, and is mostly absent from digital spaces – let alone AI systems.
While all my examples so far are drawn from India due to personal familiarity, such hierarchies are widespread, rooted in the global history of imperialism and colonialism. In her book Decolonizing Methodologies (1999), the Māori scholar Linda Tuhiwai Smith emphasises that colonialism profoundly disrupted local knowledge systems – and the cultural and intellectual foundations on which they were built – by severing ties to land, language, history and social structures. Smith’s insights reveal how these processes are not confined to a single region but form part of a broader legacy that continues to shape how knowledge is produced and valued. It is on this distorted foundation that today’s digital and GenAI systems are built.
I recently worked with Microsoft Research, examining several GenAI deployments built for non-western populations. Observing how these AI models often miss cultural contexts, overlook local knowledge and frequently misalign with their target community has brought home to me just how much they encode existing biases and exclude marginalised knowledge.
The work has also brought me closer to understanding the technical reasons why such inequalities develop inside the models. The problem is far deeper than gaps in training data. By design, LLMs also tend to reproduce and reinforce the most statistically prevalent ideas, creating a feedback loop that narrows the scope of accessible human knowledge.
Why so? The internal representation of knowledge in an LLM is not uniform. Concepts that appear more frequently, more prominently or across a wider range of contexts in the training data tend to be more strongly encoded. For example, if pizza is commonly mentioned as a favourite food across a broad set of training texts, when asked “what’s your favourite food?”, the model is more likely to respond with “pizza” because that association is more statistically prominent.
More subtly, the model’s output distribution does not directly reflect the frequency of ideas in the training data. Instead, LLMs often amplify dominant patterns or ideas in a way that distorts their original proportions. This phenomenon can be referred to as “mode amplification”.
The glass facade of DLF’s Gateway Tower in Gurugram, India.
Photograph: Danny Lehman/Getty Images
Suppose the training data includes 60% references to pizza, 30% to pasta and 10% to biryani as favourite foods. One might expect the model to reproduce this distribution if asked the same question 100 times. However, in practice, LLMs tend to overproduce the most frequent answer. Pizza may appear more than 60 times, while less frequent items such as biryani may be underrepresented or omitted altogether. This occurs because LLMs are optimised to predict the most probable next “token” (the next word or word fragment in a sequence), which leads to a disproportionate emphasis on high-likelihood responses.
This uneven encoding gets further skewed through reinforcement learning from human feedback (RLHF), where GenAI models are fine-tuned based on human preferences. This inevitably embeds the values and worldviews of their creators into the models themselves. Ask ChatGPT about a controversial topic and you’ll get a diplomatic response that sounds like it was crafted by a panel of lawyers and HR professionals who are overly eager to please you. Ask Grok, X’s AI chatbot, the same question and you might get a sarcastic quip followed by a politically charged take that would fit right in at a certain tech billionaire’s dinner party.
Commercial pressures add another layer entirely. The most lucrative users – English-speaking professionals willing to pay $20-200 monthly for premium AI subscriptions – become the implicit template for “superintelligence”. These models excel at generating quarterly reports, coding in Silicon Valley’s preferred languages and crafting emails that sound appropriately deferential to western corporate hierarchies. Meanwhile, they stumble over cultural contexts that don’t translate to quarterly earnings.
And beyond merely reflecting existing knowledge hierarchies, GenAI has the capacity to amplify them, as human behaviour changes alongside it. The integration of AI overviews in search engines, along with the growing popularity of AI-powered search engines such as Perplexity, underscores this shift.
As AI-generated content has started to fill the internet, it adds another layer of amplification to ideas that are already popular online. The internet, as the primary source of knowledge for AI models, becomes recursively influenced by the very outputs those models generate. With each training cycle, new models increasingly rely on AI-generated content. This risks creating a feedback loop where dominant ideas are continuously amplified while long-tail or niche knowledge fades from view.
The AI researcher Andrew Peterson describes this phenomenon as “knowledge collapse”: a gradual narrowing of the information humans can access, along with a declining awareness of alternative or obscure viewpoints. As LLMs are trained on data shaped by previous AI outputs, underrepresented knowledge can become less visible – not because it lacks merit, but because it is less frequently retrieved or cited. Peterson also warns of the “streetlight effect”, named after the joke where a person searches for lost keys under a streetlight at night because that’s where the light is brightest. In the context of AI, this would be people searching where it’s easiest rather than where it’s most meaningful. Over time, this would result in a degenerative narrowing of the public knowledge base.
Across the globe, GenAI is also becoming part of formal education, used to generate learning content and support self-paced education through AI tutors. For example, the state government of Karnataka, home to the city of Bengaluru, has partnered with the US-based nonprofit Khan Academy to deploy Khanmigo, an AI-powered learning assistant, in schools and colleges. I would be surprised if Khanmigo holds the insights of elder Neerugantis – grounded in local knowledge and practices – needed to teach school students in Karnataka how to care for their water ecologies.
All this means that, in a world where AI increasingly mediates access to knowledge, future generations may lose connection with vast bodies of experience, insight and wisdom. AI developers could argue that this is simply a data problem, solvable by incorporating more diverse sources into training datasets. While that might be technically possible, the challenges of data sourcing, prioritisation and representation are far more complex than such a solution implies.
This was brought into focus by a conversation I had with a senior leader involved in the development of an AI chatbot which serves more than 8 million farmers across Asia and Africa. The system provides agricultural advice based mostly on databases from government advisories and international development organisations, which tend to rely on research literature. The leader acknowledged how many local practices that could be effective are still excluded from the chat responses, because they are not documented in the research literature.
Liquid-cooled servers at the Global Switch data centre, London.
Photograph: Bloomberg/Getty Images
The rationale isn’t that research-backed advice is always right or risk-free. It’s that it offers a defensible position if something goes wrong. In a system this large, leaning on recognised sources is seen as the safer bet, protecting an organisation from liability while sidelining knowledge that hasn’t been vetted through institutional channels. So the decision is more than just technical. It’s a compromise shaped by the structural context, not based on what is most useful or true.
This structural context doesn’t just shape institutional choices. It also shapes the kinds of challenges I heard about in my conversation with Perumal Vivekanandan, founder of the nonprofit organisation Sustainable-agriculture and Environmental Voluntary Action (Seva). His experiences highlight the uphill battle faced by those working to legitimise Indigenous knowledge.
Formed in 1992, Seva focuses on preserving and disseminating Indigenous knowledge in agriculture, animal husbandry and the conservation of agricultural biodiversity in India. Over the years, Vivekanandan has documented more than 8,600 local practices and adaptations, travelling village to village.
Still, the work constantly runs into systemic roadblocks. Potential funders often withhold support, questioning the scientific legitimacy of the knowledge Seva seeks to promote. When Seva turns to universities and research institutions to help validate this knowledge, they often signal a lack of incentives to engage. Some even suggest that Seva should fund the validation studies itself. This creates a catch-22: without validation, Seva struggles to gain support; but without support, it can’t afford validation. The process reveals a deeper challenge: finding ways to validate Indigenous knowledge within systems that have historically undervalued it.
Seva’s story shows that while GenAI may be accelerating the erasure of local knowledge, it is not the root cause. The marginalisation of local and Indigenous knowledge has long been driven by entrenched power structures. GenAI simply puts this process on steroids.
We often frame the loss of Indigenous knowledge as a tragedy only for the local communities who hold it. But ultimately, the loss is not just theirs to bear, but belongs to the world at large.
The disappearance of local knowledge is not a trivial loss. It is a disruption to the larger web of understanding that sustains both human and ecological wellbeing. Just as biological species have evolved to thrive in specific local environments, human knowledge systems are adapted to the particularities of place. When these systems are disrupted, the consequences can ripple far beyond their point of origin.
Wildfire smoke doesn’t care about transgressing postcodes. Polluted water doesn’t pause at state lines. Rising temperatures ignore national borders. Infectious germs don’t have visa waiting periods. Whether we acknowledge it or not, we are enmeshed in shared ecological systems where local wounds inevitably become global aches.
The biggest contradiction for me in writing this essay is that I’m trying to convince readers of the legitimacy and importance of local knowledge systems while I myself remain unconvinced about my dad’s herbal concoctions. This uncertainty feels like a betrayal of everything I’ve argued for here. Yet maybe it’s exactly the kind of honest complexity we need to navigate.
I have my doubts about whether Indigenous knowledge truly works as claimed in every case. Especially when influencers and politicians invoke it superficially for likes or to exploit identity politics, generating misinformation without sincere inquiry. However, I’m equally wary of letting it disappear. We may lose something valuable, only to recognise its worth much later. And what’s the collateral damage of that process? An ecological collapse we could have prevented?
The climate crisis is revealing cracks in our dominant knowledge paradigms. Yet at the same time, AI developers are convinced that their technology will accelerate scientific progress and solve our greatest challenges. I really want to believe they’re right. But several questions remain: are we capable of moving towards this technological future while authentically engaging with the knowledge systems we’ve dismissed, with genuine curiosity beyond tokenism? Or will we keep erasing forms of understanding through the hierarchies we’ve built, and find ourselves scrambling to colonise Mars because we never learned to listen to those who knew how to live sustainably on Earth?
Maybe the intelligence we most need is the capacity to see beyond the hierarchies that determine which knowledge counts. Without that foundation, regardless of the hundreds of billions we pour into developing superintelligence, we’ll keep erasing knowledge systems that took generations to develop.
I don’t know if my dad’s herbal concoctions worked. But I’m learning that acknowledging I don’t know might be the most honest place to start.
Programmers tend to fight about why Object-Oriented Programming (OOP) is good or bad.
Among the anti-OOP crowd, I often see junior programmers hate on OOP and “rebroadcast” what they’ve heard experienced programmers say. But when challenged to explain why OOP is bad, they have a hard time explaining it. It’s usually because they haven’t really experienced those things first hand.
I’d like to take a positive spin on this and say: Just write your code in a way that makes sense to you. Just make sure to avoid things that you
know
are bad.
With that in mind, I’d like to share my experiences. I won’t say “don’t use any of the OOP things”. Instead I’ll state which parts of OOP I think are fine, and which parts I think are bad. If they are bad I will state my reasons. You can then investigate those reasons yourself and make up your own mind. I’ll go over these five things:
Interfaces
Methods
Encapsulation
Inheritance
Modelling after the real world
Interfaces: Fine
Interfaces are used in many big code-bases, OOP or not. If you have a program that wants to support multiple rendering APIs (Direct3D, Vulkan, OpenGL), then that’s a great use of interfaces.
Providing support for allocators is another great example. In that case you want some kind of generic interface with multiple implementations. One implementation for each kind of allocator. You can then feed such interface implementations into functions and easily switch allocators.
You don’t need any OOP features in the language to implement an interface. In C you can create an interface with a simple struct that contains some function pointers.
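As a rough sketch of that idea (the names here, such as Allocator, heap_alloc and make_counters, are invented for illustration and not taken from any particular codebase), such a C interface could look something like this:
#include <stdlib.h>

/* A minimal "interface": a struct of function pointers plus a context pointer. */
typedef struct Allocator {
    void *(*alloc)(void *ctx, size_t size);
    void (*release)(void *ctx, void *ptr);
    void *ctx; /* implementation-specific state */
} Allocator;

/* One implementation: forward straight to malloc/free. */
static void *heap_alloc(void *ctx, size_t size) { (void)ctx; return malloc(size); }
static void heap_release(void *ctx, void *ptr) { (void)ctx; free(ptr); }

/* Code written against the interface, not any concrete allocator. */
static int *make_counters(const Allocator *a, size_t n) {
    return (int *)a->alloc(a->ctx, n * sizeof(int));
}

int main(void) {
    Allocator heap = { heap_alloc, heap_release, NULL };
    int *counters = make_counters(&heap, 16);
    heap.release(heap.ctx, counters);
    return 0;
}
Any function that takes an Allocator this way never needs to know which concrete implementation it was handed, which is the whole point of an interface.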
My readers who use the Odin Programming Language can note that the Allocator type in Odin is an interface.
Methods: Fine
I have no concrete experience that methods are inherently bad. People like to argue about methods a lot. It’s one of those discussions that dumbfound me a bit. It’s just another way of organizing your code. If you like it and your language supports it, then just use it. I do not see methods as the big problem in OOP.
Encapsulation: It depends
Hiding things for the sake of hiding them will just create friction. For example, when you know that a private field contains useful state, but you can’t access it, then it’s just annoying, with little benefit.
However, I can see the value of sometimes having a black-box API. Perhaps you want a strict, versioned API. You want to avoid breaking changes in the API. In that case, hiding internal things can be good. That way you know exactly what parts the end-users are exposed to. That’s the API surface. That’s the stuff you must be careful with changing.
I’d default to everything being public and never using encapsulation. It’s just annoying. But you probably know when you need to create a black-box API. And in that case you’ll end up with encapsulation as a byproduct.
Inheritance: Often bad
I think inheritance is bad for most high-performance use cases. Let’s talk about why.
Say that you have an array that looks like this in C++: Array<Entity*>. The array elements are of pointer type.
The reason for using a pointer type, from an OOP perspective, may be that Entity is a base class. You can’t just do Array<Entity>, because the subclasses of Entity will vary in size. So in order to add to the array, you probably do something like:
// entities is of type Array<Entity*>
entities.add(new Some_Entity_Subclass(some, constructor, parameters));
This means that your array items will end up all over the place in memory, since each one is separately allocated. That can be terrible for performance. Why? CPUs have things called caches inside them. While iterating over an array, those caches can only be filled properly if things are laid out in a predictable way. Lots of separately allocated things is the opposite of a predictable memory pattern. So the CPU cache can’t be filled properly. Having the computer go to RAM instead of the CPU cache is orders of magnitude slower.
So: inheritance leads to separate allocations, which leads to bad performance. So I suggest just doing Array<Entity>. No elements of pointer type! This will lead to all the items in the array being laid out one-next-to-the-other in memory. Very predictable. Your CPU will love you. But then you cannot use inheritance. If you are in an OOP language, then by all means use methods and interfaces. But avoid separately allocating lots of objects because you want to use inheritance.
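To make the layout difference concrete, here is a small C sketch of the same idea (the article’s Array<Entity> examples are C++, and the Entity fields below are invented for illustration; the memory behaviour being described is the same):
#include <stdlib.h>

/* Entity and its fields are made up for this sketch. */
typedef struct Entity {
    float x, y;
    float health;
} Entity;

#define COUNT 100000

int main(void) {
    /* Contiguous: every Entity sits right next to the previous one.
       Iterating this is predictable and cache-friendly. */
    Entity *dense = calloc(COUNT, sizeof(Entity));

    /* Pointer-based (what inheritance pushes you toward): each Entity is a
       separate allocation, scattered across the heap. */
    Entity **sparse = calloc(COUNT, sizeof(Entity *));
    for (size_t i = 0; i < COUNT; i++) {
        sparse[i] = calloc(1, sizeof(Entity));
    }

    float total = 0;
    for (size_t i = 0; i < COUNT; i++) total += dense[i].health;   /* sequential reads */
    for (size_t i = 0; i < COUNT; i++) total += sparse[i]->health; /* pointer chasing */

    for (size_t i = 0; i < COUNT; i++) free(sparse[i]);
    free(sparse);
    free(dense);
    return (int)total;
}
Iterating dense walks through memory sequentially, while iterating sparse hops around the heap for every element, which is exactly the cache-unfriendly pattern described above.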
If you have an array with a small number of items, then you can still do something like Array<Some_Type*> and separately allocate them. It’s not going to matter. Understand when you have high-performance requirements, and when you do not. If you iterate a huge array very often, then you probably don’t want it to be slow.
Modelling after the real world: Who even does this?
You went to school and learnt in the first OOP lecture that Animal is a base class and Cat is a subclass. Sure. But ridiculing OOP over this is quite silly. Most OOP code-bases use classes in order to describe the data that the program needs, just like any other code base would. Pushing that OOP is bad because “OOP people try to model everything after the real world” will just make OOP people ignore you, since you are just ridiculing them over a strange corner case.
Conclusion
My opinion is this: Use methods and interfaces if you want, it’s fine. Avoid inheritance, especially for large arrays that you iterate often. But also understand why. Make up your own mind, based on actual experience. Don’t just rebroadcast what some YouTuber (including myself) told you.
Thanks for reading, have a nice day!
/Karl Zylinski
You might’ve been told to “hang in there” throughout your childhood, as illustrated by a kitten dangling from a rope. But it turns out that quitting might often be your healthiest option.
Researchers have long sought to understand how persistence is linked to personal well-being and human evolution more broadly. One poorly supported theory posited that our ancestors were so determined to catch prey that they ran for long stretches in hot, dry environments.
Newer evidence suggests that ditching tough-to-attain goals can actually be good for us. According to a review of more than 230 studies recently published in the journal Nature Human Behaviour, adjusting our goals in response to stress or challenges, rather than grinding on, is often “a more appropriate and beneficial response.”
The authors of the sweeping meta-analysis examined 235 studies spanning various fields, including psychology, health, and social sciences, that detailed how people shift their goals after encountering obstacles to success. The researchers wanted to consolidate this “fragmented” information and observe how adjusting goals relates not only to psychological well-being but also physical health, social functioning, and future ambitions. This allowed them to chart a goal “roadmap.”
“Sticking with impossible goals can take a real toll, with previous research suggesting it can lead to higher stress, poorer well-being, and even physical health costs such as illness,” said study author Hugh Riddell, a professor at the School of Population Health at Curtin University in Australia, in a statement. “But letting go and—crucially—reengaging with new goals, was found to restore purpose and well-being.”
The team employed statistical analysis to illuminate what causes people to ditch, adjust, or re-engage with goals. Disengagement from goals, for example, was most strongly linked to negative feedback on these ideas and an “action crisis” stemming from one’s failure to overcome related obstacles. Our personalities might also play a major role in these types of decisions: Optimism tended to be strongly linked to one’s openness to revise a goal to better fit their skills and resources. “These findings indicate that goal-striving flexibility is more likely to emerge when individuals feel secure, exhibit stable regulation, and possess emotional resilience,” the paper notes.
The scientists also analyzed the impacts of these decisions. Giving up on goals was significantly linked to reduced stress, anxiety, and depression, for instance. And adopting new ones was strongly associated with high social and physical functioning. Finding new goals also came with moderate to large benefits to psychological functioning, feeling a sense of purpose in life, satisfaction, and personal growth.
This analysis comes with limitations, the authors acknowledge, due to observational data collected at specific points in time and risks of bias in individual papers. The next step, they write, is to pinpoint the specific moment that people should rethink their dreams or keep on chugging. “Finding out when exactly people should stick with their goals or change course, without giving up too early, is really the next piece of the puzzle,” Riddell said in the statement.
So whether you’re the type to stick with it to the bitter end or change course when you sense trouble up ahead, there may be an optimal method to help you achieve—or alter—your goals.
Stale-while-revalidate on steroids: instant UI updates with cache tags, invalidation, and smart preloading
Get started:
npm install @countcachula/core
⚡ Fast
Serve cached content instantly while revalidating in the background
🔄 Fresh
Automatic cache invalidation keeps your data up-to-date
🎯 Simple
Drop-in replacement for fetch with powerful caching
What is Count Cachula?
An alternative to local-first that gives you the same instant, responsive UX—but simpler, lighter, and easier to reason about.
The Local-First Promise
Local-first architectures provide instant UI updates by keeping a local copy of your data. But they come with serious complexity: CRDTs, conflict resolution, sync protocols, offline queues, and complex state management.
The problem: You're building distributed systems whether you want to or not.
The Count Cachula Way
Get the same instant responsiveness by treating your cache as truth. Stale-while-revalidate means users see data immediately, then get updates automatically. No sync, no conflicts, no distributed systems.
The solution: Your server is still the source of truth. The cache just makes it feel instant.
Stale-While-Revalidate on Steroids
⚡
Instant Response
Serve from cache immediately while fetching fresh data in the background
🏷️
Cache Tags
Tag related data and invalidate entire groups at once
🔄
Smart Invalidation
Automatically invalidate caches when mutations happen via SSE
🚀
Preloading
Warm up caches before users need them for zero-latency navigation
Why This Works
✓
Users see data instantly
Cached data appears immediately, no loading spinners
✓
Data stays fresh automatically
Background revalidation and SSE invalidation keep everything current
✓
Server remains source of truth
No conflict resolution, no CRDTs, no distributed systems complexity
✓
Drop-in replacement for fetch
Works with your existing API, no architecture overhaul needed
How It Works
Count Cachula uses server-driven invalidation and preloading to keep your client perfectly in sync—without any client-side complexity.
1
Browser Connects to SSE
When your app loads, it establishes a Server-Sent Events connection. This gives the server a direct channel to push updates to the client.
2
Server Sends Preload Hints
The server can immediately start warming up the cache by sending preload hints for important APIs:
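(For illustration only: the event name and payload below are an assumption mirroring the invalidate event shown later on this page, not Count Cachula’s documented format.)
: illustrative sketch, not the documented event shape
event: preload
data: {"urls": ["/api/tasks", "/api/user/stats"]}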
The client fetches and caches this data in the background, so it's ready instantly when needed.
3
Code Requests Data
When your code needs data, it makes a normal request using Count Cachula's fetch:
// When your code needs data
const request = new Request('/api/tasks');
const tasks = await CountCachula.fetch(request);
// Returns cached data instantly (if available)
// Then fetches fresh data in background
// UI updates automatically when fresh data arrives
The magic:
Returns cached data instantly, then fetches fresh data in the background and automatically updates your UI when it arrives. No loading states needed!
4
Mutations Invalidate Tags
When you mutate data on the server, you invalidate related cache tags:
// After a mutation on the server
await db.tasks.create(newTask);
// Invalidate related cache tags
hub.invalidate(['tasks', 'user:stats']);
// SSE automatically notifies all connected clients
The server pushes invalidation events via SSE to all connected clients:
event: invalidate
data: {"tags": ["tasks", "task:123"]}
5
Client Updates Automatically
When the client receives invalidation events, it automatically refetches any affected Observables and updates your UI.
🎯 This gives you REAL-TIME updates!
All users see changes instantly, even on pages they're not currently viewing. The cache stays warm and ready.
🔄 Server-Driven Everything
The server controls invalidation and preloading, which means the client is never stale. Every fetch gets fresh data in the background, and SSE pushes immediate updates when things change.
📈 Progressive Enhancement
Start simple with just CountCachula.fetch() for automatic stale-while-revalidate. Add SSE for real-time updates. Add cache tags for smart invalidation. Add preloading for instant navigation. Build up complexity only when you need it.
According to a recent Google leak, we’re all to blame for poor quality search results.
💡
The feature image for this post was replaced on Jul 9, 2025. I removed the previous illustration which was AI-generated and replaced it with an actually hand-drawn piece by me instead. I don't support AI art.
Okay. Right before I headed off to sleep, Rand Fishkin and Mike King dropped dual reports on a large-scale leak of Google Search internal documentation for Content API Warehouse.
So, here we go. Grab a cup of coffee, sit down, and enjoy. I’ll tell you about the key points from the leak on how Google Search works, and then I’ll try to make an argument.
I want to show you that it’s not any search engine’s fault that our search results suck. The internet is getting ruined on a macro level – decay and rot originate from how digital content is monetized and created in the first place.
“More than 2,500 pages of API documentation containing 14,014 attributes (API features) that appear to come from Google’s internal “Content API Warehouse.” Based on the document’s commit history, this code was uploaded to GitHub on Mar 27, 2024 and not removed until May 7, 2024.”
This documentation shows us a fair bit about how Google Search seems to behave behind the scenes. When taken together with observations about recent algorithm updates, all the outcry with AI overviews, and revelations from the recent anti-trust suit against Google – this leak gives us a very close look at what makes search tick.
I haven’t had the time to go through the original API docs in detail myself, but I did take a quick look. More importantly, both Rand and legendary technical SEO Mike King have both provided decent overviews and initial analysis for us to start with.
⚠️
You can find the full leaked documentation through Rand’s blog post, but I won’t link to it here directly.
Other Sources
This story is developing, so I can't catch everything. But here's a start:
We have a bunch of different modules within this API, each one named based mostly on what it does, for example “GoogleApi.ContentWarehouse.V1.Model.KnowledgeAnswersIntentModifiers” - where we are looking at something to do with how Google models search intent modifiers in reviewing search queries and the types of answers it provides.
Within the page we might get an overall summary (like in the screenshot above), then we have a list of attributes within the API (including detailed comments and notes), types of calls, and functions that are applicable to that module.
It’s honestly hard to even grasp how much information is here - what I can say is that the full-text search works pretty well and you can find a lot simply looking up common terms.
Main findings pointed out by Rand and Mike are as follows:
Google does have an internal measure of something like Domain Authority (“DA”), called “siteAuthority” and appearing within their overall quality signals.
Click & user activity data from within search and Chrome users is key to search rankings. (Look into “NavBoost”).
Not all clicks are made equal - Google pays attention to how long someone stays on the page after clicking on a search result. They actively look for which result had the “longest click” from users (longest engagement).
New domains seem to be put in a sandbox to prevent spam.
Human rater scores are used for at least certain parts of the search algorithm.
My Findings
I’ve done some very random poking about on my own, and found a couple of interesting bits of information.
1 - How backlinks and link anchors work
Google looks at A LOT of context around links to see which ones are valuable for determining rankings. Mike King already pointed out that trustworthy domains are prioritized, and a link quality score is applied based on where they are coming from.
I’ve also found an attribute that marks domains for spamming too many anchors, and in those situations says they’ll “throw out all but a sampling of them from that domain.” Uh-oh. It seems if Google catches you cheating, they’ll have many a way of giving you a time-out.
I also found explicit mention of “newsiness” of a link that depends on whether the domain sending links is a “newsy, high quality site”.
2 - Alexandria
“Alexandria” is mentioned a lot, and seems to be the main indexing system generating data for search. We can see how it works in the output metadata documentation - basically, Google takes most of the information we see when we check page indexing status in Google Search Console - and stores it as a column within their “Alexandria document table”.
Google API docs about Alexandria.
Part 2 - information we’re missing
We don’t fully know how up to date this information truly is. As Mike found, it seems to be current as of Mar 27, 2024, when it was first uploaded (before getting removed on May 7). However, this could be a backup of outdated systems.
We also don’t have much information on how certain factors are prioritized over others. We have a detailed taxonomy of the kinds of attributes and data points that Google looks at when evaluating search results (including web, YouTube, Books, video search, people API, and more). But we do not have much of the underlying logic beyond what we can infer from the notes and hierarchy left in this documentation.
We also don’t know how new AI overviews and upcoming search features will change the systems described here.
Part 3 - what this means for SEO
Ranking depends on not simply getting people to click on your site, but attracting “successful clicks” where users stay on your site with great content and a good user experience.
Links need to be diverse and relevant. I saw multiple signals of penalties if one site sends too many links to another particular domain.
Authors, relationships between people and organizations, and local context are all viewed as distinct entities and used within indexing and ranking. Google really does store a fair bit of context and try to approximate how humans earn trust and evaluate one another.
Search intent is… key. Understand what users are looking for, optimize for the users first, and the algorithm should catch up eventually. Similarly to what we saw in the DOJ antitrust trial exhibits, Google cares about three tangible sets of factors:
Google slide from the Department of Justice Trial. Image Source:
Search Engine Land
Specific page and its content
Connections to other webpages and domains (through anchors)
User satisfaction and experience.
Understand what the humans want and care about, and you will be able to do much better SEO than if you try to predict the algorithm every step of the way. The algorithm is simply trying to predict and meet human preferences, so you might as well go straight to the source.
Part 4 - what this means for businesses
For search, Google really cares about the quality of:
Each page and its content
The overall site and domain that the page is hosted on
The links and references to the domain and its author from other places on the internet.
Everything I can see so far (and what Mike and Rand summarized) is that Google cares about quality over quantity. It’s better to have a few pages that do really well in rankings, getting users to click and engage with the page, and match user intent. Volume at all costs will simply get your business hit with spam penalties and get Google to prioritize less data about your site.
Our responsibility when helping create or distribute content online
Now, remember that I’m saying we’re all guilty of making search bad?
Well, here’s why:
Google’s search algorithm is, yet again, shown to depend mostly on what users actually interact with.
Making good, authoritative, and shareable content is really hard.
A lot of people and organizations have been trying to make money on the internet.
When we as marketers, publishers, or businesses fall into the trap of trying to get views / conversions more than genuinely helping users – we flood the internet with crap.
As we introduce more “bad apples” into the larger content pool, search engines like Google begin to learn from them and even prioritize them in SERP.
Marketing doesn’t exist in a vacuum
When we think about the results of our marketing in a vacuum, we miss the spill-over impact that our bad choices might have over the long-term.
I firmly believe that this kind of “me first” thinking within marketing is damaging the entire web, even when the marketers behind it have good intentions. For example, earlier this month Ryan Law published a piece on Ahrefs titled
“Why Big Companies Make Bad Content.”
His thesis is as follows:
“Every published URL and targeted keyword is a new doorway from the backwaters of the internet into your website. It’s a chance to acquire backlinks that wouldn’t otherwise exist, and an opportunity to get your brand in front of thousands of new, otherwise unfamiliar people.”
I understand what Ryan is saying. That perspective is… tempting. Look at all these keywords and SERP listings, ripe for the taking. All of that attention from “new, otherwise unfamiliar people” can be great for your brand, even if you don’t know what to do with it.
The problem is – your business is NOT the only one out there. And when other businesses jump to the same hacks and begin racing to capture the most eyeballs, all of those beautiful benefits quickly evaporate.
As effective as posting that type of thin content might be for your particular brand, it makes the internet as a whole worse. Think about it: after years of volume-focused content practices from a couple hundred enterprise brands, we’ve now ended up in a world where the concept of niche expertise is non-existent on the SERP.
Do we, as marketers, really want to dedicate years of our lives to building a future where the best source for where to go for brunch in Brooklyn is some financial conglomerate based out of Hong Kong?
Garbage in, garbage out
Imagine that you walk into somebody’s house for the first time. You go up to the second floor, and find yourself in a small home library tucked away in a quiet room.
You look up, and see books beautifully arranged by size, color, and genre. You marvel at how thoughtful the order is, and wonder how long that would have taken to accomplish. Then, you look at the books themselves and realize that they are all printed out transcripts of 80s infomercials about diet supplements.
Bummed out? Yeah, I am too.
Imagine the internet as a very large library. We’re all sharing it, we’re all allowed to add resources to it, and then search engines like Google are librarians – they step in to curate and organize our shelves for easier navigation.
Let’s say that you absolutely LOVE mystery novels. So, you head to our large library and ask the librarian to get you the latest detective story. The librarian nods, types something into their computer, and then turns to say “actually, we don’t have any mystery novels in stock.”
Now, in this situation, who is to blame for your lack of a book fix?
If mystery novels are plentiful and popular in your area but your library doesn’t stock them, the problem is likely the librarian.
If mystery novels don’t get published that often, but they are popular, then the problem is with the publishers for not meeting true market demand.
If mystery novels get printed all the time but they don’t get read at this library and simply gather dust on shelves, then the problem is a mismatch between your taste and your community’s average book preferences.
Any search engine results are just like that library. Whether your results seem relevant or not depends on the search engine (like Google), what content is getting published, and what content is getting most interactions.
Chasing traffic will always bite you
Ryan Law’s piece is a great example of how many marketers do not truly understand marketing theory and focus on tactics rather than connecting their work to the underlying business and market conditions.
The Ahrefs article about “bad content” fundamentally misunderstands:
Business models and incentives (esp. post-IPO)
The purpose of brand awareness
The purpose of share of voice
Search developments, long-term user expectations impact, and searcher satisfaction
Overall macro ripple effects from any particular site gobbling up all the traffic and topics
The purpose of content and of marketing as a whole
How brand marketing is supposed to function within the larger marketing function.
Ryan writes:
“Companies generally expand their total addressable market (TAM) as they grow, like HubSpot broadening from marketing to sales and customer success, launching new product lines for new—much bigger—audiences. This means the target audience for their content marketing grows alongside.”
It’s a nice idea. It's also idealistic, simplistic, and flat-out wrong.
Your target audience is never going to be the entire planet. No matter what you sell, your marketing needs to be aimed at the people most likely to serve your organization’s strategic goals. Those might not have to be direct customers or leads. They can be partners, content creators with large audiences, investors, potential employees, and more.
The target audience for your content marketing should expand based on when it would most help your entire organization, not when you’ve run out of easy keywords to talk about.
Because of low resulting traffic quality, any keyword swarming strategy truly works only if your business model depends on PPC ads and affiliate links within your site content.
I have a theory based on Google's actions the last 2 years that they are trying to kill the abuse of affiliate ads as a monetization channel - because it's simply gone too far. The aftermath of those decisions means all of us now have to tolerate terrible affiliate sites, backlink farms, drop-shipping, bad UX, lack of journalistic integrity, and continuing reduction in ROI potential for actual advertisers.
Too many sites and businesses out there have decided that the best way to make money from content is to use their traffic as free real estate for willing buyers. Every large company has essentially turned into an influencer, selling ad space within their blogs.
But that ignores the issues that come with needing to monetize content directly, rather than the entire point of content marketing - that good content can help sell your key business products and services indirectly.
Content shouldn't be turned into an e-commerce play by every single site. Content needs to be, at its core, a brand marketing and education play.
We need better marketing education
Our old way of advertising online doesn't work.
A lot of this comes down to a fundamental misunderstanding of the value of content, the purpose of marketing, and why blogging should exist in the first place.
But why are marketers making this many mistakes? Because we weren't educated the right way.
Too many marketers still treat content as window dressing for shoving ads down an audience's throats. Instead, content is a vehicle towards building trust, attracting more qualified leads, and improving other marketing outcomes.
The real problem is education. Marketers don't understand business fundamentals or how their field can truly intersect with other business functions and deliver value both over the short-term and over the long-term.
“Leading marketers see modern marketing to be all about value creation. Marketing aims to meet human needs by creating value. The marketer chooses the product features and services that will deliver value. The marketer chooses prices that will create value in exchange. The marketer chooses channels of distribution that create accessibility and convenience value. The marketer chooses messages that describe the value their offerings create. I do not know what you thought marketing was, but in my mind, marketing is intrinsically a value-creating discipline.”
Marketing is a lot more expansive than ads or growth hacks. Marketing is about influence, and that always has ripple effects across business functions.
We’re using the wrong economics for content
Content on modern websites is often playing a supporting role to where the real money is made - affiliate advertising.
Unfortunately, affiliate links are kind of terrible. I’d even say that affiliate links are the marketing equivalent of private equity at its worst.
Organizations focused on affiliate revenue tend to damage the businesses they touch far more than they help them. Much like bad PE firms, affiliate networks frequently dismantle reputable sites, destroy long-term business potential, and turn every page into filler content assets to be picked apart. Those faceless pages are then sold on the digital market to the most willing buyer, and the operators scamper off with the spoils, letting the resulting ruins of once-great content rot in obscurity. (Or simply scrapping them for parts.)
When private equity damages the business under its ownership, the reasons tend to boil down to the following:
Lack of understanding of the core business model.
Narrow definition of “value”.
Blinding focus on short-term gains with zero regard for long-term losses.
Lack of attention to macro effects.
Unfortunately I see all of those mistakes with ad-addicted or affiliate-reliant content programs.
Asset Classes
Okay, if we are talking about economic value, let’s get our definitions cleared up. Here are some key asset classes within modern digital content:
PPC ads
Affiliate links
Backlinks
Traffic / reputation with domain name
Data (cookies, visitor tracking).
And not all of those assets are equally valuable or risky.
For example, I think that ads and content are frequently misunderstood:
Paid ads are a commodity
. It's like grain - quick to move, you can count on being able to sell it. Seems like it's probably a safe business to get into. Very liquid.
Good content is like real estate
. Real estate is valuable even if the business currently operating inside the building is actively bleeding money (like hotels or restaurants). The real asset is the location, especially long-term. A location that's pleasant to be in is valuable.
And frankly, we need to stop pitting paid ads against content because one is a commodity asset, while the other is an illiquid long-term investment.
Reassessing the role of content
In too many organizations, content gets seen as a liquid stock (at its worst, a day-traded short stock), but in reality it should be understood as an investment in illiquid real estate.
To use a stronger comparison - in many organizations content is being used as a money-laundering front for a web of ad, backlink, and affiliate networks.
And those networks aren’t necessarily evil, nefarious, or secret. If you’ve ever paid for a backlink from a specialized vendor - you’ve invested money into one of them. But the reason it happens is quite simple.
Businesses are often stuck in the mindset of “just publish something, anything, we need to make it look like there’s actual legitimate business activity going on”. However, when there is no solid business justification behind that activity, it will never be as effective as a more strategic investment.
Traffic and clicks become “buzz” and “hype,” used to convince investors that the business underneath is actually valuable.
The entire house of cards that produces cheap content depends on the promise and assumption of infinite traffic growth and an infinite supply of available attention. Both of which have long since ceased to be true.
What it means to build trust online
Google’s core updates since the Helpful Content Update (HCU) have approximated a model of “who is trustworthy” based on 3 main factors:
Content
- what are they saying and how are they saying it? (Substance)
Consistency
- how much have they said this before, how often do they talk about this and similar concepts? (Experience)
Credibility
- who else trusts them and who is affiliated with them? (Connections / collaborators / citations)
This framework is similar to EEAT, but I think it’s a bit closer to how we as humans assess which people or organizations we can trust. And as Google gets more sophisticated at building entities for people and organizations with contextual data, the SERP can also get better at approximating trust.
Let your marketing be more creative
Some companies know how to attract their audience and make great content, but they can't figure out how to be found by people on search. Distribution isn’t built into their content creation process, and adding it as an afterthought can be quite difficult.
Other companies know how to attract traffic through search, but their content can sometimes be too basic and flat. Too many SEO programs and SEO tools think about search marketing too literally - targeting only the search terms that exactly match their category names and direct product descriptions.
Then, too many terrible SEO programs just take direct linguistic matches as keywords and don't think about how to expand them.
A branded piece with zero search traffic and your typical bad SEO listicle sit on opposite ends of the same spectrum. The former group understands its audience, but not how to connect with it through search; the latter group understands what people search for, but not how to serve the audience.
Good SEO is in the middle - it has both. Successful SEO needs to both speak the language of the search engine and understand the audience and their psychological needs.
This is actually why I think SparkToro and some of Semrush's newer tools are so valuable - these tools help us figure out what people search for outside of the obvious or direct terminology.
Too many companies rely on the basic exact category names, product descriptions, and verticals. That approach lacks imagination.
What is the wider circle of terms
adjacent
to a company's product or service offering that does get searched for? How can you start that journey of getting people acquainted with your brand?
Final words: remember your culpability
Yeah, we’re all kind of guilty of making the internet worse. But that doesn’t mean everything is doomed - by truly focusing on the user, building trust, and reaching the right people on subjects our organizations can reputably talk about - we can begin to improve the quality of average search results for everyone.
And when search engines have better pages to pick from, improving the SERP gets all that much easier.
Abstract:
Learning manipulable representations of the world and its dynamics is central to AI. Joint-Embedding Predictive Architectures (JEPAs) offer a promising blueprint, but lack of practical guidance and theory has led to ad-hoc R&D. We present a comprehensive theory of JEPAs and instantiate it in {\bf LeJEPA}, a lean, scalable, and theoretically grounded training objective. First, we identify the isotropic Gaussian as the optimal distribution that JEPAs' embeddings should follow to minimize downstream prediction risk. Second, we introduce a novel objective--{\bf Sketched Isotropic Gaussian Regularization} (SIGReg)--to constrain embeddings to reach that ideal distribution. Combining the JEPA predictive loss with SIGReg yields LeJEPA with numerous theoretical and practical benefits: (i) single trade-off hyperparameter, (ii) linear time and memory complexity, (iii) stability across hyper-parameters, architectures (ResNets, ViTs, ConvNets) and domains, (iv) heuristics-free, e.g., no stop-gradient, no teacher-student, no hyper-parameter schedulers, and (v) distributed training-friendly implementation requiring only $\approx$50 lines of code. Our empirical validation covers 10+ datasets, 60+ architectures, all with varying scales and domains. As an example, using imagenet-1k for pretraining and linear evaluation with frozen backbone, LeJEPA reaches 79\% with a ViT-H/14. We hope that the simplicity and theory-friendly ecosystem offered by LeJEPA will reestablish self-supervised pre-training as a core pillar of AI research (\href{
this https URL
}{GitHub repo}).
Submission history
From: Randall Balestriero [
view email
]
[v1]
Tue, 11 Nov 2025 18:21:55 UTC (12,072 KB)
[v2]
Wed, 12 Nov 2025 14:26:39 UTC (12,072 KB)
[v3]
Fri, 14 Nov 2025 08:38:32 UTC (12,072 KB)
This is a post that we don’t take any joy in writing. When we
wrote last month about our agreement with Core Devices
, we went into it believing that cooperation between Core and Rebble would be the best decision for the Pebble community. Core would spearhead the development of brand new watches, and we’d be there to provide our Rebble Web Services to go with them.
Unfortunately, our agreement is already breaking down. We hoped that by putting on a kind face and publishing an optimistic-sounding blog post along with Eric, we’d be able to collaborate in a way that met
our responsibilities to you, our users
. We knew that neither of us would be able to get all we wanted, but we thought we had enough common ground that we could serve Pebble users together.
Rebble has been working since the beginning to keep the Pebble experience alive – maintaining the App Store, building new services like
Bobby
, and running
frontline support
for people keeping their Pebbles ticking the whole time. (The Pebble App Store that Core offers
right now
is backed by Rebble!) But Eric and Core recently demanded that, instead of working together, we need to just give them all of our work from the last decade so that they could do whatever they want with it. And in Eric’s latest newsletter, he hasn’t told you the truth about where the work that makes his business run came from.
We’d rather have cooperated with them to build something great together, but we’ve reached an impasse. So now, we’re asking you – our community – what to do with Core.
How we got here
Nine years ago, Eric Migicovsky’s company, Pebble Technology Corporation, went out of business and dropped support for the hundreds of thousands of Pebble smartwatches out there. Rebble – and our community! – put together
a Herculean effort
to salvage the data that was left on the Pebble app store.
Since then, we built a replacement app store API that was compatible with the old app store front end. We built a storage backend for it, and then we spent enormous effort to import the data that we salvaged. We’ve built a totally new dev portal, where y’all submitted brand new apps that never existed while Pebble was around. So far, we’ve spent hundreds of thousands of dollars on storing and hosting the data. And the humans who run the Rebble servers have also spent incredibly late nights upgrading Kubernetes clusters, responding to outages, and generally keeping things ticking.
What you now know as the Pebble App Store from Eric’s new company, Core Devices, is the result of nearly a decade of our work.
The data behind the Pebble App Store is 100% Rebble. And the App Store that we’ve built together is much more than it was when Pebble stopped existing. We’ve patched hundreds of apps with Timeline and weather endpoint updates. We’ve curated removal requests from people who wanted to unpublish their apps. And it has new versions of old apps, and brand new apps from the two hackathons we’ve run!
We’ve been negotiating with Eric for months now. We’ll compromise on almost everything else, but our one red line is this:
Whatever we agree on, there has to be a future for Rebble in there
.
We want to give Core’s users access to the Rebble App Store. (We thought we agreed on that last month.) We’re happy to commit to maintaining the Web Services. We’d be happy to let them contribute and build new features. And what we want in return is simple: if we give Core access to our data, we want to make sure they’re not just going to build a walled garden app store around our hard work.
The problem is, Core won’t commit to this. Core wants unrestricted access to do whatever they want with the data that we archived and have spent the last years curating, maintaining servers for, and keeping relevant. If we gave Core the rights to use the App Store data however they want, they could build their own Core-private App Store, replace Rebble, and keep any new changes proprietary – leaving the community with nothing.
We’ve asked Eric about this every time we talk. He has occasionally said verbally that that isn’t their plan… but when it comes time to put it in writing,
he has repeatedly refused to guarantee that
. A week ago, we asked him to chat about this one more time – he delayed our conversation, and
then in the intervening time, scraped our app store, in violation of the agreement that we reached with him previously.
What’s in an agreement?
We’re sad that the Rebble community has had tension with Core Devices ever since Google released the original PebbleOS source code. We’ve been pretty quiet about it for a while – we thought that we had a chance of working together if we tried hard not to fracture the community. But by now, a verbal promise isn’t enough.
When the code was released in January, we immediately branched the repository and started maintaining PebbleOS. The Rebble community
began porting an open-source Bluetooth stack to PebbleOS
, to support classic Pebble devices. Eric
mentions this in his blog post
, but what he doesn’t say is that
Rebble paid for the work
that he took as a base for his commercial watches!
Rebble’s work is the backbone of Core in other ways. The Core Devices app is based on
libpebble3
; in Eric’s blog post,
he said that Core built it
. The reality is that it started life as
libpebblecommon
, which Rebblers wrote as part of our mobile app project (
Cobble
), and we funded through the
Rebble Grants program
. The work that we did together saved Core a month or two of engineering effort at the beginning; Core took Rebble’s work, added to it, and then paid us back by
putting a more restrictive license on their contributions
and wrapping a closed-source UI around it.
A few months ago, Core promised that they would let Rebble maintain and own
the developer site
, after Rebblers spent days making it build again, importing new content, etc. Then, in Eric’s original proposed agreement, he demanded not only that Core publish the developer site on their domain, but that we
remove our copy of the developer site and redirect to theirs
.
These have been blows to our community, to be sure. We’ve tried not to let this affect our negotiations: we want to work together. But we went into this wary, knowing what a promise from Core meant.
The last straw was two weeks ago. We’d already agreed to give Core a license to our database to build a recommendation engine on. Then Eric instead demanded that we give them all of the data that we’ve curated, unrestricted, for him to do whatever he’d like with. We asked to have a conversation last week; he said that he was busy and could meet the following week. Instead,
the same day, our logs show that he went and scraped our servers
.
What’s at stake?
Rebble’s goal is to have a community-driven place to develop for these watches that we all love – today, and also in the future, if (love forbid!)
something were to happen
to Core Devices.
If we gave Eric an unrestricted license to our data, he could do the same thing he did to our firmware work, and our mobile app work. He’d have the right to take it and build his own app store – and the work that we’ve done together as a community for the past decade would no longer be in our control.
We watched this happen ten years ago when Pebble went under (Rebble has been around longer than Pebble and Core combined by now!). We don’t know that Core can commit to supporting this ecosystem in the long term. After all, the warranty on Pebble 2 Duo is 30 days long, and
early users are already reporting that
their buttons are falling apart
!
But even if Eric has the best intentions now and can find the funds to keep Core afloat, you could imagine that OpenAI, or someone else, would want to acquire Core and make him an offer he couldn’t refuse. We’ve watched this play out
so many times
, from
so many other companies
, in
the decade since
– a product we love gets released, and
then gets killed off
, another victim of closed-source enshittification for profit. We love these watches, and we’d be sad if that happened. And more to the point, we love this community that we’ve been a hub for.
This is your choice
In our last post, we said that Rebble belongs to you. We mean it. These are the apps that you’ve written and contributed to your fellow Pebblers. These are the watches that you spent so long caring about and loving. This is your community, where you make awesome happen. So we see two directions from here, and
we need the community’s help to decide
.
If y’all would like, one option is that we could
aggressively protect the work we’ve done
, and try to protect the community going forward. If Eric had had the foresight to back up this data nine years ago and maintained it himself, there would be nothing we could say about this. But he didn't, and we, together, did. We made it absolutely clear to Eric that scraping for commercial purposes was not an authorized use of the Rebble Web Services.
This gets ugly in a hurry, but we have legal resources that can protect us. We’d rather spend our time on building a next-generation open source mobile app than spend it on a fight, but if it’s what we have to do, we’re not afraid. If we want to keep what we built, we’re going to have to use our energy to protect it.
The other option is that we could
just let Eric do whatever he wants
. Eric believes that our database should be free for anyone to make their own copy of, because we are a non-profit Foundation. We don’t agree, but maybe you do! Nothing has to live forever, and we’ve done great work together. If the community prefers that we pass the mantle onwards, we’ll do what y’all think is right.
These are both painful options for us. And to be clear, we’d rather do neither!
If Eric and Core are willing to give us a legal commitment that they’re not just going to kick us out, and that they’ll work with us, we’d much rather do that
. We’re happy to let them build whatever they want as long as it doesn’t hurt Rebble. Eric, you’re the best in the world at making quirky hardware for people who genuinely love what they wear. We’re great at building a community. Use our locker, use our timeline, use our App Store – we’ve built it just for you. Just as long as we can work together as partners.
But in the mean time, we’re here at a crossroads.
We need you
For our friends who have supported us over the past years: we’re sorry that you’re caught in the middle of this. We think Rebble can be the hub of community, and Core can make awesome products, and these don’t have to be in conflict. Eric’s new devices, Pebble 2 Duo and Pebble Time 2, look absolutely amazing! We want to support him in making beautiful hardware long into the future – without exposing our users to the classic walled garden enshittification trap.
But we want your input.
If Eric and Core can’t play nice,
we need you, our community, to tell us what to do
. We’re serious: if you think we should do something different, we will. So we’re posting this on
Reddit /r/pebble
and a handful of other places. We’ll be (gulp!) reading the comments – the top rated
and
the long tail – to try to understand what the community’s sentiment is. We’ll be watching the discussion on Discord. And, of course, if you want, you can
e-mail the Rebble Foundation Board of Directors directly
. We’d like to hear from you.
Yours in hope –
so many of us from the Rebble team over the past 9 years, including:
David, Joshua, Will, Ruby, Stasia (LCP), Siân (astosia), Harrison (Link Sky), Lavender, Ben, Ephraim (gibbiemonster), Jakob (Jackie)
Rust9x Unofficial "Tier 4" Rust Target for Windows 9x/Me/NT/2000/XP/Vista
This is the main source code repository for
Rust
. It contains the compiler,
standard library, and documentation.
Why Rust?
Performance:
Fast and memory-efficient, suitable for critical services, embedded devices, and easily integrated with other languages.
Reliability:
Our rich type system and ownership model ensure memory and thread safety, reducing bugs at compile-time.
Productivity:
Comprehensive documentation, a compiler committed to providing great diagnostics, and advanced tooling including package manager and build tool (
Cargo
), auto-formatter (
rustfmt
), linter (
Clippy
) and editor support (
rust-analyzer
).
Rust is primarily distributed under the terms of both the MIT license and the
Apache License (Version 2.0), with portions covered by various BSD-like
licenses.
The Rust Foundation
owns and protects the Rust and Cargo
trademarks and logos (the "Rust Trademarks").
If you want to use these names or brands, please read the
media guide
.
Third-party logos may be subject to third-party copyrights and trademarks. See
Licenses
for details.
Pardoned Capitol Rioter Tried to Hush Child Sex Victim With Promise of Jan. 6 Reparation Money, Police Say
Intercept
theintercept.com
2025-11-18 02:39:53
Andrew Johnson claimed to an alleged child molestation victim that Trump's pardon entitled him to multi-million dollar reparations.
The post Pardoned Capitol Rioter Tried to Hush Child Sex Victim With Promise of Jan. 6 Reparation Money, Police Say appeared first on The Intercept....
A pardoned January
6 rioter has been charged with sex crimes against two children. Andrew Paul Johnson was arraigned in a Florida court in October on multiple charges, including molesting a child as young as 11 years old, joining a growing list of Capitol rioters pardoned by President Trump who now face new legal trouble.
Johnson dangled the prospect that one of the children could receive money because, Johnson claimed, he was entitled to $10 million as part of reparations for his January 6 arrest, according to a
police report
from a Hernando County, Florida, Sheriff’s Department detective.
Those convicted and later pardoned for involvement in the Jan. 6 riot have not been awarded any reparations, though
Trump
and
January 6 rioters
have floated the idea of a compensation fund.
Johnson said he would put the victim in his will to receive any of the money left after his death. Police believed this was done to keep the child from “exposing what Andrew had done,” according to the arrest report, which was filed in court.
Johnson faces two criminal cases in county court, one for each child. In one case, he has been charged with lewd or lascivious molestation of a child under the age of 12. In the other case, he faces charges of lewd or lascivious behavior toward a child under the age of 16, transmitting harmful information to minors, and exhibition with a victim under the age of 16.
Johnson has pleaded not guilty, and his trials are set to start early next year. (Johnson’s attorney did not respond to a request for comment.)
Though some records, like the redacted arrest affidavits, are public, the indictments and other court filings in Hernando County are not available to the public. Florida law allows authorities to withhold information from public records that would identify victims of child sex crimes.
Two police arrest reports detail Johnson’s alleged crimes, which range from sexual contact with the genitals of an 11-year-old to asking a minor for sex. Johnson’s victims, according to a pair of arrest affidavits, were the child of his now ex-girlfriend and a friend of the first child.
On August 26, eight days after an arrest warrant was issued for the child sex crimes charges, Johnson was arrested in a suburb of Nashville, Tennessee,
according
to local media there, which noted his January 6 pardon, and set for extradition to Florida.
Johnson was among the 1,500 people charged in connection with the riots on January 6, 2021, in which supporters of Donald Trump stormed the Capitol Building in Washington in an attempt to overturn the president’s election loss to Joe Biden. According to an FBI
affidavit
, authorities found probable cause to charge Johnson for entering the Capitol illegally and trying to interfere with Congress’s certification of Biden’s victory. An FBI affidavit includes photos of Johnson climbing into the building through a broken window.
Johnson, 44, represented himself in court and pleaded guilty in the spring of 2024 to
charges
of violently entering the Capitol and disorderly conduct, though he unsuccessfully
attempted
to take back his plea months later.
In January 2025, after Trump took office for his second term, he pardoned Johnson, who had been charged with violently entering a restricted building, disorderly conduct, and demonstrating inside the Capitol. (The White House did not respond to a request for comment.)
In the 2025 affidavit that details the alleged sex crimes against the younger child, Johnson’s ex-girlfriend told police that she found out he was using Discord to send her child photos of girls. Johnson included sexual comments with the photos. According to the affidavit, she told police she asked the child if Johnson had ever been inappropriate in person, and the child responded that Johnson had molested them three times over a six-month period in 2024.
The abuse started when the child was 11 years old, the child told the mother, according to the affidavit, when Johnson was still living with the family. The police document says the minor described two incidents of falling asleep in the living room and awaking to Johnson touching the child’s genitals.
Another incident, according to the affidavit, occurred in a hotel, with no further detail given. The child told Johnson they knew this was wrong. Johnson apologized, the police document said, and asked the child to not tell anyone, so that he would not get in trouble.
After the third instance, Johnson mailed the child an iPhone 7, which he told the child to keep secret. Johnson then used Discord to communicate with the child without the mother’s knowledge. Photographs on the phone showed Johnson sneaking into the home to spend time with the child, according to the arrest affidavit.
Both children said Johnson showed them lewd photographs and videos of himself, according to both arrest affidavits, and exposed himself to them in person.
The second child, who is under the age of 16, told police Johnson made comments that led them to believe he was a “pedophile,” according to an
arrest affidavit in that case
, where Johnson was charged with lewd or lascivious behavior.
Johnson, according to the second affidavit, also encouraged children to have sex in his van.
Pardoned Jan. 6 Rioters
Many of those charged in January 6 cases, especially those who went to jail or prison, have formed a loose-knit community that socializes and fights with each other, both online and offline. Johnson has been a fixture within the January 6 online community.
He regularly led Spaces, conversations on X, formerly Twitter, that would sometimes last over nine hours. On both X and his YouTube channel, Johnson positioned himself as a person who exposed perceived bad actors among the January 6 rioters, namely those who, he argued, were federal agents or provocateurs sent to make the Trump supporters at the Capitol that day look bad.
Many rioters have spent time defending themselves against Johnson’s allegations or joining him in casting blame on others. Earlier this year, Johnson
said he traveled
from Florida to Pennsylvania to attend the funeral of fellow January 6 rioter Bart Shively, staying in an AirBNB organized by Jake Lang, a
white nationalist
rioter who is now running for Congress in Florida.
The right-wing outlet
Gateway Pundit
ran a story about Johnson in June 2024, ahead of his sentencing, referring to him as a “single father” who was “on the brink of homelessness.”
The Gateway Pundit story, which uncritically offers Johnson’s version of the events of January 6, including his conspiracy theories about agents provocateurs, encouraged readers to donate money to the defendant. The article was based on an interview of Johnson by Jenn Baker of CondemnedUSA, an organization that raised money for January 6 participants. (“I have had no contact with him since just after his pardon for J6,” Baker told The Intercept. “I’m completely disgusted and horrified at these charges and if he is proven to be guilty I support any punishment he receives.”)
Baker has recently been
added
to the Pentagon Press Corps for Gateway Pundit. Earlier this year, Baker wrote a
sympathetic Gateway Pundit profile
of Dillon Herrington, a January 6 defendant who is currently in jail while awaiting trial on a
2023 charge
of first degree rape.
Johnson
joins
a
short
list
of pardoned rioters who have been convicted or charged with sexual crimes against children, in most cases for conduct before the January 6 riot.
Like Johnson, David Daniel was accused of a child sex crime allegedly committed after the January 6 riot; he was charged in April 2024 with possession and production of child sexual abuse materials after the FBI raided his home in relation to the riot investigation. In deliberations, Daniel argued that because the raid and search were related to January 6, the evidence was inadmissible. So far, Daniel has not been successful in getting his charges dropped and his case is
ongoing
.
In two other cases, Trump
issued second pardons
to other January 6 defendants who were charged with crimes related to investigations of their roles in the riots; neither was charged with sex crimes.
One defendant was pardoned this month for an illegal gun charge that arose from a search of his home during the investigation into January 6-related crimes. The second pardon came after courts rejected the man’s attempt to have the charge vacated because of the original pardon.
In another case, Trump this month pardoned another rioter who made online threats to shoot police officers after they sought to question her about January 6.
Released in June 1996,
Quake
had to ride three technological shock-waves during its lifetime. Besides the emergence of 3D hardware accelerator cards and the growth of the Internet, an operating system shift put game developers in a tough position.
With its push for Windows 95 and Windows NT, Microsoft was replacing its legacy PC operating system, MS-DOS. From 1996 to 1997, the market share of DOS dropped by 50%. Some developers, like Blizzard North, took the leap of faith and wrote Windows 95–exclusive titles such as Diablo. id Software, on the other hand, went through the effort of producing a single binary,
quake.exe
, able to run on both DOS and Windows.
What is even more impressive is that id managed to make
Quake
better when the Windows 95 TCP/IP stack was available. Here is how they did it.
quake.exe 101
quake.exe
is a DOS executable. id Software had used the
Watcom
compiler for DOOM but they switched to a GCC port named
djgpp
[1]
to cross-compile
Quake
on Alpha servers.
$ file quake.exe
quake.exe: MS-DOS executable, COFF for MS-DOS, DJGPP go32 DOS extender
Like Watcom's DOS/4GW, djgpp offered developers an extender allowing them to write programs with flat 32-bit addressing instead of the dreaded 16-bit near/far hellish real mode otherwise mandated by DOS. An extender works with a client and a server. In the case of
Quake
the extender client is embedded in
quake.exe
while the server is in
cwsdpmi.exe
.
From the beginning of development, id had asked the djgpp engineers to make their DPMI client able to run not only on djgpp's own DPMI server but also on Windows 95's DPMI server.
It may not be apparent how much of a tour-de-force it was for
djgpp
to make their DPMI client work with another DPMI server, but knowing a little about how it works, it blows me away. Raymond Chen, a Microsoft kernel engineer at the time, gave the best description of how to think about this situation.
The client application was written with the assumption that it is using the MS-DOS extender that is included with the application, but in reality it is talking to the DPMI host that comes with Windows.
The fact that programs seem to run mostly okay in spite of running under a foreign extender is either completely astonishing or totally obvious, depending on your point of view.
It’s completely astonishing because, well, you’re taking a program written to be run in one environment, and running it in a different environment. Or it’s totally obvious because they are using the same DPMI interface, and as long as the interface has the same behavior, then naturally the program will continue to work, because that’s why we have interfaces!
It looks like a mess at first sight but running Quake under DOS only requires four files. Namely, the game engine
quake.exe
, the config file
config.cfg
, the asset file
pak0.pak
, and the DOS extender server
cwsdpmi.exe
.
Quake supported four types of multiplayer protocols.
Two modes allowed gamers to enter a duel (1v1). Both modes expected a device plugged into the COM port of the PC. A modem let players call an opponent's phone number (hello $$$), while a NullModem cable (called here "Direct Connect") required both computers to be a few feet apart.
Both IPX and TCP/IP allowed a much more interesting deathmatch featuring up to 16 players. IPX technology was intended for LANs where all machines were a few feet apart, while TCP/IP allowed players to reach anybody worldwide.
Notice how, under DOS, by default, both IPX and TCP modes were disabled (greyed out).
quake.exe under DOS: Greyed out Multiplayer modes
Quake came with
PDIPX.EXE
which loaded an IPX DOS TSR. That TSR communicated with a packet driver which in turn hit the network card. Quake was able to probe for that DOS TSR and upon detection allowed players to select IPX.
Using TCP/IP was nearly impossible. DOS did not come with a TCP/IP stack, and one was complex enough that only a single vendor provided a TSR for it on DOS.
The TSR name was BWNFS. Made by
Beame & Whiteside
, it cost $395 in 1996 ($830 in 2025!)
[3]
. It is reasonable to say that few gamers ever used TCP/IP on DOS to play QUAKE.
quake.exe under Windows 95
Starting
quake.exe
from Windows 95 works like a charm. The executable is loaded into a Windows 95 "dos-box"
[4]
that virtualizes memory, interrupts, and signals
[5]
. The game ran exactly as it did under DOS, with the same multiplayer choices available. It was convenient since users did not have to load a mouse driver or set up the
BLASTER
environment variable to make the sound card work.
Much less conveniently, however, running Quake this way requires 16 MiB of RAM. Quake only needs 8 MiB, but Windows 95 adds quite a bit of overhead! The same files used when running from DOS are used here as well, except for
cwsdpmi.exe
, since the DJGPP client detects and uses Windows’ built-in DPMI server.
It is impressive to see
Quake
run at full speed knowing that Windows 95 runs DOS executables in a virtual machine. My guess is that, in full screen, memory reads and writes to the VGA are given direct access to the hardware to preserve performance.
The magical q95.bat script
Starting
quake.exe
from DOS and running it from Windows are not the only two ways to run
Quake
. There is a third option, which is to launch
q95.bat
.
In this case, a "Launching Quake" window briefly pops up on the Windows 95 desktop.
The text gives a clue about what is happening. Quake is loaded with a tunnel to Winsock, Microsoft's TCP/IP stack. There is further indication of
what
is doing that, "Powered by Mpath". But not much more to explain
how
this all works.
Mpath
Mpath Interactive was a company dedicated to online gaming. They provided subscription services to help gamers find each other but also operated as an ISP reseller.
[6]
. It was in their interest to help gaming companies release titles allowing Internet play, as Larry Hastings, an Mpath employee at the time, recalls.
Back then in the primordial ooze that was the mid-90s internet, online multiplayer was still in its infancy. If you wanted to play a multiplayer game on the internet, either you needed to have explicit host & port information, or you needed to use an online multiplayer gaming service. And in 1995 there were only two: us, and Total Entertainment Network. You might think game creators would come to us and say "please put my game on your service!", but... nope! Not only did we have a licensing team that went out and got contracts to license games for our service, but we had to pay the vendor for the right to license their game, which was often an exclusive. So, we had Quake and Unreal; TEN got Duke Nukem 3D and NASCAR.
The user experience for Mplayer was like this. First, you'd run the "Gizmo", which was a Windows program that acted as a sort of game browser. It knew which compatible games you had installed, and it'd let you browse the multiplayer games on offer for each game; the metaphor we used for this was a "room". Quake was drop-in, so you could simply find a game in progress and hop right in--not a feature of very many games back then. Alternatively, you could find a "room" where someone was proposing to launch a game soon. Or you could create your own. You'd set the name of the room, and the Mplayer Gizmo had some per-game UI that let you set the settings for the game (what map, what features, etc). The room featured text and audio chat, and even a shared "whiteboard", a simple paint program. Once the owner of the "room" "launched" the game, everyone's Gizmos would automatically start the game for them, and the game would automatically join that online game and start playing.
In order for a game to run on Mplayer, it had to integrate with the Mplayer software stack. Mostly this integration work was done by Mpath engineers; we'd get source code from the game developer and "porting engineers" would get it to run on Mplayer. This often included modifying both the client and the server, so that both could talk via Mplayer's servers.
The early version of Quake was DOS only, and used the Chunnel to talk to the Windows 95 TCP/IP stack. (Which in retrospect makes the "Chunnel" a type of "thunk", like Microsoft's "Win32s".) I think the deal was, we licensed the Chunnel to id, and in return for that we got to have Quake on Mplayer. So, DOS Quake supported running on Mplayer via the Chunnel, in addition to connecting to open game servers on the Internet via host and port.
- Larry Hastings (Email conversation)
Larry was kind enough to share some Quake anecdotes.
One afternoon shortly after we got our first build of the game, we played a round of deathmatch with the id team over the internet. We were in Cupertino, CA, in a building on Bandley Drive (now a "Fitness Center" for Apple employees). They of course were in Mesquite TX. Yup, it was deathmatch over the internet--very exciting! The only id employee I remember for sure being in the game was Tim Willits. He owned us, both because he was way more used to Quake, but also because he knew where all the secrets were. At one point I spotted him coming out of a secret doorway with a rocket launcher. And either he didn't see me, or I died shortly thereafter.
- Larry Hastings (Email conversation)
As for explaining how the Chunnel worked, I was out of luck.
I didn't work on the Chunnel. That was mainly a British guy named Henry but I don't remember his last name, it was thirty years ago. All I remember about him is what he looked like, and the fact that he drove a cool car, a white Merkur XR4Ti.
- Larry Hastings (Email conversation)
Ghidra
When everything else fails, we still have Ghidra and doomworld's amazing community (thanks xttl
[7]
). After much decompiling and talking, it turned out that all the previously ignored files were part of Mpath's "Chunnel".
q95.bat
is just a small script to launch mpath's main program.
qlauncher.exe
contains all the
MPlayer functions
. However, the role of this executable is limited.
It merely loads
quakeudp.dll
. Despite its confusing name, this DLL is the heart of the Quake Chunnel. It is the bridge to Microsoft's TCP/UDP/IP stack (
wsock32.dll
). It also starts Quake with the
-path
parameter to make it load an implementation of the BSD network socket API,
sys/socket.h
. Finally, it also loads the virtual device driver manager
genvxd.dll
.
The virtual device is the trick that allows a DOS executable running inside a Windows 95 dos box to communicate with win32. The
genvxd.dll
dynamic library loads a virtual device driver
[8]
named
GENVXD.VXD
which installs itself to respond on interrupt
0x48
.
The last piece of the puzzle is on Quake's side. The implementation of BSD
sys/socket.h
,
mpplc.c
, is code provided by Mpath. It takes care of marshaling every BSD socket function call, then uses the DPMI client to trigger a software interrupt that is received in win32 land. Data is passed up the pipeline we previously described until it is unmarshalled by
genvxd.dll
and routed towards
wsock32.dll
. Notice the symmetry of
functions
found in
mplib.c
marshalling and the
symbols
found in
genvxd.dll
unmarshalling.
It seems John Cash was involved in compiling Mpath's stuff. We can find his name in the symbols of
mgenvxd.vxd
.
F:\cashcode\GENVXD\bin\Mgenvxd.pdb
The source code of mgenvxd.vxd, genvxd.dll, qlaunch.exe and quakeudp.dll was never released. It was a proprietary, patented technology from Mpath. It is likely id only got permission to release the client side of it.
As far as I understood it, that is how Quake was able to send TCP and UDP packets over IP. This convoluted construct became obsolete when id stopped shipping DOS executables (the last one being
vquake.exe
). After Dec 1996,
winquake.exe
,
glquake.exe
, and all QuakeWorld binaries were win32 exclusive with direct access to
wsock32.dll
.
Recently, I needed a lightweight way to run a periodic job in the background to clean up some expired database records. Precise timing wasn’t important, this was the only background work needed in the app, and it needed to self-heal if anything went wrong.
Because this was such a trivial task, I didn’t want to set up a heavyweight job execution framework like Oban. Additionally, this app was deployed as a single Docker container, so adding the clunkiness of cron jobs to the deploy process did not seem fun to me.
One of the cool features of the GenServer module is the
:timeout
message. From
the docs
:
The return value of
init/1
or any of the
handle_*
callbacks may include a timeout value in milliseconds;
…when the specified number of milliseconds have elapsed with no message arriving,
handle_info/2
is called with
:timeout
as the first argument.
This lets us initialize a
GenServer
with a timeout, store it as state, and return it from both
init/1
and
handle_info/2
.
Now the
GenServer
will receive
:timeout
again after each run, which effectively triggers
handle_info(:timeout, state)
on the configured interval. This setup has the added bonus of preventing overlapping executions.
Here is a full example:
defmodule MyApp.ExpireWorker do
  use GenServer

  def start_link(period_in_milliseconds) do
    GenServer.start_link(__MODULE__, period_in_milliseconds, name: __MODULE__)
  end

  def init(period_in_milliseconds) do
    # {:ok, state, timeout()}
    {:ok, period_in_milliseconds, period_in_milliseconds}
  end

  def handle_info(:timeout, period_in_milliseconds) do
    case MyApp.Tokens.purged_expired() do
      {:ok, num_deleted} ->
        IO.inspect("Deleted #{num_deleted} expired tokens")

      {:error, error} ->
        IO.inspect("Error deleting expired tokens: #{inspect(error)}")
    end

    # {:noreply, state, timeout()}
    {:noreply, period_in_milliseconds, period_in_milliseconds}
  end
end
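To tie it together, here is a minimal sketch of how a worker like this could be started under the application's supervision tree. The MyApp.Application module name and the one-hour period are assumptions for illustration, not part of the original setup:

defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    children = [
      # `use GenServer` generates a default child_spec/1, so the {module, arg}
      # tuple below calls MyApp.ExpireWorker.start_link(:timer.hours(1)).
      # Run the cleanup roughly once per hour (period is the worker's state).
      {MyApp.ExpireWorker, :timer.hours(1)}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end

Because the worker runs under a supervisor, a crash in the cleanup code simply restarts it with a fresh timeout, which gives the self-healing behavior described at the start.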
Another option is
:timer.send_interval/2
, but the
:timeout
mechanism keeps everything in a supervised
GenServer
and avoids overlapping executions.
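For comparison, a rough sketch of the :timer.send_interval/2 approach might look like the following; the :tick message name and the module name are arbitrary choices for illustration:

defmodule MyApp.IntervalExpireWorker do
  use GenServer

  def start_link(period_in_milliseconds) do
    GenServer.start_link(__MODULE__, period_in_milliseconds, name: __MODULE__)
  end

  def init(period_in_milliseconds) do
    # :timer.send_interval/2 sends :tick to this process on a fixed schedule,
    # even while a previous run is still being handled (ticks queue up).
    {:ok, _tref} = :timer.send_interval(period_in_milliseconds, :tick)
    {:ok, period_in_milliseconds}
  end

  def handle_info(:tick, state) do
    MyApp.Tokens.purged_expired()
    {:noreply, state}
  end
end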
The
:timeout
pattern doesn’t provide the stronger guarantees that a real job scheduler would, so something like
Oban
could be a better fit for tasks that aren’t lightweight cleanup tasks.
Rank-balanced trees (2014)
Lobsters
sidsen.azurewebsites.net
2025-11-18 02:13:26
Since the invention of AVL trees in 1962, many kinds of binary search trees have been proposed. Notable are red-black trees,
in which bottom-up rebalancing after an insertion or deletion takes O(1) amortized time and O(1) rotations worst-case. But
the design space of balanced trees has not been full...
Link: https://sidsen.azurewebsites.net/papers/rb-trees-talg.pdf
Apple Introduces Digital ID
Daring Fireball
www.apple.com
2025-11-18 01:27:17
Apple Newsroom, last week:
Apple today announced the launch of Digital ID, a new way for
users to create an ID in Apple Wallet using information from their
U.S. passport, and present it with the security and privacy of
iPhone or Apple Watch. At launch, Digital ID acceptance will roll
out first i...
Apple introduces Digital ID, a new way to create and present an ID in Apple Wallet
Digital ID offers a secure and private way for users to create an ID in Apple Wallet using information from their U.S. passport, and present their ID with iPhone or Apple Watch
Available today, Digital ID is a new way for users to create an ID in Apple Wallet using information from their U.S. passport, and present it with the security and privacy of iPhone or Apple Watch.
Apple today announced the launch of Digital ID, a new way for users to create an ID in Apple Wallet using information from their U.S. passport, and present it with the security and privacy of iPhone or Apple Watch. At launch, Digital ID acceptance will roll out first in beta at TSA checkpoints at more than 250 airports in the U.S. for in-person identity verification during domestic travel, with additional Digital ID acceptance use cases to come in the future.
Digital ID gives more people a way to create and present an ID in Apple Wallet even if they do not have a REAL ID-compliant driver’s license or state ID. Digital ID is not a replacement for a physical passport, and cannot be used for international travel and border crossing in lieu of a U.S. passport.
“With the launch of Digital ID, we’re excited to expand the ways users can store and present their identity — all with the security and privacy built into iPhone and Apple Watch,” said Jennifer Bailey, Apple’s vice president of Apple Pay and Apple Wallet. “Since introducing the ability to add a driver’s license or state ID to Apple Wallet in 2022, we’ve seen how much users love having their ID right on their devices. Digital ID brings this secure and convenient option to even more users across the country, as they can now add an ID to Wallet using information from their U.S. passport.”
The launch follows the capability for users to add an eligible driver’s license and state ID to Apple Wallet. If users do not have a U.S. passport to create their Digital ID, they can still
add an eligible driver’s license to Apple Wallet.
Adding Digital ID to Apple Wallet
Users can easily create and add a Digital ID to Apple Wallet using a U.S. passport. They start by tapping the Add (+) button at the top of the screen in Wallet on their iPhone and then selecting Driver’s License or ID Cards. They then select Digital ID and follow the onscreen instructions to start the setup and verification process.
Users are then asked to use their iPhone to scan the photo page of their physical passport as part of the process. They will also be asked to use their iPhone to read the chip embedded on the back of their passport to ensure the data’s authenticity. From there, they are asked to take a selfie for verification, and as another security step, they will also be prompted to complete a series of facial and head movements during the setup process. Upon verification, their Digital ID is added to Wallet.
Users can seamlessly create a Digital ID using their U.S. passport.
Using Digital ID in Apple Wallet
To present a Digital ID in person, users can double-click the side button or Home button to access Apple Wallet and select Digital ID. From there, they can hold their iPhone or Apple Watch near an identity reader, review the specific information being requested, and use Face ID or Touch ID to authenticate.
In the future, users will be able to present their Digital ID at additional select businesses and organizations for identity and age verification in person, in apps, and online.
Users can present their Digital IDs with their iPhone or Apple Watch.
Presenting Digital ID in a Secure and Private Way
Like all IDs in Apple Wallet, Digital ID takes advantage of the privacy and security features already built into iPhone and Apple Watch to help protect against tampering and theft. Digital ID data is encrypted. When users create a Digital ID, their passport data is stored on the device. Apple cannot see when and where users present their ID, or what data was presented. Biometric authentication using Face ID or Touch ID also ensures that only the owner of the Digital ID can present it.
Only the information needed for a transaction is presented, and the user has the opportunity to review and authorize the information being requested with Face ID or Touch ID before it is shared. Users do not need to unlock, show, or hand over their device to present their ID.
With Digital ID, only the information needed for a transaction is presented, and the user has the opportunity to review and authorize the information being requested before it is shared.
Driver’s Licenses and State IDs in Apple Wallet Today
The introduction of Digital ID brings users another easy, secure, and private way to create, store, and present an ID in Apple Wallet.
Today, the ability to add a driver’s license or state ID to Apple Wallet is live in 12 states and Puerto Rico. In the past six months alone, the feature has come to Montana, North Dakota, and West Virginia, and launched internationally for the first time in Japan with My Number Card on iPhone.
Game developer Rebecca Heineman has died after being diagnosed with cancer last month. The news was shared to Bluesky by Heineman's friend,
Heidi McDonald
, while the most recent post on
Heineman's GoFundMe
is a goodbye message stating that her health was rapidly deteriorating, and she was entering palliative care. Heineman was 62, and the GoFundMe will remain live to help her family make final arrangements.
Born in 1963, Heineman initially made a mark on the industry by winning a national Space Invaders tournament in 1980 in New York, becoming the first formally recognized US champion of any videogame. She went on to have a far-reaching career, being credited on 67 games according to
MobyGames
.
Rebecca Heineman sadly passed away. Known her since the 80s when I'd drive her to work, one of the most brilliant programmers around. A real gut punch earlier today when she messaged me: "We have gone on so many adventures together! But, into the great unknown! I go first!!!" :( pic.twitter.com/lu3i0fyt5C
November 17, 2025
Heineman publicly came out as transgender in the 2000s, and was married to fellow games industry legend Jennell Jaquays. Heineman was the recipient of Gayming's
2025 Gayming Icon award
, with the site writing that "her advocacy for LGBTQ+ inclusion, accessibility, and diversity in tech has inspired countless developers and players."
Jaquays died of complications from Guillain–Barré syndrome in January 2024, and Heineman was blindsided last month by an
aggressive cancer diagnosis
. She turned to GoFundMe to help with the costs of treatment, where fans, friends, and industry peers showed up to support the developer.
Heineman shared the message last night that her health was rapidly declining.
"It's time. According to my doctors. All further treatments are pointless," Heineman wrote. "So, please donate so my kids can create a funeral worthy of my keyboard, Pixelbreaker! So I can make a worthy entrance for reuniting with my one true love, Jennell Jaquays."
Game developers have begun sharing their own condolences and remembrances in the wake of Heineman's death.
Rebecca was one of the founders of Interplay and programmed & designed for some of the most influential games of my youth, notably Bard's Tale I & III and Wasteland. She will be missed.
A game industry legend died a few mins ago, Rebecca Heineman (@burgerbecky), taken away by aggressive lung cancer. She oversaw the porting of Wizordum to the Mac OS most recently for Apogee. My local friends would often have dinner with her and I loved her industry stories and…
November 17, 2025
What a remarkable human, and what a remarkable thing to know that she passed bemused at reading her own eulogies.
Rest in peace, Rebecca. Thank you for everything.
Rebecca was in my life because she reached out to me, a stranger, because she'd caught wind of a layoff I was impacted by. Her achievements were great, and so too was her kindness.
Rest well, you legend, you pioneer, you wonderful soul. I'm lucky to have known you, though briefly. Please share her legacy by reposting Heidi's message. 💖
in the early 2000s Rebecca took the time to chat over IRC with a teenaged and gender-confused Me on the practicalities of transition - in a time where being out as trans online was something that could get you socially ostracized. I owe her a lot for that and only hope I can pay it forward.
Ted has been thinking about PC games and bothering anyone who would listen with his thoughts on them ever since he booted up his sister's copy of Neverwinter Nights on the family computer. He is obsessed with all things CRPG and CRPG-adjacent, but has also covered esports, modding, and rare game collecting. When he's not playing or writing about games, you can find Ted lifting weights on his back porch. You can follow Ted on
Bluesky
.
I caught Google Gemini using my data–and then covering it up
I asked Google Gemini a pretty basic developer question. The answer was unremarkable, apart from it mentioning in conclusion that it knows I previously used a tool called Alembic:
Cool, it's starting to remember things about me. Let's confirm:
Ok, maybe not yet.
However, clicking "Show thinking" for the above response is absolutely wild:
I know about the “Personal Context” feature now — it’s great. But why is Gemini instructed not to divulge its existence? And why does it decide to lie to cover up violating its privacy policies? I’m starting to believe that “maximally truth-seeking” might indeed be the right north star for AI.
This Week in Peoples’ History, Nov 19–25, 2025
Portside
portside.org
2025-11-18 01:02:50
This Week in Peoples’ History, Nov 19–25, 2025
Jonathan Bennett
Mon, 11/17/2025 - 20:02
...
NOVEMBER 19 IS THE 110TH ANNIVERSARY
of the cold-blooded murder of labor organizer and Industrial Workers of the World activist Joel Hägglund, who is almost universally known by his pen name, Joe Hill. The bosses hated Hägglund and everything he represented, including his ability to set new pro-worker lyrics to popular tunes, such as his "There is Power in the Union," which uses the tune of "There is Power in the Blood of the Lamb," and "Casey Jones—the Union Scab."
Hägglund was found guilty of a murder he did not commit and martyred because he refused to rat on a friend. Neither he nor his friend had anything to do with the murder, but the friend, who had shot Hägglund in a fight, was saved from a long prison term by Hägglund's refusal to squeal.
Anyone who wants to learn the details of Hägglund’s fascinating life and also get the best insight into what made him tick ought to consider two books that were published on the centennial of his death in 1915. The two volumes were informatively reviewed for Portside by Paul Buhle in 2015. You can read that review here:
https://portside.org/2015-12-24/joe-hill-again
Is There a Lesson in the Anti-Racism of 1835?
NOVEMBER 20 IS THE 190TH ANNIVERSARY
of the founding of the New York Committee of Vigilance, in 1835, to prevent slave-traders from kidnapping African-Americans and selling them into slavery. The would-be kidnappers had the backing of the Fugitive Slave Act of 1793, which made any self-emancipated slave a fugitive anywhere in the U.S. And, of course, the kidnappers were highly motivated to assert their victims were fugitives, true or not.
The New York Committee of Vigilance and similar groups in Boston, Philadelphia, and Syracuse, as well as a NY state-wide organization, operated in a manner akin to anti-ICE groups now. They raised the alarm whenever they got wind of slave-catchers’ presence, harassed the slave-catchers, and made their best effort to ensure that anyone accused of being a fugitive had all the legal help possible to prevent their removal to the South. On any number of occasions, they used brute force to rescue the slave-catchers’ victims and spirit them out of the county. Sometimes they were arrested for such defiant acts, but they were often acquitted by juries who nullified laws they considered immoral.
https://www.zinnedproject.org/news/tdih/new-york-committee-vigilance-ruggles/
Women Rise Up Angry Against Police Sexism
NOVEMBER 22 IS THE 45TH ANNIVERSARY
of a fierce demonstration by some 500 supporters of Leeds Women Against Violence Against Women, who were protesting a shambolic, 6-year-long police effort to capture a serial killer of women in West Yorkshire, England. In 1980 the demonstrators were reacting not only to the inability of the police to arrest a man who had killed at least 13 women in the vicinity of Leeds and had also grievously assaulted at least seven others, but also to the authorities’ proposal to impose a nighttime curfew on women in the area, instead of a curfew on men.
The demonstrators blocked downtown traffic, beat on cars and buses, smashed windows, and vandalized two movie theaters that were showing pornographic films, including one about a killer of women. Less than six weeks after the anti-police rampage, Yorkshire police arrested the man who was eventually convicted of 13 murder charges and sentenced to life in prison.
https://secretlibraryleeds.net/2019/09/13/the-leeds-women-against-violence-against-women-march/
Judge Slams Homophobic Police
NOVEMBER 23 IS THE 70TH ANNIVERSARY
of a little-remembered early legal victory for civil liberties and gay rights. On this day in 1955, Baltimore Criminal Court judge James Cullen dismissed all charges against 162 patrons of the gay-friendly Pepper Hill Club, who had been arrested for allegedly disorderly conduct, but actually for no reason other than their presence in the club. When he dismissed the charges, Cullen said the police had no right to make such a mass arrest in a public place.
https://www.loc.gov/resource/sn83045462/1955-11-23/ed-1/?sp=21&r=-0.119,1.136,0.914,0.378,0
Deadly Brutality Backfires
NOVEMBER 25 IS THE 65TH ANNIVERSARY
of what turned out to be the beginning of the end for Rafael Trujillo’s brutal dictatorship over the Dominican Republic.
On this day in 1960 Trujillo’s thugs attacked and beat to death the Mirabal sisters, Minerva, Patria and Maria Teresa, three of the country’s best-known anti-Trujillo activists. The public revulsion over Trujillo’s brutality was the last straw; six months later, after more than 30 years in power, Trujillo was assassinated.
The Mirabal sisters have been permanently memorialized by the United Nations, which designated the anniversary of their killings as the International Day for the Elimination of Violence Against Women.
https://en.wikipedia.org/wiki/Mirabal_sisters
At 9:15 p.m. ET yesterday, Donald Trump threw in the towel, writing on Truth Social: “The House Oversight Committee can have whatever they are legally entitled to, I DON’T CARE.”
This is, of course, a lie. Everything Trump did in recent months shows that he cared deeply about not releasing the Department of Justice’s Jeffrey Epstein files. I’m sure he still cares a lot. But he’s now recognized defeat, at least a temporary defeat. And so he’s changed his tune.
It’s worth recalling how consistently and how insistently Trump fought the release of these files. In early July, his Attorney General and FBI Director announced they’d completed an “exhaustive review” of the files, after which they informed Trump of what he surely wanted to hear—that they had “found no basis to revisit the disclosure” of any of the Epstein materials.
Ever since, Trump has attacked those who called for the files’ release. Most notably, he tried to pressure the four Republican signers of the discharge petition to force a floor vote on the legislation mandating their release. This culminated in the remarkable spectacle of Colorado Rep. Lauren Boebert being summoned to the White House Situation Room last Wednesday to meet with Pam Bondi and Kash Patel to get her arm twisted. But the four Republican holdouts—Thomas Massie of Kentucky, Boebert, Marjorie Taylor Greene of Georgia, and Nancy Mace of South Carolina—held firm.
Meanwhile Trump’s lapdog, Speaker Mike Johnson, sent the House home early in late July to stop a growing Republican revolt on Epstein. He then kept the House out of session during the government shutdown, in part to avoid having to swear in the newly-elected Democratic Rep. Adelita Grijalva of Arizona, who would be the decisive 218th signatory of the discharge petition. But the shutdown ended, the House came back, and on Wednesday Grijalva was sworn in. She went straight to the well of the House to provide the signature needed to bring the petition to the floor.
Meanwhile, in the course of the last four-and-a-half months, we learned more about why Trump cared about information coming out on his relationship with Epstein. The revelations ranged from the salacious card in Epstein’s birthday book to the recently released 2019 email from Epstein, in which he writes that “of course knew about the girls.” What’s striking is that none of these revelations led Trump to make the judgment:
“Well, the worst is already out there, so I might as well order the files’ release.” So one has to suspect that Trump thought or knew that there would be even worse to come from the release of the Justice Department files. And one has to suspect that’s why he fought it.
But as House Republicans prepared to desert en masse, Trump last night acknowledged defeat. The House will pass the Epstein Files Transparency Act, likely tomorrow. The Senate will very likely follow suit quickly now that Trump has backed down. We will then see if Trump will sign the measure.
Of course, were he to sign it, Trump, along with Bondi and Patel, would no doubt work on minimizing the scale of the defeat. The Justice Department could withhold materials and limit the scope of the release of the files. And it will be hard to know what isn’t being released.
So this fight is by no means over. Democrats and the truth-seeking Republicans will have to keep the pressure on—by cross-checking the files that are released with what survivors and others know to be in them, by insisting on a full accounting of what hasn’t been released, by demanding hearings and testimony from Bondi and Patel under oath, and the like. And this is to say nothing of the fact that various documents and records might have conveniently gone missing in the course of Bondi and Patel’s exhaustive review.
Given how hard Trump has fought the release, it would be very foolish to assume that all will go smoothly now. There is material in there that Trump did not want us to see and still does not want us to see. So this is nowhere close to the end. It is merely the end of the beginning of the fight for full release of the Epstein files.
But we can draw lessons from this still incomplete and uncertain victory.
1) It was easier in this case to fracture the MAGA coalition than to get “responsible” Republicans to defect from Trump. The four Republicans who signed the discharge petition are not Republican “moderates” or “institutionalists.” Greene and Boebert are true believers. Many of their beliefs are foolish or deplorable. But they showed far more courage or at least stubbornness than all their more mainstream counterparts who have proved to be weaklings under pressure.
So: It may be more fruitful in the effort to weaken Trump to find and exploit fractures in the MAGA coalition than to try to find moderates to step up.
2) The four Republicans who held firm deserve a lot of credit. But they only were able to make a difference because the entire Democratic conference signed the discharge petition. And the entire conference signed up because some—mainly California’s Rep. Ro Khanna—insisted on seizing the issue.
I’m sure that Khanna and others were constantly being told by Democratic “strategists” not to let Epstein “distract” from the focus on “kitchen table” issues. I can’t even count how many meetings and conferences I’ve been at over the past months at which the Epstein issue was either downplayed or ignored, as Democratic consultants went over their polling data on health care. When some of us would politely—or sometimes not so politely!—point out that releasing the Epstein files polled even better than saving Medicaid, we were pretty much ignored. And we were sometimes privately reprimanded for indulging in this distraction.
We were also reminded time and again that Democrats are in the minority, and that it was important to stress to their supporters the limits of what they could do. But it turns out that Democrats are not powerless! They can sometimes make a difference. There are some levers of power—such as discharge petitions!—that are available. One has to pull on all those levers, and one often doesn’t know ahead of time which one might work.
So: Democrats should ignore much of the advice of the Democratic consultant-pollster-industrial complex.
And in general, fighting is superior to finding reasons not to fight.
You don’t score any goals if you don’t take any shots, even if they seem at first like long shots.
3) Finally, what we’ve already seen of the Epstein emails offers a remarkable window into the bipartisan decadence and depravity of many of our elites. Democrats should run against not just Trump and the GOP, but against elites in general in 2026, and I dare say in 2028.
So: For those of us who’d prefer centrist policies to leftist ones, we need centrist candidates who are also credibly anti-elitist. There will be no market for a return to the good old days of the Clintons and their like. Not when they can be found next to Trump in the Epstein files canon.
We shouldn’t overstate this moment. There are many, many challenges ahead on every front. Indeed, the chances of an intensification of the Trump administration’s authoritarianism at home and abroad may have increased because of Trump’s forced retreat on the Epstein files.
Ten months into Trump’s second term, we are nowhere near turning the corner in the fight against Trump and Trumpism. But that turning point may, just may, be coming into sight.
William Kristol is Editor at Large, The Bulwark. Director, Defending Democracy Together. Host, Conversations with Bill Kristol.
You may have noticed that sh*t has gotten weird the last few years. The Bulwark was founded to provide analysis and reporting in defense of America’s liberal democracy. That’s it. That’s the mission. The Bulwark was founded in 2019 by Sarah Longwell, Charlie Sykes, and Bill Kristol.
Microsoft: Windows 10 KB5072653 OOB update fixes ESU install errors
Bleeping Computer
www.bleepingcomputer.com
2025-11-18 00:22:11
Microsoft has released an emergency Windows 10 KB5072653 out-of-band update to resolve ongoing issues with installing the November extended security updates. [...]...
Microsoft has released an emergency Windows 10 KB5072653 out-of-band update to resolve ongoing issues with installing the November extended security updates.
Windows 10 reached the end of support on October 14, 2025, and Microsoft no longer introduces new features or releases free security updates.
For individuals and business customers who wish to continue using Windows 10, Microsoft offers extended security updates (ESU).
Consumers can receive extended security updates for one additional year by either paying $30, backing up their Windows settings to their Microsoft account, or redeeming 1,000 Microsoft reward points.
Enterprise customers can purchase an ESU license for 3 years, bringing the total cost per device to $427.
Today, Microsoft has released the "KB5072653 Extended Security Updates (ESU) Licensing Preparation Package," which fixes the 0x800f0922 errors people have been encountering when attempting to install the ESU update.
"Once you install this preparation package (
KB5072653
), you will be able to deploy the November 2025 security update (
KB5068781
)."
To install the update, a Windows device must be running Windows 10 22H2 and have the October 2025 KB5066791 cumulative update installed. Users can then check for new updates using Windows Update, and KB5072653 will be installed automatically.
Microsoft says that once the KB5072653 update is installed and Windows has been restarted, users should rerun Windows Update to install the November extended security update successfully.
However, some corporate Windows admins have reported [1, 2] that WSUS and SCCM are not correctly indicating that a Windows 10 device needs the extended security update, even when it is correctly enrolled in the program.
Microsoft says it will release a new Scan Cab with updated metadata for this update to properly perform compliance update checks.
"A new Scan Cab including metadata for
KB5072653
will be available in the near future for organizations that utilize cab files for compliance update checks. We will update this announcement once the new Scan Cab is available," explained Microsoft.
BleepingComputer contacted Microsoft to determine if this would resolve the issue reported by some Windows admins.
The Supplemental Nutrition Assistance Program provides an average of $6 per day for nearly 42 million people, roughly 40 percent of whom are children. Under the new law, parents and older Americans will be required to meet stricter work requirements, and states eventually will have to share in the cost of SNAP benefits, which could force further program cuts, according to the nonpartisan Congressional Budget Office.
Tens of thousands of legal immigrants will also lose access to the program under the law.
The loss of SNAP “was really stark during the shutdown,” said Dottie Rosenbaum, director of federal SNAP policy at the left-leaning Center on Budget and Policy Priorities. “But [the One Big Beautiful Bill Act] is the largest cut in the program’s history. That is also going to be really deeply felt.”
States have started notifying participants they will be subject to new, tighter work requirements, setting up a three-month countdown for people to comply or lose benefits entirely.
“While we are concerned about any person in this country going hungry needlessly, there is something spectacularly cruel about ripping out the safety net of people who came to this country who need just a little bit of time to get back on their feet and to begin to be able to contribute economically to this country,” said Naomi Steinberg, vice president of policy and advocacy at HIAS, a Jewish nonprofit that assists refugees and asylum seekers.
HIAS estimates that the SNAP changes will cut benefits for roughly 250,000 refugees and other humanitarian visa holders.
Rollins has also indicated that she may press for current SNAP participants to reapply, despite existing requirements that participants regularly certify their incomes and other factors that determine eligibility. The new plan could add red tape that will make it more difficult to get benefits.
USDA’s Food and Nutrition Service issued new guidance in October and November during the shutdown on how to comply with tightened work requirements and follow other changes in the law, but some states are still struggling to interpret it. In California, where more than 5 million people use SNAP, California Department of Social Services Director Jennifer Troia said in a recent webinar that the state is still working through the new guidance.
“This is a priority for us,” Troia said. “We will move toward compliance with FNS guidance, while also balancing the need for accuracy and clarity.”
Millions of low-income families will also lose access to Medicaid in the next few years, when stricter work requirements and other changes for that program kick in. Republicans’ tax and spending law has made certain legal immigrants, including refugees, ineligible for Affordable Care Act subsidies. And the Trump administration is working on a new public charge regulation that could deter millions of lawfully present immigrants from participating in federal safety net programs.
As low-income people struggle to pay utility bills and make rent, many fall back on the charitable food network to help pay for groceries. But food banks and pantries are still scrambling to recover from nearly $1 billion in federal funding cuts earlier this year — and from the chaos resulting from the pause in SNAP benefits during the shutdown.
During the week of Oct. 27, food banks purchased 325 percent more food through Feeding America’s Grocery Purchase Program than during the same time last year, according to the nonprofit.
Matt Jozwiak, who runs Rethink Food, a charity meal organization in New York City, said his organization increased the number of meals it provided from between 40,000 to 50,000 per week to 120,000 during the shutdown.
“It could not be worse,” Jozwiak said. “This is just like what’s to come. This is bad, but [OBBA is] permanent.”
With hundreds of thousands of refugees and other immigrants bracing to be kicked off SNAP, some refugee resettlement organizations are offering more emergency food options to help fill the hole.
“We have a truck, we have a warehouse, and it made sense,” said Laura Thompson Osuri, executive director of Homes Not Borders, a Washington-area nonprofit that assists newly-arriving refugees. Her group is now focused on food security.
Cyndi Kirkhart, who runs Facing Hunger Food Bank in West Virginia, is worried about her organization’s ability to keep up.
“When I wrote my budget last year for this year, I sure didn’t put this crisis in it,” Kirkhart said, referencing November’s benefits lapse. “Now, I’m going to anticipate there’s going to be more crises, and I’ll just have to budget more and hope that the same kinds of help and support line up. But at some point, everyone is affected by crises. So at what point do folks go, ‘I can’t do any more,’ right?”
Marcia Brown is a food and agriculture reporter at POLITICO covering federal agencies and the business of food.
Her freelance reporting has appeared in The Washington Monthly, The New Republic and The Food and Environment Reporting Network. An Ohio native, she lives in Washington, D.C.
We’re interested in the open questions around how developers use Cursor’s agent in their work and the productivity impacts of Cursor in organizations.
Suproteem Sarkar, an assistant professor of finance and applied AI at the University of Chicago, recently conducted a study analyzing agents' early effects across tens of thousands of Cursor users.
The study found that companies merge 39% more PRs after Cursor's agent became the default. It also found that experienced developers write more plans before coding and appear more proficient with agents.
The study looked at two signals: how frequently users send requests to the agent and how often they accept its code edits. Whether a user accepts the agent's edits depends on how well the output aligns with their intent and their threshold for applying generated code.
Junior developers are more likely to accept code from Tab, while senior developers are more likely to accept code from agents. For every standard deviation increase in years of experience, we see a corresponding ~6% increase in the rate of agent acceptances relative to the mean.
We would have expected less experienced developers to use and accept agent edits at higher rates; it seems the opposite is true!
A few theories:
Experienced developers may be more skilled at using agents by using custom rules or managing context more effectively.
They are more confident in their ability to evaluate agent-written code changes, which increases their willingness to accept.
They are working on more well-scoped tasks which can be easier for agents to complete in fewer iterations.
The study measured how proxies for throughput and quality changed after Agent became the default mode on Cursor. It compared these measures between an "eligible" group of organizations that were already using Cursor before the agent was released and a "baseline" group of organizations that weren't using Cursor during the analysis period. It found that the rate of merged PRs increased by 39% relative to time trends in the baseline group.
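As a rough sketch of the comparison being described (not the study's actual code, and with purely made-up numbers), the idea is to measure the eligible group's change in merged-PR rate net of the change seen in the baseline group over the same window, so that shared time trends cancel out:

# A minimal sketch of the before/after comparison described above; all numbers are
# illustrative placeholders, not data from the study.
def relative_change(before: float, after: float) -> float:
    """Fractional change from the 'before' period to the 'after' period."""
    return (after - before) / before

# Hypothetical weekly merged-PR counts per group, before/after the agent became the default.
eligible = {"before": 100.0, "after": 145.0}   # orgs already using Cursor before the release
baseline = {"before": 100.0, "after": 104.0}   # orgs not using Cursor during the analysis period

# Difference-in-differences style estimate: the eligible group's change,
# net of the time trend observed in the baseline group.
effect = relative_change(**eligible) - relative_change(**baseline)
print(f"Estimated effect on merged PRs: {effect:+.0%}")  # ~+41% with these made-up numbers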
Across other metrics, the study found that the PR revert rate did not significantly change and the bugfix rate slightly decreased. It also found that the average lines edited and average files touched per merged PR did not change significantly.
The contents of requests indicate how developers are using agents and the actions they intend to perform. In a sample of 1,000 users, there were three broad categories of conversation-starting requests: implementing code, explaining code and errors, and planning an action. The majority of conversation-starting requests (~61%) were for implementation, where the agent is instructed to generate code.
The study found that more experienced developers are more likely to plan an action before generating code.
There isn’t yet a single definitive metric for measuring the economic impact of AI on software engineering. Like with any new technology, realizing AI’s full value will take time.
We’re encouraged by these early findings, and we’d like to continue studying Cursor’s effects on productivity.
Malicious NPM packages abuse Adspect redirects to evade security
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 23:47:46
Seven packages published on the Node Package Manager (npm) registry use the Adspect cloud-based service to separate researchers from potential victims and lead them to malicious locations. [...]...
Seven packages published on the Node Package Manager (npm) registry use the Adspect cloud-based service to separate researchers from potential victims and lead them to malicious locations.
The purpose of the attack is to lead victims to cryptocurrency scam sites, according to an analysis from researchers at application security company Socket.
All seven packages were published under the developer name ‘dino_reborn’ (geneboo@proton[.]me) between September and November. Six of them contain malicious code, while the seventh is used to build a malicious webpage:
signals-embed
dsidospsodlks
applicationooks21
application-phskck
integrator-filescrypt2025
integrator-2829
integrator-2830
The researchers say that signals-embed is not inherently malicious and contains only the code to create a white decoy webpage. The other six have code that collects data about the visitors to determine if the traffic comes from a researcher or from a potential victim.
This is achieved by collecting information from the browser environment, such as browser identifiers, page and URL data, and the host and hostname of the current page, and preparing it for sending to Adspect’s API.
Adspect cloaking
The six malicious packages contain a 39 kB script that implements the cloaking mechanism, Socket researchers note, adding that the code executes automatically on page load without extra user action, due to Immediately Invoked Function Expression (IIFE) wrapping.
The attack executes when the compromised developer’s web application loads the malicious JavaScript in a browser.
According to Socket, the injected code features anti-analysis measures such as blocking right-click, F12, Ctrl+U, and Ctrl+Shift+I, and reloading the page if DevTools is detected. This makes it more difficult for security researchers to inspect the webpage.
The malicious code snippet
Source: Socket
The script gathers the visitor’s user agent, host, referrer, URI, query string, protocol, language, encoding, timestamp, and accepted content types, and sends the fingerprinting data to a threat actor proxy.
The real victim’s IP address is retrieved and forwarded to the Adspect API, which then evaluates the data to classify the visitor.
Visitors who qualify as targets are redirected to a fake cryptocurrency-branded (Ethereum, Solana) CAPTCHA page, triggering a deceptive sequence that opens an Adspect-defined URL in a new tab while masking it as a user-initiated action.
If the visitors are flagged as potential researchers, a fake but benign Offlido company page is loaded to reduce suspicion.
Fake company site
Source: Socket
Adspect is marketed as a cloud-based service that filters unauthorized access to a webpage, blocking bots and malicious actors and allowing legitimate users.
BleepingComputer has contacted the firm to determine if they are aware of the abuse and what mechanisms are in place to prevent it, but we have not received a response by publication time.
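For anyone who wants to check whether any of the packages named above slipped into a project, a quick scan of the npm lockfile is enough. The sketch below is a generic illustration rather than Socket's tooling, and the lockfile layout and paths are assumptions about a typical npm project:

# Illustrative check for the package names reported above; lockfile layout is an assumption.
import json
from pathlib import Path

FLAGGED = {
    "signals-embed", "dsidospsodlks", "applicationooks21", "application-phskck",
    "integrator-filescrypt2025", "integrator-2829", "integrator-2830",
}

def flagged_dependencies(project_dir: str = ".") -> set[str]:
    """Return any flagged package names found in the project's package-lock.json."""
    lockfile = Path(project_dir) / "package-lock.json"
    if not lockfile.exists():
        return set()
    data = json.loads(lockfile.read_text())
    # npm v7+ lockfiles list every installed package under "packages",
    # keyed by paths like "node_modules/<name>".
    names = {key.split("node_modules/")[-1] for key in data.get("packages", {})}
    return names & FLAGGED

if __name__ == "__main__":
    print("Flagged packages found:", flagged_dependencies() or "none")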
Windows 11 adds AI agent that runs in background with access to personal folders
Microsoft is moving forward with its plans to turn Windows 11 into a full-fledged “AI” operating system amidst Copilot backlash.
The first big move in that direction is an experimental feature called “Agent Workspace,” which gives AI agents access to the most-used folders in your user profile, such as Desktop, Music, Pictures, and Videos. If you turn on the feature, it also gives AI agents their own runtime, desktop, and user account, and the ability to always run in the background.
New agentic features in Windows 11
As soon as I installed Windows 11 Build 26220.7262, Windows Latest noticed a new toggle “Experimental agentic features” inside the “AI Components” page in the Settings app > System.
This turns on “Agent Workspace,” but it doesn’t work right now, and if you’re wondering, it’s only available to Windows Insiders in the Dev or Beta Channel.
What are AI Agents and how do they work?
Before I explain what an Agent Workspace is, you need to understand AI Agents. If you’ve ever used ChatGPT, you might have come across ‘Agents.’ AI Agents have their own interface, and they navigate just like a human.
For example, if you ask ChatGPT’s Agent to book travel, it’ll open Chromium on Linux in an Azure container, search the query, visit different websites, navigate each page, and book a flight ticket using your saved credentials. An AI Agent tries to mimic a human, and it can perform tasks on your behalf while you sit back and relax.
That’s the core idea Silicon Valley is trying to sell.
Up until now, these Agents have been limited to cloud containers with Chromium and Linux terminal access, but as Microsoft wants Windows 11 to become an “AI-native” OS, it’s adding Agent Workspace.
Agent workspace is a separate, contained Windows session made just for AI agents, where they get their own account, desktop, and permissions so they can click, type, open apps, and work on your files in the background while you keep using your normal desktop.
Instead of letting an agent act directly as you, Windows spins up this extra workspace, gives it limited access (like specific folders such as Documents or Desktop), and keeps its actions isolated and auditable.
Each agent can have its own workspace and access rules, so what one agent can see or do doesn’t automatically apply to others, and you stay in control of what they’re allowed to touch.
I find the idea of Agent Workspace a bit similar to Windows Sandbox, but it’s not designed with security or privacy in mind, and it could be one of the ways to have fun with AI on Windows 11.
When you toggle on the feature, Windows warns that it could hurt performance and affect your security or privacy controls, but it’ll give you access to new “agentic” experiences in the OS.
Windows 11 lets AI agents into your Documents and Desktop folders
When you turn on the feature, you’re giving agents access to apps and even local folders, such as Desktop, Music, Pictures, and Videos.
Agent Workspace requires access to apps or private folders to perform actions on your behalf. Microsoft insists that it’s addressing the security implications by giving Agent Workspace its own authorisation (a separate account, similar to your user account) and runtime isolation. Each agent will have its own defined set of dos and don’ts.
The idea is to give Agents their own backyard on your PC, and let them run in the background all the time. You’ll be able to monitor the logs and keep an eye on agent activity.
While each agent gets its own account, independent of your personal account, an agent would still need access to your personal folders, such as Documents and Desktop. You’ll be asked to grant permissions to the following:
apps in Windows
personal folders, mostly Downloads, Documents, Desktop, etc.
AI Agents may have performance issues
In our tests, Windows Latest observed that the experimental toggle warns of potential performance issues, and it makes sense.
AI agents are going to run in the background all the time and use RAM or CPU, depending on the agent’s activity. However, Microsoft’s early benchmarks suggest they won’t really drain PCs of their power. Microsoft says AI Agents will use a limited amount of RAM and CPU, but it won’t tell us how limited the ‘limit’ is.
By default, these agents are lightweight, but the catch is that some Agents could be resource-intensive.
Microsoft insists it deeply cares about power users
While the experimental agentic features toggle is optional, it makes it quite obvious that Microsoft will not stop investing in AI for Windows 11, and that an agentic OS is the future, whether you like it or not.
Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files
Meta Has Deprecated the Messenger Apps for Mac and Windows Too
Daring Fireball
9to5mac.com
2025-11-17 23:43:13
Ryan Christoffel, reporting for 9to5Mac a month ago:
Meta has published a support doc that states its Messenger app for
Mac is being discontinued. New users won’t be able to download the
app at all, and existing users have about 60 more days of use
before it stops working altogether. Why the cha...
Do you use Facebook Messenger on the Mac? Soon, your app’s going to stop working. Meta has announced that its Messenger app for Mac is being killed off entirely.
Messenger for Mac will stop working altogether within 60 days
The Messenger app for Mac is being deprecated. After deprecation, you won’t be able to log into this app and will be automatically redirected to use Facebook website for messaging.
Will I get notified about this change?
Yes. If you’re using the Messenger desktop apps, you’ll get an in-app notification once the deprecation process begins.
You will have 60 days to use the Mac Messenger app before it is fully deprecated.
Once the 60 days are over, you’ll be blocked from using the Mac Messenger app. We encourage you to delete the app since it will no longer be usable.
If you’re concerned about your message history not being saved, Meta says: “Users who haven’t enabled secure storage in Messenger should turn on secure storage and setup a PIN from their desktop app to save their chat history before moving to the web version.”
https://www.btbytes.com/docs/POL.pdf (PDF; no preview available)
The fate of “small” open source
Simon Willison
simonwillison.net
2025-11-17 23:24:44
The fate of “small” open source
Nolan Lawson asks if LLM assistance means that the category of tiny open source libraries like his own blob-util is destined to fade away.
Why take on additional supply chain risks adding another dependency when an LLM can likely kick out the subset of functionality n...
Why take on additional supply chain risk by adding another dependency when an LLM can likely kick out the subset of functionality needed by your own code, to order?
I still believe in open source, and I’m still doing it (in fits and starts). But one thing has become clear to me: the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see node:glob, structuredClone, etc.), but LLMs are the final nail in the coffin.
I've been thinking about a similar issue myself recently as well.
Quite a few of my own open source projects exist to solve problems that are frustratingly hard to figure out. s3-credentials is a great example of this: it solves the problem of creating read-only or read-write credentials for an S3 bucket - something that I've always found infuriatingly difficult since you need to know how to craft an IAM policy that looks something like this:
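(The policy below is an illustrative reconstruction rather than the exact example from the original post; the bucket name is a placeholder. A read-only variant looks roughly like this:)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::my-example-bucket"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-example-bucket/*"]
    }
  ]
}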
Modern LLMs are very good at S3 IAM policies, to the point that if I needed to solve this problem today I doubt I would find it frustrating enough to justify finding or creating a reusable library to help.
xAI's Grok 4.1 rolls out with improved quality and speed for free
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 22:56:28
Elon Musk-owned xAI has started rolling out Grok 4.1, which is an upgrade to the existing Grok 4 model, and it delivers some incremental improvements. [...]...
According to xAI, Grok 4.1 is 3x less likely to hallucinate compared to its previous models, which makes it one of the best releases by xAI.
We don't know how well it performs against its rivals, such as GPT 5.1, which recently shipped with performance and emotional intelligence improvements.
However, LMArena's Text Arena shared some interesting insights into Grok 4.1's performance.
LMArena's Text Arena is an open-source tool that allows users to compare different large language models (LLMs) through side-by-side, blind, and randomised tests.
According to early benchmarks, Grok 4.1 (thinking) and Grok 4.1 have scaled new heights in the most competitive Text Arena.
Grok 4.1 benchmarks.
According to benchmarks, Grok 4.1 (thinking) also ranks at #1 with a score of 1510 and Grok 4.1 ranks at #19 with a score of 1437 in the Arena Expert leaderboard.
"This is a 40+ point improvement since Grok 4 fast, which landed in the Arena just two months prior," the benchmark platform
noted
.
While Grok 4.1 is a decent upgrade, it might not be the best model of this year, as Google is preparing Gemini 3.0, which could be the most powerful model to date.
[Sponsor] Clerk for iOS
Daring Fireball
go.clerk.com
2025-11-17 22:44:18
Clerk makes authentication for iOS apps effortless — just drop in prebuilt SwiftUI components for sign-in, MFA, and profile management. Fully customizable, always in sync with Apple’s design system, and packed with features devs love: social sign-in, user roles, and organization management. Launch f...
We're excited to introduce prebuilt UI views that make it incredibly easy to add authentication flows to your iOS applications.
These new SwiftUI views provide complete authentication experiences out of the box, eliminating the need to build custom sign-in and user management interfaces from scratch. With just a few lines of code, you can now add authentication and user management to your iOS app that matches iOS design standards and includes advanced features like multi-factor authentication, social sign-in, and comprehensive user profile management.
The AuthView provides a comprehensive authentication experience supporting both sign-in and sign-up flows, multi-factor authentication, password reset, account recovery and more.
The UserProfileView provides a complete interface for users to manage their accounts, including personal information, security settings, account switching, and sign-out functionality.
The RondoDox botnet malware is now exploiting a critical remote code execution (RCE) flaw in XWiki Platform tracked as CVE-2025-24893.
On October 30, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) marked the flaw as actively exploited.
Now, a report from vulnerability intelligence company VulnCheck notes that CVE-2025-24893 is being leveraged in attacks by multiple threat actors, including botnet operators like RondoDox and cryptocurrency miners.
RondoDox is a large-scale botnet malware first documented by Fortinet in July 2025 as an emerging threat. In early October, Trend Micro warned about RondoDox’s exponential growth, with recent variants targeting at least 30 devices via 56 known vulnerabilities, some of them disclosed at Pwn2Own hacking competitions.
Starting November 3, VulnCheck observed RondoDox exploiting CVE-2025-24893 through a specially crafted HTTP GET request that injected base64-encoded Groovy code through the XWiki SolrSearch endpoint, causing the server to download and execute a remote shell payload.
The downloaded script (rondo.<value>.sh) is a first-stage downloader that retrieves and executes the main RondoDox payload.
The malicious RondoDox requests
Source: VulnCheck
The researchers observed additional attacks involving cryptocurrency miner deployments on November 7, as well as attempts to establish a bash reverse shell on October 31 and November 11.
VulnCheck has also recorded widespread scanning using Nuclei, sending payloads that attempt to execute cat /etc/passwd via Groovy injection in the XWiki SolrSearch endpoint, as well as OAST-based probing.
Overall exploitation activity for CVE-2025-24893
Source: VulnCheck
The XWiki Platform is a Java-based, open-source enterprise wiki platform used primarily for self-hosted internal knowledge management solutions.
CVE-2025-24893 impacts versions before 15.10.11 and 16.4.1, which are the upgrade targets for administrators. Given the active exploitation status for this flaw, immediate patching is advised.
According to the researchers, multiple attackers started to leverage the vulnerability just days after initial exploitation started.
They note that the incidents they observed come from a user-agent and documented payload servers associated with RondoDox. This means that publicly available indicators of compromise (IoCs) for the botnet should block these exploitation attempts.
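As a rough, hypothetical sketch of how an administrator of a self-hosted instance might sweep logs for this activity (the log path and format are assumptions, and this is not VulnCheck's tooling), flagging requests that combine the SolrSearch endpoint with the Groovy/base64 markers described above is a reasonable starting point:

# Hypothetical IoC sweep: flag access-log lines that hit the XWiki SolrSearch endpoint
# together with the Groovy/base64 markers described above. Log path and format are assumptions.
from pathlib import Path

MARKERS = ("groovy", "base64", "frombase64")

def is_suspicious(line: str) -> bool:
    lower = line.lower()
    return "solrsearch" in lower and any(marker in lower for marker in MARKERS)

def suspicious_requests(log_path: str = "/var/log/nginx/access.log"):
    """Yield access-log lines that look like CVE-2025-24893 exploitation attempts."""
    for line in Path(log_path).read_text(errors="replace").splitlines():
        if is_suspicious(line):
            yield line

if __name__ == "__main__":
    for hit in suspicious_requests():
        print(hit)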
It's like Metalsmith in Lua. See the tutorial for more, but here's an example that converts Markdown to HTML and adds the page's title to the resulting HTML:
-- Minimal HTML template (used below)
local outer = [[
<html>
<head><title><%= title %></title></head>
<body><%- content %></body>
</html>
]]

-- Read content/*.md, convert to HTML,
-- apply template, write to out/*.html
return {
  readFromSource("content"),
  processMarkdown(),
  applyTemplates({{"%.html$", outer}}),
  writeToDestination("out"),
}
Note: templates use a Lua-native templating language called etlua and file matching uses Lua patterns.
Motivation
Obviously, the world does not need another static site generator. So how did I end up here?
In a word, my motivation was: simplicity. I wanted an SSG that was:
Simple to understand
Simple to bootstrap
Simple to maintain
Simple to deploy
Implementation
Here's my take on the goals above.
Simple to understand
First, I wanted the architecture to be simple, both in design and use:
The build pipeline is just a Lua script (similar to Metalsmith)
Page templates are constructed using Lua (via etlua)
Metadata on items can be specified in frontmatter using Lua (or a subset of YAML)
Relative links between Markdown files "just work" and are checked at build time (including links to headings)
Instead of creating a bespoke domain-specific language for defining the structure of a generated site or its templates, you just write some Lua code that glues together a few processing nodes and then supply templates that also use Lua (e.g. for iteration).
Overall, I'd describe the architecture as "minimalist Metalsmith in Lua, with zero runtime dependencies".
Simple to bootstrap
Given my musings about future-proof programming languages, it's obvious that I'd like to be able to compile and use my software in the future. That is easier said than done! The problem is that identifying languages that will stick around is hard. Twenty years ago, Perl might have been a reasonable choice for an SSG, but today I don't even remember how to install Perl modules.
The (previous) best static site generator ever made is built on TypeScript/JavaScript and Deno. As much as I like Deno, I'm not confident it will be maintained decades down the road. Given that I'll never know how to get it running on an i586 computer, I have doubts I'd be able to get it running on potential future architectures (RISC-V?) either.
To avoid these headaches, I went with the lowest common denominator: C. It's not convenient, it's not modern, but C works everywhere and I'm certain C will persist (for better or worse).
Aside: ideally, I'd be writing as much native code in Rust as possible, to ensure memory safety. Unfortunately, Rust's modern approach is at odds with my desire for simplicity. The toolchain is huge, the dependency trees are large, and the language is vast. Rust definitely looks like the future of robust low-level software, but for cozy side projects, I prefer being a simpleton living in the past.
Simple to maintain
One frustration I have with the JavaScript ecosystem is that it's constantly changing. Node, Deno, and Bun do a respectable job of keeping old versions around, but I don't want to have to worry about breaking changes.
On the other hand, C changes very slowly, and previous versions of Lua are essentially set in stone. Throw in some static linking, and you've even got an artifact that should stay usable for a long time.
I've also minimized the number of (vendored, compile-time) dependencies involved. Here's the full list:
You'll note that these libraries are doing all of the heavy lifting. I've basically only written glue code, an entry point, and some infrastructure. And most of the code I wrote is in Lua (which is much easier to write than C, and fast enough for all but the innermost loops).
Simple to deploy
Static binaries are wonderful. Tiny static binaries are even wonderful-er. Just copy over a tiny zip file, unzip it, and you're done. Need I say more?
Downsides
Of course, there are downsides to the approach I took:
Writing C code is fraught with peril -- fortunately, the hardest parts were mostly already done by the md4c and Lua authors
The security model of an SSG that uses Lua scripts for everything is... not ideal -- only use templates you trust!
Additionally, I haven't taken the time to set up a proper development and debugging environment for C and Lua. I need to investigate static analysis and debugging tools for Lua, as well as find a tolerable frontend for GDB. This is where I really miss Emacs+SLIME for Common Lisp or VS Code for TypeScript/Python.
Future areas of exploration
Now that I've got a static site generator running on a vintage laptop with NetBSD, where am I headed next? I'm not exactly sure, but some ideas follow.
Designing for the text-mode web
At some point, I'd like to redesign my site using an even more minimal theme. In fact, I'd like to optimize my site for text mode browsers like lynx and w3m. Why? Because I like using w3m and I want my site to be easy to use within w3m. Or maybe it's because I hate how bloated modern web browsers have become.
Further simplifying distribution
Distributing native code necessarily requires per-platform packages. Or does it? Can I package and release this minimal static site generator as a multi-OS polyglot binary using Cosmopolitan libc? Update: this now exists, but sadly, it is 64-bit only.
Simplifying the entire system
I'd like to see if I can bootstrap my entire web site's workflow from Oasis Linux (a small, statically linked, Linux-based operating system that is simple, but capable). Oasis sounds like a modern system that a single person can wrap their head around (minus the Linux kernel--though perhaps a simpler kernel could be substituted in the future?).
Blogging on vintage computers
I'm curious how far back I can go as far as vintage computing and still be able to build a static site. Can I build my SSG on Windows 98? DOS? Amiga? Inquiring minds want to know!
Conclusion
Creating a non-bloated Markdown-based static site generator has been a bucket list item for me--and now it's done!
Beyond personal goals, I found C+Lua to be a comfortable combination for side projects. This came as a surprise! Lua isn't my favorite language to write (though it's certainly much simpler to wield than C). Having said that, it's a beautifully simple language that's easy to integrate. Despite being primarily driven to Lua by my goal of building a small (in binary size) tool, Lua's minimalist take on a mostly-normal-looking scripting language won me over because I could literally pick it up and be productive within an hour or two.
With that out of the way, I should probably attend to hobbies other than static site generator performance art. Until next time!
Hey folks,
This is the introductory chapter of a series of blog posts on how you can build a database from scratch by yourself.
But Why? Why build a database, you ask? There’s plenty already. And I only care about the CRUD stuff.
Umm, I guess most of us treat a database as a black box and dump stuff into it. We only start to care when we get a connection pool failure alert on our pager; we dig a bit, tune something, and move on. But to really make the most of your database, and especially to decide whether a task calls for a SQL or a NoSQL one, you have to see what the real difference between the two is.
This can only happen if you have built one.
Some Background
I’ve always wanted to learn how a database actually works under the hood. Two things really intrigued me:
What the actual INSERT command does. I assumed it must finally be doing some sort of a file.Write() somewhere.
Why are there so many databases out there? What's the fight about SQL vs NoSQL vs Columnar Storage vs KV store? What's the actual difference between an OLTP and an OLAP?
I finally decided to dig deep into the internals of it, and bought two books:
But at one point, when I started getting bored with the theory, I put my foot down and decided to build one.
How hard could it be, I thought! Well, very hard, as it turns out!
The rest of this article is about what really goes into building a database system. There are a lot of moving parts, all of which work in harmony with each other to finally give you that simple API of db.put(<key>, <value>).
Let’s see the components of the most popular database, MySQL:
Looking at it, we have the following components:
DB Connectors/Client
SQL Interface API
SQL Parser
Query Optimizer
Buffer Pools
Storage Engines (InnoDB, MyRocks, MyISAM)
File System
That’s a lot of things under the hood, each with their own share of complexity.
I decided to break this down in a bottom-up fashion, starting with understanding how the actual data is ultimately stored. It must be something, right? Some sort of array maybe, or a hashmap, or some kind of data structure? But how do you store that in a file?
Thing is, you need a disk-backed data structure to efficiently store data on disk. And most of us only ever use in-memory structures like BSTs, hashmaps, and sets; that stuff is not really meant for disk usage, for a lot of reasons!
Since your database needs to be one of the fastest components in your system (at the end of the day, your API is going to call the database to store and retrieve everything), you need a data structure that stores data efficiently so that subsequent reads can make the most of it.
There are some pre-requisites and theory/primer that you need to understand to move forward. I have attached some references for you to look at:
How Disks work (HDDs, SSDs)
Understand disk-access patterns. Mainly, why Sequential i/o is better than random i/o
Handling power failures to have Integrity semantics
Disk-based data structures like B-Tree, B+ Trees, LSM
Understanding Pages and Blocks at the OS level.
File Formats (CSV, Parquet, etc). Essentially, Encoding and Compression techniques
Now that we know a bit about everything, we can dig into the code of the most famous database on the planet: SQLite.
I have worked with Maxwell/Debezium, which are tools that let you replicate databases. I thought, where better to start than by reading about this binlog stuff itself.
Wait, what’s this Binlog?
Essentially, when you issue a write query to a database, the engine first appends the operation it is about to perform to a structured, append-only log file. Only then does it actually execute the command (an INSERT, UPDATE, or whatever it is). It does this to provide the A and D (Atomicity and Durability) in the ACID guarantees it makes.
In case the DB operation, let’s say the INSERT, fails for some reason, because of disk corruption, a power failure, or a data center flooding, the engine can replay the action by reading from the log file that it had previously written to.
This is what we call, in database terms, a Write-Ahead Log (WAL) mechanism.
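As a toy sketch of the concept (an illustration only, not how MySQL or SQLite actually implement their logs), a write-ahead log boils down to: append and flush the intended change to disk before applying it, and replay the log on startup.

# Toy write-ahead log: append the intended operation to an append-only log and flush it
# to disk *before* applying it to the store. On restart, logged operations are replayed.
# This illustrates the concept only; real engines are far more sophisticated.
import json
import os

class ToyKV:
    def __init__(self, log_path="wal.log"):
        self.log_path = log_path
        self.data = {}          # the "real" store, kept in memory for simplicity
        self._replay()          # recover anything logged before a crash

    def put(self, key, value):
        record = json.dumps({"op": "put", "key": key, "value": value})
        with open(self.log_path, "a") as log:
            log.write(record + "\n")
            log.flush()
            os.fsync(log.fileno())   # make sure the intent is durable on disk first
        self.data[key] = value       # only now apply the change

    def _replay(self):
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as log:
            for line in log:
                rec = json.loads(line)
                if rec["op"] == "put":
                    self.data[rec["key"]] = rec["value"]

db = ToyKV()
db.put("user:1", "alice")

With this in place, a crash after the fsync but before the in-memory update loses nothing: the replay on startup re-applies the logged put.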
There are no clocks in a casino, so the dealers all set their phone alarms for noon. Everyone was a bundle of nerves. Before work, a couple of people threw up.
But when the cacophony of alarms sounded, everyone lifted their hands in the air, slammed down the lids on their games of baccarat, blackjack, craps, and roulette, and announced they were on strike. “It was more powerful than anything I’ve ever felt in my life,” said dealer Tera Arnold. “I had goosebumps head to toe.”
The other dealers were waiting outside. When the strikers began streaming out the doors and moving their cars out of the employee parking lot, “the sheer amount of joy, raw energy, seeing my colleagues from all walks of life pull around that corner, hands out the windows—everybody lost their mind,” said dealer Dakota Massman. “It was one of the most beautiful moments of human solidarity I’ve ever had the pleasure of being part of.”
As of October 17, the dealers at the Horseshoe Indianapolis Casino in Shelbyville, Indiana, were on strike for union recognition.
THE OLD-FASHIONED WAY
It's a rare, courageous, throwback tactic. Ninety years ago this was the main way unions were formed.
But ever since the 1935 National Labor Relations Act, another option has become the norm: If the employer doesn’t acknowledge your majority support on union cards, you file for a government-supervised election to prove your majority a second time. You grit your teeth through weeks of anti-union pressure, win the vote, and the government orders your boss to get with the program.
That's how the 200 dealers at the Horseshoe Casino, part of the Caesars chain, had planned to do it. They marched on their boss in September with proof of super-majority support to join Teamsters Local 135. They got an election date, October 17. Caesars brought in the union-busting firm Littler Mendelson, but the dealers stuck together—in fact, the propaganda blitz backfired and more workers signed cards.
Then on October 1, the federal government shut down. The election was postponed indefinitely.
The union proposed to bring in a neutral party to conduct the vote; the boss wasn’t interested. So Local 135 leaders talked with the casino workers about their options. They could wait in limbo while the company honed its anti-union talking points and diluted the unit with new hires. Or they could take a big risk and do it the old-fashioned way. The workers voted by 92 percent to go for it.
“Everybody keeps asking if we’re scared of losing our jobs,” Arnold said. “I’ve never been so not scared of anything in my life. I feel so powerful, so strong. We’re finally united.”
“Everybody is out here yelling, screaming, stopping traffic,” Massman said. “We’ve been quiet in there for years because we were afraid we were going to lose our jobs. They can fire you for anything. They can cook up something about a hundred-dollar variance and they don’t have to prove it. Out here I don’t have that fear anymore.”
ABYSMAL WAGES
A top complaint is the abysmal pay. Dealer wages are $5-$7 an hour. The rest of their earnings are tips, pooled and divided based on hours each day.
The wage is so low that management has an incentive to over-staff even during slow times. If opening a dozen more tables for an hour causes one customer to lose an extra $100, the casino makes a profit—but the tips are thinner, spread among more dealers.
“We went from 120 dealers to 200 dealers,” Massman said. “Essentially they over-saturated our workforce. Paychecks are down almost $1,000 a month. That’s rent. That’s your car bill.”
Making matters worse: Workers taking paid time off on a given day are counted in the tip pool—so your co-workers pay for your PTO. And “dual rate” dealers like Arnold work some days as floor leads for $23-$25 an hour but no tips, which depending on your schedule can mean you get the worst of both worlds.
Then there are the working conditions. Arnold first started thinking about organizing on a day when the gas leaked, a water pipe burst, the casino flooded, the heat went off, the temperature fell to 37 degrees, and yet dealers were required to keep working—wearing hats and gloves, sloshing through flood water. It was Christmas morning, 2022.
“Other departments represented by unions were able to leave,” she said. “They told us that if we left we would get job abandonment and insubordination.”
Meanwhile Horseshoe is raking in a million dollars a day, Massman said. “It’s public information, you can verify every number. They really are printing money.”
SCARCE FOOD, SCARCE GAMBLERS
A large majority of dealers and dual-rates are on strike, covering all three entrances, picketing round the clock on their regular work shifts. After three weeks, only a few have gone back in—and some who weren’t striking at first have walked out. They’ve handed out thousands of leaflets.
Strike pay helps make this possible. The dealers are getting the enhanced rate of $1,000 a week, which the Teamsters international has been granting to strikers “all over the country for the last year,” Local 135 President Dustin Roach said. “That’s why we’ve been taking on so many fights, and winning so many fights.”
The Teamsters constitution sets strike pay much lower—five times monthly dues for members, and $150 for newly organizing workers—but it also allows the international executive board to approve any strike benefits it considers in the union’s best interests. Roach said the executive board has been approving every request for the enhanced pay, and encouraging locals to publicize it to strengthen their strike threats.
The casino is still operating, but the strike has turned many customers away. Some are sympathetic—dealers get to know their regulars pretty well—including a couple of players for the Indianapolis Colts. A retired postal union member burned his Diamond Elite Caesars club card before a cheering crowd.
Conditions inside are miserable. Teamsters at Sysco, Pepsi, Kroger, UPS, and Quickway are refusing to cross the picket line. “A business can’t run without truckers,” Arnold said. “They need food and alcohol, and they’re not getting it.” The vending machines are empty; cards and dice aren’t being delivered; the elevator goes unrepaired. The casino rented a box truck to make its own pickups, but the truck isn’t refrigerated, so food is going bad.
The latest worker to join the strike painted a harrowing picture. People are getting sick from curdled milk. The off-brand playing cards don’t scan right on the card readers, and frustrated customers are yelling at the fumbling substitute dealers.
THE RIGHT TO HONOR PICKET LINES
On the other hand, all the members of existing unions at the casino are crossing the picket line—even the slot machine attendants, who are in the same union, Teamsters Local 135. Their contracts lack picket line protection language. “They’re so sad,” Arnold says. “They hate crossing that line. Every time they drive by they’re honking and waving.”
Once a manager brought hand warmers out to her striking husband. “She walked back in and they fired her,” Arnold said.
For the strikers, the experience has driven home the importance of winning the right to honor picket lines in their future contract.
“If you get a couple units in cahoots, you could shut this whole place down,” Massman said. “Whenever contracts are up for other departments, you can bet your bottom dollar we’re going to go out so that they can get paid more, whether it’s jockeys, environmental services, cashiers.”
CAESARS BLEEDS MONEY
The dealers are the largest unit in the casino, and not easy to replace. Dealers need weeks of training in each game, plus licensing and a background check. “They don’t have the bodies in there to keep it going,” Arnold said.
The casino has brought workers over from the poker department, plus managers from the Caesars casino in Anderson, Indiana. Arnold and Massman drove up to talk with the dealers there. They found out what scabs are getting paid: $45 an hour, plus a $50 gas card every trip.
Management must realize that bargaining won’t be cheap with these workers who have learned not to fear a strike.
Arnold, who has lived in Shelbyville for 15 years, was part of a community fight to keep the casino from opening in the first place. “The majority didn’t want the poverty, the corruption to the community,” she said. “We all fought against it.”
Eventually she, her son, and her daughter all ended up as casino workers. “What’s eating me alive is we fought so hard to not have this happen to our community,” she said. “This is what we were scared was going to happen. Open your eyes, Shelby County, it’s happening.”
‘NEVER BEEN MORE PROUD’
The Horseshoe Casino is Shelbyville’s biggest private employer, accustomed to throwing money around and getting its way. It claims a section of Michigan Road, a major local thoroughfare, is its private property. The union disputes this. But in the third week, the city sent cops out to tear down the union canopies, smash up supplies, and threaten everyone with arrest.
The strikers trooped across to the definitely-public side of Michigan Road while Local 135 President Dustin Roach—who won leadership of the 14,000-member local three years ago on a
reform slate
—risked arrest, picketing solo for hours on the disputed sidewalk. They never did arrest him, and 120 strikers packed a city council meeting that night.
Spirits are high. “I’ve never been more proud of myself and the people around me,” Massman said. “This is how the working class needs to come together. It feels really good to fight for others. It’s something I want to look more into with my life.”
More liberals, people of color and LGBTQ say they're buying guns out of fear
Lara Smith, national spokesperson for the Liberal Gun Club, says membership has surged since President Trump's election last year. She says people of color and trans people have sought training after receiving threats in their communities.
Hadassah Grout Photography
When Charles was growing up in the 1970s in Brooklyn, N.Y., his mother was so strict, she forbade toy guns of any kind, including squirt guns.
"I remember vividly, summertime, when my friends would have water gun fights and I couldn't participate," he recalls.
He grew up and became a doctor, and these days, he heads to a shooting range in Maryland each week for target practice with his Smith & Wesson .380.
Charles, who is Black, says he bought the handgun after the Trump administration did things that scared him, including arresting a foreign student who criticized her university's policy on Israel and handcuffing a U.S. senator who was forcibly removed from a Homeland Security news conference.
"What I'm talking about is protecting myself from a situation where there may be some kind of civil unrest," says Charles. Like most people who spoke with NPR for this story, he asked that his last name not be used for fear of retribution.
Charles, a doctor in Maryland, wasn't even allowed to have toy guns as a kid. Now, he says he's so concerned about his family's safety because of the Trump administration's actions and rhetoric, that he trains weekly at a shooting range.
KT Kanazawich
Charles says he worries that some of President Trump's supporters may feel emboldened someday to target minorities like him and his family.
"He could dispatch citizens or the government," Charles says. "I'm not saying that's what's going to happen. What I'm saying is none of this is out of the question any longer."
Changing face of American gun ownership
For decades, the image of gun ownership in America was white, rural and Republican, but that's been changing, according to gun clubs, trainers, Second Amendment advocates and academic researchers.
They say more liberals, people of color and LGBTQ folks have been buying guns for years and particularly since Trump's reelection in 2024. This story was based on more than 30 interviews. David Phillips is on the training team of the Liberal Gun Club, which has chapters in more than 30 states and provides a haven for liberals to train and learn about guns. He says club membership has grown from 2,700 in November to 4,500 today. Requests for training, he says, have quintupled.
"The concern is about the supporters of the right-wing who feel that they have been given permission to run roughshod at least, if not commit outright violence against people they don't like," Phillips says.
Asked about these concerns, the White House dismissed NPR's reporting.
"Instead of covering Americans exercising their Second Amendment Right and trying to disingenuously blame President Trump, NPR should highlight the dangerous language from elected Democrats that has driven leftists to commit actual violence against Republicans – including the recent assassination of Charlie Kirk," White House spokeswoman Abigail Jackson said in a statement.
Jackson said stories like this one are why NPR no longer receives federal funding. "That's something we can all celebrate," she added.
Trump has also blamed what he calls "the radical left" for demonizing him and his supporters and inspiring political violence.
But many liberals who spoke for this story say it's the other way around. They say the president
dehumanizes others with his rhetoric. For instance, Trump has said undocumented immigrants are "poisoning the blood of our country." The president has also called his political opponents, "radical left thugs that live like vermin."
Despite White House claims to the contrary, there is ample anecdotal evidence that more people are buying guns because some of the administration's policies frighten them.
"Never seen a surge like this before"
"As everyone knows, there has been a huge surge in fear and panic since the election," said Tom Nguyen, speaking on YouTube to his LA Progressive Shooters gun club just a few weeks after Trump's inauguration. Nguyen said the club's Pistol 101 classes were already booked out for nine months.
"I've never seen a surge like this before," said Thomas Boyer, spokesperson for the San Francisco Chapter of the Pink Pistols, whose motto is: "Armed gays don't get bashed."
Even traditional Second Amendment groups say more liberals are seeking gun training.
"It's definitely common knowledge at this point," said Taylor Rhodes, communications director for the National Association for Gun Rights.
Charles's daughter, Charley, practices with her father. The day after President Trump's election, a man drove onto her college campus and hurled racial slurs at Black students.
KT Kanazawich
There's no way to measure just how many people are buying guns because the political environment scares them, but the phrase "How do I buy a gun?" spiked a number of times in the past year, according to Google Trends.
Those spikes happened around the time of Trump's 2024 election, his inauguration, the first immigration enforcement blitz in January and the day when Trump held a military parade in Washington, D.C.
This recent surge in liberals buying guns is the latest in a years-long trend. For instance, a University of Chicago study found that gun ownership by Democrats or Democrat-leaning people rose by 7 percentage points between 2010 and 2022.
David Yamane, a professor of sociology at Wake Forest University in North Carolina, says the events of 2020 and early 2021 – the pandemic, the murder of George Floyd and the Jan. 6 Capitol riot – were particular drivers.
"We do know that in that year that new gun owners were disproportionately African American (and) disproportionately female," Yamane said.
Only for self-protection
Like the vast majority of gun owners, those who spoke to NPR said they would use the weapons only for self-protection and would not engage law enforcement.
"All the language that we use is absolutely not about rallying together to arm and go assault anyone," says MJ, a member of a liberal, self-defense group in the Midwest who asked NPR not to use his full name because he feared retribution. "If anyone even talks like that, I or someone else would probably boot them out of the group."
Bill Sack, director of legal operations with the Second Amendment Foundation, which challenges gun control legislation, says he's glad to see more liberals exercising their right to self-defense – but he isn't happy about why.
"Is it a good thing that people are scared?" he says. "No, of course not."
Every new gun owner who spoke to NPR said they thought it was highly unlikely they would have to defend themselves because of civil unrest. But they also said that if they ever had to, they'd regret not having a gun.
"As a man, as a father, as a husband, how remiss and derelict would it be for me to not be prepared?" says Charles, the Maryland doctor.
Ion is a modern system shell that features a simple, yet powerful, syntax. It is written entirely
in Rust, which greatly increases the overall quality and security of the shell. It also offers a
level of performance that exceeds that of Dash, when taking advantage of Ion's features. While it
is developed alongside, and primarily for, RedoxOS, it is fully capable on other *nix platforms.
Ion Shell
Ion is still a WIP, and both its syntax and rules are subject to change over time. It is
still quite a ways from becoming stabilized, but we are getting very close. Changes to the
syntax at this time are likely to be minimal.
Ion Specification
Ion has an RFC process for language proposals. Ion’s formal specification is located within the
rfcs
branch. The RFC process is still in
the early stages of development, so many of the current and future implementation ideas have
yet to be written into the specification.
The following PPA supports the 18.04 (bionic) and 19.04 (disco) releases. Bionic builds were made using the Pop!_OS PPA's rustc 1.39.0 package.
sudo add-apt-repository ppa:mmstick76/ion-shell
Developer set up
Those who are developing software with Rust should install the
Rustup toolchain manager
.
After installing rustup, run
rustup override set 1.56.0
to set your Rust toolchain to the version that Ion is
targeting at the moment. To build for Redox OS,
rustup override set nightly
is required to build the Redox
dependencies.
Build dependencies
Please ensure that both cargo and rustc 1.56.0 or higher are installed on your system.
Release tarballs have not been made yet due to Ion being incomplete in a few remaining areas.
Installation
Installation of Ion shell for one user
git clone https://gitlab.redox-os.org/redox-os/ion/
cd ion
cargo install --path=. --force
This way, the ion executable will be installed into the folder "~/.cargo/bin".
As an alternative, you can do it like this:
git clone https://gitlab.redox-os.org/redox-os/ion/
cd ion
cargo build --release
# Install to a path which is included in the $PATH environment variable
DESTDIR=~/.local/bin bash/install.sh
Installation of Ion shell system wide, for all users
git clone https://gitlab.redox-os.org/redox-os/ion/
cd ion
cargo build --release
sudo DESTDIR=/usr/local/bin bash/install.sh
# Optional: do this if Ion shell should be the login shell on your system
sudo make update-shells prefix=/usr
Ion plugins
There are plugins for Ion. These plugins are additional aliases and function definitions written in
Ion for Ion. They can be found under this
repository
.
There is an LSP server for the scripting language of this shell.
You can install the LSP server via crates.io to get IDE support, such as error messages, in any code editor or IDE that understands the client side of LSP. Link to the LSP server on crates.io:
https://crates.io/crates/ion_shell_lsp_server
.
The source code of the LSP server can be found here:
https://gitlab.redox-os.org/redox-os/ion_lsp
.
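If you already have a Rust toolchain, a plain cargo install of that crate should be enough to build the server and put its binary in ~/.cargo/bin (this is the usual crates.io workflow rather than a step the Ion project documents here):
cargo install ion_shell_lsp_server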
Google Gemini 3 spotted on AI Studio ahead of imminent release
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 21:52:21
Gemini 3, which could be Google's best large language model, could begin rolling out in the next few days or hours, as the model has been spotted on AI Studio. [...]...
Gemini 3, which could be Google's best large language model, will begin rolling out in the next few hours or days, as the model has been spotted on AI Studio.
AI Studio allows developers, researchers and students to build apps using Gemini models, and it also provides greater control over each model.
Right now, the latest model is Gemini 2.5 Pro, and AI Studio gives you greater control over it, including context size and even temperature.
Naturally, Gemini 3 will first come to AI Studio before gradually rolling out on gemini.google.com.
Ahead of the rollout, which could begin in the next several hours, hints of Gemini 3 have been spotted on AI Studio with references to how temperature could affect the reasoning capabilities.
"Temperature is set at the most performant value for this model" -> "For Gemini 3, best results at default 1.0. Lower values may impact reasoning," the string within AI Studio reads, as
spotted
on X.
Gemini 3 reference found on Vertex AI and Nano Banana 2 is also in the works
Previously, Gemini-3-pro was
spotted
on Vertex AI, which is a Google-hosted cloud platform for using AI agents and building apps or products using AI.
This model is called "gemini-3-pro-preview-11-2025."
Gemini 3 Pro reference on Vertex AI
Gemini 3 is not the only model that could drop this year, as Google is also
testing
Nano Banana 2, codenamed "GEMPIX2," and it was recently spotted on the Gemini website.
This could mean that the model, which could be one of the best for generating AI images, will begin shipping as early as December 2025.
Google plans to share more details on its upcoming models in the coming days.
The contents of this repository allow older versions of
UNIX
(
ancient UNIX
) to run easily on modern
Unix-like
systems (Linux, FreeBSD, macOS, among others).
At this time, you can run the following versions of UNIX:
UNIX versions for
PDP-11
(run on a PDP-11 simulator):
First of all, credits and acknowledgments for material available in this repository that is not my own (or that I have modified based on previous work).
The UNIX versions available in this repository have been released as open source under the
Caldera license
available in this repository. Please read the document carefully for concrete information about your rights and obligations when using the software.
Note that various components within the system images may have been made available under other license terms. Pay attention to these components. A clear example is 2.11BSD UNIX, which includes code covered by the Caldera license in addition to code released under the
BSD license
. Source files available in the images show the license and due copyright. Check this data before reuse.
The UNIX images available in this repository were obtained from the w11 project (which uses these images for other purposes). You can get them directly
here
, as well as more information about the project, images, licenses and other data.
The scripts used to simulate the systems using SIMH for v5 and v7 UNIX were obtained from a w11 project repository, which can be accessed
here
. The original scripts are available under the GPL v3 (or later) license. I modified these files to fit the purpose of this repository. These modifications are distributed under the same license as the original scripts.
In addition, the general script for configuring the execution environment of versions v5 and v7 was obtained from the project, authored by
Walter F.J. Mueller
. You can get the original script
here
. The original script is available under the GPL v3 (or later) license. I modified this file to fit the purpose of this repository. These modifications are distributed under the same license as the original script.
The port of Version 7 UNIX to the x86 architecture was performed by
Robert Nordier
. These modifications are released under the simplified BSD license. For more information on all aspects of the distribution, read
this file
.
All my contributions and modifications
(except for material that requires redistribution under the same license, such as the running scripts)
are available in this repository under the BSD-3-Clause
license
.
Running UNIX
Section 1
Requirements
You will need the following tools and utilities to run the available UNIX versions:
First of all, you must have the
PDP-11 Simulator
(SIMH),
qemu
,
GNU bash
,
Python
,
wget
and
git
installed on your device. If you already have them installed, skip to
section 2
.
To install on Debian, Ubuntu, Pop!_OS and derivatives, use:
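(The package list below is an assumption on my part, mirroring the BSD package lists further down; adjust the names to whatever your distribution actually ships.)
sudo apt update
sudo apt install -y simh bash qemu-system git wget python3 python3-pip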
To install on FreeBSD, use the commands below (on FreeBSD, GNU bash must also be installed, since it is not part of a default installation; this is not necessary on Linux systems, where bash is already installed by default):
su root # <= Enter your password to login as root user
pkg install -q -y simh bash qemu git wget python3 py39-pip
ln -s /usr/local/bin/pip-3.9 /usr/local/bin/pip
pip install --upgrade pip
To install on NetBSD, use the commands below (on NetBSD, GNU bash must also be installed, since it is not part of a default installation; this is not necessary on Linux systems, where bash is already installed by default):
su root # <= Enter your password to login as root user
pkgin install simh bash qemu git wget python3 py39-pip
ln -s /usr/local/bin/pip-3.9 /usr/local/bin/pip
pip install --upgrade pip
To install on OpenBSD, use the commands below (on OpenBSD, GNU bash must also be installed, since it is not part of a default installation; this is not necessary on Linux systems, where bash is already installed by default):
su root # <= Enter your password to login as root user
pkg_add simh bash qemu git wget python3 py39-pip
ln -s /usr/local/bin/pip-3.9 /usr/local/bin/pip
pip install --upgrade pip
You must clone this repository to your computer. For that, use:
git clone https://github.com/felipenlunkes/run-ancient-unix
cd run-ancient-unix
After cloning the repository with the configuration files, you must populate the directories of each UNIX version with their respective image files. For that, go to the
next section
.
Section 3
Now, you have to run the available
run.sh
script. For that, use:
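Assuming you are still inside the run-ancient-unix directory, a plain bash invocation should be all that is needed:
bash run.sh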
First, you have to run the script and select the option to install system images. You can also use the Python frontend to run the script. This is the easiest and simplest way to run script functions. To run this frontend and not rely on the command line, go to
section 5
. To continue the steps using the terminal, go to
section 4
.
Section 4
You will see the following screen:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>:
In this case, you should select option
7
, which will install the system images. After pressing 7, press ENTER to make your choice effective. Wait for the process of obtaining, extracting, configuring and installing the images.
After the installation is complete, you must run
run.sh
again to start a UNIX version.
When running the script, you will be asked to choose one of the available UNIX versions. After typing only the number relative to the choice, press ENTER to make your decision effective. Then wait for the desired version to run.
Now, you need to know peculiarities in the execution of each version of the system. For this, go to
section 6
.
Section 5
You need to run the Python frontend that will manage the configuration and running of UNIX on your computer. First, you must install the Tkinter Python package. For that, use:
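(The exact command depends on your system; on Debian-based distributions the package is usually python3-tk, which is an assumption on my part rather than something this README specifies.)
sudo apt install -y python3-tk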
After that, you can right-click the
RAU.py
script and select the option
Run as program
or start the script from the terminal, using:
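A direct call with Python 3 should work, assuming python3 is on your PATH:
python3 RAU.py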
WARNING! The frontend is currently only compatible with the GNOME graphical environment (Linux and BSD systems). You can manually replace the
gnome-terminal
calls with
konsole
or another desired terminal emulator. Feel free to submit a pull request with any improvements or changes to the frontend.
After running the program, you will see the following screen:
On first run, you must install the UNIX disk images locally on your computer. Prior to this operation, you will NOT be able to run UNIX. To do so, click on the
Install UNIX system images
button.
After downloading and installing the disk images, you are able to run UNIX. To do so, select the desired UNIX version in the
Running options
section of the frontend screen.
Go to the
next section
for more information about the specifics of running each available version of UNIX. Remember that when using the Python frontend, the command-line selection screen shown in the next section will not be displayed. However, the manual options and settings presented after that selection screen are still required to run each version of UNIX.
Section 6
Select the desired UNIX version option below for details on how to start and operate the system. Each version of UNIX has different boot procedures. Pay attention to each particularity.
Particularities for Version 1 UNIX
After execution starts with the v1 version selected, you will see a screen like the one below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 1
PDP-11 simulator V3.8-1
Disabling CR
Disabling XQ
RF: buffering file in memory
TC0: 16b format, buffering file in memory
:login:
Just type
root
, in lower case, and press ENTER. You will immediately be taken to the UNIX v1 shell.
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 1
PDP-11 simulator V3.8-1
Disabling CR
Disabling XQ
RF: buffering file in memory
TC0: 16b format, buffering file in memory
:login: root
root
# ls
bin
dev
etc
tmp
usr
#
To end the simulation, press CTRL-E followed by CTRL-C, or type quit when the
simh>
prompt appears on the screen.
Particularities for Version 5 UNIX
After execution starts with the v5 version selected, you will see a screen like the one below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 2
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
@
To start UNIX, you must type
unix
and press ENTER after the @ character, without spaces and in lower case. After pressing ENTER, UNIX will load and you will be taken to a login screen as below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 2
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
@unix
login:
You must then type
root
and press ENTER. You will then be taken to the shell and be able to use the system. See below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 2
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
@unix
login: root
#
To end the simulation, press CTRL-E followed by CTRL-C, or type quit when the
simh>
prompt appears on the screen.
Particularities for Version 7 UNIX
After execution starts with the v7 version selected, you will see a screen like the one below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 3
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
After seeing the screen above, you must type
boot
in lower case and press ENTER. You will see the screen below after that:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 3
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
boot
Boot
:
After the appearance of
:
, you must type, without spaces and in lower case, the command
hp(0,0)unix
and press ENTER, as below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 3
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
boot
Boot
: hp(0,0)unix
mem = 2020544
#
Pressing ENTER will immediately take you to the UNIX v7 shell.
To enter multiuser mode and access all system functions, press CTRL-D. Afterwards, provide
root
as username and password. You will again be taken to the UNIX v7 shell, as below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 3
PDP-11 simulator V3.8-1
Disabling XQ
Logging to file "simh_dl0.log"
Listening on port 5671 (socket 5)
Listening on port 5672 (socket 7)
Modem control activated
boot
Boot
: hp(0,0)unix
mem = 2020544
# RESTRICTED RIGHTS: USE, DUPLICATION, OR DISCLOSURE
IS SUBJECT TO RESTRICTIONS STATED IN YOUR CONTRACT WITH
WESTERN ELECTRIC COMPANY, INC.
WED DEC 31 19:05:14 EST 1969
login: root
Password:
You have mail.
#
To end the simulation, press CTRL-E followed by CTRL-C, or type quit when the
simh>
prompt appears on the screen.
Particularities for 2.11BSD UNIX
After execution starts with the 2.11BSD UNIX version selected, you will see a screen like the one below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 4
PDP-11 simulator V3.8-1
Listening on port 4000 (socket 4)
Modem control activated
Auto disconnect activated
211bsd.simh> attach xq eth0
File open error
Disabling CR
73Boot from ra(0,0,0) at 0172150
:
You can just press ENTER when you see the screen to start UNIX. Afterwards, you will see the following screen:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 4
PDP-11 simulator V3.8-1
Listening on port 4000 (socket 4)
Modem control activated
Auto disconnect activated
211bsd.simh> attach xq eth0
File open error
Disabling CR
73Boot from ra(0,0,0) at 0172150
:
: ra(0,0,0)unix
Boot: bootdev=02400 bootcsr=0172150
2.11 BSD UNIX #1: Fri Jun 9 08:42:54 PDT 1995
root@SSU-64EN137:/usr/src/sys/SYSTEM
ra0: Ver 3 mod 3
ra0: RD54 size=311200
attaching qe0 csr 174440
qe0: DEC DELQA addr 00:50:56:01:01:01
attaching lo0
phys mem = 3145728
avail mem = 1737664
user mem = 307200
June 9 12:21:04 init: configure system
dz 0 csr 160100 vector 300 attached
ra 0 csr 172150 vector 154 vectorset attached
ts 0 csr 172520 vector 224 attached
erase, kill ^U, intr ^C
#
The
#
symbol indicates that the shell is ready to receive commands. Try using
uname -a
or
ls
to get started.
To enter multiuser mode and access all system functions, press CTRL-D. Afterwards, provide
root
as username and password. You will again be taken to the 2.11BSD shell.
To end the simulation, press CTRL-E followed by CTRL-C, or type quit when the
simh>
prompt appears on the screen.
Particularities for Version 7 UNIX for x86
After execution starts with v7 UNIX for x86 selected, you will see a screen like the one below:
You must select, from the list below, which edition/version of
UNIX you want to start. The available options are:
1) v1 UNIX
2) v5 UNIX
3) v7 UNIX
4) 2.11BSD UNIX
5) v7 UNIX for x86
6) Clear temporary files
7) Install the disk images for UNIX
Select a number and press <ENTER>: 5
Upon selection,
qemu
will automatically start with the Version 7 UNIX for x86 disk image. After the initial boot, you will see the following screen:
Then press ENTER to load and start UNIX. After pressing ENTER, you will see the following screen, and you will be able to interact with the Version 7 UNIX shell:
To enter multiuser mode and access all system functions, press CTRL-D. Afterwards, provide
root
as username and
password
as password. You will again be taken to the Version 7 UNIX shell.
When you are finished running the system on the PDP-11 simulator, you can clean up temporary and log files that may have been created by SIMH. To do so, go to
section 7
.
Section 7
The simulator can create temporary and log files to simulate peripheral devices that would be connected to a PDP-11 minicomputer. These files typically have
.log
and
.dat
extensions. You can remove these files using the
run.sh
script and selecting the option to clean up temporary files, or by manually going into each system directory and entering, in your system shell:
cd UNIX_VERSION_DIRECTORY
rm *.log *.dat
cd ..
Chuck Moore retires from colorforth after latest Windows breaks rendering
Jay Yagnik, VP of AI innovation and research, on Google’s The Keyword blog:
Private AI Compute is built on a multi-layered system that is
designed from the ground up around core security and privacy
principles:
One integrated Google tech stack: Private AI Compute runs on
one seamless Google st...
Today we’re introducing Private AI Compute to bring you intelligent AI experiences with the power of Gemini models in the cloud, while keeping your data private to you.
For decades, Google has developed privacy-enhancing technologies (PETs) to improve a wide range of AI-related use cases. Today, we’re taking the next step in building helpful experiences that keep users safe with Private AI Compute in the cloud, a new AI processing platform that combines our most capable Gemini models from the cloud with the same security and privacy assurances you expect from on-device processing. It's part of our ongoing commitment to deliver AI with safety and responsibility at the core.
AI is evolving to become even more helpful, personal and proactive. It’s moving from completing simple requests to AI that can anticipate your needs with tailored suggestions or handle tasks for you at just the right moment. This progression in capability requires advanced reasoning and computational power that at times goes beyond what’s possible with on-device processing.
That’s why we built Private AI Compute: to unlock the full speed and power of Gemini cloud models for AI experiences, while ensuring your personal data stays private to you and is not accessible to anyone else, not even Google. Private AI Compute allows you to get faster, more helpful responses, making it easier to find what you need, get smart suggestions and take action.
How Private AI Compute protects your data in the cloud
As a pioneer in the field of responsible AI, we see Private AI Compute in the cloud as our next step in AI processing technology. It builds on the industry-leading security and privacy safeguards that we embed to keep you in control of your experiences and keep your data safe, guided by our
Secure AI Framework
,
AI Principles
and
Privacy Principles
.
Private AI Compute is a secure, fortified space for processing your data that keeps your data isolated and private to you. It processes the same type of sensitive information you might expect to be processed on-device. Within its trusted boundary, your personal information, unique insights and how you use them are protected by an extra layer of security and privacy in addition to our existing AI safeguards.
Private AI Compute is built on a multi-layered system that is designed from the ground up around core security and privacy principles:
One integrated Google tech stack:
Private AI Compute runs on one seamless Google stack powered by our own custom Tensor Processing Units (TPUs). World-class privacy and security is integrated into this architecture with
Titanium
Intelligence Enclaves (TIE). This design enables Google AI features to use our most capable and intelligent
Gemini models in the cloud
, with our high standards for privacy and the same in-house computing infrastructure you already rely on for Gmail and Search.
No access:
Remote attestation and encryption are used to connect your device to the hardware-secured sealed cloud environment, allowing Gemini models to securely process your data within a specialized, protected space. This ensures sensitive data processed by Private AI Compute remains accessible only to you and no one else, not even Google.
Using Private AI Compute for more helpful, private AI experiences
Private AI Compute enables on-device features to perform with extended capabilities while retaining their privacy assurance. Using this technology,
Magic Cue
is getting even more helpful with
more timely suggestions on the latest Pixel 10 phones
. And with the help of Private AI Compute, the
Recorder
app on Pixel is able to summarize transcriptions across a wider range of languages.
This is just the beginning. Private AI Compute opens up a new set of possibilities for helpful AI experiences now that we can use both on-device and advanced cloud models for the most sensitive use cases. We look forward to sharing more updates, and you can review our
technical brief
to learn more about how Private AI Compute advances AI privacy.
OpenAI: Piloting Group Chats in ChatGPT
Daring Fireball
openai.com
2025-11-17 21:32:17
OpenAI:
To start a group chat tap the people icon in the top right corner
of any new or existing chat. When you add someone to an existing
chat, ChatGPT creates a copy of your conversation as a new group
chat so your original conversation stays separate. You can invite
others directly by sharing...
Eurofiber France warns of breach after hacker tries to sell customer data
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 21:14:28
Eurofiber France disclosed a data breach it discovered late last week when hackers gained access to its ticket management system by exploiting a vulnerability and exfiltrated information. [...]...
Eurofiber France disclosed a data breach it discovered late last week when hackers gained access to its ticket management system by exploiting a vulnerability and exfiltrated information.
Eurofiber France SAS is the French unit of the Eurofiber Group N.V., a Dutch telecommunications service provider that operates a fiber network of 76,000 km across the Netherlands, Belgium, France, and Germany.
The company specializes in providing digital infrastructure for businesses, rather than the consumer market.
No critical data impacted
The company says in the announcement that the cybersecurity incident impacts only the French division of the group, including its cloud division (ATE portal) and its regional Eurafibre, FullSave, Netiwan, and Avelia sub-brands.
In a press release, the company states that the impact is minimal for indirect sales and wholesale partners in France, as most of them rely on separate systems.
“In the first hours following detection, the ticketing platform and the ATE portal were placed under enhanced security, and the vulnerability was patched,”
mentions Eurofiber France
.
“Additional measures have been implemented to prevent any further data leaks and strengthen system security.”
Although the company said that banking details or other “critical data” stored on its other systems were not affected by this incident, it did not specify exactly what types of data were stolen, only mentioning that it would notify affected customers.
Threat actor claims breach
BleepingComputer found that a threat actor calling themselves ‘ByteToBreach’ claimed the attack on a data leak forum, alleging that they stole data belonging to 10,000 businesses and even government entities, all clients of Eurofiber.
The threat actor claims to be holding data that the clients uploaded to the ticketing system, including screenshots, VPN configuration files, credentials, source code, certificates, archives, email accounts as files, and SQL backup files.
Threat actor claiming the attack
Source: BleepingComputer
BleepingComputer has contacted Eurofiber France for clarification about the allegations, types of data exposed in the incident, the number of customers impacted, and the name of the breached software, but we are still waiting for the company to respond.
Eurofiber France said it has notified the French data protection agency (CNIL) as well as ANSSI, the country's cybersecurity agency, and filed a report for extortion, indicating that the threat actor has demanded payment to not leak the stolen data.
Last August, another French telecommunications service provider,
Bouygues Telecom
, suffered a data breach that exposed the personal data of 6.4 million customers.
Earlier, in July 2025,
Orange France
disclosed a cybersecurity breach on its network, though it has not confirmed data theft as of yet.
How many clicks does it take to add a new VLAN to an OPNsense firewall?
Nothing fancy. Just your regular, basic VLAN with its own IPv4 range.
How many clicks should that take? Maybe two or three? Five if we’re real wild?
Every time I add a new VLAN to OPNsense, the process feels strangely tedious, so I decided to
measure exactly how many clicks
it takes to add a simple VLAN to my firewall.
The result was:
26 clicks
71 keystrokes
6 distinct screens / dialogs
3 distinct workflows
And that’s before I even assign any firewall rules!
I could have traded some of those clicks for keystrokes with the Tab key, but I tried to match my everyday process.
There are so many steps in the process where I just want to ask OPNsense, “Why couldn’t you have figured this out on your own?”
Every time I add a VLAN to my OPNsense router, I have to say, “Actually, I’d like it on my
LAN interface
, not the random, disconnected interface you chose by default because its name is first alphabetically”:
If I dare enter an arbitrary VLAN name, OPNsense whines and insists I prefix the name with
vlan
:
You gave me an arbitrary input field, OPNsense! If you want a special prefix, add that on your end. Don’t conscript me to type your prefixes for you.
And speaking of typing prefixes for you, when we get to DHCP assignments, if I try to leave the start and end range blank, you give me
four
separate errors:
And again, in both the “from” and “to” fields, I have to type out
192.168.10.
even though OPNsense knows that’s the only valid prefix I can enter. Why can’t you do that for me, OPNsense? Better yet, default to the full subnet range so I don’t have to type anything.
If you didn’t have the patience to sit through the whole video, I actually have to go through three separate workflows to create one standard VLAN:
Create the VLAN device.
Create a VLAN
interface assignment
for that device.
Configure DHCP for that interface.
But it’s all the same VLAN? Why isn’t it just one screen? Or at the very least, a single, continuous flow rather than forcing me to go scour the whole OPNsense settings tree for the next workflow.
I’m grateful to OPNsense for helping me escape the ecosystem of closed-source, buggy home firewalls that Linksys puts out, and I’ve been a
paying licensee
for four years.
But I’m thinking it might be time for me to leave the nest and run OpenBSD or FreeBSD directly with some simple scripts to do what I want. It doesn’t seem too hard to run one of those OSes and create a script like this:
$ ./add-vlan --name='guest' --tag=10
Created VLAN "guest" with tag 10 and IP range 192.168.10.1 to 192.168.10.254
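For what it's worth, the FreeBSD flavor of that script doesn't need to be much longer than the wish above. Here is a minimal sketch, assuming a parent NIC named igb0, one /24 per tag, and positional arguments instead of flags; DHCP and firewall rules would still need their own configuration:
#!/bin/sh
# Minimal sketch: create a tagged VLAN interface on FreeBSD.
# Usage: ./add-vlan guest 10
NAME="$1"
TAG="$2"
PARENT="igb0"                 # parent NIC (assumed)
SUBNET="192.168.${TAG}"       # one /24 per VLAN tag (assumed scheme)

# Create the tagged interface on the parent NIC, label it, and assign the gateway address.
ifconfig "vlan${TAG}" create vlan "${TAG}" vlandev "${PARENT}"
ifconfig "vlan${TAG}" description "${NAME}"
ifconfig "vlan${TAG}" inet "${SUBNET}.1/24" up

echo "Created VLAN \"${NAME}\" with tag ${TAG} and IP range ${SUBNET}.1 to ${SUBNET}.254"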
Judge Rules Trump Can’t Cut UC Funding — but UC Leaders Are Still Negotiating a Settlement
Intercept
theintercept.com
2025-11-17 20:37:51
Nationwide, faculty and students fight against Trump’s assault on higher education — and administrators capitulate.
The post Judge Rules Trump Can’t Cut UC Funding — but UC Leaders Are Still Negotiating a Settlement appeared first on The Intercept....
A poster reads “UCLA Faculty for a Free Palestine” as faculty and staff members demonstrate with students at the University of California, Los Angeles on May 1, 2024.
Photo: Etienne Laurent/AFP via Getty Images
In a landmark
ruling last Friday, a federal judge
indefinitely barred
the Trump administration from fining or cutting funds to the University of California system over the government’s bogus claims of antisemitism and discrimination.
U.S. District Judge Rita Lin was unequivocal that the Trump administration, which has demanded a settlement of over $1.2 billion from the UC system and already cut over $600 million in federal funding, was “engaged in a concerted campaign to purge ‘woke,’ ‘left,’ and ‘socialist’ viewpoints from our country’s leading universities.”
The “playbook,” she said, had been repeated by Trump nationwide, “with the goal of bringing universities to their knees and forcing them to change their ideological tune.”
The decision, a preliminary injunction, is a win for speech on campus and academic freedom — and a rebuke to the vile weaponization of antisemitism claims to silence dissent.
The case was brought not by administrators, but by workers and students in the UC system, one of the most prestigious public university networks in the country. A coalition of faculty, staff, and student groups and unions from UC schools sued the administration for violating their First Amendment rights to free speech and Fifth Amendment rights to due process.
Not only did the University of California leadership have nothing to do with the case, but the school system leaders remain so cravenly wedded to capitulation that they’re still in settlement discussions with the administration.
There are lessons to be learned from this victory — and from the absence of UC leadership in it.
We know who we need to support: Over the last two years, the struggle to keep universities and colleges alive as sites of intellectual interrogation and
learning
have been
fought by faculty
,
staff
, and
students
. And we know who to be wary of: Again and again,
school administrators
have been complicit in the dismantling and undermining of the communities they are supposed to serve.
These dynamics are present nationwide; UC administrations are not alone in their willingness to throw their
faculty
and
students
under the bus for speaking out against Israel’s genocide in Gaza.
Schools
including
Columbia University
, Brown University, and the University of Virginia, among others, have all made deals with Trump to pay tens of millions of dollars in cowardly settlements to restore federal funding. They have agreed to egregious conditions, like targeting anti-racist admissions efforts, entrenching pro-Israel alignments, harming trans students and faculty, and policing speech and programs disfavored by the Trumpian right.
Harvard University earned praise for suing rather than settling with the Trump administration. In that case, too, a federal judge ruled that Trump’s attempt to freeze more than $2 billion in federal research grants was illegal. The judge lambasted the government for using “antisemitism as a smokescreen for a targeted, ideologically-motivated assault on this country’s premier universities.”
Yet Harvard’s apparent resistance was belied by the school “quietly complying with Trump’s agenda” anyway, as two Harvard Ph.D. students
noted
. The university fired Harvard’s Center for Middle Eastern Studies director and associate director, among other attacks on scholars and programs with apparent Palestine solidarity connections. The university also renamed its Office of Equity, Diversity, Inclusion, and Belonging in alignment with Trump’s
anti-DEI campaign
.
Who Will Save Universities?
It would be nice if we could unreservedly celebrate Friday’s ruling as proof of the movement dictum that “when we fight, we win!” There’s little cause for optimism, though, about the future of higher education in the face of a government hellbent on its destruction, and universities led by people who have imperilled their institutions with four decades of neoliberal austerity, corporatization, and adjunctification.
Higher education today is a charnel house. Even the wealthiest schools are
freezing
Ph.D. admissions and cutting whole programs under unprecedented
economic
pressures, accelerated by Trump’s
attacks
.
Yet the political nature of American academia’s remaking cannot be reduced to fiscal necessity or Trumpian animus alone.
Top-heavy administrative offices are choosing their austerity measures in specific ways. In schools around the country, humanities and social research
departments
in particular face the chop, while
bloated
administrator salaries and other corporate overheads go untouched. Faculty governance has been reduced to a fig leaf.
“Simply put, universities have reached a point where executive power—the President, with the invisible hand of the Board above—is absolute, except where there are unions,”
wrote
Adam Rzepka, an English professor at New Jersey’s Montclair State University, in a recent American Association of University Professors blog post.
He added that even unions “are often unable to act beyond what is currently subject to negotiation,” such that department closures, academic oversight, and disciplinary issues are taken out of academic workers’ hands.
“Not that faculty here haven’t tried to steer the ship away from this iceberg, but faculty everywhere know how that goes these days,” Rzepka wrote.
It is a grim prospect indeed — and an extraordinary amount of
bullshit work
— to have to try to prove the value of intellectual
education
and research within the logic of a management consultant’s report.
Such is the nature of corporatized higher education, made starkly clear and worse under Trump.
Friday’s ruling against the Trump administration is a reminder of who will lead the fight for higher education.
The only way to save universities in this country will be to end the unaccountable executive governance and corporate oversight, which have left schools of every size, both private and public, vulnerable to authoritarian attacks.
Decision-making should truly be in the hands of professors, workers, and students willing to fight for robust academic freedom, scholarly integrity, and an antifascist future for education.
If the UC schools, collectively the second largest employer in the state, are saved, it is thanks to the community of workers and scholars alone.
A long time ago, I spent a few years working on garbage collection in the J9 Java VM.
1
And even though I’ve since mostly done higher-level stuff, having a deeper knowledge of GC has continued to come in useful.
I’m working with a team that’s using
Ohm
to parse text documents and render a rich text version in
ProseMirror
. The goal is bidirectional updates: changes in ProseMirror should propagate to the text version, and vice versa.
Ohm supports incremental parsing, which means that if you parse some text and then make a small edit, it can quickly reparse by reusing portions of the previous result.
It also supports a limited form of incremental transforms. You can define an
attribute
, which is kind of like a memoized visitor, and the attribute value for a given node will only need to be recalculated if the edit affected one of its subtrees. So you can easily implement a form of persistent data structure, where each new value (e.g., an AST) shares a bunch of structure with the previous one.
the problem
Using this machinery, I tried making a
pmNodes
attribute that produced a ProseMirror document for a given input. When the text document is edited, it produces a new tree which shares a bunch of nodes with the previous one.
The `pmNodes` tree before and after an edit.
Then, my plan was to construct a ProseMirror transaction that would turn the old tree into the new one. To do that, it’s helpful to know which nodes appeared in the old document, but not the new one.
My first implementation of this was equivalent to tracing garbage collection — after each edit, I walked the entire document, and recorded all the nodes in a Set. The difference between the sets told me which nodes had died.
But this kind of defeats the purpose of incrementality — if you have a long document and make a small edit, we should be able to process the change
without
visiting every node in the document.
Tracing and reference counting are uniformly viewed as being fundamentally different approaches to garbage collection that possess very distinct performance properties. We have implemented high-performance collectors of both types, and in the process observed that the more we optimized them, the more similarly they behaved — that they seem to share some deep structure.
We present a formulation of the two algorithms that shows that they are in fact duals of each other. Intuitively, the difference is that tracing operates on live objects, or “matter”, while reference counting operates on dead objects, or “anti-matter”. For every operation performed by the tracing collector, there is a precisely corresponding anti-operation performed by the reference counting collector.
This was the answer I needed! Rather than visiting all the live objects, I wanted to only visit the dead ones, and reference counting would let me do that.
So I added a way of maintaining a reference count for all the nodes in the doc. When we produce a new document, we decrement the reference count of the old root node (it will always be 0 afterwards). So we recursively decrement the ref count of its children, and so on. This gives me exactly what I wanted — a way to find all the nodes that were
not
reused, without having to visit most of the nodes in the doc.
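Here's a rough sketch of what that bookkeeping can look like; the names PmNode, addRef, and release are mine, not the actual implementation:
// Track how many live documents reference each node.
interface PmNode {
  children: PmNode[];
}

const refCounts = new Map<PmNode, number>();

// Called for every node placed into a newly produced document tree,
// including nodes reused from the previous tree.
function addRef(node: PmNode): void {
  refCounts.set(node, (refCounts.get(node) ?? 0) + 1);
}

// Recursively decrement counts starting from the old root. Any node whose
// count drops to zero was not reused by the new tree, and only those
// subtrees get visited.
function release(node: PmNode, dead: PmNode[] = []): PmNode[] {
  const count = (refCounts.get(node) ?? 0) - 1;
  refCounts.set(node, count);
  if (count <= 0) {
    dead.push(node);
    refCounts.delete(node);
    for (const child of node.children) {
      release(child, dead);
    }
  }
  return dead;
}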
I've started working on a new edition of
Ruby Under a
Microscope
that covers Ruby 3.x. I'm working on this in my spare time, so it
will take a while. Leave a comment or
drop
me a line
and I'll email you when it's finished.
Here’s an excerpt from the completely new content for Chapter 4, about YJIT and
ZJIT. I’m still finishing this up… so this content is fresh off the page! It’s
been a lot of fun for me to learn about how JIT compilers work and to brush up
on my Rust skills as well. And it’s very exciting to see all the impressive work
the Ruby team at Shopify and other contributors have done to improve Ruby’s
runtime performance.
Chapter 4: Compiling Ruby To Machine Language
Interpreting vs. Compiling Ruby Code
Yet Another JIT (YJIT)
Virtual Machines and Actual Machines
Counting Method and Block Calls
YJIT Blocks
YJIT Branch Stubs
Executing YJIT Blocks and Branches
Deferred Compilation
Regenerating a YJIT Branch
YJIT Guards
Adding Two Integers Using Machine Language
Experiment 4-1: Which Code Does YJIT Optimize?
How YJIT Recompiles Code
Finding a Block Version
Saving Multiple Block Versions
ZJIT, Ruby’s Next Generation JIT
Counting Method and Block Calls
ZJIT Blocks
Method Based JIT
Rust Inside of Ruby
Experiment 4-2: Reading ZJIT HIR and LIR
Summary
Counting Method and Block Calls
To find hot spots, YJIT counts how many times your program calls each function
or block. When this count reaches a certain threshold, YJIT stops your program
and converts that section of code into machine language. Later Ruby will execute
the machine language version instead of the original YARV instructions.
To keep track of these counts, YJIT saves an internal counter alongside the YARV
instruction sequence for each function or block.
Figure 4-5: YJIT saves information adjacent to each set of YARV instructions
Figure 4-5 shows the YARV instruction sequence the main Ruby compiler created
for the
sum += i
block at (3) in Listing 4-1. At the
top, above the YARV instructions, Figure 4-5 shows two YJIT related values:
jit_entry
and
jit_entry_calls
. As we’ll see in a moment,
jit_entry
starts as a null value but will later hold a
pointer to the machine language instructions YJIT produces for this Ruby block.
Below
jit_entry
, Figure 4-5 also shows
jit_entry_calls
, YJIT’s internal counter.
Each time the program in Listing 4-1 calls this block, YJIT increments the value
of
jit_entry_calls
. Since the range at (1) in Listing
4-1 spans from 1 through 40, this counter will start at zero and increase by 1
each time
Range#each
calls the block at (3).
When the
jit_entry_calls
reaches a particular
threshold, YJIT will compile the YARV instructions into machine language. By
default, for small Ruby programs, YJIT in Ruby 3.5 uses a threshold of 30. Larger
programs, like Ruby on Rails web applications, will use a larger threshold value
of 120. (You can also change the threshold by passing
--yjit-call-threshold
when you run your Ruby program.)
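For example, assuming the Listing 4-1 program were saved as sum.rb (the file name is just an illustration), you could watch compilation kick in sooner by lowering the threshold when you launch Ruby:
# Enable YJIT and compile after 5 calls instead of the default threshold
ruby --yjit --yjit-call-threshold=5 sum.rb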
YJIT Blocks
While compiling your Ruby program, YJIT saves the machine language instructions
it creates into
YJIT blocks
. YJIT blocks, which are distinct from Ruby blocks,
each contain a sequence of machine language instructions for a range of
corresponding YARV instructions. By grouping YARV instructions and compiling
each group into a YJIT block, YJIT can produce more optimized code that is
tailored to your program’s behavior and avoid compiling code that your program
doesn’t need.
As we’ll see next, a single YJIT block doesn’t correspond to a Ruby function or
block. YJIT blocks instead represent smaller sections of code: individual YARV
instructions or a small range of YARV instructions. Each Ruby function or block
typically consists of several YJIT blocks.
Let’s see how this works for our example. After the program in Listing 4-1 executes the Ruby block at (3) 29 times, YJIT will increment the jit_entry_calls counter again, just before Ruby runs the block for the 30th time. Since jit_entry_calls reaches the threshold value of 30, YJIT triggers the compilation process.
YJIT compiles the first YARV instruction, getlocal_WC_1, and saves machine language instructions that perform the same work as getlocal_WC_1 into a new YJIT block:
Figure 4-6: Creating a YJIT block
On the left side, Figure 4-6 shows the YARV instructions for the sum += i Ruby block. On the right, Figure 4-6 shows the new YJIT block corresponding to getlocal_WC_1.
Next, the YJIT compiler continues and compiles the second YARV instruction from the left side of Figure 4-7: getlocal_WC_0 at index 2.
Figure 4-7: Appending to a YJIT block
On the left side, Figure 4-7 shows the same YARV instructions for the sum += i Ruby block that we saw above in Figure 4-6. But now the two dotted arrows indicate that the YJIT block on the right contains the machine language instructions equivalent to both getlocal_WC_1 and getlocal_WC_0.
Let’s take a look inside this new block. YJIT compiles or translates the Ruby
YARV instructions into machine language instructions. In this example, running
on my Mac laptop, YJIT writes the following machine language instructions into
this new block:
Figure 4-8: The contents of one YJIT block
Figure 4-8 shows a closer view of the new YJIT block that appeared on the right side of Figures 4-6 and 4-7. Inside the block, Figure 4-8 shows the assembly language mnemonics corresponding to the ARM64 machine language instructions that YJIT generated for the two YARV instructions shown on the left. The YARV instructions on the left are: getlocal_WC_1, which loads a value from a local variable located in the previous stack frame and saves it on the YARV stack, and getlocal_WC_0, which loads a local variable from the current stack frame and also saves it on the YARV stack. The machine language instructions on the right side of Figure 4-8 perform the same task, loading these values into registers on my M1 microprocessor: x1 and x9. If you’re curious and would like to learn more about what the machine language instructions mean and how they work, the section “Adding Two Integers Using Machine Language” discusses the instructions for this example in more detail.
YJIT Branch Stubs
Next, YJIT continues down the sequence of YARV instructions and compiles the opt_plus YARV instruction at index 4 in Figures 4-6 and 4-7. But this time, YJIT runs into a problem: it doesn’t know the types of the addition’s arguments. That is, will opt_plus add two integers? Or two strings, floating point numbers, or some other types?
Machine language is very specific. To add two 64-bit integers on an M1 microprocessor, YJIT could use the adds assembly language instruction. But adding two floating point numbers would require different instructions. And, of course, adding or concatenating two strings is an entirely different operation.
In order for YJIT to know which machine language instructions to save into the YJIT block for opt_plus, YJIT needs to know exactly what types of values the Ruby program might ever add at (3) in Listing 4-1. You and I can tell by reading Listing 4-1 that the Ruby code is adding integers. We know right away that the sum += i block at (3) is always adding one integer to another. But YJIT doesn’t know this.
YJIT uses a clever trick to solve this problem. Instead of analyzing the entire program ahead of time to determine all of the possible types of values the opt_plus YARV instruction might ever need to add, YJIT simply waits until the block runs and observes which types the program actually passes in.
YJIT uses branch stubs to achieve this wait-and-see compilation behavior, as shown in Figure 4-9.
Figure 4-9: A YJIT block, branch and stub
Figure 4-9 shows the YARV instructions on the left, and the YJIT block for indexes 0000-0002 on the right. But note the bottom right corner of Figure 4-9, which shows an arrow pointing down from the block to a box labeled stub. This arrow represents a YJIT branch. Since this new branch doesn’t point to a block yet, YJIT sets up the branch to point to a branch stub instead.
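As a rough mental model (again as an illustrative Rust sketch, not YJIT’s actual code), a branch either points at an already-compiled block or at a stub that re-enters the compiler with the operand types observed at run time:

// Illustrative sketch of the branch-stub idea; not YJIT's real implementation.
enum BranchTarget {
    Stub,                            // compilation deferred until types are observed
    Block { machine_code: Vec<u8> }, // already-compiled machine code
}

struct Branch {
    target: BranchTarget,
}

impl Branch {
    // Called when execution reaches this branch with concrete operand types,
    // e.g. ["Integer", "Integer"] for `sum + i`.
    fn take(&mut self, observed_types: &[&str]) -> &[u8] {
        if matches!(self.target, BranchTarget::Stub) {
            // First visit: compile code specialized for the observed types,
            // then patch the branch so later calls skip the compiler entirely.
            self.target = BranchTarget::Block {
                machine_code: compile_for_types(observed_types),
            };
        }
        match &self.target {
            BranchTarget::Block { machine_code } => machine_code,
            BranchTarget::Stub => unreachable!(),
        }
    }
}

fn compile_for_types(_types: &[&str]) -> Vec<u8> {
    Vec::new() // stand-in for the real compiler
}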
A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On
404 Media
www.404media.co
2025-11-17 20:00:02
“We can no longer trust that survey responses are coming from real people.”...
Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The author of the paper, Sean Westwood, associate professor of government at Dartmouth and director of the Polarization Research Lab, created an AI tool he calls "an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots.
According to the paper, the AI agent evaded detection 99.8 percent of the time.
"We can no longer trust that survey responses are coming from real people," Westwood said in a press release. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”
Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete after his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one paper designed to detect AI responses. The AI agent also successfully avoided “reverse shibboleth” questions designed to detect nonhuman actors by presenting tasks that an LLM could complete easily, but are nearly impossible for a human.
💡
Are you a researcher who is dealing with the problem of AI-generated survey data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at (609) 678-3204. Otherwise, send me an email at emanuel@404media.co.
“Once the reasoning engine decides on a response, the first layer executes the action with a focus on human mimicry,” the paper, titled “The potential existential threat of large language models to online survey research,” says. “To evade automated detection, it simulates realistic reading times calibrated to the persona’s education level, generates human-like mouse movements, and types open-ended responses keystroke by-keystroke, complete with plausible typos and corrections. The system is also designed to accommodate tools for bypassing antibot measures like reCAPTCHA, a common barrier for automated systems.”
The AI, according to the paper, is able to model “a coherent demographic persona,” meaning that in theory someone could sway any online research survey to produce any result they want based on an AI-generated demographic. And it would not take that many fake answers to impact survey results. As the press release for the paper notes, for the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses would have flipped the predicted outcome. Generating these responses would also be incredibly cheap at five cents each. According to the paper, human respondents typically earn $1.50 for completing a survey.
Westwood’s AI agent is a model-agnostic program built in Python, meaning it can be deployed with APIs from big AI companies like OpenAI, Anthropic, or Google, but can also be hosted locally with open-weight models like LLama. The paper used OpenAI’s o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok3, Gemini 2.5 Preview, and others, to prove the method works with various LLMs. The agent is given one prompt of about 500 words which tells it what kind of persona to emulate and to answer questions like a human.
The paper says that there are several ways researchers can deal with the threat of AI agents corrupting survey data, but they come with trade-offs. For example, researchers could do more identity validation on survey participants, but this raises privacy concerns. Meanwhile, the paper says, researchers should be more transparent about how they collect survey data and consider more controlled methods for recruiting participants, like address-based sampling or voter files.
“Ensuring the continued validity of polling and social science research will require exploring and innovating research designs that are resilient to the challenges of an era defined by rapidly evolving artificial intelligence,” the paper said.
Git 2.52.0 released
Linux Weekly News
lwn.net
2025-11-17 19:55:02
Version 2.52.0 of the Git source-code management system has been released. Changes include a new last-modified command to find the closest ancestor commit that touched one or more paths, a couple of git refs improvements, a new git repo command for obtaining information about the repository itself,...
Princeton University discloses data breach affecting donors, alumni
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 19:36:52
A Princeton University database was compromised in a cyberattack on November 10, exposing the personal information of alumni, donors, faculty members, and students. [...]...
A Princeton University database was compromised in a cyberattack on November 10, exposing the personal information of alumni, donors, faculty members, and students.
According to a FAQ page issued on Saturday, the threat actors breached Princeton's systems by targeting a University employee in a phishing attack.
This allowed them to gain access to "biographical information pertaining to University fundraising and alumni engagement activities," including names, email addresses, telephone numbers, and home and business addresses stored in the compromised database.
However, Princeton officials noted that the database didn't contain financial info, credentials, or records protected by privacy regulations.
"The database that was compromised does not generally contain Social Security numbers, passwords, or financial information such as credit card or bank account numbers," said Daren Hubbard, Vice President for Information Technology and Chief Information Officer, and Kevin Heaney, Vice President for Advancement.
"The database does not contain detailed student records covered by federal privacy laws or data about staff employees unless they are donors."
Based on the contents of the compromised database, the university believes that the following groups likely had their data exposed in the data breach:
All University alumni (including anyone ever enrolled as a student at Princeton, even if they did not graduate)
Alumni spouses and partners
Widows and widowers of alumni
Any donor to the University
Parents of students (current and past)
Current students
Faculty and staff (current and past)
The private Ivy League research university has since blocked the attackers' access to the database and believes they were unable to access other systems on its network before being evicted.
Potentially affected individuals are advised to be cautious of any messages claiming to be from the university that request they share sensitive data, such as passwords, Social Security numbers, or bank information.
"If you have any doubts about whether a communication you receive from Princeton University is legitimate, please verify its legitimacy with a known University person before clicking on any links or downloading any attachment," the officials added.
A spokesperson for Princeton University redirected us to the FAQ page when asked about the number of individuals affected by the data breach and whether the attackers had made a ransom demand.
If you have any information regarding this incident or any other undisclosed attacks, you can contact us confidentially via Signal at 646-961-3731 or at tips@bleepingcomputer.com.
UPenn data breach
In early November, the University of Pennsylvania, another private Ivy League research university, confirmed that data stolen in an October cyberattack had been exfiltrated from internal network systems related to Penn's development and alumni activities.
As BleepingComputer first reported, the threat actors breached UPenn's systems using a stolen employee PennKey SSO account, which gave them access to the university's Salesforce instance, SAP business intelligence system, SharePoint files, and Qlik analytics platform.
They then stole 1.71 GB of internal documents from the university's SharePoint and Box storage platforms, as well as the Salesforce donor marketing database, which contained 1.2 million records.
While the two incidents are similar, Princeton officials said over the weekend that they currently have no "factual information indicating that this attack is connected or related to any other incident."
Update November 17, 14:53 EST: Added Princeton statement.
How Randy Mastro Killed the Kehlani SummerStage Concert—And a Whole Lot More
hellgate
hellgatenyc.com
2025-11-17 19:26:51
With Mayor Adams thinking about his next act, the former first deputy mayor to Rudy Giuliani is now "guiding every conceivable aspect of this administration."...
On a Saturday afternoon in early May, more than a dozen senior members of Mayor Eric Adams's staff jumped on a virtual meeting to discuss a dilemma: what to do about an upcoming SummerStage concert featuring the pro-Palestine R&B singer Kehlani.
According to a person who was on the call, First Deputy Mayor Randy Mastro declared that he wanted to stop the June 26 Central Park performance, billed as "Pride With Kehlani," and referenced the fact that Cornell University had recently canceled a Kehlani show due to complaints about what Cornell's president described as the outspoken singer's alleged "antisemitic, anti-Israel sentiments."
Mastro, who by that point had been at his job just over a month, and as one of his first acts had helped create the mayor's new Office to Combat Antisemitism, was passionate about why the concert should not happen, according to the source. "It was very righteous: 'We should not be platforming an antisemite.'"
But a municipal government using the same reasoning as Cornell to shut down a show in a public park might raise serious First Amendment concerns.
Mastro offered a solution that could sidestep that problem: He could tell the nonprofit that oversees SummerStage, the City Parks Foundation, that they had to pull the show over a "security risk," and if they refused, threaten to cancel the City's entire partnership with SummerStage.
The person on the call, who like the other sources quoted in this story asked to remain anonymous so they could speak freely about the sensitive inner workings of City government, said that several staffers pointed out that canceling the event might expose the City to free speech litigation, but Mastro brushed those concerns aside.
Learning Rust: Custom Error types that actually work!
Sun, November 16, 2025 - 9 min read
TL;DR: GITHUB REPO. Tired of writing the same verbose error handling boilerplate in your Axum handlers? Me too! 🙄 By creating a custom AppError newtype that wraps anyhow::Error and implements IntoResponse + From<E>, you can ditch all those ugly match statements and embrace the beautiful ? operator. Your handler functions go from messy error-matching shenanigans to clean, readable code that automatically converts any error into proper HTTP responses. It’s like magic, but with more crabs! 🦀
Recently I’ve been digging a lot into the axum crate for any of my Rust web projects. There are plenty of options out there for Rust web applications, but it seems that we have all settled on Axum as the go-to crate. Before you even start reading this, if you have not checked it out yet - do so now… I’ll wait.
Okay, you’re back! Love it? YES! 🥳 Now, let’s talk about a learning project I am working on. Ever since I discovered htmx I’ve been immersing myself in the world of web development with HATEOAS as my priority. Just the neatness of using hypermedia gives me a rush, and helps me with all that JavaScript fatigue. I do suggest you go and give hypermedia systems a read, a thing I have been doing over the past couple of weeks.
So, in the spirit of this book, I was following along with building some hypermedia-driven system. But instead of using Python and Flask as stated in the book, I’ve opted to put my crab hat on and do it in Rust. Like the big boy I am. In this post I won’t explain how I did that (link to a post from the future will go here), but rather explain how I used the amazing features of Rust to eliminate a lot of boilerplate code on my side.
Errors Errors
In order to be a good web builder, you want to make sure to return the proper HTTP status codes. (I’m looking at you, 200 OK with an error message). So in my handler functions I make sure to explicitly return the status code as part of my return tuple. Something like so:
Ok((StatusCode::OK, Html("<h1>Hello World</h1>")))
// Or an Error:
Err((StatusCode::INTERNAL_SERVER_ERROR, format!("Error processing hello world")))
This would signal back to the user (and axum) that the request was either 200 OK, and here is the HTML, or 500 Internal Server Error and an angry string. Nifty!
With the glory of Rust’s Result enum, we are equipped to handle any errors our back end may throw at us. So, I just match on the call that can fail, and return something back depending on that Result.
In practice, say in this handler function that finds a Contact by its ID in the database, it would look something like this:
#[axum::debug_handler]
async fn get_edit_contact(
    State(state): State<AppState>,
    Path(id): Path<i64>,
    // The function signature tells us what we expect back
) -> Result<(StatusCode, Html<String>), (StatusCode, String)> {
    // This call can fail so let's match its Result
    let contact = match Contact::find_by_id(&state.db, id).await {
        // We're good, return back the `Contact` into the `contact` variable
        Ok(contact) => contact,
        // We're NOT good, return the tuple with a status code of 500
        Err(e) => {
            return Err((
                StatusCode::INTERNAL_SERVER_ERROR,
                format!("Failed to find contact: {e}"),
            ))
        }
    };

    let edit_template = EditContactTemplate { contact };

    // This can also fail, but we don't need to store it into a variable,
    // we just need to return.
    match edit_template.render() {
        // Looks good, return the HTML and 200 OK
        Ok(html) => Ok((StatusCode::OK, Html(html))),
        // Again 500 Bad, be angry here
        Err(e) => Err((
            StatusCode::INTERNAL_SERVER_ERROR,
            format!("Failed to render template: {e}"),
        )),
    }
}
That is a lot of boilerplate code. While I do enjoy the verbosity of Rust (makes me feel all safe and cozy), this gets real old real fast. Especially when you have multiple handler functions that invoke multiple different calls that can fail. Let’s bring in another amazing feature of Rust, the newtype pattern, and simplify this 👏
Building my own Error type
I won’t go too much into newtypes, as there is an excellent guide that I encourage all of you to read. Simply put, they are thin wrappers that allow you to extend the functionality of existing types not native to your crate. And I am going to use it to implement the IntoResponse trait on a type I dubbed AppError, and then allow any error (whatever it is) to be converted via anyhow::Error.
Let’s first create this wrapper newtype:
pub struct AppError(anyhow::Error);
Here I am wrapping anyhow::Error into a new type called AppError. I could do the same for any other type, and just create a wrapper around it (i.e. a Vec<T> wrapper: struct Wrapper(Vec<T>)). Now comes the fun part, implementing certain traits.
To implement the IntoResponse trait from axum for this new type, we only need to implement the into_response function, which needs to return a Response<Body> type. Let’s look at some code:
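A minimal version of that implementation, matching the numbered notes below (the exact wording of the error string is up to you), looks roughly like this:

use axum::{http::StatusCode, response::IntoResponse};

impl IntoResponse for AppError { // 1
    fn into_response(self) -> axum::response::Response { // 2
        // 3: element 0 is the wrapped anyhow::Error, turned into a String
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            format!("Something went wrong: {}", self.0),
        )
            .into_response()
    }
}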
And just like that we’ve implemented our own way of returning an error response from a handler function. Let me explain the code a bit:
1. The implementation block for IntoResponse for AppError
2. This is the only function needed for IntoResponse and we are simply returning a Response from the axum crate
3. We just return a tuple of a StatusCode and a String coming from element 0 of the AppError struct. Oh, and convert all that into a Response
Here is a bit more complicated version of the same code; this one uses a templating engine to return some nicely formatted web pages. This version just expands the above, but should demonstrate that this all works really nicely with the rest of your codebase.
impl IntoResponse for AppError {
    fn into_response(self) -> axum::response::Response {
        // Returning a HTML page for an error
        let template = Error5xxTemplate { // 1
            // Select element 0, as that is our anyhow::Error string and convert it so it works with our template
            error: self.0.to_string(),
        };
        match template.render() { // 2
            // If the template render is successful, return the HTML page with the error
            Ok(html) => (StatusCode::INTERNAL_SERVER_ERROR, Html(html)).into_response(),
            // The render has failed catastrophically - just return some string
            Err(_) => (StatusCode::INTERNAL_SERVER_ERROR, "Internal Server Error").into_response(),
        }
    }
}
Okay, there is a lot more going on here. Most of it is the same as before, but let me break it down:
1. Somewhere in my code base I have an Error5xxTemplate struct that I use with the askama templating crate
2. I make sure the template renders okay; if so, return the 5xx error page, if not I just give a 500 error and the string back.
Now that we have an IntoResponse implemented, let’s give our AppError the ability to take errors from anywhere.
Converting other error types
To make AppError a bit more flexible, I wanted to be able to automatically convert any* (*I’ll come back to this in a bit) error type. Let’s look at some code and make sense of it:
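This is the standard anyhow-wrapper pattern; a sketch matching the line-by-line breakdown below:

impl<E> From<E> for AppError // 1
where
    E: Into<anyhow::Error>, // 2
{
    fn from(err: E) -> Self { // 3
        Self(err.into())
    }
}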
We are taking advantage of a very powerful crate here, anyhow, which allows us to work with errors in Rust way more efficiently. In our case we are using its popularity and other crates’ ability to convert into this Error type.
Let me explain this line by line:
1. We are implementing From<E> for AppError. By using the generic type E we can have a wider implementation. This would be the equivalent of impl From<sqlx::Error> for AppError, which converts the sqlx::Error type into AppError
2. This is the critical bit, and why it actually limits us to certain errors. By having this where clause, we only allow the generic type E to be a type that can already be converted into anyhow::Error via the Into trait. Basically limiting us to types that already support the conversion into this error type.
3. We only need to implement the from function that takes an error and returns our own type back. By using the support mentioned above, we can just take the error and run an .into() conversion.
By using this generic trait approach we are essentially creating the following Rust code:
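Conceptually, that generic impl stands in for a whole family of concrete conversions, one per error type. Using sqlx::Error as the example from above, it is roughly as if we had written:

// What the generic impl effectively gives us for one concrete error type.
impl From<sqlx::Error> for AppError {
    fn from(err: sqlx::Error) -> Self {
        Self(err.into())
    }
}

// ...and the same again for askama::Error, std::io::Error, and so on.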
Okay, so we have our newtype AppError, it has all the things it needs to be converted to our Error type. How does it actually work? Well, let’s go back to our get_edit_contact handler function from before, and see what has changed:
#[axum::debug_handler]
async fn get_edit_contact(
    State(state): State<AppState>,
    Path(id): Path<i64>,
) -> Result<(StatusCode, Html<String>), AppError> {
    let contact = Contact::find_by_id(&state.db, id).await?; // sqlx::Error -> AppError
    let edit_template = EditContactTemplate { contact };
    let html = edit_template.render()?; // askama::Error -> AppError
    Ok((StatusCode::OK, Html(html)))
}
Whoa, that is way tighter than before. Yes, we are using the ? operator: we propagate the errors up the call stack, meaning the error values of the Result returned by both Contact::find_by_id and .render() are returned back to axum as AppError newtypes.
This means we no longer have to deal with error handling within the function itself, and we are just returning the same error type back. Since it is the same error type, both function handler and axum are happy with receiving it! 🥳 Huzzah!
If you want to see the full codebase in action, you can check out my GitHub repo here. And please ignore the mess, this is just a learning repo! 🙏
Dutch police seize 250 servers used by “bulletproof hosting” service
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 19:19:31
The police in the Netherlands have seized around 250 physical servers powering a bulletproof hosting service in the country used exclusively by cybercriminals for providing complete anonymity. [...]...
The police in the Netherlands have seized around 250 physical servers powering a bulletproof hosting service in the country used exclusively by cybercriminals for providing complete anonymity.
Politie, the police force in the Netherlands, did not name the service but said that it has been used for illicit activities since 2022, and has emerged in more than 80 cybercrime investigations, both domestic and abroad.
Bulletproof hosting providers are companies that intentionally ignore abuse reports and refuse to comply with content takedown requests from law enforcement while protecting their customers by not enforcing Know Your Customer policies.
Cybercriminals that typically use them are ransomware operators, malware distributors, phishing actors, and spammers, as well as money laundering services that get to remain anonymous by paying in difficult-to-trace cryptocurrency.
Thousands of virtual servers seized
The Dutch police note that the hosting company advertised complete anonymity for users and no cooperation with law enforcement.
The investigation showed that the company facilitated ransomware attacks, botnet operations, phishing campaigns, and even the distribution of child abuse content.
Last week’s police operation confiscated hundreds of physical and thousands of virtual servers.
"During the operation on 12 November, the infrastructure was seized. In total, it involves around 250 physical servers located in data centers in The Hague and Zoetermeer," reads Politie’s announcement.
Dutch police seized about 250 servers from a bulletproof service provider (source: Politie)
"Because of the seizure of these physical servers, thousands of virtual servers were also taken offline."
Investigators will now conduct a forensic analysis on the seized servers to gain more insight into their operators and potential clients. At this time, no arrests have been announced in relation to this action.
The Dutch police played a key role in Operation Endgame’s latest phase last week, which disrupted the operations of Rhadamanthys, VenomRAT, and Elysium malware.
In the Netherlands, the authorities carried out nine searches in Dutch datacentres and seized 83 servers and 20 domain names.
Although the two operations overlap, the Dutch police told BleepingComputer that the two investigations are not connected.
CrazyRDP goes down
The authorities have declined to share the name of the hosting provider. However, sources told BleepingComputer last week that on November 12th, the Dutch police seized servers from a datacenter in The Hague used by CrazyRDP, which is now offline.
CrazyRDP offered VPS and RDP services and operated in the interest of its clients' anonymity, with no-KYC and no-logs policies, requiring only a username and password to create an account.
In some discussions between threat actors, CrazyRDP was among the recommendations for bulletproof hosting services. Furthermore, multiple cybersecurity reports identified the same provider as a service for various malicious activities.
BleepingComputer noticed that the official CrazyRDP Telegram channel deleted all posts last Wednesday and linked to a different channel about CrazyRDP suddenly shutting down.
In the first conversations on the new channel, some people said they had more than 30 servers hosted on CrazyRDP infrastructure. Others feared an exit scam, as the service support claimed technical issues at the data center but then never replied.
One customer complained to technical support on Wednesday evening about problems logging in and was told that they would receive a response when everything was "fully resolved."
However, about four hours later, the operator said that they did not have an estimated time for solving the issue and then stopped answering.
Although it is unclear if CrazyRDP is the bulletproof hosting service that the Dutch police took down last Wednesday, the service appears to have been offline since then.
Cardano holder loses $6 million to slippage
Web3 Is Going Great
web3isgoinggreat.com
2025-11-17 19:10:55
A holder of around 14.4 million ADA (~$6.9 million), the token for the Cardano network, made an expensive error when attempting to swap the tokens for a stablecoin. Because the stablecoin they were looking to buy is lightly used and has only around $10.6 million tokens in circulation, an attempt to ...
A holder of around 14.4 million ADA (~$6.9 million), the token for the Cardano network, made an expensive error when attempting to swap the tokens for a stablecoin. Because the stablecoin they were looking to buy is lightly used and has only around 10.6 million tokens in circulation, an attempt to purchase millions of the tokens on the market caused the dollar-pegged stablecoin's price to spike to around $1.26. The resulting slippage meant that the trader spent their roughly $6.9 million in tokens to receive a little less than $850,000 in the USDA stablecoin, meaning the trader essentially threw away $6 million.
Observers have questioned what happened. It's possible that the holder, who had not been active on-chain since 2020, was simply unaware of the slippage risk. It's also possible that it was a "fat-finger" trade — that the trader accidentally selected the wrong stablecoin from a list of similarly named options, some of which could have more easily absorbed a trade of that size.
Adams always made his personal biography his political calling card—“I am you,” he told voters during his winning mayoral campaign—and his mayoralty was distinctively personal as well. He was an ambassador and a hype-man, raising innumerable flags and hitting the clubs like it was his job, which, as far as he was concerned, it was. His policy preoccupations—safe subways, school literacy programs—were personal issues for him. His relationship with the press was bitterly personal, returning frequently to his grievance at what he perceived as unfair treatment.
But as entertaining, perplexing, and even significant as Adams was, his mayoralty was hardly a solo act, and his time in office was defined even more by the company he kept: the people he hired into City Hall and elevated within the police department, his business contacts, the people who ran his campaigns and those he partied with—the machers, the wanna-bes, the wheeler-dealers, grifters, operators, friends, side-kicks and miscellaneous associates.
Adams is fond of the exhortation to “let your haters be your waiters at the table of success.” It’s classic Adams, a rhyming aphorism about self-confidence, indifference to criticism, and the realization of ambition. Repeated often enough (it was often), it also evokes the image of a literal table, a congregation of Arthurian knights or an Olympian feast, a bustling and convivial gathering where the board is bountifully laden for those lucky enough to have a seat.
It is, in short, an excellent organizing metaphor for the swarming cast of players who have populated the greater Eric Adams Cinematic Universe.
We first published the Table of Success in December of 2023. The response was positive. People thanked us for shedding light on the interesting personalities in the mayor’s orbit. The project won an award for the best political reporting of the year. We felt good about it.
But the movers and shakers on this list don’t sit still, and we’ve spent the intervening years furiously updating the Table to keep up with all of their new adventures. For some, this has included resigning in disgrace, having their homes raided by law enforcement, and facing criminal indictments and convictions. Others, once close to the mayor, have created a little more space in their relationships with him. Even as some have vacated their seats at their table, others have sat down to take their places.
With Adams’s decision to drop out of the mayoral race—against the urging of some of his most loyal associates—it became clear that the Table of Success was hosting its final meal. As the Adams administration enters its final days, we’ve devoted our energy to one final, definitive update to our own project. We’re calling it The Table of Success: The Last Supper.
If you gained something from perusing the Table in the past, this is a good time to take another look. Some favorite figures have taken dramatic turns, and new, late-franchise characters have been added, with their own remarkable backstories and relationships. But we also want the Table of Success to be a useful document for people looking back at this time and trying to understand it. For better or for worse, the people seated at the table helped to define Mayor Adams and the city he led. Future administrations will have their own tangled networks of power players and hangers on, and it will be our job, once again, to try to make sense of them for you. But there will only ever be one Table of Success.
Sōzu is a lightweight, fast, always-up reverse proxy server.
Why use Sōzu?
Hot configurable: Sōzu can receive configuration changes at runtime, through secure unix sockets, without having to reload.
Upgrades without restarting: Sōzu is always-up, meaning it upgrades itself while still processing requests.
Handles SSL: Sōzu works as a TLS endpoint, so your backend servers can focus on what they do best.
Protects your network: Sōzu protects backends by shielding them behind the reverse proxy, limiting direct network access. Sōzu uses Rust, a language primed for memory safety. And even if a worker is exploited, Sōzu workers are sandboxed.
Optimizes performance: Sōzu makes the most of Rust's capacity to avoid useless copying and memory usage. Two key dependencies have been optimized in this way:
Kawa is a generic HTTP representation library that parses and translates HTTP messages with zero copy
Rustls is a TLS library that encrypts/decrypts TLS traffic with as little intermediate memory usage as possible
lib/: the sozu-lib reverse proxy library contains the event loop management, the parsers and protocols
bin/: the sozu executable wraps the library in worker processes, and handles dynamic configuration
command: the sozu-command-lib contains all structures to interact with Sōzu
License
Sōzu itself is covered by the GNU Affero General Public License (AGPL) version 3.0 and above. Clients and servers whose traffic merely passes through Sōzu are not considered "covered work" and hence don't have to be placed under the same license. A "covered work", in the license's terms, is a service using Sōzu's code, methods or specific algorithms; this service can be self-managed software or an online service. A specific control plane you could have developed to control or use Sōzu is not considered "covered work". In simple terms, Sōzu is Free and Open Source software you can use for both infrastructure and business, but in the case of a business based on Sōzu (e.g. a Load Balancer product), you should either give back your contributions to the project, or contact Clever Cloud for a specific Business Agreement.
sozu-lib, sozu
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU Affero General Public License as published by the Free
Software Foundation, version 3.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Affero General Public License for more details.
If you’ve kept up with my blog, you might know that my recent “street-cred” has been messing around with “corruption art.” I can’t help it. Stuff like this is always so fascinating to me.
Some context would be in order. Two years ago, I made NAND, a 16-bit computer made entirely from NAND gates emulated on the web. NAND features its own custom CPU architecture, virtual machine, programming language, compiler, and IDE, and it is based on the Jack-VM-Hack platform specified in the Nand to Tetris course. Here’s (single-player) pong!
Now, I have many wonderful things to talk about with NAND, but we’re not going to be focusing on that. A few weeks ago during the height of my midterms I had a crazy idea: what if we allocated memory on the screen instead of on the heap?
To understand what that means, let’s consider this program written in NAND’s custom programming language.
class Main {
    function void main() {
        var int length;
        var Array a;
        let length = Keyboard.readInt("How much memory? ");
        let a = Array.new(length);
    }
}
The compiler stores the local variables of a function in a memory region called the stack. Everything on the stack must have a size that is known at compile time so the compiler knows how much memory to reserve for it. However, the compiler has no way of knowing how much memory is required to store the array a—it’s a dynamic amount only known at run time. The contemporary solution is to use a memory allocator, something that you can ask for an arbitrary amount of memory and will return a pointer to a chunk of that much memory. The memory allocator looks for memory in a memory region called the heap.
If you run the above program and enter 5, the NAND memory allocator does this to allocate the array:
The NAND memory allocator traverses a linked list of memory blocks and returns the first one with enough memory to satisfy the allocation request. It can undoubtedly be made more efficient. The screen is stored as a bitmap of 512 (length) * 256 (width) = 131072 pixels in its own dedicated memory region. There are no colors; a 0 means black and a 1 means green. To allocate memory on the screen instead of the heap means to tell the memory allocator to treat the screen memory region as the heap memory region.
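If you want a concrete picture of that first-fit strategy, here is a tiny free-standing sketch in Rust (not NAND’s actual Jack code; the names and free-list representation are made up, though the 2048/14334 starting values mirror NAND’s heap defaults shown in Memory.jack below):

// Illustrative first-fit allocation over a list of free blocks.
struct FreeBlock {
    base: usize, // starting address of the free block
    size: usize, // number of words available
}

// Return the base address of the first free block that can hold `request` words.
fn first_fit(free_list: &mut Vec<FreeBlock>, request: usize) -> Option<usize> {
    for block in free_list.iter_mut() {
        if block.size >= request {
            let base = block.base;
            // Carve the requested words off the front of the block.
            block.base += request;
            block.size -= request;
            return Some(base);
        }
    }
    None // no block large enough
}

fn main() {
    // Pretend the heap starts at address 2048 with 14334 free words.
    let mut free_list = vec![FreeBlock { base: 2048, size: 14334 }];
    assert_eq!(first_fit(&mut free_list, 5), Some(2048));
}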
What makes this so interesting is the fact that NAND has no trap instructions, no hardware interrupts, and no concept of a kernel mode. Universally among modern computers, if you try to read or write memory that belongs to another process or memory region, the operating system will trigger a segmentation fault and terminate your program. But NAND gives every program free rein to modify any part of the computer memory; hence, logically invalid memory operations are shamelessly allowed.
Now to actually do it.
Memory.jack
class Memory {
    static Array ptr;
    static int offset;

    /** Initializes the class. */
    // old: function void init() {
    function void init(int offset_) {
        let ptr = 0;
        // old: let ptr[2048] = 14334;
        // old: let ptr[2049] = 16384;
        let offset = offset_;
        let ptr[offset] = 24574 - offset;
        let ptr[offset + 1] = 24576;
    }

    /** Finds an available RAM block of the given size and returns
     * a reference to its base address. */
    function int alloc(int size) {
        var Array segment;
        var int length;
        if (size < 0) do Sys.error(5);
        // old: let segment = 2048;
        let segment = offset;
        ...
(Reminder that this is NAND’s custom programming language; you can read it like Java)
In terms of modifying the memory allocator, this is all that is needed! The memory allocator normally starts at memory address 2048 and initializes its linked list data structure over there, the start of the heap memory region. This modification tells the memory allocator to start at the integer offset_ provided as an argument during initialization.
Sys.jack
class Sys {
    /** Performs all the initializations required by the OS. */
    function void init() {
        // old: do Memory.init();
        do Memory.init(16384);
        do Math.init();
        do Screen.init();
        // removed: do Screen.clearScreen();
        do Output.init();
        do Main.main();
        do Sys.halt();
    }
    ...
The entrypoint of every NAND program is hardcoded to be Sys.init. When this function is called, it initializes the operating system libraries, runs Main.main to execute the program, and runs Sys.halt to terminate the program. It’s analogous to the _start entrypoint on Linux. We change Memory.init to take as input the start of the screen memory region, 16384, so the memory allocator allocates memory on the screen. Additionally, we have to disable the routine that automatically clears the screen at the start of every program because it zeros out the metadata that was just configured by Memory.init and renders the memory allocator unusable.
With that out of the way, let’s run it on a hello world program and see what happens!
Main.jack
class Main {
    function void main() {
        do Output.printString("Hello world!");
    }
}
Great, there’s our heap! If you look at the top left corner of the screen, you’ll see “Hello World!”. Printing the text to the screen overwrote some of the heap memory data! It looks wonderful, but this program is relatively uninteresting: it immediately terminates afterwards. Can we run the pong program initially shown with our modified, silly memory allocator? We need to do two more things.
Sys.jack
    ...
    /** Halts the program execution. */
    function void halt() {
        // Set a hardware-specific flag to tell the computer runtime to stop
        do Memory.poke(24576, 32767);
        while (~false) {}
    }
    ...
First, whenever a NAND program is finished or the operating system detects something logically invalid like dividing by zero, the operating system prints “ERR” followed by an error code number to the screen and calls Sys.halt to terminate the program [1]. We don’t want our program to terminate for any reason, so we simply delete this code.
Second, we add a prompt before running the program that asks the user to enter an integer offset to add to the screen region memory address (i.e. Memory.init(16384 + offset)). We do this because allocating heap data at the same place on the screen would produce the same results every time. It’s nice (as you will soon see) to introduce variety in the results. The code at this point doesn’t feel necessary to explain in full detail, so I’ve omitted it.
We’re off to the races. I spent a few days playing around with memory offsets and want to showcase the most interesting results. First up is memory offset 914:
There’s a lot to unpack here! You might have noticed that the ball appears to teleport—this is because it is literally overwriting the bits that control its position on the screen while in motion. The ball then overwrites the paddle’s position bits to the top of the screen and height bits to approximately three times as tall. Despite the extensive memory corruption, why did the pong program still have some semblance of being able to run? NAND uses the Harvard architecture, which means that the program memory is stored separately from the instruction memory. The instruction memory can only be written to once when loading a program; it is read-only during its execution. So, the logic to move the paddle around and bounce the ball will always still exist.
Next, here is memory offset -10:
The ball is doing some spooky things here—it’s supposed to move in a straight line, as a reminder 👻. The main takeaway with this program is that it somehow restarts itself at the end. Why this happens is simple: the program executed an instruction that set the program counter’s value to 0, effectively telling the program to jump to the first instruction and start over.
Bam, what happened here? I would be lying if I said I knew exactly what happened, but the program probably executed an instruction to jump to some random part of the instruction memory. At that point, all bets on what is supposed to happen are off: the abstractions of function calls and the program logic flow are destroyed. The processor might as well just be executing random instructions and interpreting random noise as memory. You invoked undefined behavior. NAND invoked Cthulhu.
I don’t have much to say about the next few videos, but you should give each one a moment to marvel at what is actually happening.
Here is memory offset 6:
Here is a run with extra modifications that I don’t remember:
Here is memory offset 577:
Here is memory offset 777:
This last one is my personal favorite because of how long it lasts before restarting the program. Try to play pong all the way to the end; it’s a fun challenge!
I invite you to try out this program with your own memory offsets at NAND’s website. To run it, click “Load example program”, select “CorruptedPong”, and click “Start”. Note that your results will not always be deterministic because the program memory persists when a program is manually reset. If you want the same behavior every time, or if you want to reproduce the results of the memory offsets in my videos, make sure to first clear the program memory by clicking “Dec” on the memory view and selecting “Clr”.
So, yeah. Your takeaway should be that this could technically happen the next time you read one byte of memory out of bounds in C. Until next time!
[1] If Memory.poke(24576, 32767) terminates the program, why does Sys.halt need an infinite loop? It boils down to an implementation detail in the computer runtime: the program will execute up to 30000 instructions at a time before polling memory address 24576 and testing if it is equal to 32767. So, it’s possible for other instructions to modify the program memory after setting the termination flag but before it is polled. It wouldn’t make sense for random memory to be modified after calling Sys.halt because it can be used to debug the program memory using the built-in memory view. As such, Sys.halt enters an infinite loop until the termination flag is actively polled again. ↩
Have you ever asked yourself which protocols get used when downloading pictures from the Perseverance Mars rover to Earth? I hadn’t thought about that either, until I came across an intriguing message on the internet, back in April 2024:
I’m looking for someone knowledgeable of quic/quinn to help us out for our deep space IP project. Would be of part-time consulting. Please dm me if interested.
The message itself is quite short and somewhat jargon-y, so it took me a few readings to fully realize what the project was about:
Working with QUIC: an internet protocol for reliable communication (i.e., what we typically use TCP for).
Working with Quinn: the most popular Rust implementation of the QUIC protocol.
Using QUIC to communicate between Earth and computers that are far, far away (e.g., other planets).
Business was going well on my end, and I didn’t have much time to dedicate to another consulting engagement, but… How could I say no to an interplanetary internet project? I had contributed to Quinn in the past [1], so I felt well-equipped to help out and decided to actually do it. This article provides a record of the adventure so far.
What are we trying to solve?
Deep space is big and full of challenges. The technical feat of running a network at all in such an environment is nothing short of a miracle. To some extent, the problem is solved: we (humanity) regularly exchange messages with rovers on Mars, and are even communicating with spacecraft outside of the solar system [2]. However, as more and more players enter the space exploration scene, limitations in the current architecture become apparent [3].
The effort to scale deep space networking is ongoing, and one of the promising alternatives to get there involves adopting the IP protocol suite. In that context, QUIC is to become the protocol of choice for reliable communication. That’s where this project comes in: our goal is to show that QUIC can reliably operate in deep space, and to provide guidance to anyone interested in deploying it.
QUIC and deep space
Why so much fuss about “showing that QUIC can reliably operate in deep space”? Couldn’t you just use it right away?
It turns out that communication in deep space is… complicated. First, there is enormous latency, to the extent that e.g. a message from Earth takes 3 to 23 minutes to reach Mars [4]. On top of that, connectivity is intermittent. For instance, it is frequently not possible to exchange radio signals between Earth and a Mars rover, with connectivity only being restored after some time [5].
These circumstances prevent QUIC from operating under its default configuration. For starters, any attempt to establish a connection would time out before having a chance to succeed. But the issue runs deeper. Even if you could magically establish a connection, other problems would arise and kill it in no time [6].
How can QUIC be viable, then? The attentive reader might already have spotted the answer: the problem isn’t QUIC, but its default configuration, which was designed with terrestrial internet in mind. What we need is a custom configuration, this time targeting deep space, with guidelines to tweak things further if a space mission deems it necessary [7].
Yes, QUIC is configurable to a high degree. This is an incredibly powerful feature: it lets a standards-compliant implementation run unmodified in a deep-space setting, as long as it exposes the necessary QUIC configuration knobs. Neat!
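To give a flavor of what those knobs look like in Quinn, here is a sketch of tweaking the transport parameters; the values are made-up placeholders, not a recommended deep-space configuration:

use std::{sync::Arc, time::Duration};
use quinn::TransportConfig;

// Made-up values for illustration; real deep-space settings need careful tuning.
fn deep_space_transport() -> Arc<TransportConfig> {
    let mut transport = TransportConfig::default();
    // Start from a round-trip estimate on the order of a Mars round trip,
    // instead of the terrestrial default of a few hundred milliseconds.
    transport.initial_rtt(Duration::from_secs(40 * 60));
    // Never give up on the connection just because the peer has been silent.
    transport.max_idle_timeout(None);
    Arc::new(transport)
}

The resulting Arc<TransportConfig> is then handed to the client and server configurations through their transport_config setters.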
What about venerable old TCP? People actually evaluated it in the early 2000s, but concluded that the protocol was unsuitable for deep space [8].
Conducting QUIC experiments
All right, we want to find configurations that let QUIC run efficiently in deep space. How do we actually do that?
First of all, let me share some necessary context. By “QUIC configuration”, I mean a specific set of parameters that govern the inner workings of the protocol: what is the round-trip time estimate before any packets have been exchanged? How long will a peer wait before concluding the connection was lost due to inactivity? Which congestion control mechanism will be used? You get the idea.
You could pick up a pen, paper, and a calculator to work out a set of values that probably work. However, we all know that no plan survives contact with the enemy. We need to see the parameters in action, and empirically determine that they truly work. Hence the idea of running experiments.
Running an experiment means configuring QUIC to use the desired parameters, then exchanging data over a network that emulates deep-space conditions. With this setup you can gather relevant metrics, evaluate the choice of parameters, try other ones as you see fit, and gradually develop a solid understanding about what works and what doesn’t.
Experiment setup, take one
Our experiment setup consists of a program with two components: a server application that exposes files over a QUIC connection, and a client application that downloads those files. They are connected to each other through a test network.
When I got involved in the project, the test network consisted of a set of virtual machines, carefully wired up to replicate a relevant subset of the deep-space network (e.g., the nodes involved when communicating between a NASA researcher’s laptop and a Mars rover). Not only did the network mirror real nodes, it also had artificial delays and intermittence to match the conditions in deep space! It’s a clever setup and still in use to this day.
There is one little problem, though, which you might be thinking of already. Once you introduce real deep-space latencies in your network, running an experiment can take a long long time. Want to test downloading a file from a Mars rover? You better make yourself a coffee in the meantime, because round-trip time to Mars can get as high as 46 minutes. By the way, did I already mention that things can take even longer in the presence of intermittence? Yup, iteration speed is a nightmare.
Unlocking instantaneous experiments
When I saw our limited iteration speed, I took that up as a personal challenge. “Not on my watch!”, was my inner war cry. After all, I’m convinced that instantaneous feedback is a prerequisite to productive research, not just a nice-to-have feature.
My hypothesis was that we could get instant runs by controlling two things:
The clock. Our application’s clock should advance way faster than normal. Ideally, the clock would simply jump in time whenever the process got blocked due to a timer waiting to elapse. If done right, time from start to finish would only depend on your computer’s speed.
Packet IO. Even with a time-jumping clock, the application would still have to wait when reading packets from the network. Progress would then not be blocked by timers (which cause time jumps), but by IO (which requires a real wait). The solution? Get rid of packet IO! Instead, run the client and server sides in a single process, and have them communicate over a simulated network (also running in that process). Such an in-process network, programmed and controlled by us, would have link delays subject to the application’s clock. Hence, they would be skipped like any other delays in the program.
You might be wondering: is there any QUIC implementation that lets you control the clock and the underlying network? Well… Quinn does! The design of the library is incredibly modular and provides the necessary extension points.
Clock time jumps, for instance, were trivial to enable. Quinn delegates timekeeping to the async runtime, and the runtime we use (tokio) ships with a feature to automatically advance the clock in the exact way we need. We turned that on through Builder::start_paused and it Just Worked [9].
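To illustrate (this is not the workbench’s actual code): with tokio’s test-util feature enabled, a paused clock auto-advances whenever every task is blocked on a timer, so even a Mars-sized delay completes immediately in wall-clock terms.

// Requires tokio with the "macros" and "test-util" features.
#[tokio::test(start_paused = true)]
async fn timers_jump_instead_of_waiting() {
    let start = tokio::time::Instant::now();
    // A "23 minute" sleep returns instantly because the paused clock jumps forward.
    tokio::time::sleep(std::time::Duration::from_secs(23 * 60)).await;
    assert!(start.elapsed() >= std::time::Duration::from_secs(23 * 60));
}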
Switching to a simulated in-process network was more involved, because it required programming a network simulation from scratch in the first place. I kept gnawing at the problem and eventually cracked it, then plugged the simulated network into Quinn through the AsyncUdpSocket and UdpPoller traits.
Did the effort pay off? Hell yes! Now we can run file downloads over QUIC in an instant… and we even got some extra goodies in addition to being fast. By the way, we are keeping the old setup around, for additional validation of important test cases.
Bonus: determinism and debuggability
With full control over the network, it became possible to make the workbench fully deterministic. In contrast to runs in the old setup, now two runs with the same parameters always yield the same output. This is crucial for reproducible experiments and has been incredibly useful so far (i.e., no chance of “works on my machine” situations).
Debuggability received some love too. As packets travel through the in-process network, each peer records them in a synthetic .pcap file for later inspection. That way, you can use external tools such as Wireshark to troubleshoot any issues or merely to see what is being transmitted over the simulated wire. This small investment has paid for itself handsomely. It grants you x-ray vision into what would otherwise be a black box. Debuggable systems rock!
Wrapping up
So… which protocol gets used when downloading pictures from the Perseverance Mars rover to Earth? I was told it’s a low-level protocol called CFDP… for now. Maybe in a few years the answer will be QUIC!
ACKnowledgements
My work would not have been possible without Marc Blanchet, who is a passionate advocate of IP in deep space. He has generously funded the project, answered my questions with infinite patience, and even reviewed early drafts of this blog post. He also wanted to open source the experimental setup I developed, so anyone else can run experiments too. You can find the repository here.
Another honorable mention goes to the Quinn community, especially to Benjamin and Dirkjan, creators of the library. They have designed a stellar API and, together with other members of the community, helped us out with useful advice whenever we encountered problems along the way. If you are looking for a QUIC library in the Rust ecosystem, I’d say Quinn is your best bet.
How Many People Has the U.S. Killed in Boat Strikes?
Intercept
theintercept.com
2025-11-17 18:15:24
The Intercept is keeping count of all publicly declared U.S. attacks on boats in the Caribbean Sea and Pacific Ocean.
The post How Many People Has the U.S. Killed in Boat Strikes? appeared first on The Intercept....
Since September, the Trump administration has conducted an undeclared war in the Caribbean Sea and Pacific Ocean, killing scores of civilians. The Intercept is chronicling all publicly declared U.S. attacks and providing a tracker with information on each strike.
The administration insists the attacks are permitted because the U.S. is engaged in “non-international armed conflict” with “designated terrorist organizations,” or DTOs. President Donald Trump has justified the attacks, in a War Powers report to Congress, under his Article II constitutional authority as commander in chief of the U.S. military and claimed to be acting pursuant to the United States’ inherent right of self-defense as a matter of international law. The Justice Department’s Office of Legal Counsel has also produced a classified opinion that provides legal cover for the lethal strikes.
Experts in the laws of war and members of Congress, from both parties, say the strikes are illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence. The summary executions are a significant departure from standard practice in the long-running U.S. war on drugs, in which law enforcement agencies arrested suspected drug smugglers.
The Pentagon has repeatedly withheld information on the attacks from members of Congress and the American public, despite mounting questions from lawmakers about the legality of these deadly strikes.
So The Intercept is publishing a strike tracker documenting America’s newest war. The locations and casualty figures are drawn from information provided by U.S. Southern Command, which oversees military operations in Latin America and the Caribbean, the Office of the Secretary of War, and social media posts by Trump and War Secretary Pete Hegseth.
Yesterday, at the direction of President Trump, two lethal kinetic strikes were conducted on two vessels operated by Designated Terrorist Organizations.
These vessels were known by our intelligence to be associated with illicit narcotics smuggling, were carrying narcotics, and…
pic.twitter.com/ocUoGzwwDO
Earlier today, at the direction of President Trump, the Department of War carried out a lethal kinetic strike on yet another narco-trafficking vessel operated by a Designated Terrorist Organization (DTO) in the Eastern Pacific.
Death toll:
14
killed in three separate strikes — with one reported survivor (eight aboard one boat, four on another and three on the last). The Mexican Navy failed to find the survivor, who is presumed to be dead, bringing the total to 15.
Yesterday, at the direction of President Trump, the Department of War carried out three lethal kinetic strikes on four vessels operated by Designated Terrorist Organizations (DTO) trafficking narcotics in the Eastern Pacific.
Today, at the direction of President Trump, the Department of War carried out yet another lethal kinetic strike on a vessel operated by a Designated Terrorist Organization (DTO). Yet again, the now-deceased terrorists were engaged in narco-trafficking in the Eastern Pacific.
Target:
Ejército de Liberación Nacional (Colombia)
On October 17th, at the direction of President Trump, the Department of War conducted a lethal kinetic strike on a vessel affiliated with Ejército de Liberación Nacional (ELN), a Designated Terrorist Organization, that was operating in the USSOUTHCOM area of responsibility.
Earlier this morning, on President Trump's orders, I directed a lethal, kinetic strike on a narco-trafficking vessel affiliated with Designated Terrorist Organizations in the USSOUTHCOM area of responsibility. Four male narco-terrorists aboard the vessel were killed in the…
pic.twitter.com/QpNPljFcGn
What I bring you today is a real gem that I have long been searching for, and I have finally managed to get my hands on a copy
1
. It is a collection of 672 maps found in the
Great Korean Encyclopaedia
2
, North Korea’s reference encyclopaedia. More specifically, it is an electronic edition published on CD in the first decade of the 2000s
3
.
Work on this encyclopaedia began in 1964, when Kim Il Sung established a compilation committee with the aim of bringing together all the knowledge and guidelines that a good Korean should follow
4
. The work spanned several decades, and the result was published in thirty volumes between 1995 and 2002. The encyclopaedia contained more than 100,000 words, 25,000 images and photographs, and 5,200 historical figures.
And maps, lots of maps.
According to the prevailing narrative in North Korea, the war was won by the communists and since then, the entire Korean peninsula has remained united under the rule of the Korean Workers’ Party. Therefore, when looking at the maps in this atlas, it should come as no surprise that Korea is always shown as one country, with no reference to the other country that exists at the southern tip of the peninsula.
Physical map of Korea
Map of the provinces of Korea
The mineral resources of Korea
In addition to generic maps of Korea, like any good atlas, the encyclopaedia also includes detailed maps of each of the provinces and counties that make them up. Again, it makes no distinction between north and south, maintaining that narrative of unity.
Gyeonggi Province, South Korea
Jeju Province, South Korea
When we delve into the global vision of North Korean cartography, things become even more interesting. So I’ll start simply with the world map.
North Korean world map
This North Korean world map is centred on the Pacific Ocean, which gives Korea a privileged position on the global stage. This is nothing new, but what is new is how North Korea depicts its enemies. Can you spot them?
Yes, they are the only two countries painted in dark grey: the United States and Japan. This pattern can also be seen on the political map of Korea
5
, and is consistent across virtually all the political maps in the atlas. In the map of Europe, which you can see below, this colour is also used for the United Kingdom and France, but in their case it is not applied consistently across all the political maps.
Map of America
Map of Asia
Map of Europe
Map of Africa
The representation of the continents is also of some interest. Apart from the idea of showing enemies in a consistent colour, I like the choice of projections. Instead of opting for the classic projections seen in Western cartography, the authors of these maps choose projections that better balance the shape and size of different countries. They take advantage of the fact that only one region of the globe needs to be represented.
The encyclopaedia also includes maps of all the oceans, which incorporate ocean current patterns.
Map of the Atlantic Ocean
Map of the Pacific Ocean
Map of the Indian Ocean
This collection would not be complete without country maps. Here, once again, we find a strong emphasis on the geopolitical situation and North Korea’s view of the world. This is consistent with the political maps of each continent, but when viewed separately, it is even more evident.
First, the maps dedicated to enemies.
Map of the United States
Map of Japan
Beyond these obvious things, there are more subtle issues that can be understood by looking at the complete list. The only country that does not have a dedicated map is Israel. In fact, Israel does not appear under that name on any map, but the territory of Israel appears as Palestine on the map of Asia and on all maps of surrounding countries. In the one for Jordan, it is also clarified that Palestine is a territory under Israeli occupation.
Map of Lebanon
Map of Egypt
Map of Jordan
Another curious detail is that the atlas includes a country with limited international recognition
6
: the Sahrawi Arab Democratic Republic, in Western Sahara.
Map of Western Sahara
And although it may not be of general interest, since most readers come from countries with large English-speaking populations, here you can see how some of those countries are represented.
Map of the United Kingdom
Map of Canada
Map of Australia
Map of India
If you are interested in any other map, please let me know in the comments and I can update the article.
Acknowledgement:
I owe today’s article entirely to
Pedro Zurita
, the man behind the
Mapoteca de pZZ
, thanks to whom I got a copy of this fabulous atlas. I recommend that you follow him on
Instagram
, where he posts many maps, especially of Mexico, and on
TikTok
or
YouTube
, where he posts interesting videos on cartography and geography (in Spanish).
In our
0.5.0 release blog post
, we announced that work was underway on a complete rewrite of Apache Iggy's core architecture using io_uring with a thread-per-core, shared-nothing design. This architectural redesign aims to further improve performance, reduce tail latencies, and lower resource usage by
leveraging io_uring's completion based I/O model
.
As part of this rewrite, we migrated from Tokio to
compio
, a completion-based async runtime that allows us to better utilize io_uring capabilities. However, it also presents different challenges when integrating with the wider Rust ecosystem.
We came across one such challenge when we needed to add WebSocket support to the Iggy server. WebSockets are useful for browser clients and streaming dashboards. The Rust ecosystem has excellent WebSocket libraries like
tungstenite
and
tokio-tungstenite
but they were built when poll-based I/O was the dominant paradigm. They expect shared buffers and readiness-based I/O, which is fundamentally incompatible with compio's completion-based model and its owned buffers.
Here we describe our journey of building compio-ws, a WebSocket implementation for the compio async runtime, the engineering challenges we faced bridging two fundamentally different I/O models, and how that work finally led to us
contributing
to
compio
.
Understanding the architectural divide (poll vs completion)
Let's see why poll-based libraries can't easily work with completion-based runtimes by examining the traits of compio and tungstenite.
Tungstenite is the de-facto Rust WebSocket protocol implementation. It handles all the WebSocket protocol logic:
Frame parsing and generation
Message fragmentation
Control frames (ping, pong, close)
Protocol violations and error handling
Text encoding validation
Our initial thought was to contribute to async-tungstenite, which provides runtime-agnostic WebSocket support. It has adapters for multiple async runtimes through a trait-based abstraction. But as we dug deeper into adapting it for compio, we realized there was a fundamental incompatibility.
The realization: async-tungstenite is strongly coupled to poll-based IO
The incompatibility becomes clear when we compare compio's AsyncRead trait with the futures crate's AsyncRead:
// Compio's async read - can't be made into a poll-based trait
async fn read<B: IoBufMut>(&mut self, buf: B) -> BufResult<usize, B>;
//                                         ^ owned buffer

// Futures' AsyncRead - expects polling with borrowed buffers
fn poll_read(
    self: Pin<&mut Self>,
    cx: &mut Context<'_>,
    buf: &mut [u8],   // <- borrowed buffer
) -> Poll<io::Result<usize>>;
These are fundamentally different programming models that don't compose cleanly.
The key insight: we need a bridge layer that provides synchronous Read/Write traits to tungstenite while internally using compio's async owned-buffer I/O.
Compio already provides
SyncStream
in the
compio-io::compat
module specifically for interoperating with libraries that expect synchronous I/O traits. It's a clever structure that maintains internal buffers to bridge the async/sync boundary:
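The exact definition lives in compio-io, but its rough shape can be inferred from how it is used in the snippets below (the field names and the Buffer placeholder here are illustrative, not compio's literal layout):

// Rough sketch of SyncStream's shape, inferred from the usage shown below.
// `Buffer` is only a stand-in for compio-io's internal fixed-capacity buffer.
type Buffer = Vec<u8>; // illustrative placeholder

pub struct SyncStream<S> {
    stream: S,            // the underlying async (compio) stream
    eof: bool,            // set once the async read side reaches EOF
    read_buffer: Buffer,  // filled by fill_read_buf(), drained by read()
    write_buffer: Buffer, // filled by write(), drained by flush_write_buf()
}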
The default
DEFAULT_BUF_SIZE
is 8KB, but you can specify any capacity you want. Once created, the buffer capacity is fixed: it never grows. This can be a problem, as we will discuss below.
Here's how it works:
Reading (sync to async):
impl<S> Read for SyncStream<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let available = self.read_buffer.slice(); // Get buffered data
        if available.is_empty() && !self.eof {
            // No data available - need to fill from async stream
            return Err(io::Error::new(
                io::ErrorKind::WouldBlock,
                "need to fill the read buffer",
            ));
        }
        // Copy from internal buffer to caller's buffer
        let to_read = available.len().min(buf.len());
        buf[..to_read].copy_from_slice(&available[..to_read]);
        self.read_buffer.advance(to_read);
        Ok(to_read)
    }
}

impl<S: AsyncRead> SyncStream<S> {
    pub async fn fill_read_buf(&mut self) -> io::Result<usize> {
        // Async operation to fill the internal buffer
        let len = self
            .read_buffer
            .with(|b| async move {
                let len = b.buf_len();
                let b = b.slice(len..);
                self.stream.read(b).await.into_inner()
            })
            .await?;
        if len == 0 {
            self.eof = true;
        }
        Ok(len)
    }
}
Writing (sync to async):
impl<S> Write for SyncStream<S> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        if self.write_buffer.need_flush() {
            // Buffer full - need async flush
            return Err(io::Error::new(
                io::ErrorKind::WouldBlock,
                "need to flush the write buffer",
            ));
        }
        // Copy into internal buffer
        let len = buf.len().min(self.write_buffer.remaining_capacity());
        // ... copy data ...
        Ok(len)
    }

    fn flush(&mut self) -> io::Result<()> {
        // Just return Ok - actual flushing happens in flush_write_buf
        Ok(())
    }
}

impl<S: AsyncWrite> SyncStream<S> {
    pub async fn flush_write_buf(&mut self) -> io::Result<usize> {
        // Async operation to flush the internal buffer
        let len = self.write_buffer.flush_to(&mut self.stream).await?;
        self.stream.flush().await?;
        Ok(len)
    }
}
The pattern is:
1. Sync read() / write() operate on internal buffers.
2. Return WouldBlock when the buffer is empty/full.
3. Call async fill_read_buf() / flush_write_buf() to service the buffers.
4. Retry the sync operation.
This allows tungstenite to work with compio streams:
let stream = TcpStream::connect("127.0.0.1:8080").await?;
let sync_stream = SyncStream::new(stream);
let mut websocket = WebSocket::from_raw_socket(sync_stream, Role::Client);

loop {
    match websocket.read() {
        Ok(msg) => process(msg),
        Err(Error::Io(e)) if e.kind() == ErrorKind::WouldBlock => {
            // Need to fill the read buffer
            websocket.get_mut().fill_read_buf().await?;
            continue;
        }
        Err(e) => return Err(e),
    }
}
The problem with fixed buffer size in
SyncStream
The initial implementation worked perfectly for messages smaller than the buffer. WebSocket handshakes completed, ping/pong frames exchanged, text messages flowed. Everything seemed fine.
Then we tested with larger messages, and the performance collapsed.
The Problem Scenario: Sending a 16MB binary message through WebSocket with the default 8KB SyncStream buffer:
Here's what happens inside:
Message: 16MB
  ↓ Tungstenite frames it
  ↓ Calls write() on SyncStream with chunks
  ↓ SyncStream buffer: 8KB capacity (fixed!)

Round 1: Write 8KB → Buffer full → WouldBlock → flush_write_buf() → 8KB to kernel
Round 2: Write 8KB → Buffer full → WouldBlock → flush_write_buf() → 8KB to kernel
...
Total: 2048 round trips (16MB / 8KB = 2048)
The measurements were bad:
Time: Over 100 seconds to send 16MB
Memory: Excessive allocations from repeated buffer handling. At times this led to the WebSocket process being OOM-killed by the OS.
Each
WouldBlock
-> async call -> retry cycle involved:
Saving state in tungstenite
Suspending the sync call
Executing async flush
Resuming the sync call
The sync trait contract requires immediate success or
WouldBlock
. There's no way to say "I need a bigger buffer" or "give me partial progress while I grow the buffer." Each
WouldBlock
forces a complete async round trip.
We tried the obvious fix of increasing the
SyncStream
buffer size to 1MB. This worked: compio-ws passed all tests in the standard Autobahn test suite. But the solution is still fragile when the user doesn't know the peak memory usage of their workload, and it can lead to overprovisioning and wasted server resources.
Cores:
6 physical cores with hyperthreading (12 vCPUs total)
Cache:
36 MB L3 cache
Memory:
96 GB RAM
Storage:
Local NVMe SSD
Common benchmark setting used:
enable fsync and fsync every single message (
export IGGY_SYSTEM_PARTITION_ENFORCE_FSYNC=true
and
export IGGY_SYSTEM_PARTITION_MESSAGES_REQUIRED_TO_SAVE=1
)
The results show measurable but
reasonable
overhead from the WebSocket layer:
Producer latency:
WebSocket adds ~0.8-1.0ms across most percentiles (30-40% higher than TCP). Even at P9999, we achieve 9.48ms latency - impressive for a durable workload with fsync-per-message enabled.
Consumer latency:
WebSocket shows roughly 2× the latency of raw TCP, adding ~0.7-1.0ms. The P9999 of 2.52ms demonstrates consistent performance even at high percentiles.
Under durability constraints, achieving single-digit millisecond latencies at P9999 for producers and sub-3ms for consumers is quite good.
Adapter layer cost:
The current implementation uses GrowableSyncStream as a bridge between sync and async I/O models. The buffer grows in linear increments, which can be suboptimal for large messages. However, for this first implementation enabling WebSocket support in a completion-based runtime, the performance is acceptable.
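To make the growth-policy trade-off concrete, here is a small, hypothetical sketch (not the actual GrowableSyncStream code; the 64KB step and the doubling factor are illustrative assumptions):

// Hypothetical capacity policies for a growable bridge buffer (illustrative only).

// Linear growth: a fixed step per overflow. Pushing a 16MB message through an
// 8KB buffer with a 64KB step still triggers roughly 16MB / 64KB = 256 regrowths.
fn next_capacity_linear(current: usize, step: usize) -> usize {
    current + step
}

// Exponential growth: double on each overflow. The same message needs only
// about log2(16MB / 8KB) ≈ 11 regrowths before the buffer fits a whole frame.
fn next_capacity_exponential(current: usize) -> usize {
    current.saturating_mul(2)
}

This is essentially what the "exponential buffer growth and pooling" item in the contribution list further down refers to.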
WebSocket protocol overhead:
WebSocket framing: each message needs frame headers.
Masking: an XOR of each payload byte with a 4-byte key (sketched below)
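As a quick illustration of that masking step, here is the RFC 6455 client-side masking rule in isolation (a minimal sketch, not compio-ws's actual code):

// RFC 6455 masking: every payload byte is XORed with one byte of the
// frame's 4-byte masking key (illustrative helper, not library code).
fn mask_payload(payload: &mut [u8], key: [u8; 4]) {
    for (i, byte) in payload.iter_mut().enumerate() {
        *byte ^= key[i % 4];
    }
}

Applying the same function twice restores the original payload, which is why the server runs the identical XOR to unmask incoming frames.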
Tail latency consistency:
GrowableSyncStream maintains roughly proportional overhead even at high percentiles. The P9999 shows larger absolute differences but percentage-wise remains consistent with lower percentiles, indicating predictable behavior under load.
Current state in Iggy:
WebSocket support is currently implemented with long polling on the consumer side; push-based notifications are planned for future releases. This has different implications for producers and consumers.
For consumers (receiving messages):
The current long polling implementation means consumers must repeatedly request new messages, even when none are available. This is inefficient, especially for low-power edge clients.
For producers (sending messages):
WebSocket provides immediate benefits. Instead of establishing new connections or using inefficient long polling, producers can maintain a persistent WebSocket connection and push messages directly to the server. This is particularly valuable for:
Browser-based producers sending events or telemetry
Edge devices reporting sensor data or metrics
Dashboard applications sending commands or configuration updates
What's next:
While compio-ws with GrowableSyncStream enables WebSocket support today, significant optimization potential remains. A native WebSocket implementation designed for owned buffers from the ground up could eliminate the adapter layer overhead and unlock the full performance potential of io_uring's zero-copy capabilities.
We invite the Rust community to contribute:
Optimize
GrowableSyncStream
: Implement exponential buffer growth and pooling
Owned buffer WS protocol implementation:
Build WebSocket from scratch with owned buffers
Push-based consumers:
Help implement server-push notifications in Iggy
Edge client libraries:
Create WebSocket SDKs optimized for resource-constrained devices
Every log must fire: applying Chekhov's gun to cybersecurity incident reports
The quiet rule that
Anton Chekhov
slipped into literary history, the idea that a gun hanging on the wall in act one must eventually go off, holds a surprisingly modern lesson for security teams. In an age where organizations drown in logs, alerts and dashboards, the most effective incident reports are those where every detail the reader encounters has purpose, context and a satisfying resolution.
From theatre stage to security war room
Chekhov’s narrative principle, often summarized as
if a gun appears on stage, it must eventually be fired
, was never meant as a rigid law about weapons. It was a call for narrative discipline. Every object, line of dialogue or setting on stage should either drive the story forward or reveal something essential. Nothing should remain as meaningless decoration.
In cybersecurity, the theatre becomes the incident response war room. A security report that treats details like abandoned props, from a lonely failed login to an unexplained outbound connection, breaks the implicit pact with the reader. The audience of a report, usually a mix of executives, engineers and legal teams, expects that every technical element presented contributes to understanding what happened and what must change next.
Modern incident-reporting frameworks, such as the guidance published by
ENISA
on
incident reporting for operators of essential services
, already stress clarity, proportionality and traceability. Chekhov’s narrative lens adds a softer but powerful requirement: if you introduce an indicator, an assumption or a timestamp, you owe the reader a payoff.
Either you explain why it matters or you explain convincingly why it does not.
Designing reports that leave no loose ends
The best incident reports feel almost like well-edited short stories. They open with a clear promise, usually a short abstract that answers the quiet question in the reader’s mind: what went wrong and how serious is it. From there, each section should progressively resolve uncertainty rather than create new confusion. An internal report that ends with more open questions than it started with might satisfy curiosity but fails as a decision-support tool.
Publicly disclosed incidents, such as those described in the yearly
Verizon
Data Breach Investigations Report
, show how a disciplined narrative can guide even non-technical readers through complex technical realities. The structure is not accidental. It follows the logic of discovery, containment and recovery, ensuring that each introduced element returns later in the story with a clear explanation of its role.
In practice, this means resisting the temptation to stuff reports with raw screenshots from tools like
Splunk
or
Elastic
, or to copy entire command outputs just because they look impressively technical. A line that mentions an IP range, a registry key or a suspicious binary without ever clarifying its significance is the narrative equivalent of an unfired gun.
If an artefact is important enough to mention, it is important enough to close the loop on its meaning.
Chekhov’s gun in the age of infinite telemetry
Modern infrastructures produce such vast amounts of telemetry that selecting what to include in a report becomes an editorial act. Cloud platforms like
AWS
and
Azure
, endpoint solutions, SaaS audit logs and network sensors generate more potential narrative objects than any playwright could imagine. In this environment, Chekhov’s insight becomes a survival skill.
Security teams often lean on structured methodologies, such as the
MITRE ATT&CK
framework, to categorize adversary behaviour. These frameworks provide a vocabulary for describing techniques, but they do not decide which events deserve screen time inside the report. The narrative principle fills that gap by forcing authors to ask, for each log fragment or indicator, a simple question:
what does this tell the reader that they absolutely need to know to understand the incident
.
A well-crafted report from a major breach at a company like
Equifax
or
Microsoft
rarely drowns readers in raw data. Instead, it curates signals. For every highlighted alert or misconfiguration there is an eventual explanation, a mitigation or a lesson. The result is not just elegance on the page but operational clarity in the response room, where decision makers can act without guessing which of the many details are decorative and which are decisive.
Writing for readers who ask one last question
There is a simple test to evaluate whether an incident report respects the spirit of Chekhov’s gun. Read it through and, at the end, note how many new questions you have that relate directly to elements explicitly mentioned in the text. If the list is long, the report has probably introduced too many narrative guns without firing them.
Guidelines from organizations like
NIST
, for example in the
Computer Security Incident Handling Guide
, highlight the need for incidents to be documented so they can be analysed later. Yet, even a meticulously logged incident can remain opaque if the report fails to connect events to outcomes.
A technically precise but narratively fragmented document still leaves readers wandering among unexplained artefacts and half-finished hypotheses.
This is where style becomes a security control. The author of an incident report, whether based in
London
or
San Francisco
, can choose to treat the document as a story with a beginning, middle and end. The beginning sets the stakes and scope. The middle follows a coherent timeline that shows how signals accumulated into detection and response. The end closes every introduced thread, even if some threads conclude with “we verified that this was unrelated” rather than with a dramatic root cause.
Practical habits for narrative discipline in reports
Bringing Chekhov’s sensibility into everyday reporting does not require turning analysts into novelists. It calls for small, repeatable habits. Before publishing, the report owner can scan for unexplained artefacts such as lone hashes, single IP addresses or tool outputs without commentary. Each of these items is a tiny narrative promise, and the review process should check that the promise is either fulfilled or explicitly withdrawn.
Incident management platforms and playbooks, like those described in
SANS Institute
materials on
effective incident handling
, already formalize technical steps. Adding a narrative checklist creates a complementary layer. For every section, the author can ask whether a curious but informed reader would still need to ask
why was this detail important
after reading it. If the answer is yes, the paragraph probably needs another sentence.
Over time, teams that adopt this mindset tend to converge on a recognizable house style. Their reports may remain serious, compliant and actionable, but they read with the ease of a well-edited magazine feature. Patterns of attack become clearer, institutional memory becomes more reliable and post-incident reviews feel less like archaeology and more like guided tours. In that environment, even the most technical readers secretly appreciate that
every log, clue and configuration that appears on the page has earned its place by the time the story ends
.
EEG-based neurofeedback in athletes and non-athletes
University Research Center in Psychology (CUIP), Faculty of Human and Social Sciences (FCHS), University of Algarve, 8005-139 Faro, Portugal
2
Institute of Psychology, University of Gdańsk, 80-309 Gdańsk, Poland
*
Author to whom correspondence should be addressed.
Submission received: 1 September 2025
/
Revised: 13 October 2025
/
Accepted: 21 October 2025
/
Published: 3 November 2025
Abstract
Background
: Electroencephalography (EEG) is a non-invasive technique that records millisecond-scale cortical electrical activity using scalp electrodes. In EEG-based neurofeedback (NFB), these signals are processed to provide real-time feedback that supports self-regulation of targeted brain rhythms; evidence suggests improvements in cognitive and neurophysiological performance in athletes and non-athletes. However, methodological inconsistencies—such as limited blinding, poor sham control, and outdated approaches to EEG spectral analysis—restrict reproducibility and hinder cumulative progress in the field.
Methods
: This scoping review aimed to identify and analyze the methodological characteristics, outcome measures, and reproducibility gaps in EEG-based NFB studies involving athletes and non-athletes. Following PRISMA-ScR guidelines, we systematically searched academic databases (PubMed, Embase, Scopus, Web of Science, PsycINFO, and Cochrane Library), as well as gray literature sources (ProQuest Dissertations, LILACS, Tripdatabase, and Google Scholar). Of 48 included studies, 44 were published in international peer-reviewed journals and 4 in regional journals. Data were extracted on study design, participant population, NFB protocols, targeted EEG rhythms, cognitive and neurophysiological outcomes, and methodological rigor.
Results
: The review revealed substantial heterogeneity in targeted rhythms, protocols, and reporting standards. None of the studies employed modern spectral parameterization methods (e.g., FOOOF), while only 29% used active sham protocols and 6% employed inert sham conditions. Reporting of blinding procedures and follow-up assessments was limited or absent in most studies.
Discussion
: This review highlights critical methodological shortcomings that may bias interpretations of NFB effects in sport and cognitive domains. To strengthen future research, studies should rigorously implement sham and blinding procedures, ensure transparent reporting of EEG metrics, and adopt open-science practices, including modern approaches to spectral parameterization.
1. Introduction
Electroencephalogram-based neurofeedback (EEG-NFB) has emerged as a promising non-invasive intervention to enhance cognitive and psychophysiological functioning, including attention, emotion regulation, and motor preparation [
1
,
2
,
3
,
4
]. In sports, NFB is increasingly applied to improve performance under pressure and support resilience across disciplines such as football, archery, judo, and swimming [
5
].
Despite encouraging findings, such as improvements in attention, emotion regulation, and athletic performance reported in previous EEG-NFB studies [
2
,
3
,
4
,
5
], the current evidence is constrained by major methodological limitations. In the sports context, EEG-NFB has been increasingly applied to enhance attentional focus, optimize sensorimotor rhythm regulation, and support stress management during competition. Studies have reported improvements in accuracy (e.g., archery, shooting), faster reaction times, and decision-making in football, highlighting its potential to strengthen both cognitive and motor domains in athletes [
6
,
7
,
8
,
9
,
10
,
11
,
12
,
13
,
14
,
15
,
16
,
17
].
Studies differ in protocol duration, targeted brain regions, and outcome measures, while most lack rigorous sham or double-blind designs, raising concerns about expectancy and placebo effects [
18
,
19
]. Such inconsistencies limit the attribution of NFB-induced changes to genuine neurophysiological mechanisms.
To address these challenges, the CRED-nf checklist [
20
] established standards for study design and reporting, including preregistration, detailed feedback specifications, sham controls (active and inert), and transparent reporting of outcomes. However, adherence remains inconsistent, and many studies still rely on closed-source analysis pipelines. Proprietary implementations of Fast Fourier Transform (FFT) parameters—such as window length or artifact rejection—are rarely disclosed, undermining reproducibility [
21
,
22
].
Notably, none of the reviewed studies employed modern spectral parameterization approaches, such as Fitting Oscillations & One-Over-F (FOOOF [
23
]), which separate periodic and aperiodic components to strengthen neurophysiological validity. This methodological gap is especially critical for sports applications, where subtle cognitive and performance-related changes demand precise measurement [
2
,
24
].
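For readers less familiar with this approach, the model referenced here (see [23]) can be summarized as fitting the log-power spectrum as the sum of an aperiodic component and a set of Gaussian peaks, commonly written as

P(f) = L(f) + \sum_{n=1}^{N} G_n(f), \quad L(f) = b - \log_{10}\!\left(k + f^{\chi}\right), \quad G_n(f) = a_n \exp\!\left(-\frac{(f - c_n)^{2}}{2 w_n^{2}}\right),

where b is the broadband offset, k the knee parameter, χ the aperiodic exponent, and a_n, c_n, and w_n the amplitude, center frequency, and bandwidth of the n-th oscillatory peak. Conventional fixed-band power metrics collapse these periodic and aperiodic terms into a single value, which is precisely the confound highlighted throughout this review.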
Finally, the drive for ecological validity—defined as the extent to which experimental findings can be generalized to real-world contexts—has led to portable EEG systems and semi-natural group protocols [
13
,
25
,
26
,
27
]. These approaches aim to capture cognitive and neurophysiological processes in more authentic environments, such as practice and competition, thereby increasing the applicability of research findings. While promising, they also introduce procedural challenges, including greater susceptibility to noise and artifacts, which require careful methodological control.
The aim of this scoping review is to systematically map methodological and analytical gaps in EEG-NFB studies, evaluate the current state of interventions in both athletes and non-athletes, and identify priorities for advancing transparency, reproducibility, and neurophysiological validity in future research.
2. Materials and Methods
2.1. Protocol and Registration
This scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR [
28
]). The protocol was developed a priori, following PRISMA-P guidelines [
29
] and methodological recommendations from the Joanna Briggs Institute [
30
].
To ensure transparency, reproducibility, and methodological rigor, the review was prospectively registered on the Open Science Framework (OSF). The protocol defines the eligibility criteria, outlines procedures for study selection, data extraction, and synthesis, and specifies the use of Rayyan software for independent screening by two reviewers [
31
].
2.2. Eligibility Criteria
The research question was developed using the PCC framework—Population, Concept, and Context [
30
]. The target population included adults ≥ 18 years, divided into three groups: (i) elite athletes, training and competing at professional or international levels, with weekly physical activity typically >9 METs (vigorous-intensity [
32
]); (ii) amateur athletes, engaged in regular but non-professional sports practice, typically 3–9 METs; and (iii) non-athletes, healthy adults without organized sport participation, typically <3 METs. These classifications were based on the Compendium of Physical Activities [
32
].
The concept focused on EEG-NFB as the primary intervention. Only studies reporting objective neurophysiological or cognitive outcomes were eligible (e.g., event-related potentials (ERP), quantitative electroencephalography (qEEG), low-resolution electromagnetic tomography (LORETA); or validated measures of attention, working memory, reaction time). Studies based solely on self-reported questionnaires or satisfaction ratings were excluded.
The context included sports and laboratory settings. Eligible studies could come from any country, year, or language, provided the full text was accessible and could be accurately translated into English or Portuguese.
Only original empirical studies were included: randomized controlled trials (RCTs), quasi-experimental, cohort, observational, qualitative, or mixed-method studies with explicit NFB interventions. Exclusions comprised systematic reviews, meta-analyses, theoretical papers, editorials, and conference abstracts—although these were screened for additional references.
Studies were excluded if they: (i) involved clinical populations, (ii) failed to report detailed NFB protocols or outcomes, (iii) assessed only non-specific effects (e.g., expectancy, placebo), or (iv) lacked peer-review. Full-text availability was mandatory.
2.3. Information Sources and Search Strategy
The research team conducted an extensive literature search across seven academic databases: PubMed/MEDLINE, Embase, Scopus, Web of Science, PsycINFO, Cochrane Library, and LILACS. Both controlled vocabulary terms (MeSH, DeCS) and free-text terms were used to link NFB with EEG, cognitive performance, ERP, qEEG, and LORETA. Boolean operators and truncations were adapted for each database.
To minimize publication bias, grey literature sources were also searched, including Google Scholar, ProQuest Dissertations & Theses, Trip Database, and Dissertations Citation Index. Reference lists of included studies were hand-searched to identify additional articles.
No restrictions were applied regarding year, language, or country of origin, provided accurate translation could be ensured. References were deduplicated in EndNote X9 (Clarivate Analytics), and records were screened independently by two reviewers using Rayyan [
31
].
The Rayyan system (Qatar Computing Research Institute) was employed to conduct the study selection process in two phases. In the first phase, two independent reviewers (RZ and TB) screened titles and abstracts against the eligibility criteria. In the second phase, the same reviewers conducted a full-text evaluation of studies that passed the initial screening. Disagreements were resolved through discussion, and when consensus could not be reached, a third reviewer (IBB) acted as arbitrator.
Additionally, the reference lists of included studies were manually reviewed to identify further eligible records. The entire selection process was documented through the PRISMA-ScR flow diagram, including reasons for exclusion during the full-text stage. To enhance methodological rigor, the procedure underwent independent double verification, thereby ensuring transparency, reliability, and reproducibility.
2.5. Data Charting Process and Data Items
The first reviewer (RZ) independently performed the data charting process using a structured extraction form based on the PCC framework. A second reviewer (TB) verified all extracted data, while the first reviewer (RZ) and a third reviewer (IBB) resolved any discrepancies through discussion.
The extraction process included study characteristics (authors, year, country, study design), population details (type of participants: elite athletes, amateur athletes, or non-athletes; age range; gender distribution; and level of competition), characteristics of the NFB intervention (protocol type, frequency band, number and duration of sessions), neurophysiological assessment tools (ERP, qEEG, LORETA), cognitive outcome measures, and methodological aspects. In addition, although not pre-specified in the initial charting form, all studies were systematically reviewed for the use of modern spectral parameterization methods (e.g., FOOOF [
23
]) and for transparency and reproducibility practices (e.g., pre-registration, data sharing, code availability, detailed reporting of analysis pipelines; cf. [
21
,
22
]). These exploratory assessments were included to provide further insight into the analytical and methodological rigor of NFB research in sport.
The evaluation also focused on methodological strength through an assessment of sham controls (none, Active, or Inert), blinding procedures, and statistical approaches. Following [
19
,
33
,
34
], sham controls were operationally categorized as Active Sham—non-contingent but plausible feedback (e.g., pre-recorded EEG or randomized signals)—and Inert Sham, fully decoupled from participants’ physiological activity (e.g., random tones or pre-recorded videos). This classification allows for a clearer evaluation of methodological rigor, reducing the risk of conflating non-specific engagement effects with genuine NFB-related changes. A summary of the distribution of sham control types across studies is presented in Section 3 (Figure 4) to provide a visual overview of this critical methodological factor.
When any essential information was unclear or unavailable, the corresponding study authors were contacted by email. However, response rates were limited, and missing data were coded as “not reported”.
Finally, the entire process was piloted on five studies to ensure consistency and clarity in the data extraction procedure, with particular emphasis on identifying the presence and type of sham controls as a critical methodological factor.
2.6. Synthesis of Results
The research findings will be summarized in tables that organize data by study design, population type (elite athletes, amateur athletes, non-athletes), and characteristics of the NFB protocols, as well as neurophysiological and cognitive outcomes.
A narrative synthesis will be conducted to highlight methodological trends, outcome patterns, and evidence gaps across studies. The synthesis will remain descriptive in nature, consistent with the scoping review methodology.
The analysis will also quantify the frequency of key methodological variables, including the presence and type of sham controls, blinding procedures, and EEG analysis techniques (e.g., qEEG, ERP, LORETA). These distributions will be reported to provide a structured overview of methodological rigor and transparency across the included studies.
3. Results
The scoping review analyzed 48 studies that examined EEG-based NFB interventions among athletes competing in various sports and at different competitive levels. The PRISMA 2020 flow diagram (
Figure 1
) illustrates the study selection process from identification through screening to final inclusion.
The included studies investigated athletes from a wide range of sports (e.g., archery, golf, gymnastics, swimming, soccer, judo, and chess) across multiple countries and competitive levels (elite, amateur, and novice).
The study characteristics are summarized in
Table 1
, which provides detailed information on authors, publication years, sample sizes, demographic characteristics, sport disciplines, study designs, electrode placement protocols, control group types, outcome measures, and reported intervention effects.
The extensive information presented in
Table 1
serves as the primary reference for understanding the diversity and methodological scope of the included studies.
3.1. Selection of Sources of Evidence
The initial database search retrieved 3516 records, supplemented by an additional 240 records from gray literature and other sources. After removing 1737 duplicates, 1779 records remained for title and abstract screening. Of these, 1729 records were excluded based on eligibility criteria.
The full-text evaluation was conducted for 70 articles, of which 48 studies met the inclusion criteria and were included in the review. This corresponds to approximately 2.6% of the initially retrieved records.
The detailed selection process is presented in the PRISMA 2020 flow diagram (
Figure 1
). Reasons for full-text exclusion are documented in
Supplementary Materials (Appendix S2)
, covering the 22 excluded studies.
Additionally, the distribution of electrode sites and frequency bands across the included studies is summarized in
Figure 2
.
The study selection process followed the PRISMA 2020 guidelines [
67
], as illustrated in
Figure 1
.
3.2. Characteristics of Included Studies
The included studies were conducted across 18 countries, with Poland contributing the largest share (24%), followed by Iran (18%) and Taiwan (12%). Other countries, including Germany, Portugal, and Canada, provided smaller but noteworthy contributions (
Figure 3
A,B).
In terms of research design, randomized controlled trials (RCTs) accounted for 60% of the studies, followed by quasi-experimental designs (29%) and case or single-subject approaches (11%) (
Figure 3
C). Most studies recruited participants ranging from novice to elite athletes, with males representing 77% of the total sample.
Control group strategies showed considerable variability: Active Sham conditions were used in 29% of studies, passive controls with no intervention in 33%, and no-control designs (e.g., pre–post or single-subject studies) in 38% (
Figure 4
and
Figure 5
).
As shown in
Figure 6
, SMR-based training (12–15 Hz) was the most frequently applied protocol, followed by theta/beta and alpha-based modulation. This pattern underscores the predominance of SMR approaches in sports-related EEG-NFB research, reflecting their established association with motor control and attentional regulation. At the same time, the relatively lower prevalence of infra-low frequency, mu, and customized alpha- or ERP-based protocols highlights emerging directions that remain underrepresented in the current literature.
3.3. Neurophysiological Outcomes
Neurophysiological outcomes were reported in 52% (
n
= 25) of the included studies. The reported effects encompassed EEG spectral power changes, such as sensorimotor rhythm (SMR) enhancement at Cz (located at the vertex of the scalp, approximately over the sensorimotor cortex), and at C3 and C4 (positioned over the left and right sensorimotor cortices, respectively). Other studies examined ERPs, particularly components such as P3 and N2 [
48
], as well as coherence and connectivity measures derived from source localization techniques, including LORETA and sLORETA [
37
].
Studies that combined neurophysiological measures with behavioral assessments frequently reported associations between EEG changes and improvements in motor or cognitive performance [
36
,
39
]. The distribution of studies focusing on neurophysiological outcomes, compared with those relying exclusively on behavioral or cognitive assessments, is illustrated in
Figure 7
.
3.4. Cognitive Outcomes
Cognitive outcomes were reported in 89% (
n
= 43) of the included studies. The research primarily targeted three cognitive domains: attention, working memory, and executive functions. These were assessed through standardized paradigms such as inhibition tasks (e.g., Stroop test), working memory updating (e.g., N-back task), and cognitive flexibility/set-shifting (e.g., Oddball paradigm).
Standardized neuropsychological assessments—particularly the N-back task, Stroop test, and Oddball paradigm—were frequently complemented with sport-specific tasks, including reaction time tests in archery or golf putting performance, to evaluate cognitive improvements in real-world contexts.
Overall, the evidence indicated consistent cognitive benefits of neurofeedback training. For instance, studies highlighted improvements in attentional control [
42
,
51
], while others reported enhanced stress regulation and self-perceived mental readiness [
35
,
57
].
3.5. Methodological Features
The increasing focus on methodological rigor is reflected in the gradual adoption of randomized controlled trials (RCTs). Nevertheless, only 29% of studies (
n
= 14) included Active Sham feedback as a placebo control [
38
,
64
]. The majority of these Active Sham protocols (
n
= 15) relied on pre-recorded EEG data or randomized signals, which may still introduce unspecific neuroplastic changes [
18
,
19
]. Only three studies applied Inert Sham protocols that fully separated neural activity from feedback [
44
,
45
,
46
], representing the methodological gold standard for identifying neurofeedback-specific effects. Overall, approximately 40% of studies did not implement any sham control, relying on passive or no-control designs.
Beyond sham design, most studies did not report participant or evaluator blinding, and long-term follow-up assessments were rare (exceptions include [
39
,
59
]). These limitations further underscore the need for methodological consistency and transparency in EEG-NFB research.
Another critical issue concerns EEG spectral analysis. None of the 48 included studies applied modern spectral parameterization techniques such as FOOOF [
23
]. Instead, all relied on conventional band-power approaches based on fixed frequency bands (e.g., SMR, alpha, theta, beta), typically calculated via Fast Fourier Transform (FFT). Some studies applied visual or manual inspection of EEG signals, and reporting of spectral analysis parameters was often incomplete. This reliance on traditional band-power metrics prevents separation of periodic oscillatory activity from the aperiodic 1/f background, which may bias the interpretation of NFB effects.
This methodological heterogeneity underscores the challenges of synthesizing evidence across studies and highlights the importance of adopting standardized protocols and transparent reporting practices.
Figure 5
,
Figure 6
and
Figure 7
illustrate these methodological inconsistencies, emphasizing the lack of sham standardization, limited neurophysiological outcome reporting, and the predominance of outdated spectral approaches.
3.6. Transparency and Reproducibility
The analysis of the 48 included studies revealed a systemic absence of open science practices. None of the studies provided data sharing, code availability, or preregistration. A single exception was noted in [
62
], which reported protocol approval by a local ethics committee prior to data collection; however, this does not constitute preregistration in the open science sense, as it lacked public accessibility and methodological detail.
Although most studies described their training protocols (e.g., electrode sites, frequency bands, session structures), independent replication remained unfeasible due to reliance on proprietary hardware/software and closed-source algorithms. In addition, statistical transparency was limited: the majority of studies reported only
p
-values, with rare mentions of effect sizes or confidence intervals, thereby constraining interpretability.
With respect to EEG analysis, all studies relied on traditional band-power metrics in fixed frequency bands. None applied modern spectral parameterization methods such as FOOOF [
23
], which separate periodic oscillatory activity from the aperiodic 1/f background. This reliance on fixed-band approaches—often embedded in commercial systems—further restricts the neurophysiological validity of reported outcomes.
Taken together, these findings align with concerns raised by [
21
,
22
], highlighting the urgent need for open data, shared code, preregistration, and transparent reporting of analytic pipelines to ensure reproducibility and credibility in EEG-NFB research.
4. Discussion
This scoping review synthesized 48 studies examining EEG-based NFB interventions across athletic and non-athletic populations. The evidence generally supports the potential of NFB to modulate neurophysiological activity and improve cognitive and performance-related outcomes. However, the review also exposes substantial methodological heterogeneity and reproducibility gaps that complicate interpretation and cross-study comparison. The following sections discuss these findings considering previous literature, highlighting consistent trends, discrepancies, and future research needs.
4.1. Neurophysiological and Cognitive Outcomes
Across the analyzed studies, NFB training most frequently targeted SMR and alpha bands, with reported increases in EEG power often corresponding to improvements in reaction time, attention, and motor precision. These findings align with early work by [
2
,
3
], who demonstrated that modulating SMR and alpha activity could facilitate motor preparation and cognitive stability. Similarly, more recent studies—such as [
36
,
37
]—confirmed enhanced motor accuracy and balance following SMR- and theta/beta-based training, supporting the link between neural regulation and performance optimization.
Nevertheless, not all evidence converges. Some experiments, such as [
43
], reported null effects on reaction time or inconsistent EEG modulation, suggesting that task specificity, participant expertise, and feedback parameters critically influence outcomes. Cognitive measures—particularly attention, working memory, and executive control—were the most frequently improved domains, in line with systematic syntheses by [
4
,
5
]. Yet, the diversity of testing paradigms (e.g., Stroop, N-back, Oddball) and the predominance of short-term assessments limit the generalization of these results. Overall, the current evidence indicates that EEG-NFB can induce measurable neural and behavioral adaptations, though magnitude and persistence remain uncertain due to methodological inconsistency.
4.2. The Role of Sham Controls
A central concern identified in this review involves the design and implementation of sham controls. As defined in
Section 2.5
, Active Sham refers to non-contingent but plausible feedback, whereas Inert Sham is fully decoupled from participants’ physiological activity. Only 29% of studies employed active sham feedback, and a mere 6% applied inert sham protocols—the methodological gold standard for isolating true NFB-specific effects. This distinction aligns with the CRED-nf recommendations [
20
] and prior methodological reviews [
33
], which emphasize the need for transparent reporting of sham procedures in EEG-NFB research. These proportions mirror the shortcomings previously highlighted by [
19
], who emphasized that expectancy and engagement effects may inflate apparent efficacy in NFB research. The scarcity of inert sham conditions observed here suggests that many studies risk conflating neurophysiological change with non-specific psychological factors.
Furthermore, a large subset of studies lacked participant or assessor blinding and relied solely on pre–post comparisons. Such designs increase susceptibility to placebo effects and Type III statistical errors, as discussed by [
34
]. When properly implemented, double-blind randomized trials—such as those by [
35
] or [
38
]—demonstrated more controlled evidence for EEG modulation and performance enhancement. Future investigations should therefore integrate both active and inert sham conditions, coupled with rigorous blinding, to strengthen internal validity and permit clearer attribution of causal effects.
4.3. Electrode and Frequency Variability in Neurofeedback Protocols
The diversity of electrode montages and targeted frequency bands across the reviewed studies reflects the absence of standardized NFB protocols in sport settings. Central sites (Cz, C3, C4) were predominant in SMR-based interventions, consistent with their functional relevance to motor preparation and attention. However, frontal and parietal placements targeting alpha, beta, or theta activity were also frequent, often motivated by exploratory aims rather than established neurophysiological models. Comparable variability was reported by [
2
,
4
], who noted that inconsistency in training loci and spectral ranges impedes replication and cumulative synthesis.
This heterogeneity complicates the interpretation of EEG changes and performance outcomes. Even when similar behavioral gains were reported, underlying neural mechanisms may differ due to protocol divergence. Standardized reporting through frameworks such as the CRED-nf checklist [
20
] and adoption of modern spectral parameterization tools like FOOOF [
23
] would allow more accurate separation of oscillatory and aperiodic components, thereby improving cross-study comparability and theoretical precision.
4.4. Ecological Validity and Implementation Challenges
Recent studies increasingly integrate portable EEG systems and field-based protocols to enhance ecological validity and bridge laboratory findings with real-world athletic contexts. Investigations by [
25
,
26
] illustrate this trend, demonstrating that brief, on-site SMR training sessions can positively influence golf and soccer performance. These developments parallel broader efforts in applied neuroscience to situate cognitive training within authentic performance environments.
However, ecological validity introduces methodological complexity. Field-based EEG is inherently vulnerable to motion artifacts, environmental noise, and fatigue effects that can compromise data quality and mask the specific contribution of NFB. Moreover, most reviewed studies relied on short-term pre–post designs without longitudinal follow-up, precluding conclusions about retention or transfer of NFB benefits. Sustained improvements, as observed in long-term follow-ups by [
59
], remain rare but essential for verifying whether NFB-induced adaptations persist beyond initial training phases. Future research should therefore combine controlled laboratory paradigms with extended, ecologically grounded interventions, using multimodal outcome measures (EEG, qEEG, ERP, behavioral, and psychophysiological indices) to achieve comprehensive evaluation.
4.5. Considerations for Future Research
To consolidate the evidence base for EEG-NFB in sport, future investigations must emphasize methodological rigor, analytical transparency, and ecological realism. Randomized controlled trials incorporating both active and inert sham conditions are imperative to distinguish genuine neurofeedback effects from non-specific influences. Protocol standardization regarding electrode placement, targeted frequency bands, and session parameters will facilitate replication and meta-analytic synthesis.
Equally crucial is the transition toward open-science practices. None of the reviewed studies preregistered protocols or shared data and analysis code, reflecting a broader reproducibility gap in applied neuroscience [
21
,
22
]. Adopting preregistration, data sharing, and transparent reporting of analytic pipelines will markedly enhance credibility and cumulative progress. Aligned with the CRED-nf checklist [
20
], future EEG-NFB studies should explicitly preregister core components of their experimental design, including hypotheses, primary and secondary outcomes, session parameters, and planned statistical analyses. Minimal datasets—such as pre-processed EEG spectra, behavioral measures, and analytic scripts—should be made openly available in public repositories (e.g., OSF, Zenodo, OpenNeuro). Moreover, the transparent reporting of key signal-processing parameters (e.g., FFT settings, filter characteristics, artifact rejection thresholds, and reinforcement schedules) will facilitate methodological reproducibility and cross-study comparability. Collectively, these practices will transform general calls for transparency into concrete, actionable standards for advancing open science in EEG-NFB research.
Finally, research should expand participant diversity—addressing gender balance and sport variety—and include longitudinal follow-ups to determine the durability and ecological transfer of NFB-induced performance gains. By integrating these methodological and conceptual refinements, future studies can transform EEG-NFB from a promising experimental approach into a reproducible, evidence-based tool for optimizing human performance.
Another potential methodological concern involves the partial overlap of samples across studies conducted by the same research groups (e.g., [46,48,57,60,61]). Such overlap may inflate the apparent evidence base and reduce the effective sample diversity, particularly when similar participant cohorts are repeatedly analyzed under slightly modified protocols. This limitation should be considered when interpreting the overall findings, as it may bias outcome generalizability and overestimate the robustness of specific training effects. Future reviews should therefore apply stricter data-source screening procedures and explicitly report instances of potential sample duplication to enhance the transparency and reproducibility of evidence synthesis in EEG-NFB research.
5. Conclusions
EEG-based NFB demonstrates meaningful potential to enhance both neurophysiological regulation and cognitive-motor performance in athletes. Yet, this promise remains constrained by inconsistent methodology, limited sham control, and insufficient transparency. The field now requires rigorously designed, double-blind randomized trials using validated sham procedures and standardized spectral analyses to establish causal validity.
Future progress depends equally on adopting open-science principles—preregistration, data and code sharing, and clear protocol reporting—to ensure replicability and comparability across studies. Long-term, ecologically valid designs will clarify whether short-term NFB effects translate into sustainable performance benefits. Strengthening methodological rigor and transparency will not only improve scientific reproducibility but also enable NFB to fulfill its potential as a practical tool in sport neuroscience.
Supplementary Materials
Author Contributions
Conceptualization, R.M.G.Z.; methodology, R.M.G.Z. and D.T.B.; validation, R.M.G.Z., D.T.B. and I.B.-B.; formal analysis, R.M.G.Z.; investigation, R.M.G.Z.; resources, J.M.C.; data curation, R.M.G.Z.; writing—original draft preparation, R.M.G.Z.; writing—review and editing, J.M.C., D.T.B., I.B.-B. and S.N.d.J.; visualization, R.M.G.Z.; supervision, J.M.C., I.B.-B. and S.N.d.J.; project administration, J.M.C.; funding acquisition, I.B.-B. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the European Union’s Horizon Europe Programme, Grant Agreement No. 101089757—SEA-EU 2.0: The European University of the Seas Alliance navigating towards modern and co-transformative intercampus life; people-driven, planet-friendly, and knowledge-based progress for all, funded by the European Union. Additional financial support from the University of Gdańsk is gratefully acknowledged.
Institutional Review Board Statement
Not applicable. This study is a scoping review that analyzed and synthesized data from previously published studies; therefore, ethical approval was not required.
Informed Consent Statement
Not applicable. This study did not involve humans or animals.
Data Availability Statement
All data supporting the findings of this study are included within the article and its Supplementary Materials.
Acknowledgments
The authors acknowledge the use of artificial intelligence tools exclusively to support language clarity and text organization. All scientific content, methodological design, analysis, and interpretation are the sole responsibility of the authors.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Schomer, D.L.; Da Silva, F.H.L. (Eds.) Niedermeyer's Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 7th ed.; Oxford University Press: New York, NY, USA, 2018.
2. Gruzelier, J.H. EEG-neurofeedback for optimising performance. I: A review of cognitive and affective outcome in healthy participants. Neurosci. Biobehav. Rev. 2014, 44, 124–141.
3. Hammond, D.C. What is Neurofeedback: An Update. J. Neurother. 2011, 15, 305–336.
4. Tosti, B.; Corrado, S.; Mancone, S.; Di Libero, T.; Carissimo, C.; Cerro, G.; Rodio, A.; Furtado da Silva, V.; Coimbra, D.R.; Andrade, A.; et al. Neurofeedback training protocols in sports: A systematic review of recent advances in performance, anxiety, and emotional regulation. Brain Sci. 2024, 14, 1036.
5. Rydzik, Ł.; Wsącz, W.; Ambrozy, T.; Javdanekh, N.; Rydzak, K.; Koparska, M. The use of neurofeedback in sports training: Systematic review. Brain Sci. 2023, 13, 660.
6. Landers, D.M.; Petruzzello, S.J.; Salazar, W.; Crews, D.J.; Kubitz, K.A.; Gannon, T.L.; Han, M. The influence of electrocortical biofeedback on performance in pre-elite archers. Med. Sci. Sports Exerc. 1991, 23, 123–129.
7. Paktaş, Y. The effect of neurofeedback training on the perceptual-motor abilities of basketball athletes. Pak. J. Med. Health Sci. 2021, 15, 791–793.
8. Gołaś, A.; Nitychoruk, M.; Żak, M.; Kowalczyk, M.; Ignatjeva, A.; Maszczyk, A. Optimizing visual processing efficiency using neurofeedback training in judo athletes. Arch. Budo Sci. Martial Arts Extrem. Sports 2019, 15, 105–112.
9. Krawczyk, M.; Kowalczyk, M.; Żak, M.; Daros, K.; Gozdowski, P. Zmiany aktywności fal mózgowych pod wpływem treningu neurofeedback u zawodników judo [Changes in brain-wave activity under the influence of neurofeedback training in judo athletes]. Ogrody Nauk i Sztuk 2019, 9, 388–399.
10. Maszczyk, A.; Dobrakowski, P.; Nitychoruk, M.; Zak, M.; Kowalczyk, M.; Toborek, M. The Effect of Neurofeedback Training on the Visual Processing Efficiency in Judo Athletes. J. Hum. Kinet. 2020, 71, 219–227.
11. Rijken, N.H.; Soer, R.; de Maar, E.; Prins, H.; Teeuw, W.B.; Peuscher, J.; Oosterveld, F.G. Increasing Performance of Professional Soccer Players and Elite Track and Field Athletes with Peak Performance Training and Biofeedback: A Pilot Study. Appl. Psychophysiol. Biofeedback 2016, 41, 421–430.
12. Liu, Y.S.; Subramaniam, S.C.H.; Sourina, O.; Shah, E.; Chua, J.; Ivanov, K. Neurofeedback training for rifle shooters to improve cognitive ability. In Proceedings of the 2017 International Conference on Cyberworlds (CW), Chester, UK, 20–22 September 2017; pp. 186–189.
13. Mikicin, M.; Mroz, A.; Karczewska-Lindinger, M.; Malinowska, K.; Mastalerz, A.; Kowalczyk, M. Effect of the Neurofeedback-EEG Training During Physical Exercise on the Range of Mental Work Performance and Individual Physiological Parameters in Swimmers. Appl. Psychophysiol. Biofeedback 2020, 45, 49–55.
14. Assadourian, S.; Branco Lopes, A.; Saj, A. Improvement in peripheral visual attentional performance in professional soccer players following a single neurofeedback training session. Rev. Neuropsychol. 2022, 14, 133–138.
15. Lo, L.C.; Hatfield, B.D.; Janjigian, K.; Wang, Y.S.; Fong, D.Y.; Hung, T.M. The Effect of Left Temporal EEG Neurofeedback Training on Cerebral Cortical Activity and Precision Cognitive-Motor Performance. Res. Q. Exerc. Sport 2024, 96, 486–496.
16. Hosseini, F.; Norouzi, E. Effect of neurofeedback training on self-talk and performance in elite and non-elite volleyball players. Med. Dello Sport 2017, 70, 344–353.
17. Salimnejad, Z.; Zandi, H.; Arsham, S. Effect of Bio-Neural Feedback Exercises on the Performance of Female Rugby Players. Int. J. Mot. Control Learn. 2019, 1, 10–18.
18. Bussalb, A.; Congedo, M.; Barthélemy, Q.; Ojeda, D.; Acquaviva, E.; Delorme, R.; Mayaud, L. Clinical and experimental factors influencing the efficacy of neurofeedback in ADHD: A meta-analysis. Front. Psychiatry 2019, 10, 35.
19. Thibault, R.; Raz, A. The psychology of neurofeedback: Clinical intervention even if applied placebo. Am. Psychol. 2017, 72, 679–688.
20. Ros, T.; Enriquez-Geppert, S.; Zotev, V.; Young, K.D.; Wood, G.; Whitfield-Gabrieli, S.; Wan, F.; Vuilleumier, P.; Vialatte, F.; Van De Ville, D.; et al. Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist). Brain 2020, 143, 1674–1685.
21. Nichols, T.; Das, S.; Eickhoff, S.; Evans, A.; Glatard, T.; Hanke, M.; Kriegeskorte, N.; Milham, M.; Poldrack, R.; Poline, J.-B.; et al. Best practices in data analysis and sharing in neuroimaging using MRI. Nat. Neurosci. 2017, 20, 299–303.
22. Poldrack, R.; Baker, C.; Durnez, J.; Gorgolewski, K.; Matthews, P.; Munafo, M.; Nichols, T.; Poline, J.-B.; Vul, E.; Yarkoni, T. Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nat. Rev. Neurosci. 2017, 18, 115–126.
23. Donoghue, T.; Haller, M.; Peterson, E.; Varma, P.; Sebastian, P.; Gao, R.; Noto, T.; Lara, A.; Wallis, J.; Knight, R.; et al. Parameterizing neural power spectra into periodic and aperiodic components. Nat. Neurosci. 2020, 23, 1655–1665.
24. Ring, C.; Cooke, A.; Kavussanu, M.; McIntyre, D.; Masters, R. Investigating the efficacy of neurofeedback training for expediting expertise and excellence in sport. Psychol. Sport Exerc. 2015, 16, 118–127.
25. van Boxtel, G.J.M.; Denissen, A.; de Groot, J.A.; Neleman, M.S.; Vellema, J.; Hart de Ruijter, E.M. Alpha Neurofeedback Training in Elite Soccer Players Trained in Groups. Appl. Psychophysiol. Biofeedback 2024, 49, 589–602.
26. Wu, J.H.; Chueh, T.Y.; Yu, C.L.; Wang, K.P.; Kao, S.C.; Gentili, R.J.; Hatfield, B.D.; Hung, T.M. Effect of a single session of sensorimotor rhythm neurofeedback training on the putting performance of professional golfers. Scand. J. Med. Sci. Sports 2024, 34, e14540.
27. Wu, J.H.; Tu, Y.C.; Chang, C.Y.; Chueh, T.Y.; Gentili, R.J.; Hatfield, B.D.; Hung, T.M. A single session of sensorimotor rhythm neurofeedback enhances long-game performance in professional golfers. Biol. Psychol. 2024, 192, 108844.
28. Tricco, A.C.; Lillie, E.; Zarin, W.; O'Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473.
29. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, 1.
30. Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated methodological guidance for the conduct of scoping reviews. JBI Evid. Synth. 2020, 18, 2119–2126.
31. Ouzzani, M.; Hammady, H.; Fedorowicz, Z.; Elmagarmid, A. Rayyan—A web and mobile app for systematic reviews. Syst. Rev. 2016, 5, 210.
32. Ainsworth, B.E.; Haskell, W.L.; Herrmann, S.D.; Meckes, N.; Bassett, D.R., Jr.; Tudor-Locke, C.; Greer, J.L.; Vezina, J.; Whitt-Glover, M.C.; Leon, A.S. 2011 Compendium of Physical Activities: A second update of codes and MET values. Med. Sci. Sports Exerc. 2011, 43, 1575–1581.
33. Rogala, J.; Jurewicz, K.; Paluch, K.; Kublik, E.; Cetnarski, R.; Wróbel, A. The Do's and Don'ts of Neurofeedback Training: A Review of the Controlled Studies Using Healthy Adults. Front. Hum. Neurosci. 2016, 10, 301.
34. Trullinger, M.; Novian, A.; Russell-Chapin, L.; Pradhan, D. Perspectives on Type III Statistical Errors: Exaggerating the Effects of Placebo in Neurofeedback. NeuroRegulation 2019, 6, 38–41.
35. Dekker, M.K.; van den Berg, B.R.; Denissen, A.J.; Sitskoorn, M.M.; van Boxtel, G.J. Feasibility of eyes open alpha power training for mental enhancement in elite gymnasts. J. Sports Sci. 2014, 32, 1550–1560.
36. Maszczyk, A.; Golas, A.; Pietraszewski, P.; Kowalczyk, M.; Cieszczyk, P.; Kochanowicz, A.; Smolka, W.; Zajac, A. Neurofeedback for the enhancement of dynamic balance of judokas. Biol. Sport 2018, 35, 99–102.
37. Kober, S.E.; Ninaus, M.; Witte, M.; Buchrieser, F.; Grossinger, D.; Fischmeister, F.P.S.; Neuper, C.; Wood, G. Triathletes are experts in self-regulating physical activity—But what about self-regulating neural activity? Biol. Psychol. 2022, 173, 108406.
38. Afrash, S.; Saemi, E.; Gong, A.; Doustan, M. Neurofeedback training and motor learning: The enhanced sensorimotor rhythm protocol is better or the suppressed alpha or the suppressed mu? BMC Sports Sci. Med. Rehabil. 2023, 15, 93.
39. Chen, T.-T.; Wang, K.-P.; Chang, W.-H.; Kao, C.-W.; Hung, T.-M. Effects of the function-specific instruction approach to neurofeedback training on frontal midline theta waves and golf putting performance. Psychol. Sport Exerc. 2022, 61, 102211.
40. Mottola, F.; Blanchfield, A.; Hardy, J.; Cooke, A. EEG neurofeedback improves cycling time to exhaustion. Psychol. Sport Exerc. 2021, 55, 101944.
41. Pourbehbahani, Z.; Saemi, E.; Cheng, M.Y.; Dehghan, M.R. Both Sensorimotor Rhythm Neurofeedback and Self-Controlled Practice Enhance Motor Learning and Performance in Novice Golfers. Behav. Sci. 2023, 13, 65.
42. Mirifar, A.; Keil, A.; Beckmann, J.; Ehrlenspiel, F. No Effects of Neurofeedback of Beta Band Components on Reaction Time Performance. J. Cogn. Enhanc. 2019, 3, 251–260.
43. Wang, K.P.; Frank, C.; Hung, T.M.; Schack, T. Neurofeedback training: Decreases in Mu rhythm lead to improved motor performance in complex visuomotor skills. Curr. Psychol. 2023, 42, 20860–20871.
44. Horvath, D.; Negyesi, J.; Racz, M.; Gyori, T.; Matics, Z.; Puskin, A.; Csipor, J.; Racz, L. Feasibility of a novel neurofeedback system: A parallel randomized single-blinded pilot study. Sci. Rep. 2023, 13, 17353.
45. Wang, K.P.; Cheng, M.Y.; Elbanna, H.; Schack, T. A new EEG neurofeedback training approach in sports: The effects of function-specific instruction of Mu rhythm and visuomotor skill performance. Front. Psychol. 2023, 14, 1273186.
46. Arns, M.; Kleinnijenhuis, M.; Fallahpour, K.; Breteler, R. Golf Performance Enhancement and Real-Life Neurofeedback Training Using Personalized Event-Locked EEG Profiles. J. Neurother. 2008, 11, 11–18.
47. Ziółkowski, A.; Graczyk, M.; Strzałkowska, A.; Włodarczyk, P.; Zarańska, B. Neuronal, cognitive and social indicators for the control of aggressive behaviors in sport. Acta Neuropsychol. 2012, 10, 537–546.
48. Graczyk, M.; Pachalska, M.; Ziolkowski, A.; Manko, G.; Lukaszewska, B.; Kochanowicz, K.; Mirski, A.; Kropotov, I.D. Neurofeedback training for peak performance. Ann. Agric. Environ. Med. 2014, 21, 871–875.
49. Kao, S.-C.; Huang, C.-J.; Hung, T.-M. Neurofeedback Training Reduces Frontal Midline Theta and Improves Putting Performance in Expert Golfers. J. Appl. Sport Psychol. 2014, 26, 271–286.
50. Mikicin, M.; Orzechowski, G.; Jurewicz, K.; Paluch, K.; Kowalczyk, M.; Wróbel, A. Brain-training for physical performance: A study of EEG-neurofeedback and alpha relaxation training in athletes. Acta Neurobiol. Exp. 2015, 75, 434–445.
51. Mikicin, M. State of mind as a subjective mental sensation results from objective brain activity following neurofeedback-EEG and relaxation trainings. Acta Neuropsychol. 2016, 14, 17–33.
52. Szczypińska, M.; Mikicin, M. Does attention training induce any changes in the level of the selected cognitive processes in handball players? J. Phys. Educ. Sport 2019, 19, 1445–1452.
53. Domingos, C.; Alves, C.; Sousa, E.; Rosa, A.; Pereira, J. Does Neurofeedback Training Improve Performance in Athletes? NeuroRegulation 2020, 7, 8–17.
54. Domingos, C.; da Silva Caldeira, H.; Miranda, M.; Melicio, F.; Rosa, A.C.; Pereira, J.G. The Influence of Noise in the Neurofeedback Training Sessions in Student Athletes. Int. J. Environ. Res. Public Health 2021, 18, 13223.
55. Domingos, C.; Peralta, M.; Prazeres, P.; Nan, W.; Rosa, A.; Pereira, J.G. Session frequency matters in neurofeedback training of athletes. Appl. Psychophysiol. Biofeedback 2021, 46, 195–204.
56. Domingos, C.; Silva, C.M.D.; Antunes, A.; Prazeres, P.; Esteves, I.; Rosa, A.C. The Influence of an Alpha Band Neurofeedback Training in Heart Rate Variability in Athletes. Int. J. Environ. Res. Public Health 2021, 18, 12579.
57. Mikicin, M.; Orzechowski, G. Neuronal Activity in the Brain Changes During Exercise in Attention States, Warm-up, Submaximal Effort, and Recovery, After Neurofeedback-EEG Training in Motion. Acta Neuropsychol. 2022, 20, 175–186.
58. Fuentes-Garcia, J.P.; Villafaina, S. Psychophysiological and Performance Effects of Biofeedback and Neurofeedback Interventions in a Top 100 Female Chess Player. Behav. Sci. 2024, 14, 1044.
59. Bakhtafrooz, S.; Kavyani, M.; Farsi, A.; Alboghebeish, S. The effect of infra low frequency-neurofeedback training on pistol shooting performance and attention in semi-skilled players. Front. Hum. Neurosci. 2025, 19, 1487737.
60. Paul, M.; Ganesan, S.; Sandhu, J.; Simon, J. Effect of sensory motor rhythm neurofeedback on psycho-physiological, electro-encephalographic measures and performance of archery players. Ibnosina J. Med. Biomed. Sci. 2012, 4, 32–39.
61. Rostami, R.; Sadeghi, H.; Karami, K.A.; Abadi, M.N.; Salamati, P. The Effects of Neurofeedback on the Improvement of Rifle Shooters' Performance. J. Neurother. 2012, 16, 264–269.
62. Strizhkova, O.; Cherapkina, L.; Strizhkova, T. Neurofeedback course applying of high skilled gymnasts in competitive period. J. Hum. Sport Exerc. 2012, 7, S185–S193.
63. Christie, S.; Bertollo, M.; Werthner, P. The Effect of an Integrated Neurofeedback and Biofeedback Training Intervention on Ice Hockey Shooting Performance. J. Sport Exerc. Psychol. 2020, 42, 34–47.
64. Shokri, A.; Nosratabadi, M. Comparison of Biofeedback and Combined Interventions on Athlete's Performance. Appl. Psychophysiol. Biofeedback 2021, 46, 227–234.
65. Fardinia, M.; Shojaei, M.; Rahimi, A. The effect of neurofeedback training on the anxiety of elite female swimmers. Ann. Biol. Res. 2012, 3, 1020–1028.
66. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
Figure 1. PRISMA-ScR flow diagram showing the identification, screening, and inclusion of the studies included in this scoping review. * Records identified from the listed electronic databases. ** Records excluded after title and abstract screening.
Figure 2. Distribution of electrode sites and frequency bands in studies using different neurofeedback protocols: (A) Active Sham neurofeedback (Table 1A); (B) Inert Sham neurofeedback (Table 1B); and (C) without Sham neurofeedback (Table 1C).
Figure 3. Characteristics of included studies: (A) Geographic distribution; (B) Country contributions by percentage; (C) Study design classification.
Figure 4. Proportion of studies using different sham control types (no sham, active sham, and inert sham).
Figure 5. Distribution of control group types across included studies (sham feedback, passive control, no control).
Figure 6. Neurofeedback protocols used across included studies (SMR, theta/beta, alpha, ILF, Mu rhythm, custom).
Figure 7. Proportion of studies reporting neurophysiological outcomes versus behavioral-only measures.
Table 1. (A) Studies with Active Sham: sham feedback derived from pre-recorded EEG or randomized signals. (B) Studies with Inert Sham: feedback completely unrelated to EEG. (C) Studies without Sham: no form of sham (only passive control groups or pre–post designs).
Selected design and outcome entries from Table 1:
Design: RCT with 3 groups (correct feedback, incorrect feedback as active sham, and control); feedback modeled on the slow cortical potential (SCP) paradigm. Outcome: significant performance improvement in the correct-feedback group; performance decrement in the incorrect-feedback group; no significant change in the control group.
Design: double-blind RCT, FSI-NFB vs. TI-NFB vs. sham; single session (BioTrace+). Outcome: FSI-NFB group showed ↓ putting accuracy (p = 0.013), a slight, non-significant ↑ in Mu power, and a positive Mu–error correlation (r = 0.319, p = 0.043); TI and sham showed no changes.
Design: single-subject design using β1/θ NFB at P8 (Emotiv 14-channel EEG system, San Francisco, CA, USA); 6–7 sessions; attention and shooting accuracy assessed. Outcome: 3/5 athletes improved shooting accuracy and attention; 2 remained stable; no adverse effects reported.
Design: 24 female rugby players (16–25 y; n = 12 NFB, n = 12 control); quasi-experimental pre–post design, NFB vs. control; 15 sessions (3×/week, 40 min); α↑ at Pz and SMR↑ at C3. Outcome: NFB group showed ↑ passing accuracy (p < 0.01, both sides) and no change in shot accuracy; the control group showed no improvement.
In March of 2024 the DESI collaboration dropped a bombshell on the cosmological community: slim but significant evidence that dark energy might be getting weaker with time. This was a stunning result delivered after years of painstaking analysis. It's not a bullet-proof result, but it doesn't have to be to make our lives more interesting.
I know I’m late to the party on discussing this. And it’s okay, because 1) there’s a lot to unpack in this kind of result and I wanted to take my time, and 2) it’s not like this result is going to get revised or even updated anytime soon, so we’ve got plenty of room to play with this.
Let's start with the results themselves and how they got there. DESI stands for Dark Energy Spectroscopic Instrument. It's an instrument mounted on a roughly 4-meter telescope on Kitt Peak in southern Arizona. It's a galaxy survey, and to accomplish this survey they have 5,000 robotically controlled fiber optic cables underneath the telescope. Every night, the telescope selects a patch of sky to observe, the robots position the fiber optic cables to align with the positions of galaxies within that patch, and the instrument records a detailed spectrum of each and every single one. Then they do the same thing the next night, and then the next, and then the next.
So far they have amassed a catalog of over 13 million galaxies, providing the largest and most comprehensive survey of galaxy positions in history. And they're not even done! They're aiming for 50 million galaxies once the survey is complete.
And let me tell you, those robotically controlled fiber optic cables are a huge game changer. In many ways DESI is the successor to an older survey, the Sloan Digital Sky Survey. That survey had a similar setup, except that instead of robots to move all those fibers every night, they had to use grad students. Probably cheaper, but still less efficient. (Note that I was never one of those unlucky “volunteers” but I did hear horror stories.)
Sure, the DESI survey is less than 1% of all the galaxies in the observable volume of the cosmos, but it’s still pretty sizable. So what do you do with a map of a decent chunk of the entire universe?
I'm glad you didn't ask, because I'm happy to answer. The arrangement of galaxies on very large scales tells us a lot about the universe. And one of the key things used in this new DESI analysis is a feature of the large-scale universe that goes by the ungainly but super nerdy name of baryon acoustic oscillations, or BAO for short.
Check this out. Long ago the universe was much smaller, hotter, and denser than it is today. If you're ever asked what the big bang theory is all about, that's pretty much it in a nutshell. In fact, billions of years ago, when the universe was only a few hundred thousand years old, it was so hot and dense (for those of you keeping score at home, roughly a billion times smaller than its present volume and thousands of degrees hotter) that all the matter was crammed together in the form of an energized plasma. This is the same state of matter as the body of the Sun or a lightning bolt, and it literally filled the universe.
Like any dense material, that plasma carried sound waves – waves of pressure that crisscrossed the universe. Many of these sound waves were triggered by a competition between gravity and radiation. Dense clumps of matter would try to collapse under their own gravity, but then those clumps would get hot and the radiation they emitted would push them back out.
This seesawing effect went on and on, back and forth, until the plasma cooled down so much that the light was released. This meant that radiation could no longer play the game, and the back-and-forth sound waves got stuck mid seesaw. Wherever they were, they acted as a source of additional gravitation, a shell of slightly higher density.
In fact we even have pictures of these features, which are the baryon acoustic oscillations (or "super hot sound waves" if you prefer). The light that was emitted when this process stopped still exists today, and we can take pictures of it. It's called the cosmic microwave background, and a decade ago, when a bunch of my friends were plugging in their fiber optic cables, I was a member of the Planck collaboration, the satellite mission that mapped the microwave background.
These shells of extra matter didn’t just go away. They stuck around, and slowly slowly slowly over billions of years more matter accumulated on those shells than the surrounding regions. Today, we see the imprint of the BAO in the form of shells of matter roughly 800 million light-years in diameter.
The cool part about all this is that the shells are what’s called a standard ruler. We know how big the shells are supposed to be – it’s a relatively straightforward calculation to transport the images we see in the microwave background to their sizes in the present day. And we can compare that expected value to how big they appear on the sky. And how big they appear on the sky depends on cosmology: on the properties, history, and evolution of the universe.
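If you want to play with the idea yourself, here is a rough Python sketch using astropy: the same comoving BAO scale subtends a slightly different angle at a given redshift depending on the assumed dark-energy behavior. The specific numbers (a 150 Mpc ruler, redshift 0.8, and the w0/wa values) are illustrative, not DESI's fitted results.

```python
# Rough sketch of the standard-ruler idea: the angle a fixed comoving BAO scale
# subtends at a given redshift depends on the assumed dark-energy model.
# The ruler size, redshift, and w0/wa values below are illustrative, not fitted values.
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, Flatw0waCDM

r_bao = 150 * u.Mpc            # approximate comoving BAO scale
z = 0.8                        # a typical survey-galaxy redshift

constant_de = FlatLambdaCDM(H0=67.4, Om0=0.315)                      # w = -1 forever
evolving_de = Flatw0waCDM(H0=67.4, Om0=0.315, w0=-0.8, wa=-0.7)      # toy evolving dark energy

for cosmo in (constant_de, evolving_de):
    d_m = cosmo.comoving_transverse_distance(z)     # distance that sets the apparent size
    theta = (r_bao / d_m).decompose() * u.rad       # small-angle approximation
    print(cosmo.__class__.__name__, theta.to(u.deg))
```

The apparent size comes out to a few degrees on the sky in either case; the point is that the small difference between the models is exactly the kind of shift a precise enough survey can detect.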
The new finding is that the BAO shells found by DESI are a little off. Their sizes don’t quite fit with our usual picture of cosmology. And they seem to fit better a picture of the universe where dark energy is evolving.
But what the heck is dark energy, and why is it so interesting that it might be evolving?
Azure hit by 15 Tbps DDoS attack using 500k IP addresses
On October 24, 2025, Azure DDoS Protection automatically detected and mitigated a multi-vector DDoS attack measuring 15.72 Tbps and nearly 3.64 billion packets per second (pps). This was the largest DDoS attack ever observed in the cloud, and it targeted a single endpoint in Australia.
Azure's globally distributed DDoS Protection infrastructure and continuous detection capabilities triggered mitigation automatically: malicious traffic was filtered and redirected, maintaining uninterrupted service availability for customer workloads.
The attack originated from the Aisuru botnet. Aisuru is a Turbo Mirai-class IoT botnet that frequently causes record-breaking DDoS attacks by exploiting compromised home routers and cameras, mainly in residential ISPs in the United States and other countries.
The attack involved extremely high-rate UDP floods targeting a specific public IP address, launched from over 500,000 source IPs across various regions. These sudden UDP bursts had minimal source spoofing and used random source ports, which helped simplify traceback and facilitated provider enforcement.
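As a toy illustration only (this is not Azure's detection logic, and the thresholds and record format are assumptions), the kind of per-destination aggregation that flags such a flood looks roughly like this:

```python
# Toy illustration only (not Azure's detection logic): aggregate flow records per
# destination and flag volumetric UDP floods by packet rate and source-IP fan-in.
# Thresholds and the record format are assumptions for the example.
from collections import defaultdict

PPS_THRESHOLD = 1_000_000      # packets/sec per destination treated as anomalous (assumed)
SRC_THRESHOLD = 10_000         # distinct sources per destination treated as anomalous (assumed)

def detect_udp_floods(flows, window_s=1.0):
    """flows: iterable of (dst, src, protocol, packets) tuples observed in one time window."""
    packets = defaultdict(int)
    sources = defaultdict(set)
    for dst, src, proto, pkts in flows:
        if proto == "UDP":
            packets[dst] += pkts
            sources[dst].add(src)
    return [dst for dst in packets
            if packets[dst] / window_s > PPS_THRESHOLD and len(sources[dst]) > SRC_THRESHOLD]

# Synthetic example: 20,000 distinct sources hammering a single destination.
sample = [("victim-endpoint", f"src-{i}", "UDP", 9_000) for i in range(20_000)]
print(detect_udp_floods(sample))    # -> ['victim-endpoint']
```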
Attackers are scaling with the internet itself. As fiber-to-the-home speeds rise and IoT devices get more powerful, the baseline for attack size keeps climbing.
As we approach the upcoming holiday season, it is essential to confirm that all internet-facing applications and workloads are adequately protected against DDoS attacks. Additionally, do not wait for an actual attack to assess your defensive capabilities or operational readiness; conduct regular simulations to identify and address potential issues proactively.
In 1971, the philosophy department at Oxford University was confronted with an unusual student. One of the few vegetarians on campus, Peter Singer staged alarming demonstrations with papier-mâché chickens on Cornmarket Street. He petitioned to write his term paper on Karl Marx ("not a real philosopher" in the faculty's minds). He attended Radical Philosophy meetings, which set out to make philosophy more practical and less complacent, but grew impatient. He had more pressing concerns than splitting hairs on Althusser.
Singer was preoccupied by great suffering around the world—the plight of the persecuted, of refugees, and of victims of famine—and by his peers’ relative indifference to it. This was the landscape that inspired his famous thought experiment. Put simply, it asks if you were to walk past a child drowning in a shallow pond and the only cost to saving them is your clothes get wet, should you jump in? The question itself is not challenging, but Singer used it to make the radical claim that Westerners turn a blind eye to the drowning child each day we refuse to address global suffering. “If it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant,” Singer argues, “we ought, morally, to do it.”
This is the idea behind effective altruism, the philosophical movement centered around maximizing the impact of our resources for the greatest good. Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity. Since then, numerous organizations have sprung up to help Silicon Valley billionaires and broke college students alike find the most cost-effective ways to improve the world.
Effective altruists have often taken extreme measures to show their commitment: A couple adopts 20 neglected children at the expense of their other children's needs; a man stops doing the dishes so he can spend more time serving others (and he donates so generously that he ends up dumpster diving for his own food). Over its relatively short lifespan, E.A. has been touched by scandal. Before Sam Bankman-Fried was convicted in 2023 of stealing $8 billion from his customers, his strategy for doing good was amassing a cryptocurrency fortune, which he pledged to donate to causes such as pandemic preparedness and artificial intelligence safety. Effective altruism has been co-opted by techno-fascists and has developed offshoots that include pronatalists and cults.
How did the movement stray so far?
I spoke with philosopher David Edmonds, author of Death in a Shallow Pond, about the origins of effective altruism, the thought experiment that inspired it, and how it has transformed beyond its original aim.
This conversation has been edited for length and clarity.
Kate Mabus:
How did effective altruism transform from a group of Oxford philosophers experimenting with their personal finances into the highly coordinated and much-dissected movement it is today?
David Edmonds:
I don’t know whether it’s shifted that dramatically in culture. When I think of the two organizers, I would say Toby Ord was the more stereotypical academic of the two, whereas Will MacAskill always had activist instincts. So right from the beginning, the whole point was that they wanted to change the world for the better. It’s obviously open to debate to what extent they’ve done so, but that certainly was their ambition. To do that, they had to build a movement, and that involved building organizations. The first one was called Giving What We Can, and that encouraged people to give 10 percent of their salary away to good causes. There was another organization called 80,000 Hours designed to advise people on what they should do with their work life. If you are interested in doing the most good that you can, it may not be just the money that you can help out with. It may be how you devote your time. A number of other organizations sprang up under the effective altruism umbrella. So, right from the beginning, they were quite ambitious in what they wanted to achieve. That was about 15 years ago now, and it’s become a slicker operation with various major bumps along the road.
K.M.:
Why is Peter Singer important for understanding the intellectual underpinnings of effective altruism?
D.E.:
He's absolutely fundamental. Effective altruism kicks off about four decades or so after Peter Singer writes a famous article called "Famine, Affluence, and Morality." He has this thought experiment where you are to imagine that you are walking past a shallow pond and there's a child who's drowning. There's nobody else around, and you are about to save the child when you notice that you are wearing your most expensive shoes. Peter Singer asks, "Should you worry about that?" And everybody says, "That's a ridiculous question. Of course what you should do is save the child." Then Peter Singer makes this very contentious claim that those of us in the affluent West with spare resources are effectively walking past a shallow pond every day of our lives.
Most of us think that if somebody is in danger just around the corner from us, that should have greater moral weight for us than if somebody is in trouble on the other side of the world. In the past, there was nothing we could do about people in another country. Peter Singer says that’s just an evolutionary hangover, a moral error. The lives of people on the other side of the world are no less important than the lives of people just around the corner from us.
Ord and MacAskill read this famous article, written way back in 1971 when Singer was wondering what our obligations are to people in Bangladesh who were going through a terrible civil war, and they found it totally compelling. Effective altruism was born.
K.M.:
In the book, you apply utilitarianism to a number of thought experiments, which often leads to unsatisfactory conclusions. Would it not be better to simply follow our ethical intuition?
D.E.:
Opinions differ on this. There are plenty of people who are effective altruists who aren’t utilitarian and think that actually what we should do is devote a small part of our life to maximizing good, but for the rest of our time, we should be free to pursue other projects. Utilitarians, like Peter Singer, believe that what you should do is maximize well-being, minimize pain, and that is the ultimate arbiter of all actions. This can seem very demanding because every day there are things in the world to worry about and you must be committed to doing what you can to alleviate those problems instead of, say, looking after your family. There are obviously good evolutionary reasons why you would want to focus on your kids but, if you’re utilitarian, in effect, your children’s lives are no more important than other children’s lives.
K.M.:
A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world. Is this possible to reconcile?
D.E.:
The institutional critique of effective altruism is that Peter Singer’s solution to global poverty is very individualistic, essentially leaves power as it is, and doesn’t deal with the root cause of any of these problems. The real problems are structural. They are to do with the way societies are organized, power imbalances, things like corruption and the lack of accountability for politicians. Donating money like the effective altruists encourage is just sticking a plaster over the wound. It’s not actually tackling the causes of the injury. The effective altruists have various responses to that, but I think the most compelling one is this. They would say, “Well, what we believe is we should do the most good. If you can convince us that working for structural change—for example, lobbying politicians or supporting organizations trying to root out corruption—is the most effective use of our resources, then we have no ideological commitment to donating through charities.” So it’s not a difference about ideology or morality, it’s a practical and empirical difference about what they think is the most effective way of bringing about change.
K.M.:
In its early days, the movement seemed to be quite apolitical in its mission. Now it is increasingly impossible to keep philanthropy separate from politics. How might effective altruism change in response to this new political landscape?
D.E.:
You are right that politics has become increasingly polarized. I’m not an expert on why that’s happened, but I think it’s happened in the United States to a greater extent than in any other Western democracy. The implications for E.A. are entirely pragmatic ones. Their ultimate aim is not to get any particular party in power. Their ultimate aim is obviously the distribution of resources in a way that effectively improves people’s lives. Insofar as alignment with any particular political party undermines that objective, they would obviously do well to disengage from party politics. Already they’re not overtly party political. Surveys of those who’ve signed up to effective altruism show they tend to be, as you might expect, on the progressive side, but vary from being very centrist to moderately progressive … in European terms, which might be very left-wing [in] American terms. They’re not of one political persuasion, and they will probably want to remove themselves from the arena of politics just because it is so polarized. If you are associated with one side or the other, you alienate 50 percent of the American population.
K.M.:
The movement has been blighted by some major scandals in recent years. At one point in the book, you ask if these bad actors are a bug or a feature. Which is it?
D.E.:
I think it’s a bit of both. There has been a problem in that effective altruism began as a movement entirely focused on development and then evolved into various other areas. A couple of other areas that got very seriously interested included animal rights, but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program. I don’t know how strongly to put this … they may have had instincts which didn’t chime with the instincts of the initial crew who were thinking about those in desperate poverty and in need of inoculations and money for food and so on.
Part of it was a feature: It attracted a whole bunch of people whose values were not totally aligned with the original values. But part of it is also a bug: I wouldn’t exaggerate the role of people like Sam Bankman-Fried. He’d been one of the richest donors to effective altruism, and it turned out he’d been committing fraud. That was obviously terrible P.R. for effective altruism, not least because Will MacAskill encouraged Sam Bankman-Fried to go into commerce rather than work for a charity. He’d been part of this 80,000 Hours movement, as it were. I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals. I sort of hope and trust that going forward, there won’t be the same kind of catastrophes emerging; famous last words.
If you live in the United States today, and you accidentally knock a hole in your wall, it's probably cheaper to buy a flatscreen TV and stick it in front of the hole, compared to hiring a handyman to fix your drywall. (Source: Marc Andreessen.) This seems insane; why?
Well, weird things happen to economies when you have huge bursts of productivity that are concentrated in one industry. Obviously, it's great for that industry, because when the cost of something falls while its quality rises, we usually find a way to consume way more of that thing - creating a huge number of new jobs and new opportunities in this newly productive area.
But there's an interesting spillover effect. The more jobs and opportunities created by the productivity boom, the more wages increase in other industries, which at the end of the day all have to compete in the same labor market. If you can make $30 an hour as a digital freelance marketer (a job that did not exist a generation ago), then you won't accept less than that from working in food service. And if you can make $150 an hour installing HVAC for data centers, you're not going to accept less from doing home AC service.
This is a funny juxtaposition. Each of these phenomena has a name: there's Jevons Paradox, which means, "We'll spend more on what gets more productive", and there's the Baumol Effect, which means, "We'll spend more on what doesn't get more productive."
And both of them are top of mind right now, as we watch in awe at what is happening with AI Capex spend.
As today’s AI supercycle plays out, just like in productivity surges of past decades, we’re likely going to see something really interesting happen:
Some goods and services, where AI has relatively more impact and we’re able to consume 10x more of them along some dimension, will become orders of magnitude cheaper.
Other goods and services, where AI has relatively less impact, will become more expensive - and we’ll consume more of them anyway.
And, even weirder, we may see this effect happen within a single job:
Some parts of the job, automated by AI, will see 10x throughput at 10x the quality, while
Other parts of the job - the part that must be done by the human - will be the reason you're getting paid, command a wildly high wage, and be the target of regulatory protection.
Let’s dive in:
Chances are, you’ve probably seen a version of this graph at some point:
This graph can mean different things to different people: it can mean “what’s regulated versus what isn’t” to some, “where technology makes a difference” to others. And it’s top of mind these days, as persistent inflation and the AI investment supercycle both command a lot of mindshare.
To really understand it, the best place to start isn't with the red lines. It's with the blue lines: where are things getting cheaper, in ways that create more jobs, more opportunity, and more spending?
The original formulation of "Jevons paradox", by William Stanley Jevons in 1865, was about coal production. Jevons observed that, the cheaper and faster we got at producing coal, the more coal we ended up using - demand more than eclipsed the cost savings, and the coal market grew rapidly as it fed the Second Industrial Revolution in England and abroad.
Today we all know Moore's Law, the best contemporary example of Jevons paradox. In 1965, a transistor cost roughly $1. Today it costs a fraction of a millionth of a cent. This extraordinary collapse in computing costs – a billionfold improvement – did not lead to modest, proportional increases in computer use. It triggered an explosion of applications that would have been unthinkable at earlier price points. At $1 per transistor, computers made sense for military calculations and corporate payroll. At a thousandth of a cent, they made sense for word processing and databases. At a millionth of a cent, they made sense in thermostats and greeting cards. At a billionth of a cent, we embed them in disposable shipping tags that transmit their location once and are thrown away. The efficiency gains haven't reduced our total computing consumption: they've made computing so cheap that we now use trillions of times more of it.
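A quick back-of-the-envelope check of that claim, assuming a cost halving roughly every two years:

```python
# Sanity-check the scale of the decline, assuming cost per transistor halves about every two years.
years = 2025 - 1965
halvings = years / 2                          # ~30 halvings
decline = 2 ** halvings                       # ~1.07e9, i.e. roughly a billionfold
print(f"{decline:.2e}x cheaper -> ${1.0 / decline:.1e} per transistor")
# ~9.3e-10 dollars each, i.e. about a tenth of a millionth of a cent
```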
We're all betting that the same will happen with the cost of tokens, just like it happened to the cost of computing, which in turn unlocks more demand than can possibly be taken up by the existing investment. The other week, we heard Amin Vahdat, VP and GM of AI and Infrastructure at Google Cloud, share an astonishing observation with us: that 7-year-old TPUs were still seeing 100% utilization inside Google.
That is one of the things you see with Jevons Paradox: the opportunity to do productive work explodes in possibility. We are at the point in the technology curve with AI where every day someone figures out something new to do with these models, meaning users will take any chip they can get, and use it productively.
Jevons Paradox (which isn't really a paradox at all; it's just economics) is where demand creation comes from, and where new kinds of attractive jobs come from. And that huge new supply of viable, productive opportunity is our starting point to understand the other half of our economic puzzle: what happens everywhere else.
Agatha Christie once wrote that she never thought she’d be wealthy enough to own a car, or poor enough to not have servants. Whereas, after a century of productivity gains, the average American middle-class household can comfortably manage a new car lease every two years, but needs to split the cost of a single nanny with their neighbors.
How did this happen? 100 years after Jevons published his observation on coal, William Baumol published a short paper investigating why so many orchestras, theaters, and opera companies were running out of money. He provocatively asserted that the String Quartet had become less productive, in "real economy" terms, because the rest of the economy had become more productive, while the musicians' job stayed exactly the same. The paper struck a nerve, and became a branded concept: "Baumol's Cost Disease".
This is a tricky concept to wrap your head around, and not everyone buys it. But the basic argument is, over the long run all jobs and wage scales compete in the labor market with every other job and wage scale. If one sector becomes hugely productive, and creates tons of well-paying jobs, then every other sector's wages eventually have to rise, in order for their jobs to remain attractive for anyone.
The String Quartet is an odd choice of example, because there are so many ways to argue that music has become more productive over the past century: recorded and streaming music have brought consumption costs down to near zero, and you could argue that Taylor Swift is "higher quality" for what today's audiences are looking for (even if you deplore the aesthetics.) But the overall effect is compelling nonetheless. As some sectors of the economy get more attractive, the comparatively less attractive ones get more expensive anyway.
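Here's a toy two-sector sketch of the mechanism (the 3% annual productivity growth rate is just an assumption): when wages track the productive sector, the stagnant sector's relative price rises even though nothing about it changed.

```python
# Toy two-sector model: the "progressive" sector gets 3% more productive each year
# (an assumed rate), wages in both sectors track it, and the "stagnant" sector's
# unit cost rises even though nothing about its own work changed.
GROWTH = 0.03
YEARS = 50

productivity = 1.0   # output per worker in the progressive sector
wage = 1.0           # economy-wide wage, set by the progressive sector

for _ in range(YEARS):
    productivity *= 1 + GROWTH
    wage *= 1 + GROWTH

price_progressive = wage / productivity   # unit labor cost where productivity rose
price_stagnant = wage / 1.0               # unit labor cost where it did not
print(f"after {YEARS} years: progressive {price_progressive:.2f}, "
      f"stagnant {price_stagnant:.2f} ({price_stagnant / price_progressive:.1f}x relative increase)")
```

After 50 years the progressive sector's unit cost is unchanged while the stagnant sector's has more than quadrupled - the string quartet didn't get worse, everything around it got better.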
Once you’ve heard of Baumol’s, it’s like you get to join a trendy club of economic thinkers who now have someone to blame for all of society’s problems. It gets to be a successful punching bag for why labor markets are weird, or why basic services cost so much - “It’s a rich country problem.”
But the odd thing about Baumol's is how rarely it is juxtaposed with the actual driving cause of those productivity distortions, which is the massive increase in productivity, in overall wealth, and in overall consumption, that's required for Baumol's to kick in. In a weird way, Jevons is necessary for Baumol's to happen.
For some reason, we rarely see those two phenomena juxtaposed against each other, but they're related. For the Baumol Effect to take place as classically presented, there must be a total increase in productive output and opportunity; not just a relative increase in productivity, from the booming industry and the new jobs that it creates. But when that does happen, and we see a lot of consumption, job opportunities, and prosperity get created by the boom, you can safely bet that Baumol's will manifest itself in faraway corners of the economy.
This isn't all bad; it's how wealth gets spread around and how a rising tide lifts many boats. (There's probably a joke here somewhere that Baumol's Cost Disease is actually the most effective form of Communism ever tried, or something.)
So, to recap:
"Jevons-type effects" created bountiful new opportunity in everything that got more productive; and
"Baumol-type effects" mean that everything that didn't get more productive got more expensive anyway, but we consume more of it all the same because society as a whole got richer.
As explained in one job: our explosion of demand for data centres means there's infinite work for HVAC technicians. So they get paid more (even though they themselves didn't change), which means they charge more on all jobs (even the ones that have nothing to do with AI), but we can afford to pay them (because we got richer overall, mostly from technology improvements, over the long run). Furthermore, the next generation of plumber apprentices might decide to do HVAC instead; so now plumbing is more expensive too. And so on.
Now let's think about what's going to happen with widespread AI adoption, if it pays off the way we all think it will. First of all, it's going to drive a lot of productivity gains in services specifically. (There is precedent for this; e.g. the railroads made the mail a lot more productive; the internet made travel booking a lot more productive.)
Some services are going to get pulled into the Jevons vortex, and just rapidly start getting more productive, and unlocking new use cases for those services. (The key is to look for elastic-demand services, where we plausibly could consume 10x or more of the service, along some dimension. Legal services, for example, plausibly fit this bill.)
And then there are other kinds of services that are not going to be Jevons'ed, for some reason or another, and for those services, over time, we should expect to see wildly high prices for specific services that have no real connection to AI whatsoever. Your dog walker has nothing to do with AI infrastructure; and yet, he will cost more. But you'll pay it anyway, if you love your dog.
The last piece of this economic riddle, which we haven’t mentioned thus far, is that elected governments (who appoint and direct employment regulators) often believe they have a mandate to protect people’s employment and livelihoods. And the straightforward way that mandate gets applied, in the face of technological changes, is to protect human jobs by saying, “This safety function must be performed or signed off by a human.”
When this happens (which it certainly will, across who knows how many industries), we'll see a Baumol's-type effect take hold within single jobs.
Here's Dwarkesh, on his recent interview with Andrej Karpathy (excerpted in full, because it's such an interesting thought):
“With radiologists, I’m totally speculating and I have no idea what the actual workflow of a radiologist involves. But one analogy that might be applicable is when Waymos were first being rolled out, there’d be a person sitting in the front seat, and you just had to have them there to make sure that if something went really wrong, they’d be there to monitor. Even today, people are still watching to make sure things are going well. Robotaxi, which was just deployed, still has a person inside it.
Now we could be in a similar situation where if you automate 99% of a job, that last 1% the human has to do is incredibly valuable because it’s bottlenecking everything else. If it were the case with radiologists, where the person sitting in the front of Waymo has to be specially trained for years in order to provide the last 1%, their wages should go up tremendously because they’re the one thing bottlenecking wide deployment. Radiologists, I think their wages have gone up for similar reasons, if you’re the last bottleneck and you’re not fungible. A Waymo driver might be fungible with others. So you might see this thing where your wages go up until you get to 99% and then fall just like that when the last 1% is gone. And I wonder if we’re seeing similar things with radiology or salaries of call center workers or anything like that.”
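One way to see why that last 1% gets so valuable is an Amdahl's-law-style calculation (my framing, not Karpathy's): however fast the automated 99% gets, overall throughput is capped by the human step.

```python
# Amdahl's-law-style view of the "last 1%": automate 99% of a job arbitrarily well
# and overall throughput is still capped by the remaining human step.
def overall_speedup(automated_fraction, automation_speedup):
    human_share = 1 - automated_fraction
    return 1 / (human_share + automated_fraction / automation_speedup)

for s in (10, 100, 1_000_000):
    print(f"automation speedup {s:>9,}x -> whole-job speedup {overall_speedup(0.99, s):.1f}x")
# Even with effectively infinite automation, the job tops out near 100x,
# so the human 1% is the binding constraint (and gets priced accordingly).
```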
Just like we have really weird economies in advanced countries (where we can afford supercomputers in our pockets, but not enough teachers for small class sizes), we could see a strange thing happen where the last 1% that must be a human in a job (the "Dog Walker" part, as opposed to the "Excel" part) becomes the essential employable skillset.
In an interesting way, this hints at where Baumol’s will finally run out of steam - because at some point, these “last 1% employable skills” are no longer substitutable for one another. They’ll become strange vestigial limbs of career paths, in a sense. We have a ways to go until we get there, but we can anticipate some very strange economic & political alliances that could form in such a world. Until then, let’s keep busy on the productivity part. Because that’s what matters, and what makes us a wealthy society - weird consequences and all.
Views expressed in “posts” (including podcasts, videos, and social media) are those of the individual a16z personnel quoted therein and are not the views of a16z Capital Management, L.L.C. (“a16z”) or its respective affiliates. a16z Capital Management is an investment adviser registered with the Securities and Exchange Commission. Registration as an investment adviser does not imply any special skill or training. The posts are not directed to any investors or potential investors, and do not constitute an offer to sell — or a solicitation of an offer to buy — any securities, and may not be used or relied upon in evaluating the merits of any investment.
The contents in here — and available on any associated distribution platforms and any public a16z online social media accounts, platforms, and sites (collectively, “content distribution outlets”) — should not be construed as or relied upon in any manner as investment, legal, tax, or other advice. You should consult your own advisers as to legal, business, tax, and other related matters concerning any investment. Any projections, estimates, forecasts, targets, prospects and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Any charts provided here or on a16z content distribution outlets are for informational purposes only, and should not be relied upon when making any investment decision. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, posts may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein. All content speaks only as of the date indicated.
Under no circumstances should any posts or other information provided on this website — or on associated content distribution outlets — be construed as an offer soliciting the purchase or sale of any security or interest in any pooled investment vehicle sponsored, discussed, or mentioned by a16z personnel. Nor should it be construed as an offer to provide investment advisory services; an offer to invest in an a16z-managed pooled investment vehicle will be made separately and only by means of the confidential offering documents of the specific pooled investment vehicles — which should be read in their entirety, and only to those who, among other requirements, meet certain qualifications under federal securities laws. Such investors, defined as accredited investors and qualified purchasers, are generally deemed capable of evaluating the merits and risks of prospective investments and financial matters.
There can be no assurances that a16z’s investment objectives will be achieved or investment strategies will be successful. Any investment in a vehicle managed by a16z involves a high degree of risk including the risk that the entire amount invested is lost. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by a16z is available here:
https://a16z.com/investments/
. Past results of a16z’s investments, pooled investment vehicles, or investment strategies are not necessarily indicative of future results. Excluded from this list are investments (and certain publicly traded cryptocurrencies/ digital assets) for which the issuer has not provided permission for a16z to disclose publicly. As for its investments in any cryptocurrency or token project, a16z is acting in its own financial interest, not necessarily in the interests of other token holders. a16z has no special role in any of these projects or power over their management. a16z does not undertake to continue to have any involvement in these projects other than as an investor and token holder, and other token holders should not expect that it will or rely on it to have any particular involvement.
With respect to funds managed by a16z that are registered in Japan, a16z will provide to any member of the Japanese public a copy of such documents as are required to be made publicly available pursuant to Article 63 of the Financial Instruments and Exchange Act of Japan. Please contact
compliance@a16z.com
to request such documents.
For other site terms of use, please go
here
. Additional important information about a16z, including our Form ADV Part 2A Brochure, is available at the SEC’s website: http://www.adviserinfo.sec.gov
Cities Panic over Having to Release Mass Surveillance Recordings
Yves here. BWAHAHA. There is so little good news on the mass surveillance that every win ought to be celebrated. And the precedent here is important if it holds. The cities affected by the setback to Flock using license plate reading as a pretext for pervasive visual data hauling seem not to be willing to bet on an appeal succeeding. Enjoy the schadenfreude of the rapid retreat. If we are really lucky, Flock will suffer irreparable financial damage.
By Thomas Neuburger. Originally published at
God’s Spies
Image from Flock Safety, the product’s manufacturer
This is a tale of
Flock cameras
, something you may never have heard about. Flock cameras are sold to the gullible and the complicit as simple “license plate readers.” Flock cameras are designed to watch cars. For safety, of course. Because crime. But they are much more.
Flock Safety, a fast-growing startup that helps law enforcement find vehicles from fixed cameras, has released a slew of new features meant to make it easier for users to locate vehicles of interest.
Overall, the moves push the company’s software in the direction of giving police the ability to search for vehicles using whatever cameras are at their disposal — a security camera at an ATM, a homeowner’s Ring doorbell, even a photo somebody took on their cellphone. The company’s new Advanced Search package — which costs between $2,500 and $5,000 a year, depending on how many of Flock Safety’s cameras the agency operates — includes a feature that allows users to upload a picture of a vehicle from any source and then perform a search to see if any of the company’s cameras have seen it.
It doesn’t just search for license plates, either. The company has designed its software to recognize vehicle features such as paint color, type of vehicle and distinguishing features such as roof racks.
The tell is in the name: Flock
Safety
. Because “keeping you safe” is the reason for every intrusion. As one police-oriented site
puts it
(note: “you” here is the cops):
7/10 crimes are committed with the use of a vehicle. Capture the vehicle details you need to track leads and solve crime. Flock Safety’s patented Vehicle Fingerprint™ technology lets you search by vehicle make, color, type, license plate, state of the license plate, missing plate, covered plate, paper plate, and unique vehicle details like roof racks, bumper stickers, and more.
The reach is stunning in breadth. Flock captures everything it sees. Everything. Not just vehicles. People. Everything.
Think that’s a problem? So does a Washington state judge, who ruled that the sweep is so great that
its data is a public record
. Public means open to all.
That freaked out so many towns that the company is starting to lose contracts.
Across the United States, thousands of automated license plate readers quietly watch the roads. Some ride along in
police cruisers
[note: unrelated link, but a helluva story]
, others perch on telephone poles or hang above intersections, clicking away as cars glide past. They record everything in sight, regardless of who’s behind the wheel.
It’s a vast, largely invisible network, one that most people never think twice about until it makes the news.
Well, it turns out that those pictures are public data, according to a judge’s recent ruling. And almost as soon as the decision landed, local officials scrambled to shut the cameras down.
The tale behind the case is interesting:
The ruling stems from a civil case involving the Washington cities of Sedro-Woolley and Stanwood. Both sued to block public records requests filed by Oregon resident Jose Rodriguez. He works in Walla Walla and sought to access the images as part of a broader inquiry into government surveillance.
Judge Elizabeth Yost Neidzwski sided with Rodriguez, concluding that the data “do qualify as public records subject to the Public Records Act.”
The decision immediately led both cities to deactivate their Flock systems. Flock cameras are mounted along public roadways and continuously photograph passing vehicles, including occupants, regardless of whether any crime is suspected.
Concerns about privacy are central to the case. City attorneys, defending against Rodriguez’s suit, said releasing the data would compromise the privacy of innocents. But they saw no problem with the
government
keeping the same data.
Privacy for Me, Surveillance for Everyone Else
This gets us to the central problem of today’s surveillance state. No one running the cameras wants to be observed. One reason that city officials object to releasing Flock data, for example, must be that they themselves are among the recorded. The cameras are on them too; they too can be tracked. Everything means
everything
for these everywhere cameras.
The rich want to hide their crimes (hello, Mr. Epstein’s friends). ICE wants to mask its thugs. Billionaires think you have no business in their affairs.
Masked and hooded. ICE agents looking for victims in Chicago IL (
source
)
Yet they want to have every right to be
deep into yours
. Look at the ICE agents above. Then consider that one of the uses of Flock is to
help ICE do what it does
by stripping the whole world naked as much as it can.
Or consider the trick used by cities like Eugene OR to
hide the Flock cameras from view
so they could record without being observed.
Or that Congress had no problem at all with domestic spying, until
they were the spied upon
. Here Feinstein makes, ahem, the constitutional argument.
We (
@emmatyping
,
@eclips4
) propose introducing the Rust programming language to CPython. Rust will initially only be allowed for writing optional extension modules, but eventually will become a required dependency of CPython and allowed to be used throughout the CPython code base.
Motivation
Rust has established itself as a popular, memory-safe systems programming language in use by a large number of projects. Memory safety is a property of a programming language which disallows out-of-bounds reads and writes to memory, as well as use-after-free errors. Rust’s safety properties are enforced by an ownership model, which ensures that memory accesses are valid. Rust’s memory safety guarantees have been formally proven by the
RustBelt project
for code that does not use “unsafe”. By adopting Rust, entire classes of bugs, crashes, and security vulnerabilities can be eliminated from code.
Due to Rust’s ownership model, the language also enforces thread safety: data races are prevented at compile time. With free-threaded Python becoming officially supported and more popular, ensuring the standard library is thread safe becomes critical. Rust’s strong thread safety guarantees would ease reasoning around multi-threaded code in the CPython code base.
CPython has historically encountered numerous bugs and crashes due to invalid memory accesses. We believe that introducing Rust into CPython would reduce the number of such issues by disallowing invalid memory accesses by construction. While there will necessarily be some unsafe operations interacting with CPython’s C API to begin with, implementing the core module logic in safe Rust would greatly reduce the amount of code which could potentially be unsafe.
Rust also provides “zero-cost”, well-designed implementations of common data structures such as Vectors, HashMaps, Mutexes, and more. Zero cost in this context means that these data structures allow implementations to use higher-level constructs with the performance of hand-rolled implementations in other languages. The documentation for
the Rust standard library
covers these data structures very thoroughly. By having these zero-cost, high-level abstractions we expect Rust will make it easier to implement performance-sensitive portions of CPython.
Rust additionally enables principled meta-programming through its macro system. Rust has two types of macros: declarative and procedural. Declarative macros in Rust are hygienic, meaning that they cannot unintentionally capture variables, unlike in C. This means it is much easier to reason about macros in Rust compared to C. Procedural macros are a way to write Rust code which does token transformations on structure definitions, functions, and more. Procedural macros are very powerful, and are used by PyO3 to ergonomically handle things like argument parsing, class definitions, and module definitions.
Finally, Rust has an excellent build system. Rust uses
the Cargo package manager
, which handles acquiring dependencies and invoking
rustc
(the Rust compiler driver). Cargo supports vendoring dependencies out of the box, so that any Rust dependencies do not need to be downloaded (see open questions about dependencies). Furthermore, Rust has easy to set up cross-compilation which only requires installing the desired target and ensuring the proper linker is available.
In summary, Rust provides many extremely useful benefits that would improve CPython development. Increasing memory safety would be a significant improvement in and of itself, but it is far from the only benefit Rust provides.
Implementation
The integration of Rust would begin by adding Rust-based extension modules to the “Modules/” directory, which contains the C extensions for the Python standard library. The Rust modules would be optional at build time, depending on whether the build environment has Rust available.
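As a purely illustrative sketch (not the proposal's actual build glue), the "optional at build time" behaviour amounts to something like the following check in the build tooling; the module list and helper below are hypothetical:

```python
import shutil
import subprocess

RUST_MODULES = ["_base64_rs"]  # hypothetical list of Rust-backed extension modules

def build_rust_modules() -> None:
    # If cargo is not on PATH, skip the Rust modules entirely; the build still succeeds.
    if shutil.which("cargo") is None:
        print("cargo not found; skipping optional Rust extension modules")
        return
    for module in RUST_MODULES:
        # Each module would live in its own crate under Modules/, built in release mode.
        subprocess.run(["cargo", "build", "--release", "-p", module], check=True)

if __name__ == "__main__":
    build_rust_modules()
```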
Integrating Rust with the CPython C API requires foreign function interface (FFI) definitions in Rust to describe the APIs available. A new crate (a library in Rust terminology)
cpython-sys
will be introduced to handle these FFI definitions. To automate the process of generating Rust FFI bindings, we use
bindgen
. Bindgen is not only an official Rust project, but also used ubiquitously throughout the Rust ecosystem to bind to C APIs, including in the Linux and Android projects. Bindgen uses libclang at build time to read C headers and automatically generate Rust FFI bindings for the current target. Unfortunately, due to the use of C macros to define some constants and methods, the
cpython-sys
crate will also need to replicate a few important APIs like
PyObject_HEAD_INIT
manually. However these should all be straightforward to replicate and few in number.
With the C API bindings available in Rust, contributors can call the FFI definitions to interact with CPython. This will necessarily introduce some unsafe Rust code. However extension modules should be written such that there is a minimal amount of unsafe at the edges of FFI boundaries.
Eventually safe abstractions to simplify and standardize code like module definitions, function argument parsing, and class definitions could be adopted to reduce the amount of raw FFI code written.
A reference implementation
which includes a proof-of-concept
_base64
module which uses Rust to provide a speedup to
base64
is available.
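Since the Rust module is optional, the pure-Python path would presumably keep working without it, along the lines of the accelerator-module pattern already used in the standard library. A hedged sketch of that consumer-side pattern, with a hypothetical function name:

```python
# Hypothetical pattern: prefer the Rust-backed accelerator when the interpreter
# was built with Rust available, and fall back to pure Python otherwise.
try:
    from _base64 import b64encode_fast  # hypothetical Rust-backed function
except ImportError:
    b64encode_fast = None

import base64

def encode(data: bytes) -> bytes:
    if b64encode_fast is not None:
        return b64encode_fast(data)
    return base64.b64encode(data)
```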
Distribution
Rust supports
all platforms which CPython supports
and
many more as well
. Rust’s tiers are slightly different, and include information on whether host tools (such as
rustc
and
cargo
) are provided. Here are all of the PEP 11 platforms and their corresponding tiers for Rust:
In summary, every platform Python supports is supported by Rust at tier 2 or higher, and host tools are provided for every platform other than those where Python is already cross-compiled (e.g. WASI and mobile platforms).
While CPython could depend on PyO3 for safe abstractions over CPython C APIs, this may not provide the flexibility desired. If a new API is added to the C API, it would need to be added to PyO3, then the version of PyO3 would need to be updated in CPython. This is a lot of overhead and would slow down development. Using bindgen, new APIs are automatically exposed to Rust.
Keep Rust Always-Optional
Rust could provide many benefits to the development of CPython such as increased memory safety, increased thread safety, and zero-cost data structures. It would be a shame if these benefits were unavailable to the core interpreter implementation permanently.
Open Questions
How should we manage dependencies?
By default cargo will download dependencies which aren’t already cached locally when
cargo build
is invoked, but perhaps we should vendor these? Cargo has built-in support for vendoring code. We could also
cargo fetch
to download dependencies at any other part of the build process (such as when running configure).
How to handle binding Rust code to CPython’s C API?
The MVP currently uses bindgen, which requires libclang at build time and a number of other dependencies. We could pre-generate the bindings for all supported platforms, which would remove the build-time requirement on vendoring bindgen and all of its dependencies (including libclang) for those platforms.
When should Rust be allowed in non-optional parts of CPython?
Given the numerous advantages Rust provides, it would be advantageous to eventually introduce Rust into the core part of CPython, such as the Python string hasher, SipHash. However, requiring Rust is a significant new platform requirement. Therefore, we propose a timeline of:
In Python 3.15,
./configure
will start emitting warnings if Rust is not available in the environment. Optional extension modules may start using Rust
In Python 3.16,
./configure
will fail if Rust is not available in the environment unless
--with-rust=no
is passed. This will ensure users are aware of the requirement of Rust on their platform in the next release
In Python 3.17, Python may require Rust to build
We choose this timeline as it gives users several years to ensure that their platform has Rust available (most should) or otherwise plan for the migration. It also ensures that users are aware of the upcoming requirement. We hope to balance allowing time to migrate to Rust with ensuring that Rust can be adopted swiftly for its many benefits.
What about Rust itself requiring Python to build?
Rust could always ensure their bootstrap script is compatible with older versions of Python. Then the process is to build an older version of Python, build Rust, then build a newer version of CPython. The bootstrap script is currently compatible with Python 2, so this seems likely to continue to be the case
Rust could use PyPy to bootstrap
Rust could drop their usage of Python in the bootstrap process
I (
@emmatyping
) plan to reach out to the Rust core team and ask what their thoughts are on this issue.
What about platforms that don’t support Rust?
Rust supports all platforms enumerated in PEP 11, but people run Python on other operating systems and architectures. Reviewing all of the issues labeled
OS-unsupported
in the CPython issue tracker, we found only a few cases where Rust would not be available:
HPPA/PA-RISC: This is an old architecture from HP with the last released hardware coming out in 2005 and a community of users maintaining a Linux fork. There is no LLVM support for this architecture.
RISC OS: This is a community maintained version of an operating system created by Acorn Computers. There’s no support in Rust for this OS.
PowerPC OS X: This older OS/architecture combination has a community of users running on PowerBooks and similar. There is no support in Rust for this OS/architecture combination, but Rust has PowerPC support for Linux.
CentOS 6: Rust requires glibc 2.17+, which is only available in CentOS 7. However, it is unlikely users on a no-longer-supported Linux distribution will want the latest version of CPython. Even if they do, CPython would have a hard time supporting these platforms anyway.
How should current CPython contributors learn/engage with Rust portions of the code base?
Current contributors may need to interact with the Rust bindings if they modify any C APIs, including internal APIs. This process can be well covered in the devguide, and there are many great resources to learn Rust itself.
The Rust book
provides a thorough introduction to the Rust programming language. There are many other resources in addition, such as
Rust for C++ programmers
and the official learning resources
Learn Rust - Rust Programming Language
.
To ease this process, we can introduce a Rust experts team on GitHub who can be pinged on issues to answer questions about interacting with the API bindings. Furthermore, we can add a Rust tutorial focused on Rust’s usage in CPython to the devguide.
Obviously, contributing to any extension module written in Rust will require knowledge of Rust.
What about Argument Clinic?
Argument Clinic
is a great tool that simplifies the work of anyone writing C functions that require argument processing. We see two possible approaches for implementing it in Rust:
Adapt the existing Argument Clinic to parse Rust comments using the same DSL as in C extensions, and generate Rust function signatures.
Create a Rust procedural macro capable of parsing a similar DSL. This approach might allow it to be used by any third-party package, whereas the C-based Argument Clinic does not guarantee compatibility with third-party extensions.
Using a proc macro would allow for better IDE integration and could become useful to 3rd party extension authors.
Should the CPython Rust crates be supported for 3rd-party use? Should there be a Rust API?
Having canonical Rust crates for bindings to CPython’s C API would be advantageous, but the project is ill-prepared to support usage by 3rd-parties at this time. Thus we propose deferring making Rust code supported for 3rd-party use until a later date.
Microsoft: Azure hit by 15 Tbps DDoS attack using 500,000 IP addresses
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 17:13:15
Microsoft said today that the Aisuru botnet hit its Azure network with a 15.72 terabits per second (Tbps) DDoS attack, launched from over 500,000 IP addresses. [...]...
Microsoft said today that the Aisuru botnet hit its Azure network with a 15.72 terabits per second (Tbps) DDoS attack, launched from over 500,000 IP addresses.
The attack used extremely high-rate UDP floods that targeted a specific public IP address in Australia, reaching nearly 3.64 billion packets per second (bpps).
"The attack originated from Aisuru botnet. Aisuru is a Turbo Mirai-class IoT botnet that frequently causes record-breaking DDoS attacks by exploiting compromised home routers and cameras, mainly in residential ISPs in the United States and other countries,"
said
Azure Security senior product marketing manager Sean Whalen.
"These sudden UDP bursts had minimal source spoofing and used random source ports, which helped simplify traceback and facilitated provider enforcement."
Cloudflare
linked the same botnet
to a record-breaking 22.2 terabits per second (Tbps) DDoS attack that reached 10.6 billion packets per second (Bpps) and was mitigated in September 2025. This attack lasted only 40 seconds but was roughly equivalent to streaming one million 4K videos simultaneously.
One week earlier, the XLab research division of Chinese cybersecurity company Qi'anxin attributed another 11.5 Tbps DDoS attack to the
Aisuru botnet
, saying that it was controlling around 300,000 bots at the time.
The botnet targets security vulnerabilities in IP cameras, DVRs/NVRs, Realtek chips, and routers from T-Mobile, Zyxel, D-Link, and Linksys. As XLab researchers said, it suddenly ballooned in size in April 2025 after its operators breached a TotoLink router firmware update server and infected approximately 100,000 devices.
Infosec journalist Brian Krebs
reported
earlier this month that Cloudflare removed multiple domains linked to the Aisuru botnet from its public "Top Domains" rankings of the most frequently requested websites (based on DNS query volume) after they began overtaking legitimate sites, such as Amazon, Microsoft, and Google.
The company stated that Aisuru's operators were deliberately flooding Cloudflare's DNS service (1.1.1.1) with malicious query traffic to boost their domain's popularity while undermining trust in the rankings. Cloudflare CEO Matthew Prince also confirmed that the botnet's behavior was severely distorting the ranking system and added that Cloudflare now redacts or completely hides suspected malicious domains to avoid similar incidents in the future.
As Cloudflare revealed in its 2025 Q1 DDoS Report in April, it
mitigated a record number of DDoS attacks
last year, with a 198% quarter-over-quarter jump and a massive 358% year-over-year increase.
In total, it blocked 21.3 million DDoS attacks targeting its customers throughout 2024, as well as another 6.6 million attacks targeting its own infrastructure during an 18-day multi-vector campaign.
For help understanding this article or how you can implement auth and similar security architectures in your services, feel free to reach out to me via the
community server
.
One of the most massive AWS incidents transpired on
October 20th
. The long story short is that the DNS for DynamoDB was impacted for
us-east-1
, which created a health event for the entire region. It's the worst incident we've seen in a decade.
Disney+
,
Lyft
,
McDonald's
,
New York Times
,
Reddit
, and the
list goes on
were lining up to claim their share of the spotlight too. And we've been watching, because our product is part of our customers' critical infrastructure. This one graph of the event says it all:
The AWS
post-incident report
indicates that at 7:48 PM UTC DynamoDB had
"increased error rates"
. But this article isn't about AWS, and instead I want to share
how exactly we were still up when AWS was down.
Now you might be thinking:
why are you running infra in us-east-1?
And it's true, almost no one should be using us-east-1, unless, well, of course, you are us. And that's because we end up running our infrastructure where our customers are. In theory, practice and theory are the same, but in practice they differ. And if our (or your) customers chose
us-east-1
in AWS, then realistically, that means you are also choosing us-east-1 😅.
During this time, us-east-1 was offline, and while we only run a limited amount of infrastructure in the region, we have to run it there because we have customers who want it there. And even without a direct dependency on
us-east-1
, there are critical services in AWS — CloudFront, Certificate Manager, Lambda@Edge, and IAM — that all have their control planes in that region. Attempts to create distributions or roles at that time were also met with significant issues.
Since there are plenty of articles in the wild talking about
what actually happened
,
why it happened
, and
why it will continue to happen
, I don't need to go into it here. Instead, I'm going to share a dive about exactly what we've built to avoid these exact issues, and what you can do for your applications and platforms as well. In this article, I'll review how we maintain a high SLI to match our SLA
reliability
commitment even when the infrastructure and services we use don't.
Before I get to the part where I share how we built one of the most reliable
auth solutions
available, I want to define reliability. And for us, that's an SLA of five nines. I think that's so extraordinary that the question I want you to keep in mind through this article is:
is that actually possible?
Is it really achievable to have a service with a five nines SLA? When I say five nines, I mean that 99.999% of the time, our service is up and running as expected by our customers. And to put this into perspective, the red, in the sea of blue, represents just how much time we can be down.
And if you can't see it, it's hiding inside this black dot. It amounts to just five minutes and 15 seconds per year. This pretty much means we have to be up all the time, providing responses and functionality exactly as our customers expect.
To put it into perspective, it's important to share for a moment, the specific challenges that we face, why we built what we built, and of course why that's relevant. To do that, I need to include some details about what we're building — what
Authress actually does
. Authress provides login and access control for the software applications that you write — It generates JWTs for your applications. This means:
User authentication and authorization
User identities
Granular role and resource-based authorization (ReBAC, ABAC, TBAC, RBAC, etc...)
API keys for your technical customers to interact with your own APIs
Machine to machine authentication, or services — if you have a microservice architecture.
Audit trails to track the permission changes within your services or expose this to your customers.
And there are of course many more components that help complete the full auth platform, but they aren't totally relevant to this article, so I'm going to skip over them.
With that, you may already start to be able to see why uptime is so critical for us.
We're on the critical path for our customers
. It's not inherently true for every single platform, but it is for us. So if our solution is down, then our customer applications are down as well.
If we put the reliability part in the back corner for one second and just think about the features, we can theorize about a potential initial architecture. That is, an architecture that just focuses on the features: how might you build this out as simply as possible? I want to do this so I can explain all the issues that we would face with the simple solution.
Maybe you've got a single region, and in that region you have some sort of HTTP router that handles requests and they forward to some compute, serverless, container, or virtual machine, or, and I'm very sorry for the scenario — if you have to use bare metal. Lastly, you're interacting with some database, NoSQL, SQL, or something else, file storage, and maybe there's some async components.
If you take a look at this, it's probably obvious to you (and everyone else) that there is no way it is going to meet our reliability needs. But we have to ask, just exactly how often will there actually be a problem with this architecture? Just building out complexity doesn't directly increase reliability, we need to focus on why this architecture would fail. For us, we use AWS, so I look to the Amazon CTO for guidance, and he's famously quoted as saying,
Everything fails all the time
.
And AWS's own services are no exception to this. Over the last decade, we've seen numerous incidents:
2024 - Global - Network connectivity - STS Service
2024 - Virginia - Message size overflow - Kinesis down - Lambda, S3, ECS, CloudWatch, Redshift
2025 - Virginia - DynamoDB DNS - DynamoDB down - All Services
And any one of these would have caused major problems for us and therefore our customers. And the frequency of incidents is actually increasing over time. This shouldn't be a surprise, right? Cloud adoption is increasing over time. The number of services AWS is offering is also increasing. But how impactful are these events? Would a single one of them have been a problem for us in actually reaching our SLA promise? What would happen if we just trusted AWS and used that to pass through our commitments? Would it be sufficient to achieve 99.999% SLA uptime? Well, let's take a look.
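To make that concrete with purely illustrative numbers (actual published SLAs vary by service and change over time), stacking just a few dependencies at roughly 99.95% and 99.99% multiplies out to far less than five nines:

$$0.9995 \times 0.9995 \times 0.9999 \approx 0.9989$$

That is roughly 99.89%, or around nine and a half hours of allowable downtime per year, against a budget of five minutes and 15 seconds.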
Okay, so when it comes to trusting AWS SLAs, it isn't sufficient. At. All.
We can't just use the components that are offered by AWS, and go from there. We fundamentally need to do something more than that. So the question becomes, what exactly must a dependency's reliability be in order for us to utilize it? To answer that question, it's time for a math lesson. Or more specifically, everyone's favorite topic,
probabilities
.
Let's quickly get through this
torture
exercise. Fundamentally, you have endpoints in your service: an HTTP request comes in, it interacts with some third-party component or API, and then you write the result to a database. For us, this could be an integration such as
logging in with Google
or with
Okta
for our customers' enterprise customers.
So if we want to meet a 5-nines reliability promise, how unreliable could this third-party component actually be? What happens if this component out of the box is only 90% reliable? We'll design a strategy for getting around that.
Uptime is a product of all of the individual probabilities:
For the sake of this example, we'll just assume that every other component in our architecture is 100% reliable — That's every line of code, no bugs ever written in our library dependencies, or transitive library dependencies, or the dependencies' dependencies' dependencies, and everything always works exactly as we expect.
So we can actually rewrite our uptime promise as a result of the failure rate of that third-party component.
And the only way that we can actually increase the success rate of the uptime based off of failures is to retry. And so we can multiply out the third-party failure rate and retry multiple times.
Logically that makes a lot of sense. When a component fails, if you retry again, and again, the likelihood it will be down every single time approaches zero. And we can generate a really nasty equation from this to determine exactly how many times we need to retry.
How many exactly? Rather than guessing whether we should retry four times or five times, or put it in a
while(true)
loop, we can figure it out exactly. So we take this equation and extend it out a little bit. Plugging in our 90% reliable third-party component:
We find that our retry count actually must be greater than or equal to five. We can see that this adds up to our uptime expectation:
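One way to spell out the arithmetic, assuming independent attempts and letting k be the total number of tries against the 90%-reliable component:

$$U = \prod_i p_i \ge 0.99999 \;\Rightarrow\; 1 - (1 - 0.9)^{k} \ge 0.99999 \;\Rightarrow\; 0.1^{k} \le 10^{-5} \;\Rightarrow\; k \ge 5$$

Five tries leave a failure probability of exactly 10^-5, which is the whole five-nines budget.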
Is this the end of the story? Just retry a bunch of times and you're good? Well, not exactly. Remember this equation?
We do really need to consider every single component that we utilize. And specifically when it comes to the third-party component, we had to execute it by utilizing a retry handler. So we need to consider the addition of the retry handler into our equation. Going back to the initial architecture, instead of what we had before, when there's a failure in that third-party component, now we will automatically execute some sort of asynchronous retries or in-process retries. And every time that third-party component fails, we execute the retry handler and retry again.
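In code, the in-process variant of this is just a bounded retry wrapper; a minimal sketch, with the attempt count coming from the math above rather than a guess:

```python
import time

def call_with_retries(operation, max_attempts=5, backoff_seconds=0.1):
    """Invoke `operation` up to `max_attempts` times, backing off between failures."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as error:  # in practice, catch only retryable error types
            last_error = error
            time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff between tries
    raise last_error
```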
This means we need to consider the reliability of that retry handler.
Let's assume we have a really reliable retry handler and that it's even more reliable than our service. I think that's reasonable, and actually required. A retry handler that is less reliable than our stated SLA by default is just as faulty as the third-party component.
Let's consider one with five and a half nines — that's half a nine more reliable than our own SLA.
But how reliable does it really need to be? Well, we can pull in our original equation and realize that our total uptime is the retried reliability of the third-party component multiplied by the reliability of our retry handler.
From here, we add in the retries to figure out what the result should be:
We have a reliable retry handler, but it's not perfect. And with a retry handler that has reliability of five and a half nines, we can retry
a maximum two times
. Because remember, it has to be reliable every single time we utilize it, as it is a component which can also fail. Which means we're left with this equation:
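Spelling out the constraint, with the handler at five and a half nines (99.9995%) and required to succeed on every invocation:

$$r_h^{\,n} \ge 0.99999 \;\Rightarrow\; n \le \frac{\ln 0.99999}{\ln 0.999995} \approx 2, \qquad n_{\text{required}} = 5 > 2 = n_{\text{allowed}}$$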
I don't think it comes as a surprise to anyone that, in fact, five is greater than two. What is the implication here?
The number of retries required for that unreliable third-party component to be utilized by us exceeds the number of retries actually allowed by our retry handler.
That's a failure: the retry handler can only retry twice before it itself violates our SLA, but we need to retry five times in order to bring the third-party component's reliability up. We can actually figure out what the minimum reliability of a third-party component is allowed to be when using our retry handler:
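One way to arrive at the figure quoted just below, assuming the whole failure budget must be met within the two tries the handler safely allows:

$$(1 - r_{3})^{2} \le 10^{-5} \;\Rightarrow\; r_{3} \ge 1 - 10^{-2.5} \approx 99.7\%$$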
Which in turn validates that it's actually impossible for us to utilize that component.
99.7%
.
99.7%
is the minimum allowed reliability for any third-party component in order for us to meet our required 5-nines SLA. This third-party component is so unreliable (
~90%
), that even using a highly reliable retry handler, we still can't make it reliable enough without the retry handler itself compromising our SLA. We fundamentally need to consider this constraint when we're building out our architecture.
That means we drop this third-party component. Done.
And then, let's assume we get rid of every flaky component, everything that doesn't have a high enough reliability for us. At this point, it's good to ask: is this sufficient to achieve our 5-nines SLA? Well, it isn't just third-party components we have to be concerned about. We also have to be worried about AWS infrastructure failures.
So let's flashback to our initial architecture again:
We can have issues at the database layer, right? There could be any number of problems here. Maybe it's returning 500s, there are some slow queries, maybe things are timing out. Or there could be a problem with our compute. Maybe it's not scaling up fast enough, or we're not getting new infrastructure resources. Sometimes even AWS is out of bare metal machines when you don't reserve them and instead request them on demand, and the list goes on.
Additionally, there could also be some sort of network issue, where requests aren't making it through to us, or a user's request even throws a DNS resolution error.
In many of these cases, I think the answer is obvious. We just have to declare the whole region as down. And you are probably thinking, well, this is where we failover to somewhere else. No surprise, yeah, this is exactly what we do:
However, this means we have to have all the data and all the infrastructure components duplicated to another region in order to do this. And since
Authress
has
six primary regions
around the world, that also means we need multiple backup regions to be able to support the strategy. But this comes with significant wasted resources and wasted compute that we're not even getting to use. Costly! But I'll get to that later.
Knowing a redundant architecture is required is a great first step, but that leaves us having to solve for:
how do we actually make the failover happen in practice?
Simply put — our strategy is to utilize DNS dynamic routing. This means requests come into our DNS and it automatically selects between one of two target regions, the primary region that we're utilizing or the failover region in case there's an issue. The critical component of the infrastructure is to switch regions during an incident:
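In Route 53 terms, that switch is typically expressed as a pair of failover record sets tied to a health check. A hedged boto3 sketch, with placeholder zone, record, and health check identifiers:

```python
import boto3

route53 = boto3.client("route53")

def configure_failover(zone_id, name, primary_target, secondary_target, primary_health_check_id):
    """Create PRIMARY/SECONDARY failover records; Route 53 serves the secondary
    record whenever the primary's health check reports unhealthy."""
    def record(set_id, role, target, health_check_id):
        record_set = {
            "Name": name,                       # e.g. "api.example.com" (placeholder)
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Failover": role,                   # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        }
        if health_check_id:
            record_set["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": record_set}

    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [
            record("primary", "PRIMARY", primary_target, primary_health_check_id),
            record("secondary", "SECONDARY", secondary_target, None),
        ]},
    )
```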
We know how we're gonna do it, but the long pole in the tent is actually knowing that there is even a problem in the first place. A partial answer is to say
Have a health check
, so of course there is a health check here. But the full answer is: have a health check that validates both of the regions, checking whether each region is up or having an incident, and reporting the results to the DNS router.
We could be utilizing the default provided handler from AWS Route 53 or a third-party component which pings our website, but that's not accurate enough from the standpoint of knowing correctly, and for certain, that our services are in fact down.
It would be devastating for us to fail over when a secondary region is having worse problems than our primary region. Or what if there's an issue with network traffic? We wouldn't know if that's an issue of communication between AWS's infrastructure services, or an issue with the default Route 53 health check endpoint, or some entangled problem with how those specifically interact with the code that we're actually utilizing. So it became a requirement to build something ourselves, custom, to actually execute exactly what we need to check.
Here is a representation of what we're doing. It's not exactly what we are doing, but it's close enough to be useful. Health check requests come in from the Route 53 Health Check. They call into our APIGW or Load Balancer as a router. The requests are passed to our compute, which can interact with and validate logic, code, access, and data in the database:
On request, the health check executes code that allows us to validate whether the region is in fact healthy (a sketch follows the steps below):
We start a profiler to know how long our requests are taking.
Then we interact with our databases, as well as validate some secondary components, such as SQS. While issues with secondary components may not always be a reason to failover, they can cause impacts to response time, and those indicators can be used to predict incoming incidents.
From there, we check whether or not the most critical business logic is working correctly. In our case, that's interactions with DynamoDB as well as core authorizer logic. Compared to a simple unit test, this accounts for corruption in a deployment package, as well as instances where some subtle differences between regions interact with our code base. We can catch those sorts of problems here, know that the primary region we're utilizing, one of the six, is having a problem, and automatically update the DNS based on this.
When we're done, we return success or failure so the health check can track changes.
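Tying those steps together, a minimal Lambda-style sketch; the table, queue, key, and check names are placeholders rather than real internals:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")
sqs = boto3.client("sqs")

TABLE_NAME = "placeholder-authorization-table"  # placeholder name
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/placeholder-queue"  # placeholder

def core_logic_is_healthy() -> bool:
    # Placeholder for exercising real business logic (e.g. an authorization check
    # against known fixture data) rather than just pinging infrastructure.
    return True

def handler(event, context):
    started = time.monotonic()
    try:
        # Primary dependency: can we actually read from the regional database?
        dynamodb.get_item(TableName=TABLE_NAME,
                          Key={"pk": {"S": "health-check"}},
                          ConsistentRead=True)
        # Secondary dependency: queue attributes as an early-warning signal.
        sqs.get_queue_attributes(QueueUrl=QUEUE_URL,
                                 AttributeNames=["ApproximateNumberOfMessages"])
        if not core_logic_is_healthy():
            raise RuntimeError("core authorizer logic failed validation")
    except Exception:
        # Route 53 treats a non-2xx response as an unhealthy region.
        return {"statusCode": 500, "body": "unhealthy"}
    elapsed_ms = (time.monotonic() - started) * 1000
    return {"statusCode": 200, "body": f"healthy in {elapsed_ms:.0f}ms"}
```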
And we don't stop here with our infrastructure failover, however. The current strategy is good, and in some cases even sufficient, but it isn't great. For starters, we have to fail over completely. If there's just one component that's problematic, we can't just swap that one out easily; it's all or nothing with the Route 53 health check. So when possible, we push for an edge-optimized architecture. In AWS, this means utilizing
AWS CloudFront
with AWS Lambda@Edge for compute. This not only helps reduce latency for our customers and their end users depending on where they are around the world; as a secondary benefit, it is fundamentally an improved failover strategy.
And that looks like this:
Using CloudFront gives us a
highly reliable CDN
, which routes requests to the locally available compute region. From there, we can interact with the local database. When our database in that region experiences a health incident, we automatically failover, and check the database in a second adjacent region. And when there's a problem there as well, we do it again to a third region. We can do that because when utilizing DynamoDB we have
Global Tables
configured for authorization configuration. In places where we don't need the data duplicated, we just interact with the table in a different region without replication.
After a third region with an issue,
we stop.
And maybe you're asking why three and not four or five or six? Aren't you glad we did the probabilities exercise earlier? Now you can actually figure out why it's three here. But, I'll leave that math as an exercise for you.
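A hedged sketch of that cascading read, assuming Global Tables replicate the data and using a made-up region ordering and table name:

```python
import boto3
from botocore.config import Config

# Made-up preference order; in practice this would be derived from the local region.
REGION_FALLBACK_ORDER = ["us-east-1", "us-east-2", "us-west-2"]
FAST_FAIL = Config(connect_timeout=1, read_timeout=2, retries={"max_attempts": 1})

def get_record(table_name, key):
    """Try the local region first, then up to two adjacent regions, then give up."""
    last_error = None
    for region in REGION_FALLBACK_ORDER[:3]:
        client = boto3.client("dynamodb", region_name=region, config=FAST_FAIL)
        try:
            return client.get_item(TableName=table_name, Key=key)
        except Exception as error:  # narrow to availability/throttling errors in practice
            last_error = error
    raise last_error  # after the third region, stop and surface the failure
```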
As a quick recap, this handles the problems at the infrastructure level and with third-party components. And if we solve those, is that sufficient for us to achieve our goal of a 5-nines SLA?
For us the answer is
No
, and you might have guessed, if you peeked at the scrollbar or table of contents, that there are still quite a few additional components integrated into our solution. One of them comes from knowing that at some point, there's going to be a bug in our code, unfortunately.
And that bug will get committed to production, which means we're going to end up with an application failure. It should be obvious that it isn't achievable to write completely bug-free code. Maybe there is someone out there that thinks that, and maybe even that's you, and I believe you that you believe that. However, I know it's not me, and realistically, I don't want to sit around and pray that it's also my fellow team members. The risk is too high, because in the case something does get into production, that means it can impact some of our customers. So instead, let's assume that will happen and design a strategy around it.
So when it does happen, we of course have to trigger our incident response. For us, we send out an email, we post a message on our community and internal communication workspaces, and start an on-call alert. The technology here isn't so relevant, but tools like AWS SES, SQS, SNS, Discord, and emails are involved.
Incidents wake an engineer up, so someone can start to take a look at the incident, and most likely the code.
But by the time they even respond to the alert, let alone actually investigate and fix the cause of the incident, we would have long since violated our SLA. So an alert is not sufficient for us. We need to also implement automation to automatically remediate any of these problems. Now, I'm sure you're thinking,
yeah, okay, test automation
. You might even be thinking about an LLM agent that can automatically create PRs. (Side note: LLM code generation doesn't actually work for us, and I'll get to that a little further down.) Instead, we have to rely on having sufficient testing in place. And yes, of course we do. We test before deployment. There is no better time to test.
This seems simple and an obvious answer, and I hope that for anyone reading this article it is. Untested code never goes to production. Every line of code is completely tested before it is merged, even if it is enabled behind some flag. Untested code is never released; it is far too dangerous, and abusing feature flags to sneak it into production could not be a worse decision for us. And that's because we need to be as confident as possible before those changes actually get out in front of our customers. The result is — we don't focus on test coverage percentage, but rather
test value
. That is, which areas provide most value, that are most risky, that we care about being the most reliable for our customers. Those are the ones we focus on testing.
Every incident could have been prevented if we just had one more test.
The trick though is actually having that right test, before the incident.
And in reality, that's not actually possible. Having every right test for a service that is constantly changing, while new features are being added, is just unmaintainable. Every additional test we write increases the maintenance burden of our service. Attempting to achieve 100% complete test coverage would require an infinite amount of time. This is known as the
Pareto Principle
, more commonly the 80-20 rule. If it takes 20% of the time to deliver 80% of the tests, it takes an infinite amount of time to achieve all the tests, and that assumes that the source code isn't changing.
The result is we'll never be able to catch everything.
So we can't just optimize for prevention. We also need to optimize for recovery.
For us, this conclusion also means implementing tests against our deployed production code. One example of this is validation tests.
A validation test is where you have some data in one format and data in another format and you use those two different formats to ensure referential consistency. (Side note: There are many different kinds of tests, and I do a deep dive in
the different types of tests
and how they're relevant in building secure and reliable systems). One concrete example could be you have a request that comes in, you end up logging the request data and the response, then you can compare that logged data to what's actually saved in your database.
In our scenario, which focuses on the authorization and permissions enforcement checks, we have multiple databases with similar data. In one case, there's the storage of permissions as well as the storage of the expected checks and the audit trail tracking the creation of those permissions. So we actually have multiple opportunities to compare the data between our databases asynchronously outside of customer critical path usage.
On a schedule, via an AWS CloudWatch Scheduled Rule, we load the data from our different databases and compare them against each other to make sure they are consistent. If there is a problem, this fires off an incident before any of our customers notice, so that we can actually go in and check what's going on.
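A hedged sketch of such a scheduled comparison job, with invented table names and a deliberately simplified notion of "consistent":

```python
import boto3

dynamodb = boto3.resource("dynamodb")
permissions_table = dynamodb.Table("placeholder-permissions")  # invented name
audit_table = dynamodb.Table("placeholder-audit-trail")        # invented name

def validate_consistency(record_ids):
    """Compare the permission store against the audit trail for a sample of records."""
    mismatches = []
    for record_id in record_ids:
        permission = permissions_table.get_item(Key={"id": record_id}).get("Item")
        audit = audit_table.get_item(Key={"id": record_id}).get("Item")
        # A record present in one store but not the other, or with diverging fields,
        # is a discrepancy worth alerting on before a customer ever notices it.
        if (permission is None) != (audit is None):
            mismatches.append(record_id)
        elif permission and audit and permission.get("access") != audit.get("access"):
            mismatches.append(record_id)
    if mismatches:
        raise RuntimeError(f"consistency check failed for: {mismatches}")
```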
On the surface, it sounds bad that this could ever happen. But the reality of the situation is that a discrepancy can show up as a result of any number of mechanisms. For instance, the infrastructure from AWS could have corrupted one of the database shards, and what is written to the databases is inconsistent. We know that this can happen as there is no 100% guarantee on database durability, even from AWS.
AWS does not guarantee Database Durability
. Are you assuming they do? Because we don't! So actually reading the data back and verifying its internal consistency is something that we must do.
While it might not seem that this could reduce the probability of there being an incident, consider that a requested user permission check whose result doesn't match our customer's expectation is an incident. It might not always be one that anyone identifies or even becomes aware of, but it is nonetheless a problem, just like a publicly exposed S3 bucket is technically an issue even if no one has exfiltrated the data yet; that doesn't mean the bucket is sufficiently secured.
There are two parts to the actual risk of an incident: the probability and the impact. Everything I've discussed in this article until now talks about reducing the probability of an incident, that is — the likelihood of it happening. But since we know that we can't avoid ever having an incident, we also have to reduce the impact when it happens.
One way we do that is by utilizing an
incremental rollout
. Hopefully everyone knows what incremental rollout is, so I'll instead jump straight into how we accomplish it utilizing AWS. And for that we focus again on our solution integrating with CloudFront and our edge architecture.
The solution for us is what I call
Customer Deployment Buckets
. We bucket individual customers into separate buckets and then deploy to each of the buckets sequentially. If the deployment rolls out without a problem and it's all green, that is, everything works correctly, then we go on to the second bucket and deploy our code there, then the third bucket, and so on until every single customer has the new version.
If there is an issue, we stop the rollout and we go and investigate what's actually going on. While we can't prevent the issue from happening to the earlier buckets, we are able to stop that issue from propagating to more customers, having an impact on everyone, and thus reduce the impact of the incident.
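As a rough sketch of the idea (not Authress's actual tooling), assigning customers to buckets can be as simple as a stable hash, with the rollout loop promoting the next bucket only while the previous one stays green. The deploy and is_green callbacks are hypothetical hooks into your deployment pipeline and health checks (CodeDeploy, CloudWatch alarms, and so on).

import hashlib

NUM_BUCKETS = 4  # illustrative; the real bucket count is a product decision

def deployment_bucket(customer_id: str) -> int:
    # Stable assignment of a customer to a rollout bucket.
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def incremental_rollout(version: str, deploy, is_green):
    # Deploy bucket by bucket; stop and investigate as soon as a bucket isn't green.
    for bucket in range(NUM_BUCKETS):
        deploy(version, bucket)
        if not is_green(bucket):
            raise RuntimeError(f"Rollout of {version} halted at bucket {bucket}")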
As I mentioned before, the biggest recurring issue isn't executing an operations process during an incident; it's identifying that there is a real incident in the first place. So,
How do we actually know that there's an issue?
If it were an easy problem to solve, you would have written a unit test or
integration test or service level test
and thus already discovered it, right? So adding tests can't, by design, help us. Maybe there's an issue with the deployment itself or during infrastructure creation, but likely that's not what's happening.
Now, I know you're thinking,
When is he going to get to AI?
Whether or not we'll ever truly have AI is a separate
<rant />
that I won't get into here, so this is the only section on it, I promise. What we actually do is better called
anomaly detection.
Historically, anomaly detection was what AI always meant: true AI, rather than an LLM or an agent of any kind.
You might notice that it's not tracking 400s or 500s, which are relatively easy to detect but don't actually tell us anything meaningful about what's wrong with our service or whether there really is a problem. Impact is measured by business value, not protocol-level analytics, so we need a business-focused metric.
And for us, at Authress, the business-focused metric we use to identify meaningful incidents is what we call:
The Authorization Ratio
. That is the ratio of successful logins and authorizations to ones that are blocked, rejected, timeout or are never completed for some reason.
The above CloudWatch metric display contains this exact ratio, and here in this timeframe represents an instance not too long ago where we got really close to firing off our alert.
Here, there was a slight elevation of errors soon after a deployment. The ratio was outside of our allowed band for a short period of time, though not long enough to trigger an incident. We still investigated, but it wasn't something that required immediate remediation. It's a good reminder that identifying problems in any production software isn't straightforward. To achieve high reliability, we've needed AI, or in this case anomaly detection, to identify additional problems. And realistically, even with this level of sophistication in place, we still can never know with 100% certainty that there is actually an incident at any moment. That's because "what is an incident" is actually a philosophical question...
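For the curious, CloudWatch has built-in anomaly detection that can express exactly this kind of band-based alerting. A minimal sketch, assuming a hypothetical custom metric called AuthorizationRatio in a hypothetical Authress/Metrics namespace (the actual names and thresholds will differ):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the ratio falls below the band learned by anomaly detection.
cloudwatch.put_metric_alarm(
    AlarmName="authorization-ratio-anomaly",
    ComparisonOperator="LessThanLowerThreshold",
    EvaluationPeriods=3,           # require a sustained deviation, not a momentary blip
    DatapointsToAlarm=3,
    ThresholdMetricId="band",
    Metrics=[
        {
            "Id": "ratio",
            "MetricStat": {
                "Metric": {"Namespace": "Authress/Metrics", "MetricName": "AuthorizationRatio"},
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": True,
        },
        {
            "Id": "band",
            "Expression": "ANOMALY_DETECTION_BAND(ratio, 2)",  # band of 2 standard deviations
            "ReturnData": True,
        },
    ],
)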
Our anomaly detection said – almost an incident, and we determined the result – no incident. But does that mean there wasn't an incident? What makes an incident, how do I define an incident? And is that exact definition ubiquitous, for every system, every engineer, every customer?
Obviously not, and one look at the
AWS Health Status Dashboard
is all you need to determine that the identification of incidents is based on subjective perspective, rather than objective criteria. What's actually more important is the synthesis of our perspective on the situation and what our customers believe. To see what I mean, let's do a comparison:
I'm going to use Authress as an example. So I've got the product services perspective on one side and our customer's perspective on the other.
In the top left corner we have alignment. If we believe that our system is up and working and our customers do, too, then success, all good. Everything's working as expected.
Conversely, in the opposite corner, maybe there is a problem. We believe that one of our services is having an issue, and we're able to identify it. Most importantly, our customers say: yes, there is an issue for us.
It's not great that there's an incident, but as I've established, incidents will absolutely happen, and the fact that we and our customers have independently aligned on the problem's existence allows us to deploy automation to automatically remediate the issue. That's a success! If it's a new problem that we haven't seen before, we can even design new automation to fix it. Correctly identifying incidents is challenging, so doing that step correctly lends itself very well to automated remediation.
One interesting corner is when our customers believe that there's nothing wrong, there have been no incidents reported, but all our alerts are saying –
RED ALERT
— someone has to go look at this!
In this case, our alerts have identified a problem that no one cares about. This often happens when our customers are regional rather than global. Take, for example, a health care, manufacturing, or e-commerce app serving only local users in Switzerland: those users are likely asleep at 2:00 AM. That means an incident at that moment could be an issue affecting some customers, but if they aren't around to experience it, is it actually happening?
You are probably wincing at that idea. There's a bug, it must be fixed! And sure that's a problem, it's happening and we should take note of what's going on. But we don't need to respond in real time. That's a waste of our resources where we could be investing in other things. Why wake up our engineers based on functionality that no one is using?
I think one of the most interesting categories is in the top right-hand corner, where our customers believe there's an issue but our own monitoring says everything is fine.
This can happen for any number of reasons. Maybe there is something in our knowledge base that tells our customers to do something one way, it's confusing, and they've interpreted it in a different way. So there's a different expectation, and that expectation can get codified into customer processes and product services.
Or maybe our customer is running different tests from us, ones that are of course, valuable for their business, but not ones that we consider. Or more likely they are just using a less resilient cloud provider.
Most fundamentally, there could really be an incident, something that we haven't detected yet, but they have. And if we don't respond to that, it could grow, and left unchecked, escalate, and eventually impact all our customers. This means we need to give our customers an easy way to report incidents to us, which we can immediately follow up with.
For us, every single incident, every single customer support ticket that comes into our platform, is immediately and directly sent to our engineering team. Now, I often get pushback on this from other leaders. I'm sure even you might be thinking something like —
I don't want to be on call for customer support incidents.
But if you throw additional tiers in your organization between your engineering teams and your customers, that means you're increasing the time to actually start investigating and resolving those problems. If you have two tiers before your engineering team and each tier has its own SLA of 10 minutes to triage the issue, that means you've already gone through 20 minutes before an engineer even knows about it and can go and look at it. That violates our SLA by fourfold before investigation and remediation can even begin.
Instead, in those scenarios, what I actually recommend thinking about is how you might reduce the number of support tickets you receive in aggregate. That is the much more appropriate way to look at the problem. If you are getting support tickets that don't make sense, then you've got to investigate:
why did we get this ticket?
Do the root cause analysis on the ticket, not just the issue mentioned in it — why the ticket was even created in the first place.
A ticket means: Something is broken. From there, we can figure out, OK, maybe we need to improve our documentation. Or we need to change what we're doing on one of our endpoints. Or we need to change the response error message we're sending. But you can always go deeper.
And going deeper means customer support is critical for us. We consider customer support to be the lifeline of our service level agreement (SLA). If we didn't have that advantage, we might not have been able to deliver on our commitment at all. So much so that we report some of our own CloudWatch custom metrics to our customers so they can have an aggregate view of both what they know internally and what we believe. We do this through our own internal dashboard in our application management UIs.
Helping our users identify incidents benefits us, because we can't catch everything. It's just not possible.
To this point, we've done the math on reliability of third-party components. We've implemented an automatic region failover and added incremental rollout. And we have a core customer support focus. Is that sufficient to achieve 5-nines of reliability?
If you think yes, then you'd expect the meme pictures now. And, I wish I could say it was enough, but it's not. That's because we also have to deal with negligence and malice.
We're in a privileged position to have numerous security researchers out there on the internet constantly trying to find vulnerabilities within our service. For transparency, I have some of those reports I want to share:
I am a web security researcher enthusiast. Do you give a monetary reward?
Okay, this isn't starting out that great. What else have we received?
I found some vulnerabilities in your website. Do you offer rewards for ethical hackers?
Well, maybe, but I think you would actually need to answer for us, what the problem actually is. And you also might notice this went to our spam. It didn't even get to our inbox. So a lot of help they might be providing. Actually we ignore any
”security”
email sent from a non-custom domain.
This one was really interesting. We had someone attempting to phish our engineering team by creating a support ticket and putting in some configuration trying to get us to provide them our own credentials to one of our third-party dependencies. Interestingly enough, our teams don't even have access to those credentials directly.
And we know this was malicious because the credentials they referenced in the support request are from our honeypot, placed in our UI explicitly to catch these sorts of things. The only way to get these credentials is to dig around our UI application and pull them out of the HTML; they aren't readily available any other way. So it was very easy for us to detect that this “report” was actually a social engineering attack.
And this is one of my favorites, and I can't make this up:
I have found many security loophole. How much will you pay if you want to working with me like project?
That's the exact quote; I don't even know what that means. Unfortunately, LLMs will start to make these "vulnerability reports" sound more appealing to read in the future, for better or worse. At the end of the day, though, the truth is that these are harmless. And we actually do have a
security disclosure program
that anyone can go and submit problems through. The message to white-hat hackers is: please use that process; the legitimate reports usually do go through it. Do not send us emails; those are going to go into the abyss. Alternatively, you can follow our
security.txt
public page or go to the disclosure form, but with email, the wrong people are going to get that and we can't triage effectively.
Vulnerabilities in our services can result in production incidents for our customers. That means security is part of our SLA. Don't believe me? I'll show you how:
It's relevant here that Authress is a multitenant solution, so some of the resources within our service are in fact shared between customers.
Additionally, customers could have multiple services in a microservice architecture or multiple components. And one of these services could theoretically consume all of the resources that we've allocated for that customer. In that scenario, that would cause an incident for that customer. So we need to protect against resource exhaustion
Intra-Tenant
. Likewise, we have multiple customers, and one of those customers could be consuming more resources than we've allocated to their entire tenant. That could cause an incident across
Inter-Tenant
boundaries, impacting our platform and other customers.
Lastly, we have to be worried about our customers, our customers' customers, and our customers' customers' customers, because any one of those could be malicious and consume their resources and so on and so forth, thus causing a cascading failure.
A failure due to lack of resources is an incident
. The only solution that makes sense for this is, surprise, rate limiting.
So we need to rate-limit these requests at different levels for different kinds of clients, different kinds of users, and we do that within our architecture, at different fundamental levels within our infrastructure.
Primarily there are protections at our compute level, as well as at the region level, and we also place protections at a global level. In AWS, this of course means using a
web application firewall or WAF
. I think our WAF configuration is interesting and in some ways novel.
The reputation list is a list of IP addresses that have been associated with malicious activity observed outside of our service, across other AWS customers and other providers around the world where a problem has been detected. That means before those attacks even get to our service or to our customers' instances of Authress, we already know to block them, and the WAF does that. This is great, and most importantly, it has a very low false positive rate.
However, the false positive rate is an important metric when evaluating countermeasures against malicious attacks or negligent, accidental abuse of resources, and it is what prevents us from using any of the other managed rules from AWS or external providers. There are two fundamental problems with managed rules:
Number one is the false positive rate. If that rate is even a little bit too high, it isn't sustainable, because it would result in us blocking legitimate requests coming from a customer. That is a problem, and it's an incident for them if some of their users can't use their software because of something we did. False positives are customer incidents.
The second one is that managed rules are gratuitously expensive. Lots of companies are building these just to charge you lots of money, and the ROI just doesn't seem to be there. We don't see useful blocks from them.
But the truth is, we need to do something more than just the reputation list rule.
And the thing that we've decided to do is add blocking for sufficiently high request rates. By default, if any Authress account's service client goes above 2,000 requests per second (RPS), we just immediately terminate those requests. Now, this isn't every customer, as there are some that do require such a high load or even higher (2k isn't that high). But for the majority of them, if they get to this number and they haven't talked to us about their volume, then it is probably malicious in some way. You don't magically go from zero to 2,000 one day, unless it is an import job.
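As a sketch of what such a rule can look like in AWS WAFv2 (illustrative only: the names are made up, it aggregates by IP for simplicity where a real setup might key on a customer identifier, and WAF rate-based rules count requests over a rolling window rather than per second, so 2,000 RPS over the default 5-minute window corresponds to a limit of roughly 600,000):

rate_limit_rule = {
    "Name": "terminate-excessive-request-rates",   # hypothetical rule name
    "Priority": 10,
    "Statement": {
        "RateBasedStatement": {
            # WAF counts requests per aggregation key over a rolling 5-minute window,
            # so ~2,000 requests/second corresponds to a limit of ~600,000.
            "Limit": 600000,
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,   # these metrics can drive the milestone alerts described next
        "MetricName": "ExcessiveRequestRate",
    },
}
# This dict is what you would place in the Rules list of a wafv2 update_web_acl call.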
Likewise, we can actually learn about a problem long before it gets to that scale. We have milestones, and we start reporting loads from clients at 100, 200, 500, 1,000, et cetera. If we see clients hitting these load milestones, we can already start to respond and create an incident for us to investigate before they reach a point where they're consuming all of the resources in our services for that customer. And we do this by adding alerts on the COUNT of requests for WAF metrics.
However, we also get attacks at a smaller scale. Just because we aren't being DDoS-ed doesn't mean there isn't an attack, and those requests will still get through because they don't meet our blocking limits. They could be malicious in nature, but only identifiable in aggregate. So while a single request might seem fine, if you see the same request 10 times a second, 100 times a second, something is probably wrong. Or if you have request URLs that end in
.php?admin
, when no one has run WordPress in decades, you also know that there's a problem. We catch these by logging all of the blocked requests.
We have automation in place to query those results and update our rules, but a picture is worth a thousand words:
Here you can see a query based on the IP addresses being used by the client, sorted by frequency. When we get requests that look non-malicious individually, we execute a query such as this one and check whether the results match a pattern. You can match on IP address or, more intelligently, on the JA3 or JA4 fingerprints of those requests. There are actually lots of options available; I'm not going to get into exactly what they are, but there are some
great articles on the topic
. There are more mechanisms used throughout the security industry to track these, and utilizing them lets you instantly identify:
Hey, you know what? This request violates one of our patterns, maybe we should block all the requests from that client.
And so, rather than waiting for an attacker to get to the point where they're consuming 2,000 requests per second worth of resources, you can stop them right away. In the cases where we can't make a conclusive decision, this technology gives us another tool we can use to improve our patterns for the future. Maybe it goes without saying, but because we're running our technology in many regions around the world, we have to deploy this infrastructure in all of those places and push it out to the edge where possible.
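To make the blocked-request analysis above a bit more concrete, here's a minimal sketch using CloudWatch Logs Insights over WAF logs. The log group name is hypothetical and assumes WAF logging is pointed at CloudWatch Logs; grouping by JA3 fingerprint only works if that field is present in your logs.

import boto3, time

logs = boto3.client("logs")

start = logs.start_query(
    logGroupName="aws-waf-logs-example",          # hypothetical log group name
    startTime=int(time.time()) - 3600,            # the last hour
    endTime=int(time.time()),
    queryString=(
        'filter action = "BLOCK" '
        "| stats count(*) as requests by httpRequest.clientIp "   # or by ja3Fingerprint, if logged
        "| sort requests desc | limit 20"
    ),
)

# Poll until the query status is "Complete", then inspect the top offenders.
results = logs.get_query_results(queryId=start["queryId"])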
I said a lot of things, so I want to quickly summarize the architecture that we have in place:
Third-party component reliability reviews
. I can't stress this enough. Don't just assume that you can use something. Sometimes, in order to achieve 5-nines, you actually have to remove components from your infrastructure; some things just can't be used no matter what. Maybe you can move them to some sort of async background process, but they can't be on the critical path for your endpoints.
DNS failover and health checks.
For places where you have an individual region or availability zone or cluster, having a full backup with a way to conclusively determine what's up and automatically failover is critical.
Edge compute where possible
. There's a whole network of services running on top of the cloud providers which help you run as close as possible to where your users are and reduce latency.
Incremental rollout
for when you want to reduce the impact as much as possible.
The
Web Application Firewall
for handling those malicious requests.
Having a
Customer Support Focus
to enable escalating issues that are outside your area of detection.
And through the seven years or so that we've been doing this and building up this architecture, there are a couple of things that we've learned:
Everything fails all the time. There absolutely will be failures everywhere. Every line of code, every component you pull in, every library: there's guaranteed to be a problem in each and every one of those, and you will for sure have to deal with it at some point. So being prepared to handle that situation is something you have to think through in your design.
DNS. Yeah, AWS will say it, everyone out there says it, and now we get to say it too. The global DNS architecture is pretty good and reliable for a lot of scenarios, but I worry that it's still a single point of failure in a lot of ways.
The last thing is infrastructure-as-code challenges. We deploy primary regions, but then there are also the backup regions, which are slightly different from the primary regions, and then there is edge compute, which is, again, slightly different still. And sometimes we do this ridiculous thing where we deploy infrastructure dedicated to a single customer. In all of these cases, we're running some sort of IaC to deploy those resources.
It is almost exactly the same architecture. Almost! Because it isn't exactly the same, there are plenty of opportunities for problems to sneak in. That's a challenge even with OpenTofu or CloudFormation, and often these tools make it more difficult, not less. And good luck to you if you're still using something else that hasn't been modernized; with those, it's even easier to run into problems and not get it exactly right.
The last thing I want to leave you with is, well,
With all of these, is that actually sufficient to achieve five nines?
No. Our commitment is 5-nines; what we do is in defense of that commitment. Just because you do all these things doesn't automatically mean your promise of 5-nines is guaranteed. And you know what, you too can promise a 5-nines SLA without doing anything. You'll likely break your promise, but for us our promise is important, and so this is our defense.
info
For help understanding this article or how you can implement auth and similar security architectures in your services, feel free to reach out to me via the
community server
.
Israeli-founded app preloaded on Samsung phones is attracting controversy
For years, Samsung has shipped its Galaxy M, F, and A series smartphones in India with a little-known app called
AppCloud
. Despite its name, AppCloud isn’t a cloud storage service. It’s essentially an app-installer that surfaces third-party app recommendations during device setup.
On new Galaxy devices in these lineups, AppCloud appears as part of the initial onboarding and forces users to choose whether they want to install certain apps before setup can be completed. You can postpone this by choosing the “later” option, but the app continues to push a persistent notification until you finish the selection process or disable it entirely.
For most users, AppCloud has long been regarded as little more than nuisance bloatware, a side effect of Samsung’s need to generate revenue beyond hardware margins while competing with aggressive Chinese smartphone brands in India.
But findings by the non-profit SMEX
from earlier this year
suggest AppCloud may not be as harmless as once assumed.
AppCloud expansion into Asian and African markets has sparked scrutiny
Since 2022, Samsung has also been preloading AppCloud on its A and M series phones in several West Asian and North African (WANA) markets. This rollout has triggered privacy concerns due to AppCloud’s ties to ironSource, a company founded in Israel and now owned by US-based Unity.
While AppCloud can be disabled, it is difficult to remove without root access. Furthermore, its privacy policy is not easily available online, raising questions about transparency, user consent, and what kind of data the app may collect.
ironSource itself has a controversial track record. The company previously operated an “InstallCore” program that became infamous for installing software without clear user permission and for bypassing security warnings, behavior that resulted in widespread criticism and blacklisting by several anti-malware tools.
Regional sensitivities make things more contentious
The presence of an Israeli-origin technology component on Samsung phones in WANA countries poses additional problems. Several nations in this region legally bar Israeli companies from operating, and in light of the ongoing Israel–Palestine conflict, the preload of an app tied to such a company becomes even more contentious.
ironSource’s Aura technology, which “optimizes device experiences” by surfacing apps, content, and services directly on smartphones,
has been used on Samsung devices
in Europe, Russia, and Southeast Asia, and by telecom operators in the US; it also appears to do something similar to AppCloud. However,
AppCloud itself is not listed anywhere on ironSource’s website
, which appears to be the major cause for concern, even though the app is now owned by a US company.
While there’s no concrete evidence that AppCloud engages in questionable data practices today, the lack of an accessible privacy policy and ironSource's past reputation are causing anxiety among users.
People want Samsung to respond
Consumer advocates and privacy-focused users are urging Samsung to take immediate steps, like providing a clear opt-out for AppCloud during setup, making its privacy policy public and accessible, and stopping the preloading of the app entirely in sensitive regions.
With concerns rising across multiple markets, Samsung will likely need to issue a statement to reassure customers. We have reached out to the company for comment and will update this story once we hear back.
The Rust Project has been collecting valuable information about the Rust programming language community through our annual
State of Rust Survey
since 2016, which means that this year marks the tenth edition of the survey!
We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.
Like last year, the
2025 State of Rust Survey
will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on
blog.rust-lang.org
as soon as possible.
We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the
main survey page
:
English
Chinese (Simplified)
Chinese (Traditional)
French
German
Japanese
Ukrainian
Russian
Spanish
Portuguese (Brazil)
Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the
translations, we would be glad if you could send us a
pull request
to improve the quality of the translations!
Please help us spread the word by sharing the
survey link
via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):
Click
here
to read a summary of last year's survey findings.
By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our
Zulip channel
and say hi.
Ford will be the next automaker to allow its vehicles to be featured on
Amazon’s new online car buying site
. Starting today, customers can browse, finance, and purchase certified pre-owned Ford vehicles online through Amazon Autos, with in-person pickup at a local Ford dealership.
Ford is the second brand name,
after Hyundai
, to list its vehicles on the site. But unlike Hyundai, which launched with participating dealerships in 48 cities, Ford’s vehicles will only be available in Los Angeles, Seattle, and Dallas. Amazon says it expects more markets to be added soon.
Ford’s dealers still will have enormous sway over these online sales, including setting the price, maintaining service, and scheduling deliveries. Essentially, Amazon’s platform will be the middleman between the customer and the dealership. As such, Amazon needs to appeal to dealers almost as much as it does to car shoppers. The company is making its pitch by arguing that it can offer “a new sales channel that connects them with millions of Amazon customers.” And with over 310 million active users, Amazon certainly has the numbers to back it up.
Image: Amazon
The vehicles will be certified pre-owned, meaning you won’t see any brand-spanking-new vehicles on the site. And all the vehicles sold on Amazon will be backed by Ford’s warranties and roadside assistance guarantees. According to Ford, every vehicle that appears on Amazon has been “inspected, reconditioned, and comes with a Ford warranty, Ford Rewards points, and in some cases, a money-back guarantee.”
“It’s about delivering the best of both worlds to our customers,” Robert Kaffl, executive director of Ford US sales and dealer relations, said in a release.
People typically hate car shopping, with
most surveys
showing that the dealership experience tops people’s lists of frustrations. Tesla has helped spearhead a movement toward a direct-to-consumer (DTC) model in which people buy their vehicles directly from the company, eschewing a dealership. Forty-eight states have laws that limit or ban manufacturers from selling vehicles directly to consumers —
though that has started to shift recently
thanks to Tesla’s popularity. Tesla has no independent dealerships, but dealership associations in multiple states have filed
numerous lawsuits
against Tesla to prevent the company from selling cars directly.
Jeff Bezos reportedly launches new AI startup with himself as CEO
Guardian
www.theguardian.com
2025-11-17 16:47:11
Former Amazon CEO to co-head Project Prometheus with tech executive Vik Bajaj, according to the New York Times After stepping down as Amazon’s CEO four years ago, Jeff Bezos, the billionaire founder and former chief executive of the online shopping company, is going to be a CEO again. This time, Bez...
After stepping down as Amazon’s CEO four years ago, Jeff Bezos, the billionaire founder and former chief executive of the online shopping company, is going to be a CEO again. This time, Bezos has appointed himself co-CEO of an AI startup called Project Prometheus,
the New York Times
reported, citing anonymous sources.
The startup, which will focus on developing AI for engineering and manufacturing in various fields, has already received $6.2bn in funding – more than many companies are able to raise in their lifetimes. Leading the company alongside Bezos is his co-founder and co-CEO Vik Bajaj, a celebrity tech executive in his own right. Bajaj is a physicist and chemist best known for his work at Google’s moonshot factory, X, where he founded the health startup Verily.
It’s unclear how long the company has existed, but Project Prometheus has already hired 100 employees, poaching several from firms like OpenAI, DeepMind and Meta, according to the Times. Little else is known about the project, as Bezos did not disclose where the company will be based or how its technology might function. The world’s third-richest person has been closely involved at his aerospace company Blue Origin for several years as its founder and sole shareholder, but becoming a CEO again will be the first formal role Bezos has taken since stepping down from
Amazon
.
Bezos and Bajaj join a crowded AI marketplace where billions of dollars are being poured into competitors like OpenAI and billions more are being spent to support the rapid development of AI models. More experts are beginning to question the financial sustainability of the AI industry, though. Michael Burry, best known for accurately predicting the 2008 housing crisis, recently invested $1bn in bets that Palantir and Nvidia shares will fall just days after he accused some of the big tech firms of
using accounting tricks to “artificially boost earnings”.
[$] Hot-page migration and specific-purpose NUMA nodes
Linux Weekly News
lwn.net
2025-11-17 16:46:04
For better or for worse, the NUMA node is the abstraction used by the
kernel to keep track of different types of memory. How that abstraction is
used, though, is still an active area of development. Two patch sets
focused on this problem are currently under review; one addresses the
perennial prob...
A vulnerability in DoorDash's systems could allow anyone to send "official" DoorDash-themed emails right from company's authorized servers, paving a near-perfect phishing channel. DoorDash has now patched the issue, but a contentious disclosure dispute has erupted, with both sides accusing each othe...
A vulnerability in DoorDash's systems could allow anyone to send "official" DoorDash-themed emails right from the company's authorized servers, paving the way for a near-perfect phishing channel.
DoorDash has now patched the issue, but a contentious dispute has erupted between the researcher who reported the vulnerability and the company, with both sides accusing each other of acting improperly.
Anyone could send 'official' DoorDash emails
A simple flaw in the DoorDash for Business platform could let anyone send fully branded "official" emails directly from
no-reply@doordash.com
.
Discovered by a pseudonymous security researcher
doublezero7
, the flaw could be exploited by threat actors to launch highly convincing phishing campaigns and social engineering scams.
Put simply, anyone could create a free
DoorDash for Business
account and then use backend admin dashboards to add a new 'Employee' (with an arbitrary name and email address), assign them meal-expense budgets, and craft emails containing arbitrary HTML.
The resulting message, bearing DoorDash's official template, would arrive seamlessly in the recipient's mailbox, not spam:
Researcher-crafted email sent via DoorDash's official servers
(BleepingComputer)
The security researcher behind this discovery recently approached BleepingComputer and provided evidence of the vulnerability to demonstrate how it could be exploited by nefarious actors.
"The root was Budget name input field. It was stored as raw text in database and forwarded to email where it would be rendered," the researcher told BleepingComputer.
"Using unclosed tags I could have altered the entire block of text about Budget information and using
display:none
it was possible to hide it completely and replace with crafted payload."
"It relied completely on email client defensive layers. Everything that passed, would be rendered. The input field enabled even on* events except for 'onerror' but these are filtered by email platforms," continued the researcher.
The "Claim Free 20$ Voucher" text shown in the above screenshot is a proof-of-concept HTML injection exploit crafted by the researcher on the DoorDash for Business backend, shown below:
DoorDash for Business budgets backend used for creating emails
(BleepingComputer)
The researcher stated that emails sent by misusing this feature were not limited to DoorDash customers or merchants—in other words, a threat actor could target almost any recipient with DoorDash-themed emails.
The vulnerability is identical to the unaddressed flaw in Uber's email systems that let just about anyone
send emails from Uber.com
, as revealed in 2022 by BleepingComputer.
Escalated after 15 months
Prior to contacting BleepingComputer, the researcher, frustrated with the drawn-out disclosure process, published
a brief vulnerability report
summarizing the flaw and his disclosure attempts, while withholding any concrete technical details or proofs-of-concept.
"The technical flaw was never complex—it was a classic stored payload rendered in a trusted email template," they wrote at the time.
The discoverer, however, took issue with the fact that the HackerOne report (#
2608277
) filed for the vulnerability was closed as "Informative" around July 17, 2024, and "never escalated," leaving the flaw exploitable for more than 15 months.
According to the publicly visible timeline, and the researcher's narration of events to BleepingComputer, it wasn't until the week of November 3rd that the flaw was patched, after the researcher repeatedly emailed DoorDash directly.
"Without my public pressure, this vulnerability would still be active today," claims the researcher.
Ethical disclosure derailed, no bounty offered
To establish a clear timeline, BleepingComputer performed an independent verification, and this is where the researcher's account and DoorDash's version of events begin to diverge.
The researcher contends the company ignored the issue until pressured. The company says the pressure itself crossed ethical lines.
According to a person familiar with the company's handling of the vulnerability report, the interaction between the researcher and DoorDash broke down after the researcher demanded a substantial payment tied to disclosure timelines—something the source said the company viewed as outside the bounds of ethical bug bounty research. According to the source, the researcher also refused an offer of mediation and reiterated the financial demand.
The researcher framed the report as a legitimate security finding deserving compensation. DoorDash has, however, deemed the issue out of scope and characterised the approach as feeling like extortion.
A DoorDash spokesperson told BleepingComputer:
"DoorDash operates a bug bounty program to work with security researchers to help find and fix potential security vulnerabilities.
In this case, this individual attempted to extort DoorDash for money. They were subsequently banned from our bug bounty program.
The issue reported fell outside the scope of our bug bounty program. Our security team has taken action to address the issue reported.
We will continue to work with researchers who operate in good faith to protect our platform."
BleepingComputer also reached out to HackerOne to get full context.
The bug bounty platform did not comment on why the researcher's report was closed as "Informative."
A HackerOne spokesperson, however, shared with BleepingComputer:
"We’ve reviewed this matter in coordination with our customer and confirmed that appropriate actions were taken consistent with HackerOne’s Code of Conduct and the customer’s program policy.
HackerOne takes our Terms of Service seriously to ensure the safety and security of the platform, our customers, and the HackerOne community.
If we determine that a community member has violated HackerOne's Terms of Service, we will take prompt, appropriate action, which may include a permanent platform ban."
In emails to BleepingComputer, the researcher reiterated that the flaw went unpatched for an extended period and acknowledged using a "less ethical" approach when contacting the company directly, including demanding a payment:
"My final email to DoorDash was a conditional offer to enter a compensated NDA in exchange for silence, given the history of severe neglect," they wrote to BleepingComputer.
"DoorDash fixed the bug within hours of the ultimatum (proving its criticality) but chose to ignore my payment demand and silently patch the flaw."
The now-patched flaw, while useful for spoofing convincing DoorDash emails, did not expose DoorDash user data or provide access to internal systems.
Like any phishing vector, it required the recipient to be tricked into taking action, raising questions about its actual 'criticality'.
The researcher, however, sees the "silent fix" and their subsequent removal from the bug bounty program as retaliatory.
"My decision to [disclose the vulnerability] stems directly from the fact that the company took my service for free, tried to hide their 16-month failure, and then attempted to silence me, which I believe is an unethical approach to security research."
"I honestly did not know if all my actions were right or not. But ultimately they patched the flaw so at least I accomplished that," concluded the researcher to BleepingComputer.
The case illustrates how vulnerability reporting can become fraught, and how misaligned expectations between researchers and companies can quickly lead to conflict.
A source briefed on the matter told BleepingComputer the flaw is unrelated to the
October DoorDash breach
disclosed this month.
strace-macos: A clone of the strace command for macOS
Lobsters
github.com
2025-11-17 16:26:28
Ever since I tested software on macOS, I deeply missed my beloved strace that I use when programs are misbehaving. macOS has dtruss but it's getting locked down and more unusable with every machine. My approach uses the signed lldb binary on the system and re-implements the output you know from...
Color output
- Syntax highlighting when output is a TTY
Summary statistics
- Time/call/error counts with
-c
Installation
With Nix Flakes
# Run directly
nix run github:Mic92/strace-macos -- ls
# Install to profile
nix profile install github:Mic92/strace-macos
Manual Installation
strace-macos requires macOS system Python (has LLDB bindings):
# Install directly from GitHub
/usr/bin/python3 -m pip install --user git+https://github.com/Mic92/strace-macos
# Then run (if ~/Library/Python/3.x/bin is in PATH)
strace /usr/local/bin/git status # or any homebrew-installed binary

# Or run directly from repository without installing
git clone https://github.com/Mic92/strace-macos
cd strace-macos
/usr/bin/python3 -m strace_macos /usr/local/bin/git status
Usage
Trace a command
# Basic usage (use non-system binaries like homebrew or nix-installed)
strace /usr/local/bin/git status
# Output to file
strace -o trace.txt /usr/local/bin/git status
# JSON output
strace --json /usr/local/bin/git status > trace.jsonl
# Filter syscalls
strace -e trace=open,close /usr/local/bin/git status
strace -e trace=file /usr/local/bin/git status # All file operations
strace -e trace=network /usr/local/bin/curl https://example.com # Network syscalls only
For the Next Election, Prepare to Fight MAGA’s Steal
OrganizingUp
convergencemag.com
2025-11-17 15:58:27
The November 4 elections give the opposition to MAGA—and its progressive contingent especially—a huge boost. But in the coming year, the fight against US-style fascism will only intensify, as will contention over how to wage it and what ought to come next. The election results showed the scale of pu...
Pennsylvania AG confirms data breach after INC Ransom attack
Bleeping Computer
www.bleepingcomputer.com
2025-11-17 15:57:48
The office of Pennsylvania's attorney general has confirmed that the ransomware gang behind an August 2025 cyberattack stole files containing personal and medical information. [...]...
The office of Pennsylvania's attorney general has confirmed that the ransomware gang behind an August 2025 cyberattack stole files containing personal and medical information.
This comes after Attorney General Dave Sunday
confirmed in early September
that the incident was a ransomware attack and his office refused to pay the ransom requested by the cybercriminals after they encrypted compromised systems.
"The OAG later learned that certain files may have been accessed without authorization. The OAG reviewed which data may have been involved and learned that certain personal information was contained in some files,"
said
the Pennsylvania Office of the Attorney General (OAG) in a Friday press release.
"Based on the OAG's review of the data involved, for some individuals the information involved may have included name, Social Security number, and/or medical information."
On August 9th, when the breach was discovered, the threat actors
took down systems
and services on Pennsylvania OAG's network, including the office's website, employees' email accounts, and landline phone lines, in an attack with widespread and crippling impact.
While the Pennsylvania OAG has yet to share more information on how the network was breached,
cybersecurity expert Kevin Beaumont found
that the Pennsylvania AG's network had several public-facing Citrix NetScaler appliances vulnerable to ongoing attacks exploiting a critical vulnerability (CVE-2025-5777) known as
Citrix Bleed 2
.
Although the Pennsylvania OAG didn't publicly attribute the breach to a specific ransomware operation, the INC Ransom gang claimed responsibility for the attack on September 20th, when they added it as a new entry on their dark web leak site.
At the time, the ransomware group claimed that they had stolen 5.7TB worth of files from the Pennsylvania OAG's network and said that the breach allegedly provided them with access to an FBI internal network.
Pennsylvania OAG claimed by INC Ransom (BleepingComputer)
INC Ransom surfaced as a ransomware-as-a-service (RaaS) operation in July 2023 and has since targeted organizations in the private and public sectors worldwide.
This is the third time that Pennsylvania state entities have been breached in a ransomware attack:
Delaware County
paid a $500,000 ransom following a DoppelPaymer attack in 2020 to recover encrypted systems, and a ransomware attack took down the
Pennsylvania Senate Democratic Caucus
' network in 2017.
Gemini is a new internet technology supporting an electronic library of interconnected text documents. That's not a new idea, but it's not old fashioned either. It's timeless, and deserves tools which treat it as a first class concept, not a vestigial corner case. Gemini isn't about innovation or disruption, it's about providing some respite for those who feel the internet has been disrupted enough already. We're not out to change the world or destroy other technologies. We are out to build a lightweight online space where documents are just documents, in the interests of every reader's privacy, attention and bandwidth.
There is a theory which states
that if ever anyone discovers exactly what the Linux networking stack does and why it does it, it will instantly disappear and be replaced by something even more bizarre and inexplicable.
There is another theory which states that Git was created to track how many times this has already happened.
Many products at Cloudflare aren’t possible without pushing the limits of network hardware and software to deliver improved performance, increased efficiency, or novel capabilities such as
soft-unicast, our method for sharing IP subnets across data centers
. Happily, most people do not need to know the intricacies of how your operating system handles network and Internet access in general. Yes, even most people within Cloudflare.
But sometimes we try to push well beyond the design intentions of Linux’s networking stack. This is a story about one of those attempts.
Hard solutions for soft problems
My previous blog post about the Linux networking stack teased a problem: matching the ideal model of soft-unicast to the basic reality of IP packet forwarding rules. Soft-unicast is the name given to our method of sharing IP addresses between machines.
You may learn about all the cool things we do with it
, but as far as a single machine is concerned, it has dozens to hundreds of combinations of IP address and source-port range, any of which may be chosen for use by outgoing connections.
The SNAT target in iptables supports a source-port range option to restrict the ports selected during NAT. In theory, we could continue to use iptables for this purpose, and to support multiple IP/port combinations we could use separate packet marks or multiple TUN devices. In actual deployment we would have to overcome challenges such as managing large numbers of iptables rules and possibly network devices, interference with other uses of packet marks, and deployment and reallocation of existing IP ranges.
Rather than increase the workload on our firewall, we wrote a single-purpose service dedicated to egressing IP packets on soft-unicast address space. For reasons lost in the mists of time, we named it SLATFATF, or “fish” for short. This service’s sole responsibility is to proxy IP packets using soft-unicast address space and manage the lease of those addresses.
WARP is not the only user of soft-unicast IP space in our network. Many Cloudflare products and services make use of the soft-unicast capability, and many of them use it in scenarios where we create a TCP socket in order to proxy or carry HTTP connections and other TCP-based protocols. Fish therefore needs to lease addresses that are not used by open sockets, and ensure that sockets cannot be opened to addresses leased by fish.
Our first attempt was to use distinct per-client addresses in fish and continue to let Netfilter/conntrack apply SNAT rules. However, we discovered an unfortunate interaction between Linux’s socket subsystem and the Netfilter conntrack module that reveals itself starkly when you use packet rewriting.
Collision avoidance
Suppose we have a soft-unicast address slice, 198.51.100.10:9000-9009. Then, suppose we have two separate processes that want to bind a TCP socket at 198.51.100.10:9000 and connect it to 203.0.113.1:443. The first process can do this successfully, but the second process will receive an error when it attempts to connect, because there is already a socket matching the requested 5-tuple.
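You can see this collision with two plain sockets; here's a minimal sketch translated to loopback addresses so it's runnable anywhere (the addresses from the example above are stand-ins).

import socket

# A local listener standing in for 203.0.113.1:443 in the example above.
listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 8443))
listener.listen()

def client():
    s = socket.socket()
    # SO_REUSEADDR lets several outgoing sockets share the same local address...
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 9000))     # stand-in for 198.51.100.10:9000
    s.connect(("127.0.0.1", 8443))  # ...but connect() still enforces 5-tuple uniqueness
    return s

first = client()        # succeeds
try:
    second = client()   # same 5-tuple as the first connection: the kernel refuses
except OSError as err:
    print("second connect failed:", err)   # EADDRNOTAVAIL on Linux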
Instead of creating sockets, what happens when we emit packets on a TUN device with the same destination IP but a unique source IP, and use source NAT to rewrite those packets to an address in this range?
If we add an nftables “snat” rule that rewrites the source address to 198.51.100.10:9000-9009, Netfilter will create an entry in the conntrack table for each new connection seen on fishtun, mapping the new source address to the original one. If we try to forward more connections on that TUN device to the same destination IP, new source ports will be selected in the requested range, until all ten available ports have been allocated; once this happens, new connections will be dropped until an existing connection expires, freeing an entry in the conntrack table.
Unlike when binding a socket, Netfilter will simply pick the first free space in the conntrack table. However, if you use up all the possible entries in the table
you will get an EPERM error when writing an IP packet
. Either way, whether you bind kernel sockets or you rewrite packets with conntrack, errors will indicate when there isn’t a free entry matching your requirements.
Now suppose that you combine the two approaches: a first process emits an IP packet on the TUN device that is rewritten to a packet on our soft-unicast port range. Then, a second process binds and connects a TCP socket with the same addresses as that IP packet:
The first problem is that there is no way for the second process to know that there is an active connection from 198.51.100.10:9000 to 203.0.113.1:443, at the time the
connect()
call is made. The second problem is that the connection is successful from the point of view of that second process.
It should not be possible for two connections to share the same 5-tuple. Indeed, they don’t. Instead, the source address of the TCP socket is
silently rewritten to the next free port
.
This behaviour is present even if you use conntrack without either SNAT or MASQUERADE rules. It usually happens that the lifetime of conntrack entries matches the lifetime of the sockets they’re related to, but this is not guaranteed, and you cannot depend on the source address of your socket matching the source address of the generated IP packets.
Crucially for soft-unicast, it means conntrack may rewrite our connection to have a source port outside of the port slice assigned to our machine. This will silently break the connection, causing unnecessary delays and false reports of connection timeouts. We need another solution.
Taking a breather
For WARP, the solution we chose was to stop rewriting and forwarding IP packets, and instead terminate all TCP connections within the server and proxy them to a locally-created TCP socket with the correct soft-unicast address. This was an easy and viable solution that we already employed for a portion of our connections, such as those directed at the CDN, or intercepted as part of the Zero Trust Secure Web Gateway. However, it does introduce additional resource usage and potentially increased latency compared to the status quo. We wanted to find another way (to) forward.
An inefficient interface
If you want to use both packet rewriting and bound sockets, you need to decide on a single source of truth. Netfilter is not aware of the socket subsystem, but most of the code that uses sockets and is also aware of soft-unicast is code that Cloudflare wrote and controls. A slightly younger version of myself therefore thought it made sense to change our code to work correctly in the face of Netfilter’s design.
Our first attempt was to use the Netlink interface to the conntrack module, to inspect and manipulate the connection tracking tables before sockets were created.
Netlink is an extensible interface to various Linux subsystems
and is used by many command-line tools like
ip
and, in our case,
conntrack-tools
. By creating the conntrack entry for the socket we are about to bind, we can guarantee that conntrack won’t rewrite the connection to an invalid port number, and ensure success every time. Likewise, if creating the entry fails, then we can try another valid address. This approach works regardless of whether we are binding a socket or forwarding IP packets.
There is one problem with this — it’s not terribly efficient. Netlink is slow compared to the bind/connect socket dance, and when creating conntrack entries you have to specify a timeout for the flow and delete the entry if your connection attempt fails, to ensure that the connection table doesn’t fill up too quickly for a given 5-tuple. In other words, you have to manually reimplement the
tcp_tw_reuse
option to support high-traffic destinations with limited resources. In addition, a stray RST packet can erase your connection tracking entry. At our scale, anything like this that can happen, will happen. It is not a place for fragile solutions.
Socket to ‘em
Instead of creating conntrack entries, we can abuse kernel features for our own benefit. Some time ago Linux added
the TCP_REPAIR socket option
, ostensibly to support connection migration between servers e.g. to relocate a VM. The scope of this feature allows you to create a new TCP socket and specify its entire connection state by hand.
An alternative use of this is to create a “connected” socket that never performed the TCP three-way handshake needed to establish that connection. At least, the kernel didn’t do that — if you are forwarding the IP packet containing a TCP SYN, you have more certainty about the expected state of the world.
However, the introduction of
TCP Fast Open
provides an even simpler way to do this: you can create a “connected” socket that doesn’t perform the traditional three-way handshake, on the assumption that the SYN packet — when sent with its initial payload — will contain a valid cookie to immediately establish the connection. But since nothing is sent until you write to the socket, this serves our needs perfectly.
Binding a “connected” socket that nevertheless corresponds to no actual connection has one important property: if other processes attempt to bind to the same address as that socket, they will fail to do so. This satisfies the problem we had at the beginning: making packet forwarding coexist with socket usage.
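A rough sketch of the trick in Python (illustrative only: it assumes 198.51.100.10 is assigned to a local interface, and the TCP_FASTOPEN_CONNECT constant is spelled out because not every Python build exports it):

import socket

TCP_FASTOPEN_CONNECT = 30   # Linux socket option from <linux/tcp.h>

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN_CONNECT, 1)
s.bind(("198.51.100.10", 9000))   # the soft-unicast address:port we want to reserve
s.connect(("203.0.113.1", 443))   # returns immediately; no SYN leaves the machine

# The local address and 5-tuple are now held by a real socket, so other processes
# attempting to bind a socket there will fail, yet nothing goes out on the wire
# because we never write to this socket.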
Jumping the queue
While this solves one problem, it creates another. By default, you can’t use an IP address for both locally-originated packets and forwarded packets.
For example, we assign the IP address 198.51.100.10 to a TUN device. This allows any program to create a TCP socket using the address 198.51.100.10:9000. We can also write packets to that TUN device with the address 198.51.100.10:9001, and Linux can be configured to forward those packets to a gateway, following the same route as the TCP socket. So far, so good.
On the inbound path, TCP packets addressed to 198.51.100.10:9000 will be accepted and data put into the TCP socket. TCP packets addressed to 198.51.100.10:9001, however, will be dropped. They are not forwarded to the TUN device at all.
Why is this the case? Local routing is special. If packets are received to a local address, they are treated as “input” and not forwarded, regardless of any routing you think should apply. Behold the default routing rules:
cbranch@linux:~$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
The rule priority is a nonnegative integer; the smallest priority value is evaluated first. This requires some slightly awkward rule manipulation to “insert” a lookup rule at the beginning that redirects marked packets to the packet forwarding service’s TUN device: you have to delete the existing rule, then create new rules in the right order. You also don’t want to leave the routing rules without a lookup of the “local” table at any point, or you may lose packets while manipulating the rules. In the end, the result looks something like this:
ip rule add fwmark 42 table 100 priority 10
ip rule add lookup local priority 11
ip rule del priority 0
ip route add 0.0.0.0/0 proto static dev fishtun table 100
As with WARP, we simplify connection management by assigning a mark to packets coming from the “fishtun” interface, which we can use to route them back there. To prevent locally-originated TCP sockets from having this same mark applied, we assign the IP to the loopback interface instead of fishtun, leaving fishtun with no assigned address. But it doesn’t need one, as we have explicit routing rules now.
Uncharted territory
While testing this last fix, I ran into an unfortunate problem. It did not work in our production environment.
It is not simple to debug the path of a packet through Linux’s networking stack. There are a few tools you can use, such as setting nftrace in nftables or applying the LOG/TRACE targets in iptables, which help you understand which rules and tables are applied for a given packet.
Our expectation is that the packet passes the prerouting hook, a routing decision is made to send it to our TUN device, and the packet then traverses the forward table. By tracing packets originating from the IP of a test host, we could see the packets enter the prerouting phase, but disappear after the ‘routing decision’ block.
While there is a block in the diagram for “socket lookup”, this occurs after processing the input table. Our packet doesn’t ever enter the input table; the only change we made was to create a local socket. If we stop creating the socket, the packet passes to the forward table as before.
It turns out that part of the ‘routing decision’ involves some protocol-specific processing. For IP packets,
routing decisions can be cached
, and some basic address validation is performed. In 2012, an additional feature was added:
early demux
. The rationale being, at this point in packet processing we are already looking up something, and the majority of packets received are expected to be for local sockets, rather than an unknown packet or one that needs to be forwarded somewhere. In this case, why not look up the socket directly here and save yourself an extra route lookup?
The workaround at the end of the universe
Unfortunately for us, we just created a socket and didn’t want it to receive packets. Our adjustment to the routing table is ignored, because that routing lookup is skipped entirely when the socket is found. Raw sockets avoid this by receiving all packets regardless of the routing decision, but the packet rate is too high for this to be efficient. The only way around this is disabling the early demux feature. According to the patch’s claims, though, this feature improves performance: how far will performance regress on our existing workloads if we disable it?
This calls for a simple experiment: set the
net.ipv4.tcp_early_demux
sysctl to 0 on some machines in a datacenter, let it run for a while, then compare the CPU usage with machines using default settings and the same hardware configuration as the machines under test.
The key metrics are CPU usage from /proc/stat. If there is a performance degradation, we would expect to see higher CPU usage allocated to “softirq” — the context in which Linux network processing occurs — with little change to either userspace or other kernel time. The observed difference is slight, and mostly appears to reduce efficiency during off-peak hours.
Swimming upstream
While we tested different solutions to IP packet forwarding, we continued to terminate TCP connections on our network. Despite our initial concerns, the performance impact was small, and the benefits of increased visibility into origin reachability, fast internal routing within our network, and simpler observability of soft-unicast address usage flipped the burden of proof: was it worth trying to implement pure IP forwarding and supporting two different layers of egress?
So far, the answer is no. Fish runs on our network today, but with the much smaller responsibility of handling ICMP packets. However, when we decide to tunnel all IP packets, we know exactly how to do it.
A typical engineering role at Cloudflare involves solving many strange and difficult problems at scale. If you are the kind of goal-focused engineer willing to try novel approaches and explore the capabilities of the Linux kernel despite minimal documentation, look at
our open positions
— we would love to hear from you!
A Safari content blocker for macOS, iOS, and iPadOS utilizing declarative content blocking rules.
Supports 750,000 rules across 5 extensions with Protocol Buffer storage and LZ4 compression.
Note
Looking for a detailed comparison?
Check out my
comparison guide
to see how wBlock stacks up against other Safari content blockers.
~40 MB RAM footprint
at idle via Safari's native content blocking API
Protocol Buffers serialization
with LZ4 compression for filter storage
Off-thread I/O operations
with streaming serialization to minimize main thread blocking
HTTP conditional requests
(If-Modified-Since/ETag) for efficient filter update detection
Content Modification
Element Zapper
(macOS only) generates persistent CSS selectors for manual element removal
Userscript engine
implements Greasemonkey API (GM_getValue, GM_setValue, GM_xmlhttpRequest)
Custom filter list ingestion
supports AdGuard-syntax blocklists via URL import
Category-based filter organization
with per-list toggle and automatic rule distribution
Filter list validation
with automatic disabling on Safari's 150k rule limit per extension
Blocking Capabilities
Network request blocking
via declarative content blocking rules (advertisements, trackers)
Cookie and local storage filtering
through Safari content blocker rule actions
CSS injection
for cosmetic filtering and element hiding
Script blocking
for unwanted software and JavaScript execution
Pop-up and redirect prevention
using Safari content blocking patterns
Configuration & Management
Configurable auto-update intervals
from 1 hour to 7 days with background refresh
Per-site blocking controls
through Safari's content blocker enable/disable API
Whitelist management
for trusted domains with Safari extension state persistence
Regional filter support
with preset lists for language-specific content blocking
Filter compilation monitoring
with real-time rule count and compilation status
Background update notifications
(optional) for filter list refresh events
Screenshots
Userscript Management
Manage paywalls, YouTube Dislikes, and more
Settings & Customization
Configure auto-updates, notifications, and preferences
iOS Interface
Full-featured blocking on iPhone and iPad
Technical Implementation
Core Architecture
Protocol Buffers (libprotobuf) with LZ4 compression for filter serialization
Asynchronous I/O with Swift concurrency (async/await, Task, Actor isolation)
Streaming serialization to disk minimizes peak memory usage during compilation
5 Safari content blocking extensions per platform (maximum Safari API capacity)
SafariServices framework integration for declarative content blocking
Dependencies & Standards
SafariConverterLib v4.0.4 for AdGuard to Safari rule conversion
AdGuard Scriptlets v2.2.9 for advanced blocking techniques
Swift 5.9+ with strict concurrency checking enabled
WCAG 2.1 AA compliance with full VoiceOver and Dynamic Type support
SwiftProtobuf for cross-platform filter storage format
Support Development
wBlock is free and open-source software. Financial contributions support ongoing development and maintenance:
FAQ
How does wBlock compare to other ad blockers?
Check out our
comparison guide
vs uBlock Origin Lite, AdGuard, and Wipr.
Can I use my own filter lists?
Yes! wBlock supports any AdGuard-compatible filter list. Add the URL in Custom Filter Lists.
Does wBlock slow down Safari?
No. wBlock uses Safari's native declarative content blocking API, which processes rules in a separate process. Memory overhead is ~40 MB at idle with no measurable impact on page load times.
Do userscripts work on iOS?
Yes. The userscript engine implements the Greasemonkey API (GM_getValue, GM_setValue, GM_xmlhttpRequest, GM_addStyle) on both iOS and macOS via Safari Web Extensions.
How often do filters update?
Auto-update intervals are configurable from 1 hour to 7 days, or manually triggered. Updates use HTTP conditional requests (If-Modified-Since/ETag headers) to minimize bandwidth usage.
Is the element zapper available on iOS?
Not yet.
I wrote a few months ago about
the proxy war by Google against the open web by means of XSLT
.
Unsurprisingly,
Google
has been moving forward on the deprecation
,
still without providing a solid justification on the reasons why other than
“we've been leeching off a
FLOSS
library for which we've finally found enough security bugs to use as an excuse”.
They do not explain why they haven't decided to fix the security issues in the library instead,
or adopt a more modern library written in a safe language, taking the opportunity to upgrade
XSLT
support
to a more recent, powerful and easier-to-use revision of the standard.
Instead, what they do is to provide a “polyfill”, a piece of
JavaScript
that can allegedly be used to supplant the functionality.
Curiously, however, they do
not
plan to ship such alternative in-browser,
which would allow a transparent transition without even a need to talk about XSLT
at all
.
No, they specifically refuse to do it, and instead are requesting anyone still relying on XSLT to
replace the invocation of the XSLT with a
non-standard
invocation of the JavaScript polyfill that should replace it.
This means that at least one of these two things is true:
the polyfill is not, in fact, sufficient to cover all the use cases previously covered by the built-in support for XSLT,
and insofar as it's not, they (Google) do not intend to invest resources in maintaining it,
meaning that the task is being dumped on web developers
(
IOW
, Google is removing a feature that is going to create
more work
for web developers just to provide the same functionality that they used to have from the browsers);
insofar as the polyfill is sufficient to replace the XSLT support in the browser,
the policy to not ship it as a replacement confirms that the security issues in the XSLT library used in Chrome
were nothing more than excuses to give the final blow to
RSS
and any other
XML
format
that is still the backbone of an independent web.
Actions, as they say, speak louder than words.
When a company claims that a service or feature they are removing can still be accessed by other means,
but they do not streamline access to said alternative,
and instead require their users to do the work necessary to access it,
you can rest assured that beyond any word of support they may coat their actions with
there is a plain and direct intent at sabotaging said feature,
and you can rest assured that any of the excuses brought forward to defend the choice
are nothing but lies to cover a vested interest in sabotaging the adoption of the service or feature:
the intent is for you to
not
use that feature at all, because they have a financial interest in you
not
using it.
And the best defense against that is to attack, and push the use of that feature even harder.
Do not install the polyfill.
Do not change your XML files to load it.
Instead, flood their issue tracker with requests to bring back in-browser XSLT support.
Report failed XSLT support as a browser bug, because this is not a website issue.
I will not comply.
As I have for
years
continued using
MathML
,
SVG
and
SMIL
(sometimes
even all together
) despite Google's intent on their deprecation,
I will keep using XSLT, and in fact will look for new opportunities to rely on it.
At most, I'll set up an infobox warning users reading my site about their browser's potential brokenness
and inability to follow standards, just like I've done for MathML and SMIL
(you can see such infoboxes in the page I linked above).
And just like ultimately I was proven right
(after several years, Google ended up fixing both their SMIL and their MathML support in Chrome),
my expectation is that, particularly with more of us pushing through,
the standards will once again prevail.
Remember: there is no technical justification for Google's choice.
This is not about a lone free software developer donating their free time to the community
and finding they do not have the mental or financial resources to provide a particular feature.
This is a trillion-dollar ad company who has been actively destroying the open web
for over a decade
and finally admitting to it
as a consequence of the
LLM
push
and
intentional
enshittification of web search.
The deprecation of XSLT is entirely political,
fitting within the same grand scheme of the parasitic corporation killing the foundations of its own success
in an effort to grasp more and more control of it.
And the fact that the
WebKit
team at
Apple
and the
Firefox
team at
Mozilla
intend to follow the same destructive path is not a counterpoint,
but rather an endorsement of the analysis,
as neither of those companies is interested in providing a
User Agent
as much as a
surveillance capitalism
tool that you happen to use.
If you have to spend any time at all to confront the Chrome push to deprecate XSLT,
your time is much better spent inventing better uses of XSLT
and reporting broken rendering if/when they start disabling it,
than caving to their destructive requests.
The WHATWG is not a good steward of the open web
I've mentioned it
before
,
but the
WHATWG
, even assuming the best of intentions at the time it was founded,
is not a good steward of the open web.
It is more akin to the corrupt takeover you see in
regulatory capture
,
except that instead of taking over the
W3C
they just decided to get the ball and run with it,
taking advantage of the fact that, as implementors,
they had the final say on what counted as “standard”
(
de facto
if not
de jure
):
exactly the same attitude with which
Microsoft
tried taking over the web
through
Internet Explorer
at the time of the
First browser war
,
an attitude that was rightly condemned at the time
—even as many of those who did have so far failed to acknowledge the problem
with Google's no less detrimental approach.
The key point here is that,
whatever the
WHATWG
was (or was intended to be)
when it was founded by
Opera
and
Mozilla
developers,
it is now manifestly a corporate monster.
Their corporate stakeholders have a very different vision of what the Web should be
compared to the vision on which the Web was founded, the vision promoted by the
W3C
,
and the vision that underlies a truly open and independent web.
The WHATWG aim is to turn the Web into an
application delivery platform
,
a profit-making machine for corporations where the computer
(and the browser through it)
are a means for
them
to make money off
you
rather than for
you
to gain access to services you may be interested in.
Because of this, the browser in their vision is not a User Agent anymore,
but a tool that sacrifices privacy, actual security and user control
at the behest of the corporations “on the other side of the wire”
—and of their political interests
(refs. for
Apple
,
Google
,
and
a more recent list with all of them together
).
Such vision is in direct contrast with that of the Web as a
repository of knowledge
, a vast vault of interconnected
documents
whose value emerges from organic connections, personalization,
variety, curation and
user control
.
But who in the WHATWG today would defend such vision?
A new browser war?
Maybe what we need is a new browser war.
Not one of corporation versus corporation
—doubly more so when all currently involved parties are allied
in their efforts to enclose the Web than in fostering an open and independent one—
but one of users versus corporations,
a war to take
back
control of the Web and its tools.
It's kind of ironic that in a time when
hosting
has become almost trivial,
the fight we're going to have to fight is going to be on the
client
side.
But the biggest question is: who do we have as champions on our side?
I would have liked to see browsers like
Vivaldi
,
the spiritual successor to
my beloved classic Opera browser
,
amongst our ranks,
but with their dependency on the
Blink
rendering engine, controlled by Google,
they won't be able to do anything but cave,
as will all other FLOSS browsers relying on Google's or Apple's engines,
none of which I foresee spending any significant efforts rolling back the extensive changes that these deprecations will involve.
(We see this already when it comes to JPEG XL support,
but it's also true that e.g. Vivaldi has made RSS feeds first-class documents,
so who knows, maybe they'll find a way for XSLT through the polyfill that was mentioned above,
or something like that?)
Who else is there?
There is
Servo
, the rendering engine that was being developed at Mozilla to replace Gecko,
and that turned into an independent project when its team was fired
en masse
in 2020
;
but they don't support XSLT yet,
and I don't see why they would prioritize its implementation over, say, stuff like MathML or SVG animations with SMIL
(just to name two of my pet peeves), or optimizing browsing speed
(seriously, try opening
the home page of this site
and scrolling through).
What we're left with at the moment is basically just Firefox forks,
and two of these (
LibreWolf
and
WaterFox
)
are basically just “Firefox without the most egregious privacy-invasive misfeatures”,
which leaves the question open about what they will be willing to do when Mozilla helps Google kill XSLT,
and only the other one,
Pale Moon
, has grown into its own independent fork
(forked from such an old version of Firefox, in fact, that it doesn't support WebExtensions-based plugins,
such as the most recent versions of crucial plugins like
uBlock Origin
or
Privacy Badger
,
although it's possible to install community-supported forks of these plugins designed
for legacy versions of Firefox and forks like Pale Moon).
(Yes, I am aware that there are other minor independent browser projects,
like
Dillo
and
Ladybird
,
but the former is in no shape of being a serious contender for general use
on more sophisticated pages —just see it in action on this site, as always—
and the latter is not even in alpha phase,
just in case the questionable “no politics” policies
—which consistently prove to be weasel words for “we're right-wingers but too chicken to come out as such”—
weren't enough to stay away from it.)
Periodically, I go through them (the Firefox forks, that is) to check if they are good enough for me to become my daily drivers.
Just for you (not really: just for me, actually), I just tested them again.
They're not ready yet, at least not for me, although I must say that I'm seeing clear improvements since my last foray into the matter,
that wasn't even that long ago.
In some cases, I can attest that they are even better than Firefox:
for example, Pale Moon and WaterFox have good JPEG XL support
(including transparency and animation support,
which break in LibreWolf as they do in the latest nightly version of Firefox I tried),
and Pale Moon still has first-class support for RSS,
from address bar indicator to rendering even in the absence of a stylesheet
(be it CSS or XSLT).
An interesting difference is that the user interface of these browsers is perceptibly less refined than Firefox's.
It's a bit surprising, given the common roots,
but it emerges in several more and less apparent details,
from the spacing between menu items to overlapping text and icons in context menus,
passing through incomplete support for dark themes and other little details that all add up,
giving these otherwise quite valid browsers an amateurish feel.
And I get it: UI design is hard, and I myself suck at it, so I'm the last person that should be giving recommendations,
but I'm still able to differentiate between more curated interfaces and ones that need some work;
and if even someone like me who distinctly prefers function over form finds these little details annoying,
I can imagine how much worse this may feel to users who care less about the former and more about the latter.
Sadly, if a new browser war is to be fought to wrestle control from the corporate-controlled WHATWG,
this matters
.
In the end, I find myself in a “waiting” position.
How long will it take for Firefox to kill their XSLT support?
What will its closest forks
(WaterFox in particular is the one I'm eyeing)
be able to do about it?
Or will Pale Moon remain the only modern browser with support for it,
as a hard fork that has since long gone its own way?
Will they have matured enough to become my primary browsers?
We'll see in time.
Another web?
There's more to the Internet than the World Wide Web built around the
HTTP
protocol and the
HTML
file format.
There used to be
a lot
of the Internet beyond the Web,
and while much of it still remains as little more than a shadow of the past,
largely eclipsed by the Web and what has been built on top of it
(not all of it good) outside of some modest revivals,
there's also new parts of it that have tried to learn from the past,
and build towards something different.
This is the case for example of the so-called “
Gemini
Space”,
a small corner of the Internet that has nothing to do with the LLM Google is trying to shove down everyone's throat,
and in fact not only predates it,
as I've mentioned already
,
but is intentionally built around different technology to
stay away
from the influence of Google and the like.
The Gemini protocol is designed to be conceptually simpler than HTTP,
while providing modern features like built-in transport-level security
and certificate-based client-side authentication,
and its own “native” document format, the so-called
gemtext
.
There's something to be said about not wanting to share your environment with the poison that a large part of the web has become,
but at the same time, there's also something to be said about throwing away the baby with the bathwater.
The problem with the web isn't technical, it's social. The tech itself is fine.
I'm not going to sing the praises of the Gemini protocol or gemtext either,
even though I do like the idea of a web built on lightweight markup formats:
I would love it if browsers had native support for formats like
Markdown
or
AsciiDoc
(and gemtext, for that matter):
it's why I keep the
AsciiDoctor Browser Extension
installed.
But more in general, the Web (or at least its user agents) should not differentiate.
It should not differentiate by protocol, and it should not differentiate by format.
We've seen it with image formats like
MNG
being unfairly excluded,
with motivations based on alleged code bloat that today are manifest in their idiocy
(and yes, it hasn't escaped my notice that even Pale Moon doesn't support the format),
and we're seeing it today with JPEG XL threatened with a similar fate,
without even gracing us with a ridiculous excuse.
On the upside, we have browsers shipping with
a full-fledged PDF reader
,
which is a good step towards the integration of this format with the greater Web.
In an ideal world, browsers would have not deprecated older protocols
like
Gopher
or
FTP
,
and would just add support for new ones like Gemini,
as they would have introduced support for new (open) document formats as they came along.
It shouldn't be up to the User Agent to determine which formats the user is able to access, and through which protocol.
(If I had any artistic prowess (and willpower), I'd hack the “myth of consensual X” meme
representing the user and the server saying “I consent”, and the browser saying “I don't”.)
I do appreciate that there is a non-trivial maintenance cost that grows with the number of formats and protocols,
but
we know from classic Opera
that it is indeed quite possible
to
ship a full Internet suite
in a browser packaging.
In the old days, browser developers were well-aware that a single vendor couldn't “cover all bases”,
which is how interfaces like the once ubiquitous
NPAPI
were born.
The plug-in interface has been since removed from most browsers,
an initiative
again promoted by Google
, announced in 2013 and completed in 2015
(I should really add this to
my previous post on Google killing the open web
,
but I also really don't feel like touching that anymore; here will have to suffice),
with the other major browsers quickly following suit,
and its support is now relegated only to independent browsers like Pale Moon.
And even if it can be argued that the NPAPI specifically was indeed mired in unfixable security and portability issues and had to go,
its removal without a clear cross-browser upgrade path has been
a significant loss for the evolution of the web
,
destroying the primary “escape hatch” to solve the chicken-and-egg problem of client-side format support versus server-side format adoption.
By the way, it was also responsible for the biggest
W3C
blunder,
the standardization of
DRM
for the web through the so-called
Encrypted Media Extensions
, a betrayal of the W3C own mission statement.
The role of multimedia streaming in the death of the User Agent
The timeline here is quite interesting, and correlates with
the already briefly mentioned history of Flash
,
and its short-lived
Microsoft Silverlight
competitor, that were largely responsible for
the early expansive growth of multimedia streaming services
in the early years of the XXI century:
with the tension between Apple's effort to kill Flash
and the need of emerging streaming services like
Netflix
' and
Hulu
's
to support in-browser multimedia streaming,
there was a need to improve support for multimedia formats in the nascent
HTML5
specification,
but also a requirement from the
MAFIAA
partners
that such a support would allow enforcing the necessary restrictions that would, among other things, prevent users from saving a local copy of the stream,
something that could be more easily enforced within the Flash players the industries had control over
than in a
User Agent
controlled by
the user
.
This is where the development of
EME
came in in 2013:
this finally allowed a relatively quick phasing out of the Flash plugin,
and
a posteriori
of the plugin interface that allowed its integration with the browsers:
by that time, the Flash plugin was by and large
the
plugin the API existed for,
and the plugin itself was indeed still supported by the browsers for some time after support for the API was otherwise discontinued
(sometimes through alternative interfaces such as the
PPAPI
,
other times by keeping the NPAPI support around, but only enabled for the Flash plugin).
There are several interesting considerations that emerge from this little glimpse at the history of Flash and the EME.
First of all,
this is one more piece of history that goes to show how pivotal the year 2013 was for the enshittification of the World Wide Web,
as discussed already
.
Secondly,
it shows how the developers of major browsers are more than willing to provide a smooth transition path with no user intervention,
at least when catering to the interests of major industries.
This indicates that when they don't, it's not because they
can't
:
it's because
they have a vested interest in not doing it
.
Major browser development is now (and has been for over a decade at least)
beholden not to the needs and wants of
their own users
,
but to those of other industries.
But I repeat myself
.
And thirdly, it's an excellent example, for the good and the bad, of how the plugin interface has helped drive the evolution of the web,
as I was saying
.
Controlled evolution
The removal of NPAPI support,
followed a few years later by the removal of the (largely Chrome-specific) PPAPI interface
(that was supposed to be the “safer, more portable” evolution of NPAPI),
without providing
any
alternative,
is a very strong indication of the path that browser development has taken in the last “decade plus”:
a path where the Web is entirely controlled by what Google, Apple and Microsoft
(hey look, it's
GAFAM
all over again!) decide about what is allowed on it,
and what is
not
allowed to
not
be on it (to wit, ads and other user tracking implements).
With plugins,
anything
could be integrated in the World Wide Web,
and such integration would be close to as efficient as could be.
Without plugins, such integration, when possible at all,
becomes clumsier and more expensive.
As an example, there are browser extensions that can introduce support for JPEG XL to browsers that don't have native support.
This provides a workaround to display such images in said browsers,
but when a picture with multiple formats is offered
(which is what I do e.g. to provide a PNG fallback for the JXL images I provide),
this results in
both
the PNG
and
JXL formats being downloaded,
increasing
the amount of data transferred instead of decreasing it
(one of the many benefits of JXL over PNG).
By contrast, a plugin could register itself a handler for the JPEG XL format,
and the browser would then be able to delegate rendering of the image to the plugin,
only falling back to the PNG in case of failure,
thus maximizing the usefulness of the format pending a built-in implementation.
The poster child of this lack of efficiency is arguably
MathJax
,
that has been carrying for nearly two decades the burden of bringing math to the web while browser implementors slacked off on their MathML support.
And while MathJax
does
offer more than just MathML support for browsers without native implementations,
there is little doubt that it would be more effective in delivering the services it delivers
if it could be a plugin rather than a multi-megabyte
(any efforts to minimize its size notwithstanding)
JavaScript library each math-oriented website needs to load.
(In fact, it is somewhat surprising that there isn't a browser extension version of MathJax that I can find
other than a
GreaseMonkey
user script with convoluted usage requirements
,
but I guess this is the cost we have to pay for the library flexibility,
and the sandboxing requirements enforced on JavaScript in modern browsers.)
Since apparently “defensive writing” is a thing we need when jotting down an article such as this
(as if it even matters, given how little attention people give to what they read —if they read it at all— before commenting),
I should clarify that I'm not necessarily advocating for a return to NPAPI.
We have decades of experience about what could be considered the actual technical issues with that interface,
and how they can be improved upon
(which is for example what PPAPI allegedly did,
before Google decided it would be better off to kill plugins entirely
and thus gain full control of the Web as a platform),
as we do about sandboxing external code running in browsers
(largely through the efforts to sandbox JavaScript).
A better plugin API could be designed.
It's not going to happen.
It is now apparent that the major browsers explicitly and intentionally do not want to allow the kind of flexibility that plugins would allow,
hiding their controlling efforts behind security excuses.
It would thus be up to the minority browsers to come up with such an interface
(or actually multiple ones, at least one for protocols and one for document types),
but with most of them beholden to the rendering engines controlled by Google
(for the most part), Apple (some, still using WebKit over Blink), and Mozilla (the few Firefox forks),
they are left with very little leeway, if any at all, in terms of what they can support.
But even if, by some miraculous convergence, they did manage to agree on and implement support for such an API,
would there actually be an interest by third party to develop plugins for it?
I can envision this as a way for browsers to share coding efforts in supporting new protocols and formats before integrating them as first-class
(for example, the
already mentioned
Gemini protocol and gemtext format could be implemented first as a plugin
to the benefit of any browsers supporting such hypothetical interfaces)
but would there be any interest in developing for it, rather than just trying to get the feature implemented in the browsers themselves?
A mesh of building blocks
Still, let me dream a bit of something like this,
a browser made up of composable components,
protocol handlers separate from primary document renderers separate from attachment handlers.
A new protocol comes out?
Implement a plugin to handle that, and you can test it by delivering the same content over it,
and see it rendered just the same from the other components in the chain.
A new document format comes out?
Implement a plugin to handle that, and it will be used to render documents in the new format.
A new image format comes out?
Implement a plugin to handle that, and any image in the new format will be visible.
A new scripting language comes out?
You guessed it: implement a plugin to handle that …
How much tech would have had a real chance at proving itself in the field if this had been the case,
or would have survived being ousted not by technical limitations,
but by unfriendly corporate control?
Who knows, maybe
RSS and Atom integration would still be trivially at everybody's hand;
nobody would have had to fight with the long-standing bugs in PNG rendering from Internet Explorer,
MNG would have flourished, JPEG XL would have become ubiquitous six months after the specification had been finalized;
we would have seen HTML+SMIL provide declarative interactive documents without JavaScript as far back as 2008;
XSLT 2 and 3 would have long superseded XSLT 1 as
the
templating languages for the web,
or XSLT would have been supplanted by the considerably more accessible XQuery;
XHTML2 would have lived and grown alongside HTML5,
offering more sensible markup for many common features,
and much-wanted capabilities such as client-side includes.
The web would have been very different from what it is today,
and most importantly we would never have had to worry
about a single corporation getting to dictate what is and what isn't allowed on the Web.
But the reality is much harsher and darker.
Google has control, and we do need to wrestle it out of their hands.
Resist
So, do not comply.
Resist.
Force the unwanted tech through.
Use RSS.
Use XSLT.
Adopt JPEG XL as your primary image format.
And report newly broken sites for what they are:
a browser fault, not a content issue.
Post scriptum
I would like to add here any
pièces de résistance
for XSLT.
xslt.rip
(best viewed with a browser that supports XSLT; viewing the source is highly recommended);
and last but not least (yeah I know, doesn't make much sense with the current short list, but still),
a shameless plug of
my own website
, of course,
because of the idea to use XSLT not to produce HTML, but to produce SVG.
Show HN: Bsub.io – zero-setup batch execution for command-line tools
I built bsub because I was tired of wiring up Docker images, Python environments, GPUs, sandboxing, and resource limits every time I needed to run heavy command-line tools from web apps. I wanted: send files -> run job in the cloud -> get output -> done.
bsub lets you execute tools like Whisper, Typst, Pandoc, Docling, and FFmpeg as remote batch jobs with no environment setup. You can try them locally via the CLI or integrate via a simple REST API.
Example (PDF extraction):
bsubio submit -w pdf/extract *.pdf
Works like running the tool locally, but the compute and isolation happen in the cloud.
Technical details:
- Each job runs in an isolated container with defined CPU/GPU/RAM limits.
- Files are stored ephemerally for the duration of the job and deleted after completion.
- REST API returns job status, logs, and results.
- Cold start for light processors (Typst, Pandoc) is low; Whisper/FFmpeg take longer due to model load/encoding time.
- Backend scales horizontally; more workers can be added during load spikes.
Current processors:
STT/Whisper -- speech-to-text
Typography -- Typst, Pandoc
PDF extraction -- Docling
Video transcoding -- FFmpeg
More coming; suggestions welcome for tools that are painful to set up locally.
Looking for testers! CLI is open source:
https://github.com/bsubio/cli
. Installers available for Linux/macOS; Windows testing is in progress. Free during early testing; pricing TBD.
If you’re on Windows, feedback is especially helpful: contact@bsub.io
If you try it, I’d appreciate feedback on API design, latency, missing processors, or anything rough around the edges.
Josefsson: Introducing the Debian Libre Live Images
Linux Weekly News
lwn.net
2025-11-17 15:07:16
Debian developer Simon Josefsson has announced
the Debian
Libre Live Images project, to allow installing Debian without any
non-free software:
Since the 2022 decision on non-free firmware, the official images
for bookworm and trixie contain non-free software.
The Debian Libre Live Images project provides Live ISO images for
Intel/AMD-compatible 64-bit x86 CPUs (amd64) built without any
non-free software, suitable for running and installing Debian. The
images are similar to the
Debian Live Images
distributed as
Debian
live images
.
He does warn that this is a first public release, so there may be
problems. See the
current
list of known issues
before trying the images out.
WeatherNext 2: Our most advanced weather forecasting model
The new AI model delivers more efficient, more accurate and higher-resolution global weather predictions.
The weather affects important decisions we make everyday — from global supply chains and flight paths to your daily commute. In recent years, artificial intelligence (AI) has dramatically enhanced what’s possible in weather forecasting and the ways in which we can use it.
Today, Google DeepMind and Google Research are introducing
WeatherNext 2
, our most advanced and efficient forecasting model. WeatherNext 2 can generate forecasts 8x faster and with resolution up to 1-hour. This breakthrough is enabled by a new model that can provide hundreds of possible scenarios. Using this technology, we’ve supported weather agencies in making decisions based on a range of scenarios through our
experimental cyclone predictions
.
We're now taking our research out of the lab and putting it into the hands of users. WeatherNext 2's forecast data is now available in
Earth Engine
and
BigQuery
. We’re also launching an
early access program
on Google Cloud’s Vertex AI platform for custom model inference.
By incorporating WeatherNext technology, we’ve now upgraded weather forecasts in Search, Gemini, Pixel Weather and Google Maps Platform’s
Weather API
. In the coming weeks, it will also help power weather information in Google Maps.
Predicting more possible scenarios
From a single input, we use independently trained neural networks and inject noise in function space to create coherent variability in weather forecast predictions.
Weather predictions need to capture the full range of possibilities — including worst case scenarios, which are the most important to plan for.
WeatherNext 2 can predict hundreds of possible weather outcomes from a single starting point. Each prediction takes less than a minute on a single TPU; it would take hours on a supercomputer using physics-based models.
Our model is also highly skillful and capable of higher-resolution predictions, down to the hour. Overall, WeatherNext 2 surpasses our previous state-of-the-art WeatherNext model on 99.9% of variables (e.g. temperature, wind, humidity) and lead times (0-15 days), enabling more useful and accurate forecasts.
This improved performance is enabled by a new AI modelling approach called a
Functional Generative Network
(FGN), which injects ‘noise’ directly into the model architecture so the forecasts it generates remain physically realistic and interconnected.
This approach is particularly useful for predicting what meteorologists refer to as “marginals” and “joints.” Marginals are individual, standalone weather elements: the precise temperature at a specific location, the wind speed at a certain altitude or the humidity. What's novel about our approach is that the model is only trained on these marginals. Yet, from that training, it learns to skillfully forecast 'joints' — large, complex, interconnected systems that depend on how all those individual pieces fit together. This 'joint' forecasting is required for our most useful predictions, such as identifying entire regions affected by high heat, or expected power output across a wind farm.
Continuous Ranked Probability Score (CRPS) comparing WeatherNext 2 to WeatherNext Gen
From research to reality
With WeatherNext 2, we're translating cutting edge research into high-impact applications. We’re committed to advancing the state of the art of this technology and making our latest tools available to the global community.
Looking ahead, we’re actively researching capabilities to improve our models, including integrating new data sources, and expanding access even further. By providing powerful tools and open data, we hope to accelerate scientific discovery and empower a global ecosystem of researchers, developers and businesses to make decisions on today’s most complex problems and build for the future.
The Video Game Industry’s Existential Crisis (with Jason Schreier)
404 Media
www.404media.co
2025-11-17 15:00:04
Video games are more popular than ever, but many of the biggest companies in the business seem like they are struggling to adapt and convert that popularity into stability and sustainability....
The video game industry has had a turbulent few years. The pandemic made people play more and caused a small boom, which then subsided, resulting in wave after wave of massive layoffs. Microsoft, one of the major console manufacturers, is shifting its strategy for Xbox as the company shifts its focus to AI. And now, Electronic Arts, once a load-bearing publisher for the industry with brands like
The Sims
and
Madden
, is going private via a leveraged buyout in a deal involving Saudi Arabia’s Public Investment Fund and Jared Kushner.
Video games are more popular than ever, but many of the biggest companies in the business seem like they are struggling to adapt and convert that popularity into stability and sustainability. To try and understand what the hell is going on, this week we have a conversation between Emanuel and Jason Schreier, who reports about video games for Bloomberg and one of the best journalists on this beat.
Jason helps us unpack why Microsoft is now aiming for higher-than-average profit margins at Xbox and why the company is seemingly bowing out of the console business despite a massive acquisition spree. We also talk about what the EA deal tells us about other game publishers, and what all these problems tell us about changing player habits and the future of big budget video games.
Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Emanuel Maiberg is interested in little known communities and processes that shape technology, troublemakers, and petty beefs. Email him at emanuel@404media.co
Typechecking is undecidable when 'type' is a type
Lobsters
dspace.mit.edu
2025-11-17 14:58:38
A function has a dependent type when the type of its result depends upon the value of its argument. The type of all types is the type of every type, including itself. In a typed lambda calculus, these two features synergize in a conceptually clean and uniform way to yield enormous expressive power a...
If delivered today, the
last
of Huxley’s
7-part lecture series at MIT
would probably be categorised under motivational talks or self-help strategies. It surveys the various under-explored non-pharmacological means to realise the best versions of ourselves. Or, as he calls it, actualising our desirable potentialities.
Some fairly well known means for self-actualisation that Huxley discusses are
Alexander technique
and
Gestalt therapy
. While the former is considered a pseudoscientific therapy
I am not using this as a pejorative, for a change.
, Huxley tells us of the influential educator
John Dewey
’s admiration of
F.M. Alexander
’s work. He paraphrases Dewey’s foreword in one of Alexander’s books:
Alexander’s technique is to education what education is to life, in general. It proposes an ideal and provides means whereby that ideal can be realised.
While this is mentioned much later in his lecture, a Huxley-like figure today might need to lead their lecture with this part to convey the import and validity of such approaches. Huxley doesn’t really go into the details of why or how this is true but admittedly admires Alexander’s contribution. I have a friend who swears by it for enhancing their dance practice though I have not been able to grok what it does so far—it sounds a lot like what meditation does in terms of raising awareness.
Huxley believes that such practices are effective at
psychologically breeding in
desirable qualities in a person instead of:
genetically breeding out
undesirable ones; or
pharmacologically enhancing
our intellectual abilities—i.e., improved attention spans or reduced sleep—to increase our
mental efficiency
. Here, Huxley predicts the emergence of
Adderall
though I was less impressed by his forecasting euphoric pharmaceuticals. After all, this lecture was delivered several years after the publication of
The Doors of Perception
.
The underlying efficiency gains from these psychological approaches happen, he claims, because they train humans into being fundamentally happier; something he felt pharma-euphorics might also achieve one day. The reason such therapies are effective is that they do not provide a homogeneous training; instead, they can be adapted to individual personalities and their intrinsic differences, allowing each individual to actualise their latent potentialities via different means. This recognition that there is no single ideal version of a human is quite old; Huxley finds the most realistic (or complete) ideals in the
Bhagavad Gita
’s
Three Yogas
. The ways of devotion (Bhakti), selfless action (Karma), and contemplation (Jnana) can all lead to enlightenment, i.e., the actualisation of desirable qualities. He sees a correspondence between these yogas and the more recent Western categorisation of human beings by William Sheldon’s somatotypes—
quite a problematic take when I read the traits listed in this table
. While I do admire his capacity to form connections through history
Whether I see them or not is less important.
, I don’t see the relationship between these two beyond the fact that these are categories. They’re by no means comparable so maybe I missed the point of this comparison.
He highlights parallels between the positive outcomes of training one’s imagination via Gestalt therapy and those seen in
Richard DeMille
’s strategies in
Children’s Imagination Games
: children get more fun out of life by, for example, visualising adversarial or intimidating situations with adults in a more playful manner so that things feel less serious than they need to be
That is how I understood this section.
. The examples Huxley gives here reminded me of those given to nervous interviewees and public speakers, like “
Imagine your audience is naked
”, to take the edge off.
As an educator I am very sympathetic to Huxley’s grand idea in this lecture that we must develop new methods of education that adapt to personality variations; the current strategy of pigeonholing students into the identical training-and-testing modalities remains inappropriate, especially as technological advancements—which academia struggles to keep up with—could enable more personalised and expressive learning. He doesn’t imagine one-to-one therapy as the scalable solution to actualisation; instead, he suggests building upon the pre-existing categories of humans into three or more groups to test out other means and potentially develop new ones based on past practices.
While I think the whole lecture is delivered eloquently, I am unsure if it has more of a thesis than that; it’s more a survey of techniques that rely on anecdotal evidence or name-dropping to convey their effectiveness.
Tomorrow’s post
will unpack how he sees the role of the humanities in helping us actualise our desirable potentialities, which Huxley discussed in his lecture. It will also include my own concluding thoughts on his lecture. Maybe I will have some semblance of a thesis from it as I contemplate his words overnight.
Microsoft is working to resolve a known issue preventing users from installing the Microsoft 365 desktop apps on Windows devices.
In a Friday service alert seen by BleepingComputer, Microsoft said that this bug is caused by misconfigured authentication components and may affect any customer attempting to install Microsoft 365 desktop apps version 2508 (Build 19127.20358) and version 2507 (Build 19029.20294).
The Microsoft 365 team is now reconfiguring the impacted authentication components and estimates that a final fix will be rolled out later today.
"A newly released set of authentication components contain a misconfiguration that prevents users from installing Microsoft 365 desktop apps on Windows devices," Microsoft said in a Friday update.
"We're continuing to develop a set of two builds that address the authentication component misconfigurations, with the build for version 2508 validated and in the process of deploying. We anticipate that our second build for version 2507 will be ready for final validation by our next scheduled update, after which we'll deploy to fully remediate impact."
While Microsoft has yet to disclose the number of customers and the regions impacted by this known issue, it has tagged it as an incident (OP1186186), a designation commonly used to describe a critical service issue typically involving noticeable user impact.
Microsoft is also working to address a separate issue (tracked as MO1176905) that affects a limited number of admins and users, preventing them from accessing multiple Microsoft 365 services.
As Redmond explains, this impacts only customers whose Microsoft 365 Group SecurityEnabled property is set to false, after a recent misconfiguration changed the default value to false.
Last week, it resolved a Microsoft Intune bug (IT1185063) that prevented some users from successfully enrolling new AOSP (Android Open Source Project) or Android Personal Work Profile devices.
In October, it also
mitigated a DNS outage
that impacted customers worldwide, preventing them from logging into company networks and accessing Microsoft 365 and Microsoft Azure services.
⚠️
Disclaimer
: This is an experimental project for educational and research purposes. The author assumes no responsibility for misuse or damage resulting from the use of this system. Use responsibly and in compliance with applicable laws.
What it does
: Detects movement at home using Wi-Fi (no cameras, no microphones)
What you need
: A ~€10 device (ESP32-S3) + Home Assistant or MQTT server + ESP-IDF development tools
Setup time
: 30-45 minutes (first time, including ESP-IDF setup)
🔬 Mathematical Approach
This project currently does NOT use Machine Learning models.
Instead, it employs a
mathematical approach
that extracts
10 features
from CSI (Channel State Information) data using statistical and signal processing techniques.
Key Points
✅
No ML training required
: Works out-of-the-box with mathematical algorithms
✅
10 extracted features
: Statistical, spatial, and temporal features
✅
Real-time processing
: Low latency detection on ESP32-S3 hardware
✅
Foundation for ML
: These features can serve as the basis for collecting labeled datasets to train ML models for advanced tasks (people counting, activity recognition, gesture detection)
The mathematical approach provides excellent movement detection without the complexity of ML model training, while the extracted features offer a solid foundation for future ML-based enhancements.
🛒 What You Need
Hardware (Total: ~€10)
✅
2.4GHz Wi-Fi Router
(the one you already have at home works fine)
✅
ESP32-S3 DevKit bundle with external antennas
(~€10) - Available on Amazon, AliExpress, or electronics stores
ESP32-S3 DevKit with external antennas (recommended for better reception)
Software (All Free)
✅
MQTT Broker
(required for operation):
Home Assistant
with built-in MQTT broker (on Raspberry Pi, PC, NAS, or cloud)
OR standalone
Mosquitto
MQTT server (can run on any device, including Raspberry Pi)
✅
ESP-IDF v6.1
(development framework for building firmware)
Required Skills
✅
Basic command line knowledge
required for building and flashing firmware
Setup & Installation
: Follow the complete guide in
SETUP.md
Calibration & Tuning
: Optimize for your environment with
CALIBRATION.md
📖 How It Works (Simple Version)
When someone moves in a room, they "disturb" the Wi-Fi waves traveling between the router and the sensor. It's like when you move your hand in front of a flashlight and see the shadow change.
The ESP32-S3 device "listens" to these changes and understands if there's movement.
Advantages
✅
No cameras
(total privacy)
✅
No wearables needed
(no bracelets or sensors to wear)
✅
Works through walls
(Wi-Fi passes through walls)
✅
Very cheap
(~€10 total)
📚 Technical Explanation (click to expand)
What is CSI (Channel State Information)?
Channel State Information (CSI)
represents the physical characteristics of the wireless communication channel between transmitter and receiver. Unlike simple RSSI (Received Signal Strength Indicator), CSI provides rich, multi-dimensional data about the radio channel.
What CSI Captures
Per-subcarrier information:
Amplitude
: Signal strength for each OFDM subcarrier (up to 64)
Phase
: Phase shift of each subcarrier
Frequency response
: How the channel affects different frequencies
Environmental effects:
Multipath propagation
: Reflections from walls, furniture, objects
Doppler shifts
: Changes caused by movement
Temporal variations
: How the channel evolves over time
Spatial patterns
: Signal distribution across antennas/subcarriers
Why It Works for Movement Detection
When a person moves in an environment, they:
Alter multipath reflections (new signal paths)
Change signal amplitude and phase
Create temporal variations in CSI patterns
Modify the electromagnetic field structure
These changes are detectable even through walls, enabling
privacy-preserving presence detection
without cameras, microphones, or wearable devices.
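As a rough illustration of how CSI becomes usable numbers, the sketch below converts a raw buffer of interleaved 8-bit I/Q pairs into per-subcarrier amplitude and phase. The buffer layout assumption (imaginary byte first, then real, as described in the ESP32 CSI documentation) and the function name are illustrative, not the project's exact code:

```c
// Sketch: raw CSI buffer of interleaved 8-bit I/Q pairs -> per-subcarrier
// amplitude and phase. Layout assumption: imaginary then real, int8_t each,
// per the ESP32 CSI docs; adapt if your IDF version differs.
#include <math.h>
#include <stdint.h>
#include <stddef.h>

static size_t csi_to_amp_phase(const int8_t *buf, size_t buf_len,
                               float *amplitude, float *phase, size_t max_sc)
{
    size_t n_sc = buf_len / 2;            // two bytes (I, Q) per subcarrier
    if (n_sc > max_sc) n_sc = max_sc;

    for (size_t i = 0; i < n_sc; i++) {
        float im = (float)buf[2 * i];     // imaginary component
        float re = (float)buf[2 * i + 1]; // real component
        amplitude[i] = sqrtf(re * re + im * im);
        phase[i]     = atan2f(im, re);
    }
    return n_sc;                          // number of subcarriers converted
}
```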
💡 What You Can Do With It
Practical Examples
🏠
Home security
: Get an alert if someone enters while you're away
👴
Elderly care
: Monitor activity to detect falls or prolonged inactivity
💡
Smart automation
: Turn on lights/heating only when someone is present
⚡
Energy saving
: Automatically turn off devices in empty rooms
👶
Child monitoring
: Alert if they leave the room during the night
🌡️
Climate control
: Heat/cool only occupied zones
📍 Where to Place the Sensor
Optimal sensor placement is crucial for reliable movement detection.
Recommended Distance from Router
Optimal range: 3-8 meters
| Distance | Signal | Multipath | Sensitivity | Noise | Recommendation |
|----------|--------|-----------|-------------|-------|----------------|
| < 2m | Too strong | Minimal | Low | Low | ❌ Too close |
| 3-8m | Strong | Good | High | Low | ✅ Optimal |
| > 10-15m | Weak | Variable | Low | High | ❌ Too far |
Placement Tips
✅
Position sensor in the area to monitor
(not necessarily in direct line with router)
✅
Height: 1-1.5 meters
from ground (desk/table height)
✅
External antenna
: Use IPEX connector for better reception
❌
Avoid metal obstacles
between router and sensor (refrigerators, metal cabinets)
❌
Avoid corners
or enclosed spaces (reduces multipath diversity)
⚙️ System Architecture
Processing Pipeline
ESPectre uses a streamlined processing pipeline:
┌─────────────┐
│ CSI Data │ Raw Wi-Fi Channel State Information
└──────┬──────┘
│
▼
┌─────────────┐
│Segmentation │ Moving Variance Segmentation (MVS)
│ (2-state) │ IDLE ↔ MOTION (operates on RAW CSI)
└──────┬──────┘
│
├─────────────────────┐
│ │
▼ ▼
┌─────────────┐ ┌──────────────┐
│ IDLE │ │ MOTION │
│ (no feat.) │ │ (optional │
│ │ │ features) │
└─────────────┘ └──────┬───────┘
│
▼
┌─────────────┐
│ Filters │ Butterworth, Wavelet,
│ │ Hampel, Savitzky-Golay
│ │ (applied to features only)
└──────┬──────┘
│
▼
┌─────────────┐
│ Features │ 10 mathematical features
│ (if enabled)│ (filtered CSI data)
└──────┬──────┘
│
┌────────────────────┴────────────────────┐
│ │
▼ ▼
┌─────────────┐ ┌─────────────┐
│ MQTT │ Publish state + metrics │ MQTT │
│ (IDLE) │ │ (MOTION) │
└─────────────┘ └─────────────┘
Key Points:
2-state system
: IDLE or MOTION (no intermediate states)
Segmentation-based
: Uses Moving Variance Segmentation (MVS) on
raw CSI data
Filters applied to features only
: Segmentation uses unfiltered data to preserve motion sensitivity
Optional features
: Feature extraction only during MOTION state (configurable)
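The sketch below shows one plausible shape for the 2-state logic described above. The threshold source and the debounce counter are illustrative placeholders, not ESPectre's actual implementation (the real firmware derives the threshold adaptively from the moving variance of the turbulence signal):

```c
// Simplified sketch of the IDLE <-> MOTION state machine described above.
// Placeholder logic: the caller supplies a turbulence value and a threshold.
#include <stdbool.h>

typedef enum { STATE_IDLE, STATE_MOTION } motion_state_t;

typedef struct {
    motion_state_t state;
    int quiet_count;   // consecutive samples below threshold
    int quiet_hold;    // samples required before returning to IDLE
} segmenter_t;

static bool segmenter_update(segmenter_t *seg, float turbulence, float threshold)
{
    switch (seg->state) {
    case STATE_IDLE:
        if (turbulence > threshold) {              // motion starts
            seg->state = STATE_MOTION;
            seg->quiet_count = 0;
        }
        break;
    case STATE_MOTION:
        if (turbulence <= threshold) {
            if (++seg->quiet_count >= seg->quiet_hold)  // debounced exit
                seg->state = STATE_IDLE;
        } else {
            seg->quiet_count = 0;
        }
        break;
    }
    return seg->state == STATE_MOTION;             // true while motion is active
}
```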
Q: Do I need programming knowledge to use it?
A: Basic command line skills are needed to build and flash the firmware using ESP-IDF. Follow the step-by-step guide in SETUP.md.
Q: Does it work with my router?
A: Yes, if your router has 2.4GHz Wi-Fi (virtually all modern routers have it).
Q: How much does it cost in total?
A: Hardware: ~€10 for the ESP32-S3 device. Software: All free and open source. You'll also need a device to run the MQTT broker (Home Assistant or Mosquitto), which can be a Raspberry Pi (~€35-50) or any existing PC/NAS you already have (free).
Q: Do I need to modify anything on the router?
A: No! The router works normally. The sensor "listens" to Wi-Fi signals without modifying anything.
Q: Can I try it without Home Assistant?
A: Yes, you can use any MQTT server (e.g., Mosquitto) or even just view data via serial port.
Q: Does it work through walls?
A: Yes, the 2.4GHz Wi-Fi signal penetrates drywall. Reinforced concrete walls reduce sensitivity but detection remains possible at reduced distances.
Q: How many sensors are needed for a house?
A: It depends on size. One sensor can monitor ~50 m². For larger homes, use multiple sensors (1 sensor every 50-70 m² for optimal coverage).
Q: Can it distinguish between people and pets?
A: The system uses a 2-state segmentation model (IDLE/MOTION) that identifies generic movement without distinguishing between people, pets, or other moving objects. For more sophisticated classification (people vs pets, activity recognition, gesture detection), trained AI/ML models would be required (see Future Evolutions section).
Q: Does it consume a lot of Wi-Fi bandwidth?
A: No, MQTT traffic is minimal. With smart publishing disabled (default), the system publishes all detection updates. When smart publishing is enabled, the system only sends data on significant changes or every 5 seconds as a heartbeat, resulting in ~0.2-0.5 KB/s per sensor during idle periods and up to ~1 KB/s during active movement. Network impact is negligible.
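For the curious, the smart-publishing behavior could be approximated with a small throttle like the sketch below. The 5-second heartbeat and the change threshold mirror the numbers quoted here; the function and field names are illustrative rather than ESPectre's actual code:

```c
// Sketch of "smart publishing": publish only on a state change, a significant
// score change, or every 5 s as a heartbeat. Names and thresholds are illustrative.
#include <stdbool.h>
#include <stdint.h>
#include <math.h>

#define HEARTBEAT_MS    5000
#define SCORE_DELTA_MIN 0.10f   // minimum change worth publishing

typedef struct {
    bool     last_motion;
    float    last_score;
    uint32_t last_publish_ms;
} publisher_t;

static bool should_publish(publisher_t *p, bool motion, float score, uint32_t now_ms)
{
    bool changed = (motion != p->last_motion) ||
                   (fabsf(score - p->last_score) > SCORE_DELTA_MIN);
    bool heartbeat = (now_ms - p->last_publish_ms) >= HEARTBEAT_MS;

    if (changed || heartbeat) {
        p->last_motion = motion;
        p->last_score = score;
        p->last_publish_ms = now_ms;
        return true;    // caller then publishes state + metrics over MQTT
    }
    return false;
}
```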
Q: Does it work with mesh Wi-Fi networks?
A: Yes, it works normally. Make sure the ESP32 connects to the 2.4 GHz band.
Q: Is a dedicated server necessary?
A: No, Home Assistant can run on Raspberry Pi, NAS, or cloud. Alternatively, just an MQTT broker (Mosquitto) on any device is sufficient.
Q: How accurate is the detection?
A: Detection accuracy is highly environment-dependent and requires proper tuning. Factors affecting performance include: room layout, wall materials, furniture placement, distance from router (optimal: 3-8m), and interference levels. In optimal conditions with proper tuning, the system provides reliable movement detection. Adjust the
segmentation_threshold
parameter to tune sensitivity for your specific environment.
Q: What's the power consumption?
A: ~500mW typical during continuous operation. The firmware includes support for power optimization, and deep sleep modes can be implemented for battery-powered deployments, though this would require custom modifications to the code.
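The firmware does not ship a deep sleep mode, but as a hedged starting point for battery experiments, a timer-based duty cycle using standard ESP-IDF calls might look like the sketch below (note that no CSI is captured while the chip sleeps, so detection is interrupted during each sleep interval):

```c
// Not part of the shipped firmware: a minimal timer-based deep sleep cycle
// using standard ESP-IDF calls. The radio is off during deep sleep, so no
// CSI is captured; on wake-up the chip reboots into app_main().
#include "esp_sleep.h"

#define SLEEP_SECONDS 30

static void enter_low_power_cycle(void)
{
    esp_sleep_enable_timer_wakeup((uint64_t)SLEEP_SECONDS * 1000000ULL);
    esp_deep_sleep_start();   // does not return
}
```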
Q: If it doesn't work, can I get help?
A: Yes, open an
Issue on GitHub
or contact me via email.
🔒 Security and Privacy
🔐 Privacy, Security & Ethical Considerations (click to expand)
Nature of Collected Data
The system collects
anonymous data
related to the physical characteristics of the Wi-Fi radio channel:
Amplitudes and phases of OFDM subcarriers
Statistical signal variances
NOT collected
: personal identities, communication contents, images, audio
CSI data represents only the properties of the transmission medium and does not contain direct identifying information.
Privacy Advantages
✅
No cameras
: Respect for visual privacy
✅
No microphones
: No audio recording
✅
No wearables
: Doesn't require wearable devices
✅
Aggregated data
: Only statistical metrics, not raw identifying data
⚠️
Disclaimer and Ethical Considerations
WARNING
: Despite the intrinsic anonymity of CSI data, this system can be used for:
Non-consensual monitoring
: Detecting presence/movement of people without their explicit consent
Behavioral profiling
: With advanced AI models, inferring daily life patterns
Domestic privacy violation
: Tracking activities inside private homes
Usage Responsibility
The user is solely responsible for using this system and must:
✅
Obtain explicit consent
from all monitored persons
✅
Respect local regulations
(GDPR in EU, local privacy laws)
✅
Clearly inform
about the presence of the sensing system
✅
Limit use
to legitimate purposes (home security, personal home automation)
✅
Protect data
with encryption and controlled access
❌
DO NOT use
for illegal surveillance, stalking, or violation of others' privacy
🔍 Technical Deep Dive
Moving Variance Segmentation (MVS) analysis: baseline graphs (top) show quiet state, while bottom graphs show motion detection with turbulence signal, adaptive threshold, and state transitions
🔬 Signal Processing Pipeline (click to expand)
Data Flow
1️⃣
CSI Acquisition
(ESP32-S3)
Native ESP32 CSI API
captures Wi-Fi Channel State Information via callback
Extracts amplitude and phase data from OFDM subcarriers (up to 64 subcarriers)
Typical capture rate: ~10-100 packets/second depending on Wi-Fi traffic
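As a hedged sketch of how the native CSI callback is typically wired up in ESP-IDF (Wi-Fi must already be initialized and connected; exact wifi_csi_config_t fields vary across IDF versions, and this is not necessarily ESPectre's code):

```c
// Hedged sketch: registering a CSI receive callback with the ESP-IDF Wi-Fi driver.
#include "esp_wifi.h"
#include "esp_log.h"

static const char *TAG = "csi";

static void csi_rx_cb(void *ctx, wifi_csi_info_t *info)
{
    // info->buf holds interleaved 8-bit I/Q pairs, info->len is its length.
    // Keep this callback short: queue the data and process it in a task.
    ESP_LOGD(TAG, "CSI packet: %d bytes, RSSI %d",
             info->len, info->rx_ctrl.rssi);
}

static void start_csi_capture(void)
{
    wifi_csi_config_t cfg = {0};  // all-zero here; enable the training-field
                                  // flags you need per the ESP-IDF CSI docs
    ESP_ERROR_CHECK(esp_wifi_set_csi_config(&cfg));
    ESP_ERROR_CHECK(esp_wifi_set_csi_rx_cb(csi_rx_cb, NULL));
    ESP_ERROR_CHECK(esp_wifi_set_csi(true));   // start delivering CSI packets
}
```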
2️⃣
Motion Segmentation
(ESP32-S3)
Spatial turbulence calculation
: Standard deviation of subcarrier amplitudes (raw CSI data)
Adaptive threshold
: Based on moving variance of turbulence signal
Segment features
: Duration, average turbulence, maximum turbulence
Circular buffer
: Maintains up to 10 recent segments for analysis
Foundation for ML
: Segments can be labeled and used for activity classification
Note
: Segmentation operates on
raw, unfiltered CSI data
to preserve motion sensitivity. Filters are not applied to the turbulence signal used for segmentation.
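A compact sketch of the segmentation math described above: spatial turbulence as the standard deviation of one packet's subcarrier amplitudes, and an adaptive threshold derived from a moving window of that turbulence signal. The window size and the k multiplier are illustrative placeholders, not the firmware's tuned values:

```c
// Sketch: spatial turbulence per packet and a moving-window adaptive threshold.
#include <math.h>
#include <stddef.h>

#define MV_WINDOW 64   // samples in the moving-variance window (placeholder)

// Standard deviation across subcarrier amplitudes of a single CSI packet.
static float spatial_turbulence(const float *amp, size_t n)
{
    if (n == 0) return 0.0f;
    float mean = 0.0f, var = 0.0f;
    for (size_t i = 0; i < n; i++) mean += amp[i];
    mean /= (float)n;
    for (size_t i = 0; i < n; i++) {
        float d = amp[i] - mean;
        var += d * d;
    }
    return sqrtf(var / (float)n);
}

// Running window of the turbulence signal used to adapt the threshold.
typedef struct {
    float window[MV_WINDOW];
    size_t idx, count;
} moving_stats_t;

static float adaptive_threshold(moving_stats_t *ms, float turbulence, float k)
{
    ms->window[ms->idx] = turbulence;
    ms->idx = (ms->idx + 1) % MV_WINDOW;
    if (ms->count < MV_WINDOW) ms->count++;

    float mean = 0.0f, var = 0.0f;
    for (size_t i = 0; i < ms->count; i++) mean += ms->window[i];
    mean /= (float)ms->count;
    for (size_t i = 0; i < ms->count; i++) {
        float d = ms->window[i] - mean;
        var += d * d;
    }
    var /= (float)ms->count;
    return mean + k * sqrtf(var);   // threshold = moving mean + k * std deviation
}
```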
3️⃣
Optional Signal Processing Filters
(ESP32-S3)
Advanced filters (Butterworth, Wavelet, Hampel, Savitzky-Golay) are applied to CSI data
before feature extraction
(configurable via MQTT).
4️⃣
Feature Extraction
(ESP32-S3)
Spatial (3 features)
Characteristics across OFDM subcarriers (frequency domain):
Spatial Variance
- Variability across subcarriers, indicates multipath diversity
Spatial Correlation
- Correlation between adjacent subcarriers, affected by movement
Spatial Gradient
- Rate of change across subcarriers, highly sensitive to movement
Temporal (2 features)
Changes between consecutive CSI packets:
Temporal Delta Mean
- Average absolute difference from previous packet
Temporal Delta Variance
- Variance of differences from previous packet
Usage
Feature extraction is
enabled by default
but can be disabled to reduce CPU usage.
Note
: Features are only extracted during MOTION state, not during IDLE, to optimize performance.
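To make the feature definitions above concrete, here is an illustrative sketch of plausible implementations of the spatial correlation, spatial gradient, and temporal delta features. These are reasonable readings of the feature names, not ESPectre's exact formulas:

```c
// Illustrative feature sketches over one packet's subcarrier amplitudes
// (cur) and the previous packet's (prev). Definitions are assumptions.
#include <math.h>
#include <stddef.h>

// Spatial correlation: correlation of amplitude[i] with amplitude[i+1].
static float spatial_correlation(const float *amp, size_t n)
{
    if (n < 2) return 0.0f;
    float mx = 0, my = 0;
    for (size_t i = 0; i + 1 < n; i++) { mx += amp[i]; my += amp[i + 1]; }
    mx /= (float)(n - 1); my /= (float)(n - 1);

    float num = 0, dx = 0, dy = 0;
    for (size_t i = 0; i + 1 < n; i++) {
        float a = amp[i] - mx, b = amp[i + 1] - my;
        num += a * b; dx += a * a; dy += b * b;
    }
    return (dx > 0 && dy > 0) ? num / sqrtf(dx * dy) : 0.0f;
}

// Spatial gradient: mean absolute difference between adjacent subcarriers.
static float spatial_gradient(const float *amp, size_t n)
{
    if (n < 2) return 0.0f;
    float g = 0;
    for (size_t i = 0; i + 1 < n; i++) g += fabsf(amp[i + 1] - amp[i]);
    return g / (float)(n - 1);
}

// Temporal deltas: mean and variance of |current - previous| per subcarrier.
static void temporal_deltas(const float *cur, const float *prev, size_t n,
                            float *delta_mean, float *delta_var)
{
    if (n == 0) { *delta_mean = *delta_var = 0.0f; return; }
    float mean = 0, var = 0;
    for (size_t i = 0; i < n; i++) mean += fabsf(cur[i] - prev[i]);
    mean /= (float)n;
    for (size_t i = 0; i < n; i++) {
        float d = fabsf(cur[i] - prev[i]) - mean;
        var += d * d;
    }
    *delta_mean = mean;
    *delta_var  = var / (float)n;
}
```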
📋 Technical Specifications (click to expand)
Hardware Requirements
Board
: ESP32-S3-DevKitC-1 N16R8
Flash
: 16MB
PSRAM
: 8MB
Wi-Fi
: 802.11 b/g/n (2.4 GHz only)
Antenna
: Built-in PCB antenna + IPEX connector for external
Limitations
Works only on 2.4 GHz band (ESP32-S3 hardware limitation)
Sensitivity dependent on: wall materials, antenna placement, distances, interference
Not suitable for environments with very high Wi-Fi traffic
Cannot distinguish between people, pets, or objects (generic motion detection)
Cannot count people or recognize specific activities (without ML models)
Reduced performance through metal obstacles or thick concrete walls
🤖 Future Evolutions: AI Approach
📚 Machine Learning and Deep Learning (click to expand)
The current implementation uses an
advanced mathematical approach
with 10 features and multi-criteria detection to identify movement patterns. While this provides excellent results without requiring ML training, scientific research has shown that
Machine Learning
and
Deep Learning
techniques can extract even richer information from CSI data for complex tasks like people counting, activity recognition, and gesture detection.
Advanced Applications
1.
People Counting
Classification or regression models can estimate the number of people present in an environment by analyzing complex patterns in CSI.
References:
Wang et al.
(2017) - "Device-Free Crowd Counting Using WiFi Channel State Information" - IEEE INFOCOM
Xi et al.
(2016) - "Electronic Frog Eye: Counting Crowd Using WiFi" - IEEE INFOCOM
2.
Activity Recognition
Neural networks (CNN, LSTM, Transformer) can classify human activities like walking, falling, sitting, sleeping.
References:
Wang et al.
(2015) - "Understanding and Modeling of WiFi Signal Based Human Activity Recognition" - ACM MobiCom
Yousefi et al.
(2017) - "A Survey on Behavior Recognition Using WiFi Channel State Information" - IEEE Communications Magazine
Zhang et al.
(2019) - "WiFi-Based Indoor Robot Positioning Using Deep Neural Networks" - IEEE Access
3.
Localization and Tracking
Deep learning algorithms can estimate position and trajectory of moving people.
References:
Wang et al.
(2016) - "CSI-Based Fingerprinting for Indoor Localization: A Deep Learning Approach" - IEEE Transactions on Vehicular Technology
Chen et al.
(2018) - "WiFi CSI Based Passive Human Activity Recognition Using Attention Based BLSTM" - IEEE Transactions on Mobile Computing
4.
Gesture Recognition
Models trained on CSI temporal sequences can recognize hand gestures for touchless control.
References:
Abdelnasser et al.
(2015) - "WiGest: A Ubiquitous WiFi-based Gesture Recognition System" - IEEE INFOCOM
Jiang et al.
(2020) - "Towards Environment Independent Device Free Human Activity Recognition" - ACM MobiCom
Available Public Datasets
UT-HAR
: Human Activity Recognition dataset (University of Texas)
Widar 3.0
: Gesture recognition dataset with CSI
SignFi
: Sign language recognition dataset
FallDeFi
: Fall detection dataset
🛜 Standardized Wi-Fi Sensing (IEEE 802.11bf) (click to expand)
Currently, only a limited number of Wi-Fi chipsets support CSI extraction, which restricts hardware options for Wi-Fi sensing applications. However, the
IEEE 802.11bf (Wi-Fi Sensing)
standard should significantly improve this situation by making CSI extraction a standardized feature.
🔹
Native sensing
: Detection of movements, gestures, presence, and vital signs
🔹
Interoperability
: Standardized support across different vendors
🔹
Optimizations
: Specific protocols to reduce overhead and power consumption
🔹
Privacy by design
: Privacy protection mechanisms integrated into the standard
🔹
Greater precision
: Improvements in temporal and spatial granularity
🔹
Existing infrastructure
: Works with already present Wi-Fi infrastructure
Adoption Status (2025)
Market
: The Wi-Fi Sensing market is in its early stages and is expected to experience significant growth in the coming years as the 802.11bf standard enables native sensing capabilities in consumer devices.
Hardware availability
:
⚠️
Consumer routers
: Currently
there are no widely available consumer routers
with native 802.11bf support
🏢
Commercial/industrial
: Experimental devices and integrated solutions already in use
🔧
Hardware requirements
: Requires multiple antennas, Wi-Fi 6/6E/7 support, and AI algorithms for signal processing
Expected timeline
:
2025-2026
: First implementations in enterprise and premium smart home devices
2027-2028
: Diffusion in high-end consumer routers
2029+
: Mainstream adoption in consumer devices
Future Benefits for Wi-Fi Sensing
When 802.11bf is widely adopted, applications like this project will become:
More accessible
: No need for specialized hardware or modified firmware
More reliable
: Standardization ensures predictable behavior
More efficient
: Protocols optimized for continuous sensing
More secure
: Privacy mechanisms integrated at the standard level
More powerful
: Ability to detect even vital signs (breathing, heartbeat)
Perspective
: In the next 3-5 years, routers and consumer devices will natively support Wi-Fi Sensing, making projects like this implementable without specialized hardware or firmware modifications. This will open new possibilities for smart home, elderly care, home security, health monitoring, and advanced IoT applications.
For now
: Solutions like this project based on
ESP32 CSI API
remain the most accessible and economical way to experiment with Wi-Fi Sensing.
📚 References
This project builds upon extensive research in Wi-Fi sensing and CSI-based movement detection. The following academic works and theses provide valuable insights into mathematical signal processing approaches for human activity recognition using Wi-Fi Channel State Information:
Wi-Fi Sensing per Human Identification attraverso CSI
University thesis (in Italian) covering CSI data collection for human recognition through Wi-Fi signal analysis, with in-depth exploration of mathematical signal processing methods.
📄
Read thesis
Channel State Information (CSI) Features Collection in Wi-Fi
Detailed analysis of CSI feature collection and processing in Wi-Fi environments, with methods for extraction and analysis suitable for mathematical processing.
📄
Read thesis
Indoor Motion Detection Using Wi-Fi Channel State Information (2018)
Scientific article describing indoor movement detection using CSI with approaches based on signal mathematics and physics, minimizing the use of machine learning models.
📄
Read paper
WiFi Motion Detection: A Study into Efficacy and Performance (2019)
Study using CSI data collected from standard devices to detect movements, with analysis of signal processing methods to extract movement events without relying on ML.
📄
Read paper
CSI-HC: A WiFi-Based Indoor Complex Human Motion Recognition Using Channel State Information (2020)
Recognition of complex indoor movements through CSI with methods based on mathematical signal features, ideal for projects with signal-based analysis without advanced ML.
📄
Read paper
Location Intelligence System for People Estimation in Indoor Environment During Emergency Operation (2022)
Demonstrates the use of ESP32 with wavelet filtering (Daubechies db4) for people detection in emergency scenarios. This paper directly influenced ESPectre's wavelet filter implementation, showing that wavelet denoising outperforms traditional filters on ESP32 hardware.
📄
Read paper
These references demonstrate that effective Wi-Fi sensing can be achieved through mathematical and statistical approaches, which is the foundation of ESPectre's design philosophy.
📋 Changelog
For a detailed history of changes, new features, and improvements, see the
CHANGELOG.md
.
📄 License
This project is released under the
GNU General Public License v3.0 (GPLv3)
.
GPLv3 ensures that:
✅ The software remains free and open source
✅ Anyone can use, study, modify, and distribute it
✅ Modifications must be shared under the same license
On a blustery fall Sunday in Union Square, hundreds of members of the New York City chapter of the Democratic Socialists of America gathered to preview their two-pronged plan to help Mayor-elect Zohran Mamdani achieve the lofty promises he made during the campaign.
The first prong: Keep the pressure on Governor Kathy Hochul, who has maintained that she's not going
to raise taxes on New York's wealthiest
, despite New Yorkers from
Forest Hills, Queens
to
San Juan, Puerto Rico
telling her to her face that she must do just that, to pay for Mamdani's multi-billion dollar vision of universal child care and free buses.
The second prong: Run candidates to challenge the incumbents who stand in the way of this vision.
This App Lets ICE Track Vehicles and Owners Across the Country
404 Media
www.404media.co
2025-11-17 14:28:00
Material viewed by 404 Media shows data giant Thomson Reuters enriches license plate data with marriage, voter, and ownership records. The tool can predict where a car may be in the future....
Immigration and Customs Enforcement (ICE) recently invited staff to demos of an app that lets officers instantly scan a license plate, adding it to a database of billions of records that shows where else that vehicle has been spotted around the country, according to internal agency material viewed by 404 Media. That data can then be combined with other information such as driver license data, credit header data, marriage records, vehicle ownership, and voter registrations, the material shows.
The capability is powered by both Motorola Solutions and Thomson Reuters, the massive data broker and media conglomerate, which besides running the Reuters news service, also sells masses of personal data to private industry and government agencies. The material notes that the capabilities allow for predicting where a car may travel in the future, and also can collect face scans for facial recognition.
Do you work at ICE or CBP? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
The tool “includes a feature that enables your phone to function as a license plate recognition camera. This capability allows ERO [Enforcement and Removal Operations] officers to quickly identify and process license plate information,” a message sent to all ERO staff, and viewed by 404 Media, reads. The mobile app also integrates with a desktop application called Vehicle Manager, which is “designed to assist ERO personnel in searching, analyzing, and managing license plate data to support a wide range of operations across ERO.”
The material sent to ERO personnel shows both Motorola and Thomson Reuters are involved in the capability. Thomson Reuters has previously
faced criticism
for selling data
to ICE during the first Trump administration, when the government was forcibly separating families at the border.
Motorola, through
two acquired companies
called Vigilant Solutions and Digital Recognition Network (DRN), has license plate reading cameras spread all across the U.S. Vigilant cameras are either installed at a fixed location or placed in a police officer’s roaming patrol vehicle, which constantly scan vehicles they drive past. DRN’s tech is much the same, but its scans are
crowdsourced by hundreds of repo men
who have the cameras installed in their vehicles. Motorola says it has
“billions” of detections
.
A screenshot of Mobile Companion's marketing material available online.
That data then feeds into Motorola’s product that customers can run their own searches against, allowing them to see where a vehicle previously was “and determine where it may be located in the future,” according to
Motorola marketing material
available online. “Convoy Analysis” is a tool that “helps identify vehicles traveling together,” according to
a Department of Homeland Security (DHS) report
which looked at various license plate reader tools available on the market.
The Mobile Companion app lets users contribute to that dataset while on the move,
according to other marketing material
available online. Users can get push notifications when the Motorola surveillance network detects a hot listed vehicle (meaning a specific license plate or vehicle law enforcement is looking for), and can look at license plate results in a specific location across time, to see what other vehicles had been there. The mobile app is also capable of capturing faces and uploading them to the
Vigilant FaceSearch gallery
, which is the company’s facial recognition tool.
The material sent to ICE says users can further enhance their investigations by combining Motorola’s license plate reader network with Thomson Reuters’ data. “Thomson Reuters CLEAR combines comprehensive public and proprietary data with nationwide license plate data from Motorola Solutions’ secure shared data network to help take vehicle-involved investigations to a more precise level,” the material says.
CLEAR is Thomson Reuters’ primary analysis product, which combines data from across public records and the web. That can include details on phone numbers, addresses, associates, and social media activity, according to a video
on Thomson Reuters’ website
. A
document on Thomson Reuters’ website
says CLEAR also contains driver license data, credit header data from Experian (which is the personal information, such as addresses, at the top of a credit report), marriage records, vehicle registrations, voter registrations, and much more.
A screenshot from Thomson Reuters' website.
In an email, a Thomson Reuters spokesperson said “Mobile Companion has no relation to CLEAR,” despite the material explaining in detail how users can enrich Motorola’s license plate data with CLEAR’s. The spokesperson added “There is no data in Mobile Companion that requires a search warrant to access.” Motorola did not respond to multiple requests for comment.
On its website, Thomson Reuters markets CLEAR as a tool that has saved an abducted baby, identified a wanted man, and caught a sexual predator. The marketing makes no mention of its tech being specifically used by ICE’s deportation arm.
Thomson Reuters continues to sign multimillion dollar contracts with ICE. In May, for example, ICE paid the company nearly $5 million for access to “license plate reader data to enhance investigations for potential arrest, seizure, and forfeiture,” according to public procurement records.
The Department of Homeland Security (DHS) did not respond to a request for comment.
About the author
Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.